Flickypedia, a simplified Wikipedia concept

Aleksandar Arsovski
Feb 4, 2022 · 4 min read

My 10-year-old daughter and I found ourselves chatting about Wikipedia two weeks ago. She thought the articles were too long, too dull, and unengaging, a reasonable perspective given the fast, fluent web and app experiences we are accustomed to nowadays.

Case in point: on the left, a 2006 snapshot of the Wikipedia entry for Shanghai; on the right, the Wikipedia entry for Shanghai today, 16 years later:

To the left: Feb 2006 Wikipedia snapshot for Shanghai, source Wayback Machine; To the right: Feb 2022 Wikipedia entry for Shanghai

Chances are, most people (and certainly 10-year-olds) would find both the 2006 and 2022 articles uninviting; younger children would find them impossible to get through.

While 10-year-olds might not be Wikipedia's target audience, I think it is worth experimenting with reshaping the available content and delivering it in a way that is easier to consume.

The assumptions

#1: Simpler content (shorter text, fewer stats) is easier to remember

My first assumption is that most of the time we aren’t doing research; we are reading something out of curiosity. We aren’t looking for hard facts and numbers, but for a summary we can remember.

#2: Slick interaction featuring high-quality photos is a more inviting UX

My second working assumption is that most of us prefer high-quality photos or art with captions to scrolling through long, text-only paragraphs.

Short swipes (left/right to switch articles, up/down to see more photos) are more engaging than scrolling through a long wall of text.

The experiment: an automated content-modernizing pipeline

  • Ingest articles using the Wikipedia API and the Oxford Dictionary API (a minimal sketch follows below)
  • Simplify the content using the OpenAI “Summarize for a 2nd grader” API
  • Fetch high-quality, modern photography using the Unsplash API
  • Render the content in a modern web and/or app experience
100% automated pipeline: Wikipedia + Oxford Dictionary => OpenAI Summary + Unsplash => Web App
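
To make the ingestion step concrete, here is a minimal sketch assuming Wikipedia's public REST summary endpoint; the function name and error handling are illustrative rather than the actual pipeline code, and the Oxford Dictionary lookup is omitted:

```typescript
// Minimal ingestion sketch: fetch the plain-text summary of an article
// from Wikipedia's public REST API. Requires Node 18+ for built-in fetch.
interface WikiSummary {
  title: string;
  extract: string; // plain-text lead section of the article
}

async function ingestArticle(topic: string): Promise<WikiSummary> {
  const url = `https://en.wikipedia.org/api/rest_v1/page/summary/${encodeURIComponent(topic)}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Wikipedia lookup failed: ${res.status}`);
  const data = await res.json();
  return { title: data.title, extract: data.extract };
}

// Example: ingestArticle("Shanghai").then((a) => console.log(a.extract));
```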

Simplifying the content

To achieve a simplified narrative, I'm running the ingested articles through the OpenAI “Summarize for a 2nd grader” API:

From the OpenAI playground: Summarize for a 2nd grader
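
For illustration, a call to that API could look roughly like the sketch below. I'm assuming the v1 completions endpoint with an instruct-series model; the prompt wording is paraphrased from the playground preset, and the model and parameter choices are my own assumptions:

```typescript
// Sketch of the simplification step against the OpenAI completions API.
// OPENAI_API_KEY, the model choice, and the token/temperature settings
// are assumptions, not the pipeline's actual configuration.
async function summarizeForSecondGrader(text: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "text-davinci-001",
      prompt: `Summarize this for a second-grade student:\n\n${text}\n\nSummary:`,
      max_tokens: 256,
      temperature: 0.7,
    }),
  });
  if (!res.ok) throw new Error(`OpenAI request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].text.trim();
}
```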

Fetching high quality photography

To fetch accompanying photos for the articles, I integrated the Unsplash API into the article ingestion pipeline. The same API powers the Unsplash website itself:

Looking up Shanghai in the Unsplash photo archive
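
A minimal version of that lookup, following the documented /search/photos route (the access-key variable and the shape I map the results into are assumptions):

```typescript
// Sketch of the photo-fetching step against the Unsplash search API.
interface ArticlePhoto {
  url: string;
  caption: string;
  photographer: string; // Unsplash guidelines require attribution
}

async function fetchPhotos(topic: string, count = 5): Promise<ArticlePhoto[]> {
  const url = `https://api.unsplash.com/search/photos?query=${encodeURIComponent(topic)}&per_page=${count}`;
  const res = await fetch(url, {
    headers: { Authorization: `Client-ID ${process.env.UNSPLASH_ACCESS_KEY}` },
  });
  if (!res.ok) throw new Error(`Unsplash search failed: ${res.status}`);
  const data = await res.json();
  return data.results.map((photo: any) => ({
    url: photo.urls.regular,
    caption: photo.alt_description ?? topic,
    photographer: photo.user.name,
  }));
}
```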

The web app

To make the articles accessible, I coded a web app featuring simple up/down and left/right swipe interactions. It respects your privacy and doesn’t track you.
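
The swipe handling itself doesn't need a heavy library; a hand-rolled touch surface in React could look something like this sketch (the 50px threshold and the callback names are assumptions, not the app's actual code):

```tsx
// Sketch of a swipe surface: left/right switches articles, up/down cycles
// photos. Compares touch start and end points to classify the gesture.
import React, { useRef } from "react";

type SwipeHandlers = {
  onLeft: () => void;
  onRight: () => void;
  onUp: () => void;
  onDown: () => void;
};

export function SwipeSurface(
  { onLeft, onRight, onUp, onDown, children }: SwipeHandlers & { children: React.ReactNode },
) {
  const start = useRef({ x: 0, y: 0 });

  const handleStart = (e: React.TouchEvent) => {
    start.current = { x: e.touches[0].clientX, y: e.touches[0].clientY };
  };

  const handleEnd = (e: React.TouchEvent) => {
    const dx = e.changedTouches[0].clientX - start.current.x;
    const dy = e.changedTouches[0].clientY - start.current.y;
    const threshold = 50; // minimum swipe distance in pixels
    if (Math.abs(dx) > Math.abs(dy)) {
      if (dx > threshold) onRight();
      else if (dx < -threshold) onLeft();
    } else if (dy > threshold) onDown();
    else if (dy < -threshold) onUp();
  };

  return (
    <div onTouchStart={handleStart} onTouchEnd={handleEnd}>
      {children}
    </div>
  );
}
```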

Putting it all together

The automated pipeline ingests, simplifies, and augments articles with modern, high-quality photos, then persists them in Elastic in the cloud. A simple progressive web app makes the photos shine.
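
The persistence step can be as small as a single index call, sketched here with the official Elasticsearch Node client; the index name and document shape are made up for illustration:

```typescript
// Sketch of persisting an enriched article to Elastic (client v8).
import { Client } from "@elastic/elasticsearch";

const elastic = new Client({
  node: process.env.ELASTIC_URL ?? "http://localhost:9200",
});

interface FlickyArticle {
  title: string;
  summary: string; // the simplified OpenAI text
  photos: { url: string; caption: string }[];
  ingestedAt: string;
}

async function persistArticle(article: FlickyArticle): Promise<void> {
  await elastic.index({
    index: "flickypedia-articles",
    id: article.title.toLowerCase(),
    document: article,
  });
}
```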

To make the articles accessible to younger children, the web app features instant translation and a text-to-speech voiceover.
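
The voiceover can be generated server-side with the official @google-cloud/text-to-speech client, roughly as below; the voice selection and MP3 encoding are my assumptions:

```typescript
// Sketch of the voiceover step using GCP Cloud Text-to-Speech.
// Credentials come from the standard GOOGLE_APPLICATION_CREDENTIALS setup.
import textToSpeech from "@google-cloud/text-to-speech";

const ttsClient = new textToSpeech.TextToSpeechClient();

async function synthesizeVoiceover(
  text: string,
  languageCode = "en-US", // instant translation would swap this per language
): Promise<Uint8Array> {
  const [response] = await ttsClient.synthesizeSpeech({
    input: { text },
    voice: { languageCode, ssmlGender: "FEMALE" },
    audioConfig: { audioEncoding: "MP3" },
  });
  return response.audioContent as Uint8Array;
}
```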

Working Flickypedia prototype with 100% automated content

In the demo above, both the pictures and the text come 100% from the automated pipeline; no content has been manually edited. Below the surface, there is far more data ingested than is currently visible in the UI, so the same article might look different when I tweak the web app next week.

Head to https://flickypedia.com to try the prototype out.

The logical next step in elevating the concept is introducing curated content (concept 2): a human would open the raw articles in an editor UI and manually decide which texts and photos to keep or remove, potentially also introducing more channels, e.g. YouTube videos. Part 2 follows.

PS: The tech stack

Node, Elastic, and Redis on Heroku; React on Netlify; both on the free tier. Cloud Text-to-Speech from GCP.

Ingesting an article takes less than 2s, making it feasible to enable dynamic search / instant articles on demand in the future.
