A Journey into Production with Qlik Core
A little over a year ago I received an email from an old Qlik friend of mine (Murray Grigo-McMahon), entitled “Qlik Core project”. At this point, I’d been dabbling with Qlik Core but nothing too serious. These days time doesn’t permit me to explore anything with any vigour unless a project demands it. And here was that project. (Naturally I’d said ‘yes’ to doing it before even opening the email.)
If you’re not familiar with Qlik, it’s a data discovery tool that allows you to combine multiple data sets from any number of sources and explore them in a very natural way to ask questions and gain insights. Most commonly, Qlik is used to build dashboards and analytics, but in reality it’s just very good with data.
At Websy, we typically deliver Qlik projects using the APIs, which often means creating solutions that are a step (or country mile) away from a traditional dashboard and/or analysis tool. This was to be such a project. In fact, there was no “traditional” analysis required whatsoever. This would be a tool for finding and exploring. Analysis is still very much a possibility in this solution, but it’s not about numbers and performance.
The project is for Letterform Archive, a non-profit organisation that houses a “curated collection of over 50,000 items related to lettering, typography, calligraphy, and graphic design, spanning thousands of years of history”. One of their ongoing tasks is taking this collection, digitising it and exposing it online, in a tool that allows you to explore and find items with ease. That’s where Qlik comes in.
I should mention at this point that the project was already well underway before I joined the party. Murray had already laid out a clear vision of what the Online Archive could look like and developed a fully featured prototype. So, what was my part in all of this? The prototype, albeit well built, was just that: a prototype. Built on top of the Qlik Capability API as a multi-page application, it suffered from a few performance issues and, as Murray put it, the code needed a good clean.
Why Qlik? The answer to this, on some level, always comes down to the sheer power of Qlik’s associative technology. The mechanism for finding content in the prototype is focused on two things: facets and a free-form search. Facets are very much a part of that associative technology, and so finding content in the Online Archive is as simple as choosing which Decade, Creator and Format, for example, you’re interested in, or any combination of the available filters. The search, in this solution, is an amalgamation of all of those facets, so you can type instead of browse.
For the first 12 months of my involvement, the solution continued to use a traditional Qlik implementation. I switched it to be a single-page application and migrated the Qlik logic to use the Engine API. A step closer to Qlik Core, but still not quite there.
Something for the more seasoned Qlik developer: it really put my Set Analysis skills to the test. The item pages are built using multiple HyperCubes and leverage all kinds of Qlik functions in order to get the right data out of the model. I’ve combined sets, used P() and P() inside of P(), and techniques only available via the APIs. I feel like a Qlik deity and yet, I’m still not using Qlik Core under the hood.
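For readers less familiar with nested P(), here’s the flavour of expression involved. The field and measure names (Decade, Format, Creator, Items) are purely illustrative, not taken from the actual Letterform data model:

```
// Sum of items for creators who have at least one item
// in a format that is possible within the selected decade.
// P({set} Field) returns the set of possible values of Field
// under the given selection state.
Sum({<Creator = P({<Format = P({<Decade = {'1920s'}>} Format)>} Creator)>} Items)
```

The inner P() resolves the formats associated with the 1920s; the outer P() resolves the creators associated with those formats, and the sum is then restricted to that creator set.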
Why all this talk of Qlik Core? Qlik as a product comes in many shapes and sizes, and typically an implementation consists of a client tool for developing and consuming apps, along with a collection of management tools and services. An “on-premise” version of this requires Windows. The key component in all of this, however, is the Qlik Engine. That’s the clever part and the piece that we’re really interested in.

Qlik Core is essentially just this one important piece: all of the good stuff without any of the unnecessary baggage. Unnecessary in this project because it’s a public site, all users are anonymous and we only have one dataset to worry about. It also runs on Docker, which means we can use Linux, which means cheaper hosting costs.
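To give a sense of how lightweight that deployment is, a Qlik Core engine can be described in a few lines of Docker Compose. This is only a minimal sketch based on Qlik’s public getting-started material; the image tag, EULA flag and port are assumptions to verify against the current docs:

```yaml
version: "3.0"
services:
  qix-engine:
    # The Qlik Associative Engine image from Docker Hub.
    # Tag is illustrative; pin whichever release you've tested against.
    image: qlikcore/engine:12.612.0
    # Qlik Core requires the EULA to be accepted explicitly.
    command: -S AcceptEULA=yes
    ports:
      # 9076 is the engine's default WebSocket (QIX) port.
      - "9076:9076"
```

With the engine running, the front end talks to it over a WebSocket using the Engine (QIX) API, for example via enigma.js, which is exactly the same way the single-page application already talked to a full Qlik installation.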
If I’m honest, the reason for delaying the inevitable move was fear. Developing a solution on top of Qlik Core is essentially no different to working with the Qlik Engine API. Deploying Qlik Core, however, is entirely different. By this point, I’d been teaching a class with Speros Kokenes at the Masters Summit for Qlik that’s all about Qlik Core and the APIs. I’m comfortable with Docker and, although not an expert, I’m even comfortable with Linux. My fear is not of Qlik Core itself. My fear is moving to “Production” with Qlik Core.
Let me add some context. I put things into production weekly and I’m scared every time. Any developer that takes pride in their work should be fearful of “Production”. One stray comma is enough to bring a project to its knees. Oh, the shame! This would be a first for me with Qlik Core, which significantly added to the degree of fear. At some point I would have to hide the tears, come out of the dark and face the fear.
Yesterday (April 7th, 2020) we went live.
So, how did it go? Better than I could have hoped! And, breathe… A lot of fuss over nothing, really!
I’ll give you some numbers. Over a 3-hour period, various social media outlets were targeted, reaching thousands of members and subscribers. Throughout that period the site averaged around 400 concurrent sessions at any given time, with close to 700 being the peak in the hours that followed. We saw over 4,000 sessions in that 3-hour window. As I write this, 16 hours later, there are currently over 700 people on the site and we’ve seen 17,000 sessions in less than 24 hours.
How did Qlik Core handle it? Like a dream. Great performance, no errors and no crashes. What more could I ask for?
It’s been an absolute pleasure working with Murray and the team at Letterform Archive and I can’t wait to see what the next phase of the project brings.
Enough from me, try it out for yourself — Online Archive
Read more about it here.