How we built a web to print system into our graphics toolbox Q

When the tool becomes a change agent

Beni Buess · Published in NZZ Open · 11 min read · Oct 30, 2019

Four years ago we started implementing a toolbox that the editorial staff at NZZ can use to create visual elements for their reporting within a few minutes.

We rolled out our basic chart tools first and added more tools and features over time. We worked hard to show people what was possible and to help them start using the tools.

For the web.

For print, the graphics team at NZZ Visuals had to redo all the graphics by hand. Every day one person spent the whole day doing this.

As a newspaper with a very long tradition of printing content on paper, this also affected the quality of the graphics produced with Q. The print product is where people invested most of their time polishing the result. The graphics made with Q didn’t have to be perfect, because they were not used for the product that stands in the spotlight.

We have changed that now. In September, we started printing graphics on paper that have already been produced with Q and published online at NZZ.ch. The newsroom is in the midst of a transition to shift the focus towards the digital product. We are just in time.

The core team working on this project consisted of Project Manager Cordula Braun, Head of Graphics Anna Wiederkehr, my boss Tom Schneider, and myself, Head of Editorial Tech. We had help from David Bauer, Head of the Visuals Department; Reto Althaus, Art Director for the print product; Balz Rittmeyer, designer in the graphics team; and Philip Küng, developer in the Editorial Tech team.

Here is how the project went from a technical perspective. Anna Wiederkehr wrote about the organisational challenges and how we tackled them.

Constraints

We had three constraints which come from the organizational structure we work in:

  1. There should be no need to touch the graphics again after export. They should not cross a designer’s desk to be opened in Illustrator again for tweaks or conversions.
  2. The pre-press process cannot be changed as the people working on these systems have no resources to invest in this project.
  3. The knowledge about how exactly the pre-press systems work, and how the different systems could be used, was not accessible for this project. We did not know, and nobody could tell us, exactly what happens to the graphic before ink hits the paper.

The first one we set ourselves because we were sure the system would have a much bigger effect on the digital transformation if that goal was achieved.

The others come from the fact that the project happened pretty fast. There was no time to create an understanding of the effect and write business cases to get the attention and the money needed to change these constraints.

This means we needed to develop a process that produced graphics of a quality very similar to those currently created by hand in Illustrator: a PDF in the correct CMYK color space, WAN-IFRA ISOnewspaper26v4.

We committed to shipping a working system within three months, while the Editorial Tech team was also hard at work implementing a new tool in Q to create locator maps (long overdue) and implementing all the graphics for the national election day on October 20th. This was another constraint that contributed value to the final product, but it forced us to keep the tech simple because there was little time and few resources.

Proving the technical pitch

There was a good technical base in place to build on top.

  • Q already had a system to deliver graphics with different configurations and stylesheets towards different output targets.
  • Q already had a system to create PNGs of graphics using a headless Chrome browser, driven by Puppeteer, to take screenshots.
  • All the graphics in Q use the same CSS classes to define different types of text. This is already in use to render graphics with the correct typeface when embedded in different digital products.

The tech pitch went like this: we create PNG screenshots using the print typeface and RGB colors, and transform them into a TIFF in the correct CMYK color space in order to print them.

Some people had been skeptical about that approach in the past, fearing a raster graphic would lead to subpar quality. So we implemented a prototype on top of the screenshot system that transformed the RGB PNGs into CMYK TIFFs using ImageMagick, and printed a proof.
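A minimal sketch of that conversion step, assuming ImageMagick’s standard `-profile`-based color management; the ICC profile file names here are placeholders, not the paths Q actually uses.

```javascript
// Builds the ImageMagick argument list for an ICC-based RGB -> CMYK
// conversion. Applying a source profile and then a destination profile
// is ImageMagick's documented way to do a color-managed conversion.
// Profile file names are assumptions for illustration.
function buildConvertArgs(inputPng, outputTiff) {
  return [
    inputPng,
    "-profile", "sRGB.icc",             // assign the source (RGB) profile
    "-profile", "ISOnewspaper26v4.icc", // convert into the CMYK target space
    outputTiff,
  ];
}

// The actual call would be something like:
// const { execFile } = require("child_process");
// execFile("convert", buildConvertArgs("chart.png", "chart.tiff"), callback);
```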

The printing factory has an inkjet printer that produces colors very similar to those of the offset machines used to print the newspaper. We sent them a PDF of four pages laid out like newspaper pages and containing lots of graphics, since we wanted to compare the quality of text in the raster graphics with text set as a vector font.

The quality was good, so we could move on. Having that proof on actual paper also helped to explain to other people along the way what we were doing.

Another thing to test was a digital product derived from the PDF of the newspaper sent to the presses: the so-called E-Paper that many media companies have in their portfolio. For the E-Paper at NZZ there is a pipeline in place that produces JPGs from the PDF graphics. As you already know, we couldn’t change that pipeline, so we had to make sure the graphics exported from Q could slip into the very same process.

It became clear that we couldn’t use our TIFFs directly but had to wrap them in a PDF for this to work. ImageMagick can do that. The pipeline failed nonetheless: ImageMagick produced a PDF v1.7, and we learned from the error messages that we needed v1.3. With only two hours left on the time budget of the people helping with the E-Paper test, we tried the tiff2pdf tool from libtiff. It worked. The graphics appeared in the E-Paper.

Knowing that we had a working prototype to deal with the things we couldn’t control gave us enough confidence to move on.

The finish line

The first proof we printed didn’t contain graphics using the typeface we use in print. As we had to keep the overall workload low, we defined print equivalents for all the fonts in use across the different graphics, and some more. We changed small details here and there in several tools to bring them in line. This step alone improved the quality of the graphics produced with Q and made them more consistent overall.

This allowed us to deliver nothing different from the tools themselves, and instead make the graphics look correct by loading different CSS files into the headless Chrome than we do for the digital product before taking the screenshots for the print output. The system to configure this in a declarative way was already in place in Q.
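As an illustration, such a per-target configuration could look something like this; the key and file names are invented for this sketch, and only the idea of swapping stylesheets per output target comes from Q:

```javascript
// Hypothetical sketch of a declarative target configuration.
// Each target names the stylesheets loaded into headless Chrome
// before the screenshot is taken. All names are illustrative.
const targets = {
  web: {
    stylesheets: ["base.css", "nzz-web.css"],
  },
  print: {
    stylesheets: ["base.css", "print-typography.css", "print-colors.css"],
    exposedInEditor: true, // show this target in the export dialog
  },
};
```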

Since the prototype we built earlier was already nicely integrated into the Q system, the only thing left to do was to expose it in a user interface. We worked with the people involved in planning and actually producing the print product to figure out the essential settings that need to be available. Within a one-hour workshop we were able to boil many ideas down to:

  • Define how many columns the graphic should span.
  • Define the style of the title, or remove it completely to allow combinations of multiple graphics.
  • Define a special title for the graphic that differs from the one used on the website.
  • Define a special subtitle for the graphic.

Since we saw potential in using that same export functionality in the Q user interface for other targets, such as social media, we built it in an abstract manner by extending some of the concepts already in place in Q.

As mentioned earlier, Q can already deliver the same graphic to different targets using different stylesheets. Until now, this was only used by other computers: when the CMS asks for the contents of a graphic, it tells Q whether it will be shown on NZZ.ch or NZZaS.nzz.ch (the Sunday edition); when the native app system asks for a graphic, it tells Q that it will be shown in the app, and Q responds with the correct information for each system.

Now we extended that so that we can also define which targets are exposed in the Q user interface, letting humans tell Q that they need a graphic for a certain system. We can define a JSON schema that describes the inputs users can provide. This schema is then rendered to an HTML form using the same machinery Q uses to render the forms for all the tools (they define their data models in JSON schemas as well). There is a preview that shows what the user will get. This was also already in place.
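A hypothetical version of such a schema for the print export form, mirroring the four settings listed above; the property names and limits are invented for illustration, not taken from Q’s actual schema:

```javascript
// Invented JSON schema for the print export form. Q would render this
// to an HTML form the same way it renders the tools' own data models.
const printExportSchema = {
  type: "object",
  properties: {
    columns: { type: "integer", minimum: 1, maximum: 6, default: 2 },
    titleStyle: { type: "string", enum: ["regular", "small", "none"] },
    printTitle: { type: "string" },    // overrides the web title
    printSubtitle: { type: "string" }, // overrides the web subtitle
  },
  required: ["columns"],
};
```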

So we basically had to put one button on a screen that triggers a modal showing a form with a preview and a download button.

The user interface for Q to print

All in all we probably changed or added a few hundred lines of code to make the whole Q-to-print system a reality. We wanted to keep the resulting system simple, and by reusing a lot of concepts already in place we achieved that. We even improved those concepts along the way.

From here we worked with the print production team to lay out a few pages we wanted to print using offset technology on actual newsprint. Going through the exact same process the paper does every day was the final test. If the result was good, we could start rolling out the Q-to-print system.

The result was good. The differences between graphics printed from Q and those done by hand are negligible. Mission accomplished 🚀.

If you haven’t already, now would be a good time to head over to the story written by Anna Wiederkehr; she has more to tell about how we tackled the organisational obstacles and what we learned during the first week of rolling this out.

Or you can stay for some more technical details and read on below.

A graphic made with Q on the front page

Was it really that easy?

No. I left out some technical obstacles before. For you, dear reader who made it this far, let me explain some of them; you might have better ideas you want to share, or just want to learn something.

Converting RGB PNG to a CMYK TIFF

I have only a basic understanding of colors and printing. So the following is probably partly wrong or at least imprecise. Please bear with me.

RGB is used for additive systems: the more color (light) you add, the brighter the result. That is the case for screens. CMYK is used for subtractive systems: the more color (ink) you add, the darker the result (less reflection).

CMYK stands for Cyan Magenta Yellow Key. Key is the name of the channel printed using black ink. Perceived black can be achieved using only the K channel, or using so-called rich black, which also contains other colors besides black. Rich black looks deeper. But there is a problem with offset printing on cheap newsprint: the plates used for the four colors do not hit exactly the same spot on the paper. So printing text in rich black does not yield a deeper black, but a magenta-colored text shadow. That is not something we want.

So the text needs to be K only, with no other colors in those pixels. Adobe products have options to achieve this: Preserve Black and Promote Gray to CMYK Black. ImageMagick doesn’t have this feature readily at hand (or I could not find it).

But those pixels are all waiting in the TIFF, ready to be changed. Reading the TIFF spec to understand the file format led us to UTIF, a TIFF decoder and encoder by the people building the Photopea image editor.

Our pretty primitive function loops over all the pixels: if the K channel contains the most ink, we remove all the ink from the other channels. Then, if the K channel already holds a lot of ink, we add some more to darken it again after the color removal; we assume those pixels belong to text glyphs. This could probably be improved by making the amount of added black depend on the amount of color removed.
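A minimal sketch of that per-pixel rule on raw 8-bit CMYK samples; the threshold and the amount of added black are assumptions, since the text doesn’t give the exact numbers Q uses:

```javascript
// Sketch of the "K only" cleanup: pixels is a flat array of 8-bit
// CMYK samples ([C, M, Y, K] per pixel), as a TIFF decoder like UTIF
// would expose. Threshold (128) and boost (32) are invented values.
function forceKOnly(pixels) {
  for (let i = 0; i < pixels.length; i += 4) {
    const c = pixels[i], m = pixels[i + 1], y = pixels[i + 2], k = pixels[i + 3];
    // If black carries the most ink, assume the pixel belongs to a glyph:
    if (k >= c && k >= m && k >= y) {
      pixels[i] = pixels[i + 1] = pixels[i + 2] = 0; // drop C, M, Y ink
      // Re-darken pixels that already held a lot of black ink:
      if (k > 128) pixels[i + 3] = Math.min(255, k + 32);
    }
  }
  return pixels;
}
```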

How many pixels?

Someone said there is no need for more than 300 dpi. We accepted that without really proving it. After rolling out, we changed this to 1200 dpi. The effect on the printed graphics is barely visible, but there are now many more pixels available to improve the result by changing things in the systems that sit between Q and the paper.

As you may have seen, the export dialog allows defining the number of columns the graphic should span. From that we calculate the width in CSS pixels and the devicePixelRatio (CSS pixel × devicePixelRatio = physical pixel) needed to create the screenshot.
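A sketch of that calculation; the column width and gutter are assumed typical broadsheet values, not NZZ’s actual measurements:

```javascript
// Turns a column count into the viewport width (in CSS pixels) and the
// devicePixelRatio for the headless-Chrome screenshot. Column width and
// gutter are assumptions for illustration.
const COLUMN_WIDTH_MM = 46.5; // assumed width of one newspaper column
const GUTTER_MM = 4;          // assumed space between columns
const TARGET_DPI = 1200;      // the print resolution mentioned above
const CSS_DPI = 96;           // CSS defines 1px as 1/96 inch

function printDimensions(columns) {
  const widthMm = columns * COLUMN_WIDTH_MM + (columns - 1) * GUTTER_MM;
  const widthInch = widthMm / 25.4;
  // Screenshot at CSS-pixel width, scaled up by the devicePixelRatio
  // so the PNG comes out at the physical print resolution:
  const cssWidth = Math.round(widthInch * CSS_DPI);
  const devicePixelRatio = TARGET_DPI / CSS_DPI; // 12.5
  return { cssWidth, devicePixelRatio };
}
```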

The troubles with the PDF version

As I wrote before, the PDF had to be version 1.3 to be compatible with the E-Paper process. ImageMagick produced a PDF in v1.7; some people on the internet say this is because of the embedded color profile.

I didn’t really want to mess with color management again at this point. And a PDF v1.3 file had to be available within an hour if I didn’t want to miss the opportunity to test the whole process with someone who knows how the E-Paper is generated and could debug errors with me. So I quickly looked for alternatives instead of fighting with ImageMagick.

The alternative was found in libtiff with its tiff2pdf program. At first it failed with an error message telling me it doesn’t support 5 samples per pixel; that was because an alpha channel was still present on all the pixels from an earlier conversion step. After removing it using ImageMagick, the PDF came out as v1.3, the E-Paper pipeline worked, and everyone was happy.
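A sketch of those two steps, composing the commands as argv arrays rather than running them; the file names are placeholders:

```javascript
// Step 1: strip the alpha channel with ImageMagick ("-alpha off"
// removes the fifth sample per pixel that tiff2pdf cannot handle).
function alphaStripArgs(inTiff, outTiff) {
  return [inTiff, "-alpha", "off", outTiff];
}

// Step 2: build the PDF with libtiff's tiff2pdf ("-o" names the output).
function tiff2pdfArgs(inTiff, outPdf) {
  return ["-o", outPdf, inTiff];
}

// Actual invocation would look something like:
// const { execFile } = require("child_process");
// execFile("convert", alphaStripArgs("g.tiff", "g-noalpha.tiff"), ...);
// execFile("tiff2pdf", tiff2pdfArgs("g-noalpha.tiff", "g.pdf"), ...);
```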

We didn’t invest more time into streamlining the PDF generation process in Q. The fact that we made something for the print publication doesn’t mean we should lose focus: we need to keep improving the digital product. Digital first means print second.

Thanks for reading. If you have questions, suggestions or are working on similar or totally different newsroom tooling I’d love to talk. I am @benibu on Twitter, the teams are @NZZEditoTech and @NZZVisuals.

Beni Buess: @livingdocsIO — before @NZZEditoTech — code for journalism