GitHub All-Stars #3: termtosvg
Welcome to the third episode of our series. Both previous episodes covered utilities written in JavaScript. We started with a very short-lived product — a bot for the Saliens game from this year’s Steam Summer Sale. In the second post, we took a look at TensorFlow and how it can be used in the browser. The current edition does not cover a trendy technology. This time I’ll be covering a “nerdy” tool that has real utility value for me — nbedos/termtosvg.
I semi-regularly give meetup/conference talks and (also semi-regularly) try to write something for the community — like the post you are reading right now. Since it’s mostly about developer topics, I often find it valuable to show terminal output. I’m especially fond of animated output that displays command execution step by step. Asciinema is kind of an industry standard for that, but it’s rather poorly supported (due to its embeddable nature) — it cannot be used with Medium, GitHub README pages or (without hacks) with Reveal.js, which is a bummer given the popularity of all those tools. That’s why I was pretty excited to find a tool that records terminal output, with similar ease, as an SVG animation. While it is less powerful than Asciinema (no control buttons — play/pause/forward/backward), compatibility with my other tools was such a promise that I couldn’t resist checking how it works in practice.
I’m not alone — being only 2 months old, termtosvg already has over 5k stars and a lot of coverage in the community. That’s why I decided to simultaneously try it in action and take a look at how it works internally. I also ran some tests to check whether termtosvg covers all the use cases I hoped it would when I first decided to lay my hands on it.
Let’s start with the basic functionality.
The tool itself is used in a really simple way — you install it with pip (termtosvg is written in Python 3 — it’s sad that you still have to specify which version of Python a tool was written for…), run the termtosvg command, and you are ready to go — all the input and output to/from the terminal will be captured into an SVG animation, with the delays between subsequent commands preserved. Everything works fine — after ending the session (exit or a standard kill signal), the file is generated and ready to be viewed in any browser. What’s more, you can style your terminal through configuration: you can choose a custom theme or font.
Additionally, there is support for the Asciicast format — using the command termtosvg record we can export terminal content to a format compatible with the previously mentioned asciinema, while using termtosvg render we can convert any asciinema output to an SVG animation. For the scope of this article, we’ll focus on the main functionality and won’t cover those additional options.
Now that we know how to use the app, let’s focus on the code.
We’ll begin with the Python scripts. __main__.py is our entrance to understanding how they work. The script starts with a few configuration sections — for the theme and for verbose mode.
The first thing that block does is generate a temporary SVG file — with a random or user-defined name. Another important thing is the initialization of the input_fileno and output_fileno variables — if the program arguments which control them are not set, what we get is sys.stdin.fileno() and sys.stdout.fileno() — the file descriptors of the shell’s standard input and output, respectively. They will be used in a moment.
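A rough sketch of that defaulting logic (the helper name below is mine, not termtosvg’s):

import sys

def resolve_fds(input_fileno=None, output_fileno=None):
    # Illustrative helper: fall back to the shell's standard streams
    # when no explicit descriptors were passed on the command line.
    if input_fileno is None:
        input_fileno = sys.stdin.fileno()    # descriptor of standard input
    if output_fileno is None:
        output_fileno = sys.stdout.fileno()  # descriptor of standard output
    return input_fileno, output_fileno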
Moving on to the main code, we find a surprise — the standard invocation of termtosvg simply mixes the behavior of its two previously mentioned additional modes — first it records to the Asciicast form and then renders it to SVG.
The next interesting part is the invocation of term.get_terminal_size, which is a call to os.get_terminal_size from the Python standard library — it retrieves the number of columns and lines in our terminal window.
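Here is what that standard-library call looks like on its own (a tiny illustration, independent of termtosvg):

import os
import sys

# Ask the OS for the geometry of the terminal attached to stdout;
# the result is a named tuple with .columns and .lines fields.
columns, lines = os.get_terminal_size(sys.stdout.fileno())
print('{} columns x {} lines'.format(columns, lines))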
After some boilerplate code related to retrieving the theme, font and other configuration, we create an instance of TerminalMode, passing our input file to it (everything inside a with block, which manages the lifecycle of our stream — working like an automatic try-finally, or the try-with-resources/using constructs known from Java and C#). Interestingly, it took me a moment to understand that this class is not used later and is mainly needed for resource management and a bit of error handling.
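For readers coming from Java or C#, this is roughly what a class has to implement to be usable in a with block (a toy context manager, not termtosvg’s TerminalMode):

class ManagedStream:
    # Whatever happens inside the with block, __exit__ always runs,
    # just like try-with-resources in Java or using in C#.
    def __init__(self, path):
        self.path = path

    def __enter__(self):
        self.stream = open(self.path)
        return self.stream

    def __exit__(self, exc_type, exc_value, traceback):
        self.stream.close()
        return False  # do not swallow exceptions

with ManagedStream('cast.json') as stream:
    header = stream.readline()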
Now it’s time to go to the main part of the application — term.record, which takes as parameters the previously retrieved number of columns and rows, the input (in our case the stdin descriptor) and the output file (which was generated in the previous lines).
Let’s see what lurks inside the terminal recording section.
It’s great to see, in our undocumented world, a piece of well-commented code. It really helps when trying to understand a section like this. If all code were described this well, there would probably be no need for this series :)
Quoting the author:
This function forks the current process. The child process is a shell which is a session leader and has a controlling terminal and is run in the background. The parent process, runs in the foreground, transmits data between the standard input, output and the shell process and logs it. From the user point of view, it appears they are communicating with their shell (through their terminal emulator) when in fact they communicate with our parent process which logs all the data exchanged with the shell.
Let’s see how it’s implemented:
First, it retrieves information about the shell from the SHELL environment variable (defaulting to sh).
Next, a terminal is forked — it’s important to notice that the fork function is called once (by the parent) but returns twice — in the parent as well as in the child process. As the documentation for pty.fork() states, if pid is 0 we are in the child process, in which case the os.execlp(shell, shell) line is executed — it replaces the child process with a new instance of the shell program.
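A minimal sketch of that fork pattern (for illustration only; termtosvg’s real code does more bookkeeping):

import os
import pty
import signal

shell = os.environ.get('SHELL', 'sh')

pid, master_fd = pty.fork()     # called once, returns in both processes
if pid == 0:
    # Child: session leader with a controlling terminal;
    # replace this process image with the user's shell.
    os.execlp(shell, shell)
else:
    # Parent: everything typed into or printed by the shell
    # travels through master_fd.
    os.write(master_fd, b'echo hello\n')
    print(os.read(master_fd, 1024).decode(errors='replace'))
    os.kill(pid, signal.SIGHUP)  # hang up the toy shell when we are done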
In the next step, the terminal is set to a specific size.
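On Unix this is typically done with the TIOCSWINSZ ioctl, shown here as a general idiom rather than termtosvg’s exact call:

import fcntl
import struct
import termios

def set_winsize(fd, rows, columns):
    # Pack rows/columns into a struct winsize and resize the pty behind fd.
    winsize = struct.pack('HHHH', rows, columns, 0, 0)
    fcntl.ioctl(fd, termios.TIOCSWINSZ, winsize)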
Meanwhile, the parent terminal is set to raw mode, which disables buffering — every single character is transmitted individually, which is needed to preserve the timing of the recording.
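The standard library makes this easy; here is a sketch of the idea, not a copy of termtosvg’s implementation:

import contextlib
import termios
import tty

@contextlib.contextmanager
def raw_mode(fileno):
    # Switch the tty to raw (unbuffered) mode and always restore
    # the previous settings afterwards.
    old_settings = termios.tcgetattr(fileno)
    try:
        tty.setraw(fileno)
        yield fileno
    finally:
        termios.tcsetattr(fileno, termios.TCSADRAIN, old_settings)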
Now comes the time to capture data — the _capture_data function handles the transmission from the child process (which is our “real” shell) to the parent (the “terminal for the terminal” process). _capture_data waits for every new chunk of data (via select.select) and yields (returns to the caller without terminating the loop) the result as a data-time tuple. The loop keeps running as long as the process has not been killed.
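The shape of such a capture loop looks roughly like this (an illustrative generator, not the actual _capture_data):

import datetime
import os
import select

def capture_data(master_fd, chunk_size=1024):
    # Wait for output from the child shell and yield (data, timestamp)
    # tuples until the shell exits and the pty master is closed.
    while True:
        ready, _, _ = select.select([master_fd], [], [])
        if master_fd in ready:
            try:
                data = os.read(master_fd, chunk_size)
            except OSError:
                break          # reading a closed pty master raises EIO
            if not data:
                break
            yield data, datetime.datetime.now()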
{"version": 2, "width": 80, "height": 24, "timestamp": 1504467315, "title": "Demo", "env": {"TERM": "xterm-256color", "SHELL": "/bin/zsh"}}
[0.248848, "o", "\u001b[1;31mHello \u001b[32mWorld!\u001b[0m\n"]
[1.001376, "o", "That was ok\rThis is better."]
[2.143733, "o", " "]
[6.541828, "o", "Bye!"]
Now that we have all the data from the terminal along with the timing, we need to generate an intermediate file in the previously mentioned Asciicast format. It uses a very interesting schema called JSON Lines. The first line (the header of the file) contains metadata; every following line is a three-element JSON array containing the time, the event type (“o” for output, “i” for input) and the data itself. All the conversion happens in asciicast.py, but it is rather straightforward, so I will go straight to the replaying section.
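Reading such a file back is almost trivial; here is a minimal reader for the layout shown above (illustrative, not termtosvg’s asciicast.py):

import json

def read_asciicast_v2(path):
    with open(path) as cast_file:
        header = json.loads(next(cast_file))       # first line: metadata object
        # every other line: [time_in_seconds, "o"/"i", data]
        events = [json.loads(line) for line in cast_file if line.strip()]
    return header, events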
As we have seen before, none of the actions happen in the user’s terminal, but in a “headless” terminal that captures those operations. To provide output to the user, the pyte library is used. It provides a multipurpose terminal emulator. It is set up to emulate the user’s terminal configuration, which hides the fact that they are in truth seeing the output from this proxy all the time, not their own terminal. This emulated terminal is created with pyte.Screen, which is later populated from the Asciicast data — exactly the same data that is used to generate the SVG animation (it even uses CharacterCellLineEvent from anim.py). Once again, we will not cover the whole translation from Asciicast to the terminal — it is (once again) rather straightforward, with the hardest part being rendering the terminal cursor and line breaks properly. The most interesting point here is the mojo around the pyte-emulated terminal.
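If you have never used pyte, this is the core of what it does (a standalone illustration; termtosvg drives it through its own event classes):

import pyte

screen = pyte.Screen(80, 24)    # columns, lines -- matching the header above
stream = pyte.Stream(screen)
# Feed raw terminal output, escape codes included, into the emulator...
stream.feed('\x1b[1;31mHello \x1b[32mWorld!\x1b[0m\r\n')
# ...and read back the rendered screen state, line by line.
print(screen.display[0])        # 'Hello World!' padded to the screen width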
Now it’s time for our last part, the one we are all here for: rendering the animation. Creating SVG files is done by the svgwrite package, which is used by the anim.py script. Everything is styled using CSS, which is filled in to match the user’s configuration.
The SVG drawing is pre-generated with a specific number of rows and columns to match our terminal.
Animation frames are grouped by time. First, an SVG group for the given animation frame is created.
In the following steps, the text is rendered over the given rectangle. If the line is already present in the file’s definitions, it is changed (with an animation effect); otherwise it is added.
Every new Asciicast JSON line is inserted into the <defs> section as an SVG <g> (group) element containing <text> tags with the terminal content. Such precomputed elements are later used for the animation.
At the beginning, every single line is hidden. The whole animation is done by computing the time at which a particular group needs to be shown. Each frame is timed relative to the previous element. When all lines are converted, the SVG drawing is saved to disk and we are done! Now it can be opened in a browser, and we can see the result of our animation.
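To make the structure from the last few paragraphs concrete, here is a toy svgwrite snippet in the same spirit (element names and sizes are made up; the real anim.py also hides each group and reveals it at the right moment):

import svgwrite

dwg = svgwrite.Drawing('frames.svg', size=('656px', '325px'))
dwg.defs.add(dwg.style('text { font-family: monospace; fill: #e5e5e5; }'))

# One captured line becomes a <text> inside a <g> stored under <defs>...
frame = dwg.g(id='frame_0')
frame.add(dwg.text('$ echo "Hello World"', insert=(10, 20)))
dwg.defs.add(frame)

# ...and is referenced from the visible document tree when its time comes.
dwg.add(dwg.use('#frame_0'))
dwg.save()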
Now that we are done with the code, there is a question — where can we use termtosvg? Is SVG really a viable format that suits all our presentation needs?
Unfortunately… no. It does work natively with Reveal.js, so it is still better than Asciinema, which was problematic to run with this popular HTML presentation framework.
Unfortunately, SVG animations do not work with Keynote, and converting them to a format supported by that tool is also a problem.
The easiest way I found was… simply recording the screen with QuickTime, as every conversion tool had its own quirks. That’s definitely not what we were aiming for — we could have done that in the first place, without converting the terminal session to SVG at all.
Even sadder, Medium doesn’t support this format either — it’s funny to see that a platform hosting dozens of “how to do SVG animations” tutorials doesn’t allow authors to demo a single example due to the restrictions of its blogging tool. I couldn’t either…
To conclude: while termtosvg itself is a fascinating tool and I learned a lot from digging through it (pyte, how to use Python to fork a shell process, a bit of SVG animation, the JSON Lines format), SVG animations are a bit useless for now. They are not well supported by the ecosystem, and we will probably have to wait a few more years for them to be embraced. But if you have a good use case for an SVG animation of terminal output (and you know it will be supported), termtosvg is a great tool to use and even greater learning material.