Studio Tour (no Java)
nipafx news #90 — 6th of August 2021
I occasionally get asked about my setup for coding, gaming, recording, streaming, or writing and since it’s all the same hardware, I thought I’d kill 🦤🦤🦤🦤🦤 with one stone and let you know everything about it. In a similar vein, a few people have been curious about my workflows for the same things (well, not gaming — nobody watches me playing a game and wonders “how does he do that?!”) and so I decided to fold that in as well. Buckle up, this is a long one with gory details on everything from GPU to OBS setup, from microphones to Pandoc, and from camera lenses to video asset creation.
I took this opportunity to create a full snapshot of all the gear. This seems to be a good point in time for that because after buying a bit more in recent months, it’s gonna be stable for some time to come. Well, maybe; I already have a few more ideas… After going back and forth on it, I decided to include the prices. This is not to show off (most gear is solid mid-level consumer tech) but (a) it should give you an impression of what to expect if you’re interested in such gear and (b) it fulfills my incessant need to bookkeep. Instead of looking up current prices, I put down what I paid, so some of that will of course be way cheaper by now.
Table of contents if you want to skip ahead:
- Office, PC, and other generalities
* Black Circus
* General workflows
(Warning: There’s almost no Java in this issue.)
I send this newsletter out some Sundays. Or other days. Sometimes not for weeks. But as an actual email. So, subscribe!
Office, PC, and other generalities
Let’s start with the basics — where do I plant my bottoms and what do I sit in front of?
Pretty small room: 2m by 3.40m (for you Imperial folks out there, that’s 13.68 roofer’s knees by 52 farmer’s toes) with a wallpapered drywall wall (I swear, I needed all the “wall”s) and a cheap-wood ceiling that’s pitched for about half the room’s length. The desk, gear, and I take up about 2m x 2m, with cabinets lining the walls out of the camera’s sight. The remaining space is occasionally occupied by a clotheshorse (had to look that one up), but it also (barely) fits a foldable bed, which came in handy when I had to spend a week of COVID quarantine in here in December (mostly playing Cyberpunk 2077, so it was a good time, actually).
I sit in a nondescript office chair that my dad liberated from the insurance company he used to work for over a decade ago. Well, I thought it was nondescript but just found out that it’s actually a brand product, a Klöber Duera due98. Not bad, dad, thank you!
The connection to the world out there is established via Vodafone. I have 1 Gbit/s down and 50 Mbit/s up, but my network cards and router don’t like one another, so I only get 100 Mbit Ethernet. Gbit Ethernet works for my laptop on the same cable/router and even for the same PC/cable with the laptop on the other end — I eventually gave up troubleshooting because the difference only ever matters when downloading games.
The room is great for an office, but only OK for a studio. It has no depth (the wall is 80cm behind me), which makes it look boring on camera. It also keeps me from putting stuff on the walls that breaks up echoes, so even with some sound dampening (see below), the audio isn’t great. The next step to reduce echo is probably to put more material out of sight (further above the screens, on the ceiling, on the door), but that’s probably ugly (and/or expensive) and I have to be in this room for about 50 hours per week, so I want to keep it somewhat pleasant.
I’m really looking forward to a new office, but I can’t just rent one because then I have to put my PC there and how am I supposed to enjoy my free time then?! Speaking of the PC…
Because of my preference for black and because I’m too lazy to turn off all the LEDs (they occasionally turn back on on their own), I call it the Black Circus. It’s the best of late 2019:
- tower: Dark Base Pro 900 rev. 2 — 225 EUR
- PSU: Seasonic Focus PX Plus Platinum 750W — 118 EUR
- mainboard: Asus ROG Strix X570-E Gaming — 320 EUR
- CPU: Ryzen 9 3900X (3950X wasn’t available yet) — 549 EUR
- cooler: Noctua NH-D15 chromax 140mm, dual tower — 100 EUR
- RAM: G.Skill Trident Z b/w 2 x 16GB, DDR4-3600 — 157 EUR
- GPU: MSI GeForce RTX 2080 Ti Gaming X Trio — 1'200 EUR
- SSD: 2x Samsung SSD 970 PRO M.2 1 TB — 197 EUR + 207 EUR (bought the second one a few days later)
- HD: WD Red 10TB — 308 EUR
- mouse: Logitech G903 Hero Lightspeed — 109 EUR
- keyboard: Ducky Shine 7 Blackout Edition with Cherry MX Blue (no further customization) — 203 EUR
- controller: Xbox One Controller (Model 1708, black) with charge kit — 52 EUR + 22 EUR
- screens:
* left: Acer Predator XB3 (27", UHD, 144Hz, IPS) — 905 EUR
* right: Acer Predator XB1 (27", UHD, 60Hz, IPS) — 569 EUR
- speakers: Logitech Z523 — yellowed by time (that’s what you get for buying white!), so price unknown
- printer: HP Officejet Pro 8620 (cause I’m a pro!) — 189 EUR
- power strips: 2x Brennenstuhl Eco-Line Comfort Switch Plus, Socket Strip 6-Fold — 34 EUR
Why would I mention the power strips?! For one, because I was asked, but also because I feel pretty clever about getting them. 😁 The PC and the recording gear (see further down) used to be connected to one strip each, which I could turn off by crawling under the desk. But that’s uncomfortable. But the mixer’s A/C adapter always eats energy (it’s always warm), so I want to turn it off. But the printer is on the same strip and I need it every now and then, so I need to turn it on. I’m too old for that shit, so a few weeks ago I looked for power strips whose on/off switch wasn’t directly on the strip itself.
I found the one above, which not only has four sockets connected to the switch, but also two permanent-on ones. The left strip has the printer on permanent and recording gear switchable, the right one the tower on permanent (for sleep mode) and everything else switchable. The switches are two large blue plastic thingies that you operate by foot — I call them my grandpa switches 👴🏾 and they work like a charm. Recommend!
Originally, I considered getting monitor arms to keep the desk clean and free, but I have almost no space left behind the screens and exactly zero behind the desk, which doesn’t go well with many arms. So I didn’t get them, but I still kind of want to. I may look into it again in the future, certainly if that new office ever happens.
Other things I don’t need, but have a strong impulse to buy are a newer Ryzen and an Nvidia 3080. I could also look into another 32 gigs of RAM, but honestly, I think I never ran into a limitation there. If I keep streaming, I need another 10 TB HDD, though. 😆
I have a Lenovo Thinkpad T440p (1'900 EUR), which was at my side from summer 2014 to winter 2019/2020. As my only computer it was a workhorse for development work, a gaming station, a multimedia player, my diary and travel companion for thousands of kilometers, and compiler of most of its own software (because Gentoo). I’m not kidding you when I say that during those years, there were just a few dozen days in total when I wasn’t using it at least a bit.
In said winter, Black Circus (replaced it at home) and COVID (no more not “at home”) conspired to make it nigh obsolete — now it functions as a YouTube player connected to our TV. But it’s still going strong (except the battery, a few keys, and a bit of display lightbleed) and I’m sure I’ll have a lump in my throat when I eventually retire it. That should happen later this year — I’m in the market for a Thinkpad T14s Gen 2 with Ryzen 5 for when conferences start up again and I’ll be traveling once more.
On the main PC, one SSD runs Windows 10 for gaming, the other Gentoo Linux with KDE for everything else. Main programs:
- browsers:
* for browsing: Firefox
* for development: Chromium
* etc.: Chrome (for multimedia sites like Jitsi and as fallback)
- email, calendar, etc.: Kontact
- messenger: Signal (very rarely Telegram, fuck WhatsApp)
- file browser: Krusader
- office suite: LibreOffice, but as little as possible
- text editor: VS Code in UI, nano on terminal (plus the one vi instance that I can’t close)
- terminal: Bash in Konsole
- image editor: GIMP, KolourPaint (don’t judge me!)
First a word on the screens: Having two 4K panels side by side is the best thing since sliced bread! I use almost no scaling, so all pixels count. In most situations, I split the screen into four vertical slices with 1920x2160px (8:9) each. From left to right this is usually:
- communicate: email, messenger, sometimes a popped out YouTube or Twitch tab
- read: first and foremost the browser, sometimes a document I read from
- write: IDE, text editor, sometimes a table or document I write
- monitor: always a terminal, preview of what I’m working on if available (e.g. in a browser)
Of course I’ll shuffle things around if I have to. I’ve set up Meta + NumPad to make windows fill left or right half, top or bottom half, or one of the four corners.
Being able to look at four things at the same time and have them twice as tall as usual (compared to Full HD) is huge — literally! I wholeheartedly recommend this setup.
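In case you want to check the numbers: two UHD panels side by side are 7680px wide, and a quarter of that is the 8:9 slice mentioned above.

```shell
# two UHD screens side by side, cut into four vertical slices
echo "$((2 * 3840 / 4))x2160 per slice"   # 1920x2160, which reduces to 8:9
```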
I don’t like taskbars because most of the time most of the information on them is ignored and they waste screen space. I launch programs with KRunner (hot-keyed to Alt + Space), switch with Alt + Tab, and have everything else (like the clock and connected devices) in widgets on the desktop that I can see with Meta + D. I have two virtual desktops set up, laid out vertically. The top one is the default where I spend most of my time with the programs open as mentioned above. When I do something that requires more screen space or separation (like streaming, video editing, or giving a talk), I use the lower desktop for that.
Overall, my workflows are a mixture of (1) doing a thing, (2) distracting myself with Twitter, news, YouTube, or random chores, and (3) chiding myself for not working, which has a 50/50 chance to land me in either (1) or (2). It was always a mess, and working from home for half a decade didn’t exactly improve it, but in recent years I’ve gotten better at not constantly beating myself up about it.
If I’m a bit tired, a pot of black tea helps a lot (and I like the feeling of a warm cup in my hand), but I rarely do that more than once or twice a week to keep me from building a tolerance. As for music, I can’t listen to what I really like (Rammstein, Metallica, Der W, etc.) because lyrics are too distracting and so I listen to electronic music — I really like the progressive mixes by Miss Monique.
Tons more gear is involved in streaming:
* camera: Lumix G7 with stock lens Lumix G Vario 12-60mm, F3.5-5.6 — 649 EUR
* A/C adapter: original from Lumix (scared of a fried camera mainboard) — 98 EUR
* HDMI-to-USB: Elgato Cam Link 4K — 130 EUR
* tripod: Manfrotto MKBFRA4-BH Befree — 131 EUR
* key light: Elgato Key Light — 195 EUR
* back light: Elgato Ring Light — 195 EUR
* lightstrip: Elgato Light Strip — 60 EUR
* floodlights: MustWin RGB Spotlight — 60 EUR
* microphone: Shure SM7B — 354 EUR
* microphone arm: Neuma B07JVSLDYW (something cheap while I evaluate better options, possibly Elgato Wave Mic Arm) — 18 EUR
* XLR-cable: Mogami 2534 Quad Professionel — 39 EUR
* preamp: TritonAudio FetHead — 69 EUR
* audio interface (aka mixer): Behringer Q802USB Xenyx — 65 EUR
* sound dampening: Elgato Wave Panels (6x) — 120 EUR
* two palm trees (I need to switch them every now and then because it’s so dark in that corner) — forgot the price
* a bunch of Displates (L, matte, no frame) — 60–80 EUR a piece (depends on discounts)
The lighting is a classic four-point setup with the Key Light serving as key light (~30° offset to the right and a bit up), the Ring Light and one Spotlight as colored fill light (~90° to the left and a bit up), the Light Strip as back light (just so), and the second Spotlight as background light.
The audio setup is pretty straightforward: Mic to XLR cable to mixer, mixer to headphones, and also mixer to PC via USB, where it shows up as a regular sound device (no software needed, which is great because software like Elgato’s rarely ships for Linux). You may notice that the preamp is missing. I’m currently experimenting without it to see whether it’s actually necessary (I feel that it isn’t).
The mic is still very new and the sound is raw — nothing is configured in the mic itself (the Shure has a presence boost and bass roll-off — both are at factory settings), in the mixer (the Behringer has the classic low, middle, high knobs — all at zero), or in software (e.g. a software equalizer). Speaking of software, I use PulseAudio but am considering switching to JACK for more control over the input.
The Elgato Wave Panels got rid of the worst of the echo in this room. I have three above the screens because that’s the direction I’m yelling in. The remaining three are on the rear wall, but I have to be careful not to put them directly behind me, or my black T-shirts, my dark hair, and the panels merge into a sea of black that looks bad on camera.
Displates are essentially metal posters. They look really good, but more importantly for me, they’re attached to the wall with magnets, which makes it very easy to exchange them for more variety across videos and streams. (Yes, they’re for you — I don’t see it, it’s behind me! 😉)
I want to point out that you can get a stream that looks and sounds as good (or bad) as mine or even better with cheaper gear. It’s essentially a tradeoff between time, skill, and money: The more time you’re able and willing to invest and the more skilled you are, the more you can get out of any given piece of hardware. I’d rather spend that time on Java, though, and let’s not dwell on my recording skills (particularly audio), so I went with more expensive gear that delivers good results out of the box. Of course, more time and skill will make that stuff even better and I’m striving to do that in the coming months.
I’m also very happy with not having bought the cheapest stuff — so far I haven’t had to replace anything because it failed and the only upgrade I sprung for was the mic (old microphone in the section on recording).
For streaming, I use OBS. I have a dedicated profile for streaming and should be using the NVENC settings, but I had trouble with them in the past, so I stuck to x264 encoding. I just changed it back to NVENC to give it another try. I use CBR for rate control and usually set the bitrate to 5'000 Kbps, but Vodafone occasionally drops the ball and my upload speed tanks (after one particularly troubling stream, I did a speed test and came away with 3 Mbit/s up), in which case I lower the bitrate. Thankfully, that works while streaming.
NVIDIA NVENC OBS Guide
The objective of this guide is to help you understand how to use the NVIDIA encoder, NVENC, in OBS. We have simplified…
I configured OBS to automatically start a recording when I start a stream. I just noticed that I recorded with CBR as well, which isn’t great: while good for live streaming, it has a worse quality/size ratio than variable bitrate control. I had it set to 20'000 Kbps, which results in about 9 GB/h, and just now switched to CQP with a CQ level of 15 — let’s see where that leaves me (the resulting size depends on screen activity).
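If you want to sanity-check the 9 GB/h figure yourself, bitrate over time is just arithmetic (decimal units, integer division, ignoring container overhead):

```shell
# 20'000 kbit/s → kB/s → kB per hour → GB per hour
kbps=20000
gb_per_hour=$((kbps / 8 * 3600 / 1000 / 1000))
echo "$gb_per_hour"   # prints 9
```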
I record multiple audio tracks (configured under Settings ~> Output (advanced mode) ~> Recording ~> Audio Track) with all outputs on track 1 and then additionally each output on its own track (configured under Audio Mixer ~> right click or cogwheel ~> Advanced Audio Properties), so I can isolate audio if I have to.
My screen resolution is 3840x2160, so that’s also my canvas resolution. In the past, I downscaled the output to 1920x1080 right there (under Settings ~> Video), but then the recordings are also only in Full HD. Alternatively, you can downscale while encoding (under Settings ~> Output (advanced mode) ~> Streaming ~> Rescale Output), but with x264 that lowered performance to the point where I dropped frames. I just gave it a try with NVENC and it seems to work fine, so from now on I will record in 4K — neat.
When I stream, I share the entire left screen. To make everything readable, I use presentation mode in IntelliJ IDEA with font size 48, zoom in to ~181x50 in Konsole, and use the Firefox plugin Fixed Zoom to change the zoom for all websites to 250%.
The right screen is divided between Twitch’s Stream Manager (mainly showing the activity feed, quick actions, and chat), OBS, a VS Code instance for the notes, and a Chrome instance if I have guests.
(The entire setup is similar when I give remote talks.)
Speaking of guests, if you have any, use vdo.ninja. It takes a bit of getting used to, but at its core it’s a video conferencing tool that gives you one URL per participant. Throw that URL into an OBS browser source and you can use OBS to position the feed and manage audio instead of having to leave that to Jitsi, Zoom, Teams, etc. It works really great!
Bring live video from your smartphone, computer, or friends directly into your Studio. 100% free.
I have a dedicated scene collection for streaming with scenes from solo up to two guests, focusing either on the screen, a guest, or the camera. The basic layout emerged from the axioms that I don’t like green screens and don’t want to put the camera feed over the screen share, so I never block anything. By default, that means two 16:9 feeds side by side, which means there’s gonna be some leftover space. That’s why I have a title at the top and the notes in the lower right corner.
Speaking of the notes, they used to be pulled from a text file, but then I didn’t have any formatting and had to do line breaks myself, which sucked. I recently upgraded to:
- edit a Markdown file
- observe the file system with entr and on each file change, trigger Pandoc
- Pandoc turns the Markdown file into a standalone HTML document
- use that HTML document in an OBS browser source
Simple, right? 🤣 Well, it uses tools I use a lot anyway, so I’m familiar with them, but this may not be everybody’s cup of tea.
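In shell terms, the watch-and-render steps boil down to a one-liner (file names here are placeholders):

```shell
# whenever notes.md changes, re-render it;
# -s/--standalone makes Pandoc emit a complete HTML document
echo notes.md | entr pandoc -s notes.md -o notes.html
```

entr reads the list of files to watch from stdin and runs the given command on every change, which is exactly the glue this setup needs.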
The overall layout (the charcoal background with the green frames and tabs) was created by pixel pushing in KolourPaint (see, told you not to judge me). Because the tabs contain text that changes with the layout, it’s not just one file but many, and that makes it pretty laborious to change anything, whether it’s the color or a position. I’ve recently started recreating such frames as HTML and simply screenshotting the result, which makes it much more flexible — I will do the same here very soon.
I’ve often thought about upgrading the camera or the lens to get a shallower depth of field, which would put the background out of focus — a visually pleasing, high-quality look. I think it’s a lost cause in this room, though. I’d probably need to pay four figures for a lens with a sufficiently narrow depth of field to make an impact in my very shallow room and I don’t think that’s worth it. Maybe if that new office happens some day… I’ve also occasionally considered getting another camera (probably a good webcam) for a second angle, but I’m not sure where I’d point it. Probably another thing for that new office.
Two areas that I want to explore in the future are more interactivity and a different layout. Now that I finally have a browser source, I can do fancy things like on-screen notifications for follows or subscriptions. Or wacky things like changing the accent color (currently neon green) to something chosen in Twitch chat. Maybe we can even integrate the LED floodlights into the show!
As for the layout, I’d like to reduce the dead space that I fill with the header (not needed — Twitch UI can contain the same info) and notes (only needed occasionally). Since I’m a vertical being (although a bit less so since I stopped working out), I think I can get away with a vertical camera angle, maybe 5:9 (that’s roughly 16:9 in Portrait mode). I could then spend the remaining 11:9 on the shared screen and have zero dead space.
That effectively prevents me from ever having two programs side by side on stream, but that rarely happens anyway. The challenge here is that I need a simple way to tile the desktop in that manner. Without being able to resize a program into an 11:9 slice (i.e. 68.75% of the screen width) with the press of a button, this would be unmanageable. (And I don’t want to do anything permanent like changing the screen resolution because that messes everything up.) I’m sure I can get this to work, I just haven’t looked into it very closely yet.
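The math behind that percentage, for the record: an 11:9 slice of a 16:9 screen covers 11/16 of its width. The xdotool line below is just an untested sketch of how the one-key resize could work under X11, not something I actually use.

```shell
# width of an 11:9 slice on a 3840x2160 screen
width=$((3840 * 11 / 16))
echo "$width"   # 2640, i.e. 68.75% of 3840
# one possible way to apply it under X11 (sketch):
#   xdotool getactivewindow windowmove 0 0 windowsize $width 2160
```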
I’ll do all of that in the coming winter.
When recording videos, I usually sit in the exact same spot as when streaming, but on a stool instead of the office chair because then I sit straighter and don’t have a dark backrest behind me that looks a bit stupid. I use the same hardware plus:
- teleprompter: Desview T2 — 130 EUR
In the before times, I would occasionally record in other places as well — at conferences, for example. For that, I have a few things I can take on the road with me:
- microphones: Movo LV4 Dual-XLR Lavalier Interview Kit (until very recently, I used the cardioid one from the set as main mic) — 85 EUR
- audio interface/recorder: Tascam DR-40 — 169 EUR
- camera batteries: LOOkit Travel Charger + 2x 1050mAh batteries — 39 EUR
I occasionally consider launching a podcast (because, of course I do) and the gear I listed under Streaming should be pretty good for that. As I mentioned, you can probably get a very similar result to what I would achieve with cheaper hardware, though. This is particularly true because there’s no visual component, so you can improve the sound with quite obtrusive methods like a microphone isolation shield or a handcrafted vocal booth. In fact, because of the audio issues in this room that I explained earlier, I’d probably do one of those things as well if I ever start an audio-only format.
For recent videos, particularly the Inside Java Newscasts, I started with writing a script that I would then read from the teleprompter. Initially, I didn’t like the idea very much, but it has made a huge difference. It makes the language more precise (because I don’t make it up on the spot), the delivery more fluent (no speech buffer underruns), and the recording faster (fewer tries to get something right). The script also doubles as input to YouTube’s subtitle generation, which makes the subtitles much better than the auto-generated ones.
For details on writing the script, I’m gonna refer you to the Writing section further down — it’s essentially the same process. One addition is that when reviewing, I read the text out loud — not just mumbling it to myself, but speaking aloud as if I were presenting or recording (yes, it’s weird). That keeps sentences shorter and the text more colloquial.
For recording, I use OBS. I have a separate profile for it that (now) uses NVENC with CQP at level 10 (before: CBR with 50'000 Kbps). Crucially, there is no streaming information in this profile, so I can’t accidentally live-stream my failed takes. 😄 Depending on whether I want to record 4K@30fps or 2K@60fps, horizontal or vertical (for the YouTube Shorts I did a while back), I change the profile settings and then pick the respective scene collection (so there are four of them). These scenes are always very simple: one is just the camera, the other just the screen.
With the entire recording machinery right in front of me, I hit Start Recording and Stop Recording like a madman. Every take is its own file that gets immediately reviewed and, when it’s good, moved into the correct folder (I delete all the outtakes once the editing is done). This saves disk space, but more importantly, I don’t need to go through two hours of material to find the good takes — big time saver!
Then I drop everything into Kdenlive, KDE’s native video editor, and start editing. The editor occupies the right screen while everything else I need crowds together on the left, usually split into four quarters. While working on the Newscast, I started to adopt an iterative approach, where I go over the entire video several times to improve it:
- alpha version: all the good takes with acceptable cuts and title slides
- beta version: cleaned up cuts and added all necessary assets
- release candidate: additional cuts, pans, etc. and optional assets as well as audio levels
If I were good at this, the last version would include things like color correction or cleaning up the audio.
One interesting aspect is the assets, meaning the code, websites, frames around embedded videos, etc. that you see during those videos. I started putting them together in Kdenlive, but that hurts so much. Not only is it a lot of manual work for each piece (background, border line, text on top, possibly colored), that work also makes me hesitant to change anything (maybe borders 1px wider?) because that means a lot of manual rework. I quickly went a different route:
- a CSS sheet for all styling info
- create assets in HTML, one file per video
- show HTML in browser and screenshot all assets
- use images as they are during editing — all that’s needed is to adjust their position
This works really well: changing something like the font size consistently across the video just requires a CSS change, new screenshots, and potentially updating the asset sizes in Kdenlive. Now, the screenshotting part sounds cumbersome, and when I did it by hand (which works pretty well in Firefox), it was less work than fumbling in the video editor, but still enough work to make me not want to change things too much.
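For what it’s worth, recent Firefox versions can also take that screenshot headlessly from the command line (file names and path here are placeholders, not my actual setup):

```shell
# render the asset page off-screen and write a PNG of the viewport
firefox --headless --screenshot asset.png --window-size=1920,1080 file:///path/to/assets.html
```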
I complained about this on Discord and a few days later (if that long), selckin came up with a great solution: Chrome and Chromium can be launched with an open remote-debugging port that can be used to control their screenshot capabilities, and he wrote a Java program that does just that, called chromeshot. When I run it, it…
- connects to Chromium that shows the asset HTML
- identifies all assets (marked by a certain attribute)
- screenshots each one individually and writes them to files
* the target folder comes from a command-line argument
* the file name comes from each asset’s identifier
Kdenlive picks up the file change and updates the video timeline (in about 1% of the cases, it crashes, but it has excellent error recovery so nothing ever gets lost — yay). Neat, right?
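If you want to build something like chromeshot yourself, the entry point is Chromium’s DevTools protocol. A minimal sketch (port number and file path are arbitrary placeholders; the actual per-element screenshots need a CDP client on top of this):

```shell
# expose the DevTools protocol on a port and open the asset file
chromium --remote-debugging-port=9222 file:///path/to/assets.html &
# list the open tabs with their WebSocket endpoints; a CDP client can then
# talk to a tab over that socket, e.g. to call Page.captureScreenshot
curl --silent http://localhost:9222/json
```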
If you’re not already facepalming at these layers of complexity (my workflow is like an ogre!), I’ve got one more: What about syntax highlighting for code snippets? Simple, remember the video’s script? It contains the code snippets as well (usually first written in an IDE to make sure they work) and since I publish it on my blog, I write it as an article anyway.
So I use the blog setup to render the Markdown as HTML in the blog’s preview, during which PrismJS gets hold of the code and adds a bunch of CSS classes like punctuation. Now I copy that HTML into the asset file I mentioned earlier, my CSS applies the styling I want, and chromeshot takes it from there.
So just for fun (and to drive you deeper into that double-handed facepalm), here’s the full workflow for making a code change:
- change code in IDE to make sure it works
- copy into video script (this triggers a render of the blog preview)
- in dev tools, pick out the HTML and copy it into the asset HTML file (the browser observes the file and auto-refreshes)
- run chromeshot (this replaces existing files, which Kdenlive will pick up)
- check video editor to see whether the change looks good
Is that faster than just updating text in the video editor? Except for the simplest of changes, absolutely! And it makes sure the code works, the script is up to date, and the code is correctly highlighted. Worth it!
By the way, if you haven’t kept count that means I need an IDE to write the code, a text editor with the video script to paste it in, a browser that I can steal the code as HTML from, another text editor with the asset HTML file, a Chromium instance that shows that file, either a command line or an IDE to run chromeshot, and a file browser to copy new assets into the video editor — that’s seven windows just for that.
Great, now I’m facepalming…
I dream of a solution to that asset ordeal that cuts some steps short. Maybe a pure JS solution that copies code out of a .java file into a Markdown code block, runs PrismJS to create highlighted HTML, and then uses a JS testing framework to take the screenshot. If the blog could be adapted to also pull in sources from .java files, that would be pretty nifty.
But what occupies my thoughts more than the assets is Kdenlive. I love open source software and stick with it for a long time after I should probably have abandoned it for something better, and now I’m wondering whether that’s happening here:
- I’m coming up against the first things it just plain can’t do (e.g. decent audio editing, customizable transition curves, speed ramps).
- Its GPU support is still lacking, particularly during playback, where I need it most. (While researching this statement to make sure it’s still up to date, I found out that it can render on GPU now. Tried it and it turned out to be a bit slower than 12 threads with x264. So presumably the GPU wouldn’t speed up the playback either, right? Weird.)
- It lacks a number of features that would make editing more efficient (like configuring multiple effects at once) and I have the lingering suspicion that other tools might have those.
There are some alternatives, but I dread looking into them because it would take time (and money) to get to know them well enough to understand their strengths and weaknesses, time during which every single video takes much longer…
On the hardware side, it would be handy to have a camera (and capture card) that does 4K@60fps (both the Lumix and the Cam Link do 4K@30fps or 2K@60fps). Then I could always record in that setting and decide in editing what to choose for the final video. Another cool thing would be a third monitor, a wide one (20:9?) above the other two, just for Kdenlive — then all the other windows could share the original two screens without overlap. That would be massive, though, so I’m not sure it’s a good idea.
- IDE:
* for Java and larger web projects: IntelliJ IDEA Ultimate (EA builds) with JetBrains Toolbox
* for simple web projects: VS Code
- Java: latest Oracle OpenJDK from jdk.java.net with SDKMAN
- build tool: usually Maven (out of habit)
- testing: JUnit 5, AssertJ, Mockito
I don’t do a whole lot of “regular” software development these days — most of my time coding is on JUnit Pioneer (Java 8), various little websites (Gatsby, which is React, plus Spring Boot if it has a backend), and a lot of experiments to understand new Java features. Accordingly, my workflow is nothing to write home about, although if the chance presents itself, I try to TDD the work on Pioneer.
JUnit 5 extension pack, pushing the frontiers on Jupiter. Released on GitHub and Maven Central under org.junit-pioneer…
Large parts of my coding happen on stream, which is good because it keeps me focused: Even I don’t have the gall to hang out on Twitter instead of coding while I’m streaming. That keeps those pieces of work relatively distraction-free. Even without the stream, staying focused on coding is easier than for any other activity.
I’d love to have a bunch of smart tips for you that you can use to improve your writing. Become a 10x writer with these five simple tricks! Alas, I don’t. Writing is often fun, sometimes easy, but just as often it’s fucking hard. Particularly if the project is large and the editor empty.
One thing I can recommend for those specific situations is to start wherever you want: in the middle, with the end, in a tangent — doesn’t matter. As long as it gets you writing, it counts. Yes, you might have to rework it, you might even have to throw it away, but I can guarantee that 15 minutes of writing will always be a net positive over 5 minutes of staring at the blinking editor followed by 10 minutes of doing the laundry. Generally speaking, don’t start at the beginning. So much rides on the first sentences, they’re always the hardest, particularly if you don’t know the rest of the text yet.
One more thing: Try your hardest to split writing and editing into two phases. These are very different processes: One is creative and you want to get the juices flowing without inhibition — the other is critical and you want to find flaws, so it’s limiting in nature. Ideally, you’d write an entire text and then edit it, but I rarely manage to do that. Instead, I try to at least write out an entire paragraph or two before going back and changing all the little things until it’s good. Generally speaking, the longer I can keep that little debugging monkey chained up below the desk, the faster (and sometimes better) I can formulate my thoughts.
I think that’s it, those are my two simple tricks, one of them is situational, the other I barely do myself. 😆 Let’s discuss something simpler.
I write in VS Code, a lot of Markdown (e.g. my blog and this newsletter) and a bit of AsciiDoc (e.g. slides and Pioneer documentation). In such formats, I write one sentence per line. That’s great for editing when you’re searching for something or want to move things around. This newsletter is the exception because I send the Markdown version as text but don’t want to annoy you with line breaks in your paragraphs. (Side note: I consider <br/> a red flag.)
Once I have most of the content in place and start editing, I open a preview — either the tab in VS Code if it’s a standalone file (like this one here) or the browser if it’s embedded in a site/app that I can launch a preview version of. I prefer proofreading the preview — I think the (slight) change in perspective from source to rendered text creates enough unfamiliarity to make it easier to spot problems. Hey, that’s a third trick! :)
Nothing much to add — you got all the hardware specs up there. Here’s the list of games I played this year for more than a few hours in no particular order:
- The Ascent: very entertaining and glorious to look at, but feels like it left the bit of extra kick on the table that would’ve made it great instead of just very good
- Titanfall 2: has a really good campaign but it only takes about 8h, so get it on a sale (it’s aimed at multiplayer, which I’ve heard good things about but I’m too old for… you know)
- Ghostrunner: very stylish and challenging (for me); in fact, I think I’m stuck at the moment, which is why I haven’t played in a few weeks
- Stellaris: simply a classic; if you like 4X and SciFi, but don’t play Stellaris, you’re doing it wrong
- Hades: just kicks ass; highly recommended!
- Planet Zoo: good times with my daughter, except when I start to joke that I’ll load the game when she’s asleep to find out what happens when you put the tigers in with the gazelles and giraffes
- Slay the Spire: nice game to fall back to on a lazy evening
- Cyberpunk 2077: 100%ed it and had fun; don’t @ me
- Papers, Please: thought-provoking and highly recommended; I played with a friend, which made it more intense and interesting to reflect on
Wow, this got so ridiculously long! I spent most of the week on this newsletter and, frankly, this has probably been more for my benefit than anybody else’s. I ended up improving my OBS settings and scenes, cleaned my tower and reinstalled the PSU shroud, realized that I’m missing an invoice (good thing I haven’t done my taxes yet), clarified my thoughts on various improvements, and realized just how much crap I’ve amassed — the things you own end up owning you.
But I still hope that you could take a thing or two away as well. If you have any questions about anything I mentioned here, ask me — I’ll be glad to help you get started or improve your studio and content.