Project Leyden, Maven dependencies, 25 hours of Java, Accento, conference calling on Linux

CodeFX Occasionally #78–30th of April 2020

Nicolai Parlog
Published in nipafx news
14 min read · May 4, 2020

Hi everyone,

I hope you and your friends and families are all healthy and safe. Beyond that, I hope that whatever degree of lockdown is enacted in your country does not threaten your livelihood. If anybody got laid off and is looking for a new (remote) job as a software developer, let me know and I will share your CV in my network. On my end, me and my family are doing ok and I feel very fortunate that our biggest problem is the logistical challenge of home-schooling our daughter.

Now, let’s turn away from the current crisis for the remainder of this newsletter. Which is looong. Most of that comes from a niche problem very few of you may encounter, though — setting up conference calling with a proper camera on Linux — and yet I hope it’s entertaining enough to be worth your time. I also cover OpenJDK’s new hot project, Leyden, copying dependencies with Maven, Java’s birthday, Accento Digital, and SDKMAN!, so there’s plenty of other stuff to get into.

I send this newsletter out some Sundays. Or other days. Sometimes not for weeks. But as an actual email. So, subscribe!

Project Leyden

Mark Reinhold just announced a new OpenJDK project called Leyden:

[…] a new Project, Leyden, whose primary goal will be to address the long-term pain points of Java’s slow startup time, slow time to peak performance, and large footprint.

Leyden will address these pain points by introducing a concept of static images to the Java Platform, and to the JDK.

* A static image is a standalone program, derived from an application, which runs that application — and no other.

* A static image is a closed world: It cannot load classes from outside the image, nor can it spin new bytecodes at run time.

[…]

Project Leyden will take inspiration from past efforts to explore this space, including the GNU Compiler for Java and the Native Image feature of GraalVM. Leyden will add static images to the Java Platform Specification, and we expect that GraalVM will evolve to implement that Specification.

His message contains a few more details and I recommend reading it. The echo on Twitter and on the mailing list was very positive and I’m really looking forward to this project!

Copy dependencies with Maven

Every now and then I need to get all dependencies of a Maven project into one folder and every time I search this newsletter’s archives because I’m sure I wrote about it once, but I can never find it. This time, I sat down and really looked through the old issues and it turns out that I actually never wrote it down. So here goes.

If you ever want to copy all transitive dependencies of a Maven project, the copy-dependencies MOJO of the dependency plugin is there for you: just run mvn dependency:copy-dependencies and, et voilà, everything's in target/dependency.

If you want to configure it, add this to your POM:
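In its simplest form, that's the dependency plugin with an execution bound to a lifecycle phase, placed in `<build><plugins>`. A sketch (the version number and the output directory are examples, not necessarily what you want):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <version>3.1.1</version>
    <executions>
        <execution>
            <id>copy-dependencies</id>
            <!-- runs on every build once `package` is reached -->
            <phase>package</phase>
            <goals>
                <goal>copy-dependencies</goal>
            </goals>
            <configuration>
                <outputDirectory>${project.build.directory}/dependencies</outputDirectory>
            </configuration>
        </execution>
    </executions>
</plugin>
```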

Executed during the build

There are a few more really useful parameters, like overwriting snapshots and including or excluding scopes, so check the MOJO documentation.
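For example, to grab only the runtime dependencies into a custom folder without touching the POM at all (parameter names as documented for the MOJO):

```shell
mvn dependency:copy-dependencies \
    -DincludeScope=runtime \
    -DoverWriteSnapshots=true \
    -DoutputDirectory=target/dependencies
```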

If you don’t want to execute the plugin on every build, yet want to configure it in the POM, putting it into <pluginManagement> does the trick. Since that's a common use case for me and I want to be able to just copy-paste my default configuration, I'll put it here:
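Something along these lines (again, version and parameters are examples, adjust to taste):

```xml
<build>
    <pluginManagement>
        <plugins>
            <!-- configures the plugin without binding it to a phase,
                 so it only runs when invoked explicitly -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <version>3.1.1</version>
                <configuration>
                    <outputDirectory>${project.build.directory}/dependencies</outputDirectory>
                    <overWriteSnapshots>true</overWriteSnapshots>
                </configuration>
            </plugin>
        </plugins>
    </pluginManagement>
</build>
```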

Executed with `mvn dependency:copy-dependencies`

Events

There are two events I’m working on that I want to share with you because I would love to see all of you there. :)

25 hours of Java

On May 23rd, Java turns 25 and to celebrate I’ll do a 25h live stream on Twitch. Yes, 25 hours of Java! There will be technical deep dives, interviews, discussions, cake, code, more cake, and even more code! Most importantly, it will be a lot of fun and I hope you join me on that fine Saturday in May.

I’ll stream from 0600 (i.e. 6am) on the 23rd to 0700 (i.e. 7am) on the 24th (UTC).

Guests

  • Brian Goetz
  • Kevlin Henney
  • Martijn Verburg
  • Trisha Gee
  • Venkat Subramaniam

Topics

  • Java 9 to 14
  • Java Next
  • Java Releases and Distributions
  • Extending JUnit 5
  • StackOverflow Questions

For more details on the guests and topics, check https://blog.codefx.org/25-hours-of-java/. I plan to publish a schedule some time next week.

Accento Digital

We moved Accento from Karlsruhe to the Intertubes and from September to July 7th. Corona obviously played a big role here, but so did climate change (more on the background) and we don’t see this remote-thing as a stopgap until society reopens, but have the long-term goal to make the virtual aspects mix with and augment physical conferences.

So Accento Digital will be more than just the video+chat experience that most online conferences offer. It will give you plenty of opportunity to communicate:

  • talks and ensuing Q&As (obviously)
  • dedicated Q&A sessions where subject-matter specialists answer your questions
  • ignite talks that present an inspiring idea or a refreshing angle and then open the floor for a discussion where everybody can participate
  • panels where two to four experts rope you into a joint conversation on a hot topic
  • three rooms dedicated to general, cross-cutting topics (think “security” or “performance”) where everything but the talks takes place; they remain open all day long, so conversations never have to end and you can join ongoing discussions and find like-minded developers whenever you want
  • a few more rooms for you to hang out in and chat about anything that comes to mind
  • pre-conference events where we get to know one another as well as the speakers (starting in early May)

We’re currently selecting talks and already confirmed the first few:

  • Eberhardt Wolff — keynote on continuous improvement
  • Kevlin Henney — keynote on forced evolution
  • Rachel Appel — talk on Azure
  • Martijn Verburg — talk on AdoptOpenJDK
  • Ron Pressler — talk on Project Loom
  • Peter Kröner — talk on the JavaScript engine
  • Grace Jansen — talk on reactive systems
  • Philipp Krenn — talk on aggregating logs

Attending the talks is free; participating in the other activities sets you back 30 €, or 15 € if you use the code “nipafxAccento42” (without quotation marks) during checkout. Either way, I hope you like the idea. If you’re interested, get your ticket at tickets.accento.dev and I’ll see you there!

YouTube Videos

The regular Twitch streams occasionally result in interesting bits and pieces that find their way to YouTube:

Streaming & Co.

I thought streaming from a Linux box to Twitch was tough — until I tried to set up proper conference calls. Let me tell you a story of kernel modules, loopback devices, OBS with self-compiled plugins, audio pipes, and dancing with browser dialogs. Will it work out?

Requirements & challenges

What I want when connecting with others via Jitsi, Hangouts, not Zoom, etc.:

  • use my mixer (the physical thing) as mic
  • use my Lumix G7 (connected to PC via Elgato Cam Link) as camera
  • share one of my two screens
  • optionally, use an OBS scene as “camera” or “shared screen”, so I have control over composing feeds

Here’s why that’s not as straightforward as you might think:

  • piping the Lumix G7’s output through the Cam Link adds about 250 ms latency, a visible delay over the audio
  • only OBS seems to be capable of properly reading the Cam Link image; all other apps (e.g. VLC and browsers) show only a quarter of the image with a massive screen-door effect and weird coloring
  • on my box (Gentoo with KDE), browsers only let me share both screens at once, i.e. 7680*2160px, or individual application windows, which sucks when tabbing between applications

This should be simple to fix — after all, what’s easier than to fiddle with Linux’ sound and video devices? (If you’re not familiar with Linux: pretty much everything else is easier.)

Solution

I tried multiple fixes for all of these and some follow-up problems and eventually arrived at a working solution:

  • use OBS to compose output as scenes
  • with a video loopback, one scene can act as webcam
  • by popping out a preview, another scene can be shared as a window
  • use Pulse Audio to create a delayed microphone

I’m not proposing this as the best or simplest solution; it’s just the first one I got working. But getting there took me way too long already, so I’m not going to spend more time optimizing it. Still, I want to document my solution because this is definitely going to break in the future and no way am I going to keep the details in mind until then.

Setting up the camera

After some initial research, this step had me worried the most because it involves a few non-trivial steps. In the end, everything worked as advertised, though. Here are the steps:

  1. install OBS (I did that already for my Twitch stream)
  2. install v4l2loopback — “a kernel module to create V4L2 loopback devices”
  3. install obs-v4l2sink — “an OBS Studio plugin that provides output capabilities to a Video4Linux2 device”

First step. The pre-built module that Gentoo’s package manager offers (version 0.12.1) couldn’t be loaded, presumably because I’m running a 5.4.x kernel, not the more common 4.19.x. I still wanted to use the command-line utility it provides (v4l2loopback-ctl), but I had to build the kernel module myself:

git clone https://github.com/umlaeute/v4l2loopback.git
cd v4l2loopback
make

That built the kernel module (file v4l2loopback.ko) that I could then install:

# see list of existing video devices
ls -lah /dev/ | grep video
# install module
make install
depmod -a
modprobe v4l2loopback devices=1 exclusive_caps=1
# check list of video devices to see new device and note its ID
ls -lah /dev/ | grep video
# make the new device the default
v4l2-ctl --device 2

What is the last command needed for? I’ll explain after the next step.

Second step. The obs-v4l2sink plugin offers no binary download, so it has to be built locally. For that, it needs a Qt 5 package and some OBS libraries. The former was already installed on my system; for the latter, I cloned OBS and checked out the tag for my version:

git clone https://github.com/obsproject/obs-studio.git
cd obs-studio/
git tag -l
git checkout 24.0.5
cd ..

Now, on to obs-v4l2sink:

git clone https://github.com/CatxFish/obs-v4l2sink.git
cd obs-v4l2sink/
mkdir build && cd build
cmake -DLIBOBS_INCLUDE_DIR="../../obs-studio/libobs" -DCMAKE_INSTALL_PREFIX=/usr ..
make -j12
make install
# the actual plugin `v4l2sink.so` ended up in the wrong folder, so I moved it
mv /usr/lib/obs-plugins/v4l2sink.so /usr/lib64/obs-plugins/

After launching OBS, the Tools menu contained V4L2 Video Output — success!

If you’re following along (you have my deepest sympathies) and haven’t done so already, now’s the time to create the scenes you would like to share with OBS. Just for the camera, that would be… well… just the camera. After starting the loopback output in the menu above (no need to start streaming or recording), you should be able to see it in other apps. I recommend using something like VLC for that (under Media ~> Open Capture Device) because the browser can give you other headaches.

That’s because I had trouble getting browsers to use the loopback video device. By default, they tried to grab my camera, which fails as OBS is already using it (which is kind of the point). This failure seems to make browsers think that there’s no valid video device at all, so I don’t get asked for permission to share one and later can’t switch to another camera. The v4l2-ctl command above should fix this by making the loopback the default video device, but that only helped some of the time (don't ask).

The only reliable way to get out of this that I found is to start a little dance with OBS and browser where the former stops using the camera, so the latter can see it, ask me for permission, and be configured to use the loopback instead, after which I can switch the camera back on in OBS.

Setting up the screen share

To share the screen, I needed a new scene with just the screen content. That’s easy enough.

The problem is that while the camera feed is being looped back, I can’t switch away from the camera scene or the V4L2 loopback would output the newly selected scene instead of the camera feed. To get around that, I switch OBS into Studio Mode, which gives me a view of the Program (the camera, used for the loopback) and a Preview (the shared screen). The preview’s context menu has an entry for Windowed Projector. This pops out a window that I can share via Firefox or Chrome.

Yay, video feeds are done! Although…

Side quest: Sharing more advanced scenes

Many conferencing tools let neither the sender nor the receiver of a camera feed and a screen share decide how to compose them. More often than not, the app does it for them, which is ok in general but suboptimal in specific situations, like when I need to work around the fact that a fixed area of my screen is invisible.

The OBS setup alleviates that and gives me as the sender full control over the composition. Instead of sending both the camera feed and the shared screen, I can use OBS to compose them any way I want (and moving something around on the fly is easy, too) and just send the combined feed:

  • create one or more suitable scenes
  • use either V4L2 loopback or the screen sharing approach to pipe one stream into the conference call
  • switch between scenes as it becomes necessary

Setting up sound

First, to answer the obvious question: If all of this is piped through OBS anyway, why not use its mixer and simply grab that sound as input for the browser? Short answer: because OBS doesn’t provide its output as a source. Long answer: see below. Second, it turns out that adding a delay to an audio input stream (remember, the video lags behind by ~250 ms) doesn’t seem to be a common use case. 😕

Instead, I had to fiddle with PulseAudio quite a bit. (NB: While not strictly necessary, Pulse Audio Volume Control, aka pavucontrol, makes this a bit easier.) What ended up working was the following:

  • create a sink for the delayed mic stream
pacmd load-module module-null-sink sink_name=DelayedMic
pacmd 'update-sink-proplist DelayedMic device.description="Delayed Mic"'
  • use pacat to pipe the original microphone stream into the new sink, but add a suitable delay:
# identify microphone source
pactl list short sources
pacat -r --latency-msec=1 -d $MIC_SOURCE | pacat -p --latency-msec=250 -d DelayedMic
  • let browser read from the delayed sink’s monitor:
    1. within the browser, select any microphone (it doesn’t matter which one)
    2. in Pulse Audio Volume Control under Recording, find your browser tab and select Monitor of Null Output (I found no way to name the monitor)

And that should be it!

By the way, to unload modules:

# identify the module's id (for null-sink append ` | grep null`)
pactl list short modules
pactl unload-module $ID

When starting a call

Some of the steps so far are permanent, some can go into a script, but a few are left to be done manually when starting a call:

  • switch on camera, launch OBS, start V4L2 video output
  • join the conference call with arbitrary mic and hope that the loopback device is detected as camera
  • if not:
    * open settings for camera source in OBS and switch it away from actual camera (e.g. to loopback)
    * reload browser tab — actual camera should now be detected
    * in the conference site or browser settings, look for a setting to switch cameras and select loopback
    * go back to OBS and switch the camera source back to the actual camera
  • in Pulse Audio, switch the browser’s recording stream to the delayed mic stream’s monitor
  • to share screen:
    * switch OBS to studio mode, select the screen-sharing scene, pop out the preview into its own window
    * in the browser, share that window
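The scriptable part of this checklist could look something like the sketch below. Device parameters, source names, and the 250 ms delay are the ones from above and will differ on your machine; the mic source name in particular is a placeholder:

```shell
#!/bin/bash
# Rough sketch of the scriptable setup steps from this post.

# create the loopback video device (requires the self-built kernel module)
sudo modprobe v4l2loopback devices=1 exclusive_caps=1

# create the sink for the delayed mic stream
pacmd load-module module-null-sink sink_name=DelayedMic
pacmd 'update-sink-proplist DelayedMic device.description="Delayed Mic"'

# pipe the real mic into it, delayed to match the video
# (find your mic's name with `pactl list short sources`)
MIC_SOURCE=alsa_input.your_mic_here   # placeholder: replace with your source
pacat -r --latency-msec=1 -d "$MIC_SOURCE" \
    | pacat -p --latency-msec=250 -d DelayedMic &
```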

Yes, that’s all. Easy as pie. cough (not a dry cough!)

What would have been nice

While the entire setup appears quite complex, I think this is mostly incidental. In the end, I built a pipeline that feeds inputs into OBS and the output back to the browser. That is actually quite straightforward and having OBS in there is very powerful for composing scenes. That part, I’m pretty happy with. The downsides are that this is non-trivial to set up, will surely prove to be somewhat fragile, and (most annoyingly) requires special treatment for sound.

Instead of reproducing the input delay with the PulseAudio streams, it would have been nice to reuse OBS’ configuration. An added benefit would be that I could easily mix audio streams right where I’m used to. I almost got there, too:

  • create a sink for the OBS output:
pacmd load-module module-null-sink sink_name=OBS-Output-As-Sink
pacmd 'update-sink-proplist OBS-Output-As-Sink device.description="OBS Output (Sink)"'
  • pipe OBS output into the sink:
    * in OBS’ audio settings, configure the sink as monitoring device
    * in OBS’s advanced audio properties under Audio Monitoring, set Monitor and Output for all devices
  • use PulseAudio to configure the browser to record the sink’s monitor, as above

It looks like I’m done, but unfortunately it turns out that the sync offset is not applied to the monitor. But can I build on this with a pacat trick similar to the one above?

  • create a source for the OBS output:
pacmd load-module module-null-source source_name=OBS-Output-As-Source
pacmd 'update-source-proplist OBS-Output-As-Source device.description="OBS Output (Source)"'
  • use pacat to loop sink back into source ~> couldn't make this work
  • pick this source in browser

That last point would have been a really nice add-on! There would’ve been no need to fiddle with PulseAudio to select the right audio stream for the call because browsers and other apps make it easy to select a source. Unfortunately, I just found no way to pipe the sink’s stream into the source. Maybe you do?
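One avenue I haven’t tried: PulseAudio’s module-remap-source can expose a sink’s monitor as a regular source, which apps then list like any microphone. This is an untested sketch based on the documented module parameters, and I don’t know whether the sync offset problem would bite here as well:

```shell
# untested: expose the sink's monitor as a proper source
pacmd load-module module-remap-source \
    master=OBS-Output-As-Sink.monitor \
    source_name=OBS-Output-As-Source
```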

Project Showcase: SDKMAN!

Since I started experimenting with Java 9, I’ve been fiddling with command line scripts to download, install, and select JDKs and researched different ways to get tools to work with them. For a while I used jenv, but that didn’t work too well and at some point stopped working entirely, so I went back to scripts. What a waste of time!

A few months ago I saw someone use SDKMAN! and it’s so cool! It just works.

  • Want to see which JDKs are available? sdk list java
  • Want to install the recent AdoptOpenJDK 11 with HotSpot (as opposed to J9)? sdk install java 11.0.7.hs
  • Or prefer Graal? sdk install java 20.0.0.r11-grl
  • Want to pick which Java version to use in the open terminal? sdk use java 11.0.7.hs

And it doesn’t just work with Java, either. Kotlin, Groovy, Scala, Maven, Gradle, sbt, Micronaut, and many more. It’s always the same and it just works (I feel like I mentioned that already).

In short, go get SDKMAN!.

PS: Don’t forget to subscribe or recommend! :)


Nicolai is a #Java enthusiast with a passion for learning and sharing — in posts & books; in videos & streams; at conferences & in courses. https://nipafx.dev