WSL 2: Why you should use real Linux instead
I tried my best to use it, got more problems than solutions
Web development on Windows is not something just a couple of people do. Some software, like Adobe Photoshop or Affinity Designer, and most games, are simply not available for Linux. Mix that with people who have no money to spend on a Mac, nor the time to dive deep into Linux, and Windows becomes the middle ground.
I’m practically that type of guy, accustomed to booting into Windows or Ubuntu depending on what I need to do. In some scenarios where I don’t want to reboot, I just use the Windows binaries if they are available (like PHP or Node.js), or I run an Ubuntu VM for a 20-minute adventure. Hell, I even mount the physical partition where Ubuntu is installed.
The folks at Microsoft don’t want you to stray from them, and thus the Windows Subsystem for Linux (WSL) was born. The idea is simple: bring Linux as a first-class citizen into your Windows OS, without the noticeable performance penalties of a VM and without saying goodbye to Windows.
WSL 2 seems like a blessing considering Microsoft pointed out massive performance gains with the new version. But one thing I’ve learned in all these years with Microsoft is to NEVER trust their words, whoever is working there at the moment.
Note the last point. No “performance across OS file systems”. Isn’t that a fancy way of saying that the WSL 2 FILE SYSTEM IS SLOW? Wasn’t WSL 2 supposed to be fast?
Let’s investigate what kind of “performance” they’re pointing out.
Where does the slowness begin?
I decided to enable WSL 2 as Microsoft instructs, install Ubuntu from the Microsoft Store, and start developing. I have two dozen projects on a hard disk, so it should be easy to use PHP as a remote interpreter by pointing to its path inside WSL.
Since I use multiple PHP and Node.js versions due to each project’s server requirements, I decided to install Docker for Windows to handle these different versions. It’s as easy as pulling the image and that’s it.
docker pull php:7.4.8-cli-alpine
Everything went well. I decided to make a quick test on a vanilla Laravel project just for show. No Node.js or whatever for the time being.
docker run -p 8080:8080 \
    -v S:\Projects\Laravel:/app \
    php:7.4.8-cli-alpine \
    php -S 0.0.0.0:8080 -t /app/public /app/server.php
Okay, now let’s hit the browser and check if the home route responds.
That was surprisingly slow.
The network wasn’t the problem, as the request was registered instantly by PHP itself, but something was holding the request processing back.
Compared to using the PHP binaries for Windows directly, where requests are resolved the instant I hit the browser. I went the extra mile and decided to use the PHP binaries for Linux inside the Ubuntu distribution, instead of going through Docker.
php -S 0.0.0.0:8080 -t /mnt/s/Projects/Laravel/public \
    /mnt/s/Projects/Laravel/server.php
Just a heads up: Microsoft decided it wouldn’t support official PHP builds for Windows anymore.
Now, we hit the browser again to check whether it was Docker that slowed the whole application lifecycle, or WSL 2 itself.
A big amount of slowness remained. Odd, seeing that Microsoft claimed huge performance gains with the new version. These performance gains were nonexistent to me. In fact, all I got from WSL 2 was a performance regression.
Not happy with that, I decided to make a simple test. Instead of using the file system mounted from Windows, I copied the project files directly into the Linux file system, and then ran the PHP built-in server there. It took a while, since this project has a lot of files to copy.
cp -R /mnt/s/Projects/Laravel ~/laravel
php -S 0.0.0.0:8080 -t ~/laravel/public ~/laravel/server.php
Now we hit the browser, and it ran so fast I couldn’t tell the difference between using PHP for Windows and PHP for Linux.
So it clearly wasn’t the network, but the files. Why is it so slow when using my files from Windows? Shouldn’t it be blazing fast, like the example above?
After filing some tickets and investigating around, I came to the conclusion that WSL 2 treats Windows as a second-class citizen.
Things went Antarctica south
If you look at the above diagram, you will see that the VM worker offers Linux access to Windows files through a 9P network protocol server. The virtual machine files of each Linux distribution live in their own VHDX disk image, which you won’t have direct access to unless you hack your way into the Windows Apps directory.
What this means is, basically:
- WSL accesses your Windows files over a network share, and
- Windows accesses Linux files through a network share.
Because of this design, WSL 2 treats Windows files as second-class citizens and vice versa. WSL 1 did not have this kind of problem, sort of.
Every time PHP accessed my project files, it fetched them from the network share mounted at /mnt/s/Projects/Laravel/. If I used Docker, it would add an extra step to mount these files into the container, adding MORE overhead to the file system operations.
So the round trip is: Windows file system → Network protocol → Linux file system → Docker container.
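If you want to feel this overhead yourself, a crude timing of a metadata-heavy directory walk on both sides makes it obvious. A minimal sketch; the two paths are the ones from this article, so substitute your own:

```shell
# Crude benchmark: count files under a path and report the elapsed time.
# A directory walk is metadata-heavy, which is exactly what hurts over 9P.
walk() {
    local start end
    start=$(date +%s%N)                    # nanoseconds (GNU date)
    find "$1" -type f > /dev/null 2>&1     # errors ignored if path is missing
    end=$(date +%s%N)
    echo "$1: $(( (end - start) / 1000000 )) ms"
}

walk /mnt/s/Projects/Laravel   # Windows files, through the 9P share
walk ~/laravel                 # same files copied to native ext4
```

Nothing scientific, but the gap between the two numbers is usually large enough that no proper benchmark is needed.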
(Trying to) deal with it
So what’s the point of using WSL 2 if my projects will be slowed to hell? I went back to using Ubuntu with Docker (which runs natively there) because it works fast, minus the hindrance of booting into it. But I wasn’t happy at all. I decided to go down the rabbit hole and check what I could do to make my development environment on Windows faster.
You may say that putting your projects inside the WSL distribution should be preferred to avoid the performance problems, as this is what VS Code does when using WSL 2. But for someone with a lot of projects and files, moving every project inside WSL, regardless of the software you develop with, has important drawbacks apart from the time it takes:
- The WSL distribution will grow larger in size.
- You don’t have clear control of the WSL Linux image (size, location).
- If your Windows installation implodes, you lose your code.
Let’s use Docker with WSL 1 then, since Microsoft recommends WSL 1 to avoid the slow file system IO. Oh crap. Docker uses the Moby VM instead. Yes, you’re back to using the old VM for your work as you have done in past years, with its huge memory footprint for a couple of 20 MB processes.
Not everything is lost. To avoid keeping my project files shared through a network protocol and later mounted into WSL, I decided to mount the disk directly into WSL using utilities like mount, to avoid paying the 9P protocol tax for each file. Oh ducking crap, not supported… since 2016!
The only solution to this performance problem was to mount VHD files. You know, virtual hard disk images. But when the devil fulfills your wish, you pay a high price.
Mounting VHD like there was no tomorrow
I mounted a VHD through the network share, but not before formatting a partition inside it to EXT4 using an external utility. Okay, I admit it: I created a 2 GB one using the Hyper-V tools, mounted it through Windows Disk Management, used AOMEI Partition Assistant to format it as EXT4, and then unmounted it. You can do it without any additional software, though.
While mounting a VHD image file is neither supported nor documented, you can force it with some magic. The 9P protocol tax is still there, but it’s paid for one file instead of many, so it’s something you pay only “once”.
First, I used fdisk to check the VHD.
user@myPC:$ fdisk -l /mnt/s/laravel.vhd
Disk /mnt/s/laravel.vhd: 2 GiB, 2147484160 bytes, 4194305 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0ecba3e3

Device              Boot Start     End Sectors Size Id Type
/mnt/s/laravel.vhd1      2048 4176899 4174852   2G 83 Linux
To mount this, I need to tell the mounting system where the partition starts. I can get this offset value by multiplying the sector number where the partition starts by the sector size:
512 * 2048 = 1048576
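That multiplication can be scripted, so the offset doesn’t have to be computed by hand every time the partition layout changes. A small sketch, assuming the 512-byte sectors reported by fdisk above (the function name is mine):

```shell
# Given the partition line from `fdisk -l` output, print the byte offset:
# start sector (second column) times the 512-byte sector size.
offset_for() {
    local partition="$1" fdisk_line="$2"
    awk -v part="$partition" '$1 == part { print $2 * 512 }' <<< "$fdisk_line"
}

# With the partition line from the fdisk output above:
offset_for "/mnt/s/laravel.vhd1" \
    "/mnt/s/laravel.vhd1  2048  4176899  4174852  2G 83 Linux"
# prints 1048576
```

The result can then go straight into the mount command’s offset= option.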
Finally, I called mount with that byte offset, along with some parameters like the loop flag (because it’s a file we’re mounting) and the rw flag (we want to write to it too).
sudo mount -o loop,rw,offset=1048576 /mnt/s/laravel.vhd ~/laravel
Then, I spun up a Docker container. You may think I did it from Windows but, instead, I remained in the command line and called Docker directly inside Ubuntu, so I could get the paths right to where my VHD was mounted.
user@myPC:$ docker run -p 8080:8080 -v ~/laravel:/app \
    php:7.4.8-cli-alpine \
    php -S 0.0.0.0:8080 -t /app/public /app/server.php
The gains are perceptible: the request is blazing fast, since PHP doesn’t have to ask anybody for the files, just the file system itself.
Yes, there is a way to avoid the network tax and the whole file system slowness, by mounting a virtual hard disk into WSL 2 the hacky way. The problem is that now the disk files are second-class citizens for Windows. The only way to access them is through the WSL network share, which some applications may have problems loading.
Because these files live inside a network share, file watching cannot be enabled, meaning the host OS (Windows in this case) won’t know what happens to the files when they are modified, deleted or created; you’re bound to press F5 every time you expect a change. I just opened PHPStorm, pointed the project path to the WSL share, and got this:
This is a huge showstopper. PHPStorm relies on watching your project files to know what to cache or update.
For example, let’s say you install a new Composer package. How the hell will Windows and PHPStorm know we have a new package to cache in the first place? Does that mean the software must traverse the whole project directory looking for changes? The short answer is yes, and this is slow.
WSL 2 shares won’t get you file changes in sync, a problem that has persisted for over a year. Another bummer after so much work trying to make WSL 2 work flawlessly.
But not everything is bad news. From what I have gathered, the performance of accessing WSL files from Windows is not that bad compared to the reverse, where we saw the request hang for several seconds.
So, to test, I started a PHP server on Windows using the native binaries, but pointing to the WSL path of my project. I expected the same slowness but, for some reason, the performance was very acceptable, considering it was basically the same thing the other way around.
.\php -S localhost:8080 -t \\wsl$\Ubuntu\home\user\laravel\public \\wsl$\Ubuntu\home\user\laravel\server.php
Then we hit the browser and, wow, not bad for reading files inside a virtual drive through what is essentially a network protocol.
But again, since there is no file sync, I wouldn’t do this until it’s fixed or Microsoft ships some kind of utility. And seeing how the work on WSL has come along all these years, by the time WSL becomes relevant (as if) I will have Windows sitting inside a VM just to play games or open up a graphic design program.
In a nutshell
Yeah, file system IO from Windows to WSL 2 is terrible, there is no reliable inotify to get live changes in the WSL share path, and mounting a VHD seems like an awful workaround for all the hindrances inside WSL 2 for any development purpose.
Indeed, if you’re working on projects with many files already under Windows and you’ve wanted to jump to the Linux side, just do it. Every time I tried to use WSL, I got the short end of the stick, and I can’t figure out why you wouldn’t just switch at some point while setting up your environment.
To me, the annoyance of booting into Ubuntu, or even spinning up a cheap VM for a quick fix, is nothing compared to the drawbacks of using WSL 2 with a VHD to avoid the performance problems:
- You must mount the VHD manually. Sometimes, on every restart (I haven’t tested).
- All Docker commands must be executed inside WSL itself.
- The VHD file partitions are EXT4, so you can’t edit them natively from Windows or while it’s mounted. There are some utilities.
- Once the VHD is mounted:
  - You MUST go through the \\wsl$\ network path to reach your files.
  - The \\wsl$\ share will never sync with any change made remotely.
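The first drawback, at least, can be softened. A hypothetical helper you could drop into ~/.bashrc, so the image is re-mounted only when needed; the paths and offset are the ones from this article, and whether the mount survives a restart still depends on your WSL setup:

```shell
# Mount a VHD image at a target directory unless it is already mounted.
# Skipping the mount when it's already there makes the helper safe to run
# on every shell startup.
mount_vhd() {
    local image="$1" target="$2" offset="$3"
    if mountpoint -q "$target"; then
        return 0   # already mounted, nothing to do
    fi
    mkdir -p "$target"
    sudo mount -o loop,rw,offset="$offset" "$image" "$target"
}

# In ~/.bashrc, for the example in this article:
# mount_vhd /mnt/s/laravel.vhd ~/laravel 1048576
```

It still prompts for the sudo password on a fresh session, so it softens the drawback rather than removing it.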
Paying the price for the convenience may be acceptable for a guy with too much time on their hands, if you decide to force your way through:
- You get all benefits from WSL 2 like a full Linux kernel.
- Your projects live inside a portable and controllable VHD.
- Your project doesn’t suffer slowdowns from multiple IO operations through the network share (9P protocol).
- Docker still works.
- Reading WSL files from Windows has decent-but-not-great performance.
And that concludes the problem with the current state of the Windows Subsystem for Linux 2. I still wouldn’t recommend it. File system IO is a big problem, the lack of control is mind-blowing, and you can’t even mount a USB stick into it. Until WSL matures, it’s pretty much useless except in niche scenarios. After writing this I uninstalled it, and I haven’t missed it a single bit.
I’m very let down by the team responsible for WSL. I know they’re doing WSL with the best of intentions, but WSL 2 should be a mature toolkit by now, not 4 years after its introduction. Maybe it’s the lack of developers, maybe the hypervisor, who knows at this point. If Microsoft wants this to become a feature to brag about, it will take more than a handful of engineers.
In the meantime, be wary of articles that say “dual booting is dead” and “WSL 2 is a life-changing experience”; be real and read someone’s impressions after switching from Windows to Ubuntu for a week.
The tux ain’t for cheap tricks.