Simplify any REST API via a simple FUSE client. Here’s how.

Taming the wild REST with FUSE

John Boero
Apr 25, 2019 · 9 min read

Quite a few HashiCorp users work with our products strictly via REST API, as is the DevOps or automation way. Some still prefer a robust GUI and/or CLI. I’m here to present a third option that simplifies all of the above via simple FUSE filesystem clients. Why would anybody need that? Because people are confusing “REST API” with “user experience.” APIs are a developer experience, hence the DevOps moniker. Why don’t we make an interface that users and developers find equally simple? My typical adoption conversation usually goes something like this:

So I can access everything with a CLI, GUI, and REST API. What language bindings does the REST API have?

Immediately, I need to check every current REST binding, both supported and community, and which versions each supports. Once you cross multiple HashiCorp products with multiple API binding libraries and versions, you get a dramatic 3D support matrix of questionable value. Maybe a customer just resorts to spending their life with the current API documentation open in a tab, hand-writing their own calls. Users have even reported errors when using a conflicting version of the CLI with their server, even though we ship server and CLI together in a single handy Go binary.

Just a sample of the minesweeper game that is REST binding support matrix. Versions are real — checkboxes are hypothetical.

There must be a better way than this. A REST API tends to mimic a filesystem over HTTP, though in practice this ends up being very Wild West, with inconsistencies and freestyle standards. Sometimes a GET retrieves a LIST of items, while other times you need a LIST verb to get what you’re looking for.
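To illustrate (with hypothetical paths in a Vault-style KV layout), the client ends up encoding that guesswork itself. This sketch decides the verb purely from the shape of the path:

```python
def plan_request(path):
    """Decide which verb a Vault-style KV path needs.
    Directory-like paths (trailing slash) want a LIST; leaves want a GET."""
    if path.endswith("/"):
        # Collections use the LIST verb; curl users often fall back
        # to a GET with a list parameter instead.
        return ("LIST", path.rstrip("/"))
    return ("GET", path)

print(plan_request("secret/myapp/"))    # ('LIST', 'secret/myapp')
print(plan_request("secret/myapp/db"))  # ('GET', 'secret/myapp/db')
```

The paths and the trailing-slash convention here are illustrative, not any product’s actual rule — which is exactly the problem.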

TLJ knows there is a much better solution for CRUDy REST.

Rather than have scores of community projects and libraries with thousands of lines of code to try to keep up with the REST versioning of an endpoint, what if everything was presented as a universal interface that every language and tool can understand? Why not present it back as a filesystem?

Use standard CLI tools from the OS such as ls, tree, cat, jq, etc. Even standard GUI tools and IDEs can handle files. It turns out it’s really simple to go from this:

Complex to maintain.

… to the much leaner:

As long as the FUSE client supports your REST API version, anything can access it.

In practice, this gives a huge benefit. When using a shell to browse Vault secrets, Consul KV, Nomad jobs, or Terraform Enterprise workspaces, autocompletion is global. Listing contents with ls and displaying them directly with cat or jq is easy. Everything is performed by a binary of just a few kilobytes plus three dependencies (jsoncpp, libcurl, libfuse); no CLI or other binary is required. Writing secrets works from any tool or language that handles files, which are the universal language of most operating systems:
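As a minimal sketch of that idea, assuming a vaultfs mount laid out as mount/secret/&lt;name&gt; (the helper names here are mine, not part of VaultFS):

```python
import json
import os

def write_secret(mount_root, name, data):
    """Write a secret by dropping JSON into the mounted path.
    Any tool or language that can write a file can do this."""
    path = os.path.join(mount_root, "secret", name)
    with open(path, "w") as f:
        json.dump(data, f)

def read_secret(mount_root, name):
    """Read a secret back the same way: it's just a file."""
    path = os.path.join(mount_root, "secret", name)
    with open(path) as f:
        return json.load(f)
```

With vaultfs mounted at ~/vault, usage would look like `write_secret(os.path.expanduser("~/vault"), "mysecret", {"password": "..."})` — no bindings, no HTTP client.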

In just 630 lines of code, VaultFS translates the majority of API endpoints into easily browsable filesystem operations. It does so securely, using only the user’s token. With the default FUSE config, users aren’t allowed to browse other users’ mounts. This includes root! Obviously if you’re root you can switch to any user, but when root is compromised, all bets are off. Still, this means any development language, current or future, that can read, write, and browse files can use Vault without bindings.

Example of Python using the trusty hvac bindings:

import hvac
client = hvac.Client()
mysecret = client.secrets.kv.v1.read_secret(path='mysecret')

Same secret read via VaultFS without bindings (vaultfs already mounted):

import os
f = open(os.path.expanduser("~/vault/secret/mysecret"), "r")
mysecret = f.read()

Interactive demo on Linux showing UI and FUSE clients side by side.

There’s nothing quite like having a fresh Kubernetes pod which automatically authenticates and mounts Vault secrets to a local path with instant access to a fresh set of short-lived database credentials and TLS certificate. All of this can happen with no sidecar requirement.

Full disclaimer: Linus Torvalds famously says FUSE filesystems are nothing more than toys. He’s absolutely right. Never use FUSE for block storage or a primary filesystem. Performance is terrible because of kernel-user mode switches for every single operation. Every block transaction takes a mode switch, and the default read/write size is 4k. Luckily that’s a great fit for small REST calls. I once wrote a NOOP FUSE filesystem to demonstrate how horrible the maximum performance is. Doing nothing but handling FUSE callbacks, a 3GHz Westmere Xeon maxes out around 1GB/s with 100% CPU utilization. That’s a terrible idea for block storage but a great fit for 1–2k operations that already have HTTP latency issues anyway. Sorry Gluster fans, but FUSE won’t cut it for block storage.
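The back-of-envelope arithmetic behind that claim, assuming a 1 GiB/s ceiling and the default 4 KiB transfer size:

```python
# Why 4 KiB FUSE transfers cap throughput: every transfer is a
# kernel<->user mode switch, so the ceiling is switches-per-second.
block = 4 * 1024            # default FUSE read/write size, bytes
throughput = 1 * 1024**3    # ~1 GiB/s observed on a NOOP filesystem
ops_per_sec = throughput // block
print(ops_per_sec)  # 262144 round trips per second, at 100% CPU
```

A quarter-million mode switches per second is pitiful for block storage, but it dwarfs the request rate any REST workload will ever need.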

What about limitations? REST calls allow functionality that a filesystem can’t reproduce. If I need to POST something to an endpoint and read the response, that isn’t an atomic operation to filesystems. If you’ve ever used the bash /dev/tcp device, you’ve seen an attempted workaround:

$ exec 3<>/dev/tcp/
$ echo -e "GET / HTTP/1.1\r\nhost:\r\nConnection: close\r\n\r\n" >&3
$ cat <&3

This is a bash trick to write to a file and read its response without a domain socket. Reproducing this in our FUSE client would require a domain socket or a single-threaded mandate with very high user trust: the user writes to a file, the FUSE client caches the response for the next read operation, and everyone hopes events unfold as expected. For a CRUD+L use case (create, read, update, delete, list) this works perfectly, but for more complex operations like transit engine encryption, the REST API must still be used.
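A rough sketch of what that single-threaded mandate could look like — the class and the fake transport are hypothetical, not part of VaultFS:

```python
class PostCache:
    """Write-then-read workaround: write() submits the POST and caches
    the response; the next read() on the same path returns it.
    Single-threaded use only -- an interleaved reader breaks the trick."""
    def __init__(self, post_fn):
        self.post_fn = post_fn   # the actual HTTP POST; faked below
        self.responses = {}

    def write(self, path, payload):
        self.responses[path] = self.post_fn(path, payload)
        return len(payload)

    def read(self, path):
        # Hope events unfold as expected: the writer reads back next.
        return self.responses.pop(path, "")

fs = PostCache(lambda path, body: body.upper())  # stand-in REST call
fs.write("/transit/encrypt", "plaintext")
print(fs.read("/transit/encrypt"))  # PLAINTEXT
```

The fragility is obvious: nothing in the filesystem API ties the read to the write, which is exactly why the more complex endpoints stay on plain REST.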

Other Applications

What other projects could benefit from a system like this? Since it’s easy to template from the VaultFS code, what if we cookie-cutter out some other use cases for REST->FUSE wrappers?

It turns out there’s quite a lot of low-hanging fruit. I’ve built one for Consul, Nomad, Terraform Enterprise, and even Kubernetes. All of it is experimental but available on GitHub here:

Note that I’m not a full-time engineer anymore and my workhorse of choice is still C/C++. The code is simple enough even for C++98; there’s no benefit from auto or other C++11 features. One thing that could simplify it is the libcurl C++ wrapper, but I see no need to add an extra dependency for such simple code. Anyone who would like to rebuild everything in another language like Go or Rust is more than welcome to, but I doubt it could fit in fewer lines than my C++ versions. I have built RPMs for x86_64 with dependencies listed, so a yum/dnf install is all that’s required to get up and running. Docker Hub also contains a statically linked build. Otherwise, anyone can build their own via the included Makefiles.

Script, move, copy, edit, or even drag & drop Nomad jobs. It couldn’t be simpler.


What about K8s? Kubernetes isn’t a HashiCorp product but we do have plenty of integrations. So while we’re at it, what if we try to map the K8s API to its own FUSE client? It turns out this is particularly tricky, as there are multiple components using multiple API versions within each Kube release. The good news is I got a basic version working in just 380 lines of code, with full support for reads and writes. The bad news is that read+writes (edits) are much trickier. I can write any Kubernetes manifest in JSON format to a file and have it submitted to the API, then read that manifest back fine. The trouble comes when I try to read+write (edit) a resource. Reading Kubernetes resources retrieves a lot of extra data that can’t be round-tripped: timestamps, server-populated metadata, and other attributes that can’t be written back. So the K8sFS agent is there, but with caveats. It needs to edit out those invalid JSON bits during writes to be truly edit-friendly. But for now, in its experimental state, it works pretty well.
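A sketch of the cleanup step this implies, assuming the usual Kubernetes server-populated fields (adjust the field list per resource; this is not K8sFS code):

```python
def strip_readonly(manifest):
    """Drop server-populated fields (status, timestamps, UIDs) so a
    manifest read back from the API can be written to it again.
    Field names follow common Kubernetes conventions."""
    clean = dict(manifest)
    clean.pop("status", None)  # entirely server-owned
    meta = dict(clean.get("metadata", {}))
    for key in ("creationTimestamp", "resourceVersion",
                "uid", "managedFields"):
        meta.pop(key, None)
    clean["metadata"] = meta
    return clean
```

With something like this in the write path, read-edit-write on a mounted resource file starts to behave like editing any other config file.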

K8sFS live demo on a single-node Fedora 28 cluster, Kubernetes v1.14 via kubeadm, client cert.

Building a REST API for ideal FUSE usage.

Implementing FUSE clients for multiple REST APIs has made it obvious that there are certain standards I would recommend conforming to when designing REST endpoints. If you’re designing one from scratch, the following guidelines help a lot:

  1. Always implement LIST on a path that would map to a directory, and never use the same endpoint for both a GET and a LIST. In a filesystem, stat can return a file or a directory but not both. This is actually one tricky bit S3 introduced: “ls /path” is basically treated as “ls -d /path”, while “ls /path/” lists the directory’s contents; sadly, FUSE filesystems don’t recognize trailing slashes in paths at all. I built everything with FUSE2 and researched whether FUSE3 could handle this better, but I believe this is system-level and unavoidable.
  2. If possible, implement an attribute GET on any path that would map to a file. As there is no way to get details about an endpoint before fetching it, the client code has to guess what is a directory and what is a file. We also have no way of getting a file size without fetching the entire response. This means we use direct_io and read as much as we can. If a fetch could first get the size of a response and use buffered (non-direct) I/O, we’d get the huge added benefit of automatic caching of reads in kernel buffers. With direct_io we sometimes need to make two reads, which is extra slow and inefficient. FUSE3 even simplifies readdirplus, which combines readdir with getattr and saves a lot of time traversing directories in high-latency scenarios.
  3. For CRUD writes, be consistent in your methods. If the client code needs to determine which endpoints use POST and which use PUT or alternate for create/write, this will become a point of frustration. DELETE should always be used to perform a delete operation.
  4. Implement a version lookup by path. This wasn’t apparent until Kubernetes came into play, but with so many contributors and so many different API versions available at root, it would be nice if there were a central place to look up which API versions a service expects for a root endpoint. /v1, /v1beta1, /v2alpha1??? How do users keep these straight across releases?
  5. Always, always use and version a JSON/YAML schema. JSON allows schemaless, arbitrary data without the schema errors of a traditional RDBMS or SQL database. The catch is that those schema errors were always great at catching mistakes before they became problems. JSON supports schemas too, which can be used to validate input or even generate an interactive editor that simplifies UI or config creation; schemas also simplify unit tests. Don’t fear schema validation errors. They are friends that prevent runtime errors down the line, and they can tell you exactly what’s wrong with your inputs before you commit mistakes.
  6. Match HTTP errors to file access errors. If an operation returns a 404, FUSE can return -ENOENT, meaning file not found. If an operation is denied with a 403, FUSE can return -EPERM and show permission denied. Any other error can map to -EINVAL.
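Guideline 6 can be sketched as a small translation table (the exact mapping is an illustration; FUSE callbacks conventionally return negated errno values):

```python
import errno

# Translate HTTP status codes into the errno a FUSE callback returns.
HTTP_TO_ERRNO = {
    404: errno.ENOENT,  # not found       -> "No such file or directory"
    403: errno.EPERM,   # forbidden       -> "Operation not permitted"
    401: errno.EACCES,  # unauthenticated -> "Permission denied"
}

def fuse_result(status):
    """Map an HTTP status to a FUSE return value (0 or negated errno)."""
    if 200 <= status < 300:
        return 0
    return -HTTP_TO_ERRNO.get(status, errno.EINVAL)

print(fuse_result(404))  # -ENOENT
```

The payoff is that ordinary tools report sensible errors: cat on a missing secret prints “No such file or directory” instead of a raw HTTP status.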


I’m not a full-time engineer anymore but I still appreciate a good UX. Sometimes when performing work or live demos of a REST API, it’s unsettling to need to keep a REST API doc open in the background. I wouldn’t expect to drive a car with the manual in my lap, and I don’t think CLI/REST UX should be any different. FUSE filesystems make horrible block storage interfaces, but they can actually simplify REST API CRUD operations for users, applications, and automation. Feedback and pull requests are more than welcome. All code can be found here:

Apologies if the code and IDE seem archaic, but my inner dinosaur is happy as a clam with classic MonoDevelop and C++98ish. If anyone is happy to rebuild these from scratch, I’d love to follow along.

HashiCorp Solutions Engineering Blog

A Community Blog by the Solutions Engineers of HashiCorp and Invited Guests

