One answer is people recommending their favourite framework or one they’ve had success with in the past. A whole series of blogs could be written comparing them all, but my short take on frameworks is that they are great … for simple applications. In my career I have never used a framework that I didn’t end up fighting at some point or another, usually while trying to do something that should have been simple. I’m sure people’s mileage will vary depending on the kind of work they do, but I for one don’t like being put in a box.
And the other answer I hear the most is “just use Go’s std net/http”. To be honest, the std net/http package is very robust, well thought out, and performant; I’m sure, based on my take on frameworks, you’re thinking that’s the end of this article, but please read on.
So what’s the biggest knock on the std net/http? I’m sure it’s debatable … but for me it’s the lack of what I call SEO query parameter support, e.g.
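By that I mean support for parameters carried in the path itself rather than the query string; with hypothetical URLs, the contrast looks like this:

```
/users?id=10&page=1    (query-string style: what net/http handles natively)
/users/10/page/1       (path-parameter style: the "SEO friendly" form)
```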
So why is that important? Well, as you might have guessed, for SEO purposes, but also because it’s how any sane API is written; I’m sure most people are surprised it wasn’t taken into account during the design of the std net/http, but perhaps it was, and was dismissed because there are so many ways to support it.
So do we really need to step away from the std net/http package? Let’s make it work with the std net/http, leaning on its built-in trailing-slash redirect:
OK, so far so good, but that’s a pretty basic example; let’s try something a little more complex, with multiple parameters and pages.
Oh wow, that got ugly fast! And that’s only 3 URLs; just wait until we start adding more, and even some static URLs, into the mix.
There must be a solution to this that’s not so verbose. Perhaps we can hold a list of URLs and use regexes to match … wait a minute, “sniff, sniff”, something smells here, I’ve seen logic like that before! Right! gorilla/mux; it can handle what we need and more.
Now before continuing I wish to state for the record that the gorilla team was, and continues to be, one of the forerunners of Go web development and has earned its high standing; I am in no way saying not to use their package, I’ve used it myself.
I am oversimplifying, but gorilla/mux smartly prioritises routes and matches via regex, which enables some other nice features; in the end, though, it’s iterating over a list of URLs and matching each via a regex. Regex matching is pretty fast, but the more URLs you add, the more patterns you may have to check before finding a match, no matter how smartly they are prioritised, and that might not scale as well as some other approaches.
So, skipping ahead to the approach I’ve found to be fast and scalable: use a radix tree, and the best implementation I’ve encountered is julienschmidt/httprouter.
As I mentioned earlier, I always end up fighting frameworks, so I recommend creating your own based off of the libraries and routers that work best for you. If you work yourself into a corner it’s your own fault, but because you have full control you can probably fix it too! ;)
To nobody’s surprise, I use the go-playground/pure router, which I created and which is 100% compatible with Go’s std net/http, and build off of it to create my projects, big or small; but use whichever one works best for you.
So, to answer the question “Is Go’s std net/http all you need?”: yes. However, as demonstrated above, if you have any sort of complexity you’ll most likely end up writing your own router in the process, so why not use one of the routers out there that have already done the heavy lifting for you!
Or, to put it another way:
“Just because you don’t need winter tires doesn’t mean you’re not going to use them!”
With something designed specifically for your needs, you’ll probably get to where you’re going quicker and much safer :)
Thanks for reading, and stay tuned for my next article on how to pass variables to http.Handlers and how to structure them.