
It’s weird to think about, but I’ve been using Go on and off for more than three years now. Go is a new enough language that Go users tend to describe themselves as coming to it from some other “native” language. “As a reformed Java dev…” “Coming from dynamic languages like Python…” As for me, I was lucky enough to start using Go early enough in my career that it’s the lens I naturally view other programming languages through. But somehow this was my first year at GopherCon!
In her closing remarks on the importance of people who are new to Go, Natalie Pistunovich hit upon some of the reasons I haven’t gone before. Startups are unlikely to have budgets for travel and hotels, and those that can afford to send one or a few representatives will usually pick a lead engineer over a junior one. Even if they work for companies with conference budgets, newer users or people new to software development entirely probably don’t feel confident that they “belong” there, and people from underrepresented backgrounds or international users must feel this effect hugely amplified. Yet as a newer language, Go has a higher proportion of new users than many others and has the most room to grow by drawing in newbies. As Natalie pointed out, conference attendees are a biased representation of a community, not an accurate reflection of its makeup.
Community and inclusiveness was the topic of a bunch of talks that were no less deep than the technical ones. Gophers love deep debugging to find the root causes of problems, even when the debugging leads to flaws in the language itself. Prateek Gogia’s talk about how goroutines caused bugs when using network namespaces was a great example of a technical talk about a bug leading deep into the roots of a problem. It’s great to see this enthusiasm for deep debugging crossing over into the social sphere too.
I have yet to get involved in open source, in part because of imposter syndrome and fear of encountering Torvaldsian assholes. I’ve always felt like I needed to have an impressive first contribution if I was going to get started. But Kevin Burke’s talk was a compelling plea to start with small and unglamorous PRs like adding examples to documentation. He noted that people tend to feel like they need to show up on the scene of an OSS project with a flashy entrance like a Tokyo street racer. But lowering your expectations is a more realistic way to ease into making meaningful contributions.
Julia Ferraioli’s talk on the second day about writing code that’s accessible to developers who rely on screen readers was really illuminating. Those who have never had a coworker with impaired vision have probably never even considered this need. One thing she talked about was the curb cut effect, the phenomenon where introducing an initiative intended to help one population ends up having major benefits for the greater population. Indeed, each of her suggestions about how to structure code better for screen reader users, like declaring variables close to where they’re used and being thoughtful about concatenating words in variable names, was good advice for writing more readable code for everyone. In the same way, a development environment that does things like answer questions nicely, support diverse perspectives in its public forums, and welcome new people is just bound to be more creative and fast-moving than a stagnating and grumpy environment.
It’s inspiring that we still have the opportunity to develop Go’s reputation within the greater programming world as a language that’s welcoming and kind.
There is nothing I love more than a high-energy deep dive into something low-level that I have no practical reason to need to know about because my job doesn’t involve things like, say, writing my own columnar storage engine. Gophers eat this stuff up, and there were some excellent talks about how language internals and lower-level libraries work. Kavya Joshi’s talk put the spotlight on one of Go’s most important features, the scheduler that makes it so easy to write performant concurrent code. As Bryan Mills’ talk on concurrency patterns highlighted, other languages like JavaScript and Scala (or at least their most popular libraries) require you to use unintuitive patterns like callbacks or futures to avoid blocking threads and getting into a situation where a lot of your process’s resources are tied up doing nothing.
When a new Go user sits down and writes an HTTP server that does something simple like serve static files, on the other hand, they can simply read files from the file system in the obvious way. They don’t need to know about the event loop or worry about too many threads being created. The language manages OS threads in an intelligent way, and the standard HTTP library uses goroutines without exposing them to users. When reading files, making a blocking call to `open` is the most natural thing to do. Blocking is an important conceptual tool for humans, and when languages like JavaScript deprive us of it there’s a conceptual overhead, especially for people who aren’t familiar with concurrent design patterns. By default, we like to reason about what we’re doing in a serial manner. The Go runtime is responsible for recognizing when a goroutine is blocked and ensuring that other important work gets done in the meantime. This lets us think about how goroutines interact with each other only when they actually need to synchronize activity or share information. Getting a glimpse at the algorithms that make this possible was a lot of fun.
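To make that concrete, here’s about the smallest static file server you can write (the `./static` directory and the port are my own placeholder choices). It’s all blocking-style code; net/http and the runtime handle the multiplexing:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// net/http serves each incoming connection on its own goroutine.
	// The file server reads from disk with plain blocking calls, and
	// the runtime schedules other goroutines onto the OS threads
	// while a read is in flight. No event loop in sight.
	http.Handle("/", http.FileServer(http.Dir("./static")))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```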
Other highlights on topics that are usually a lot lower-level than I need to worry about:
* Eben Freeman on how Go allocates and frees memory and how to improve performance by doing things like reusing structs instead of creating new ones (the general shape of that trick is sketched just after this list).
* Filippo Valsorda on using net.Conn to write a TLS-compatible proxy, giving us a taste of how you’d intercept bytes from the connection to inspect things like the Server Name Indication (used to figure out what hostname the requested certificate is for). There’s a rough sketch of the SNI-peeking idea after this list too.
* Matt Layher gave two talks about low-level networking libraries he wrote, one about Netlink, which is used to talk to the kernel to configure networking things like routing tables; and one about IPv6.
* Michael Stapelberg ran into an incompatibility between his ISP and OpenWrt, so he wrote his own router in Go: https://github.com/rtr7/router7. I’m so excited to have a resource in Go for digging into how things like DHCP actually work.
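On the allocation point: this isn’t from Eben’s slides, just a minimal sketch of the reuse technique using a sync.Pool of buffers (`bufPool` and `render` are illustrative names of my own):

```go
package main

import (
	"bytes"
	"io"
	"os"
	"sync"
)

// bufPool hands out reusable buffers so each call doesn't allocate
// (and later force the GC to clean up) a fresh one.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func render(w io.Writer, data string) error {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()            // pooled buffers may hold old contents
	defer bufPool.Put(buf) // return the buffer for the next caller
	buf.WriteString(data)
	_, err := buf.WriteTo(w)
	return err
}

func main() {
	_ = render(os.Stdout, "hello\n")
}
```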
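And on Filippo’s: I won’t try to reconstruct his proxy, but one fun way to peek at the SNI (not necessarily the way he did it) is to let crypto/tls parse the ClientHello for you and then bail out of the handshake. `readOnlyConn` and `peekSNI` are my own illustrative names:

```go
package sniproxy

import (
	"crypto/tls"
	"errors"
	"net"
)

// readOnlyConn wraps a net.Conn and refuses writes, so the handshake
// below can read the ClientHello but never respond to the client.
type readOnlyConn struct{ net.Conn }

func (readOnlyConn) Write(p []byte) (int, error) {
	return 0, errors.New("writes disabled while peeking")
}

// peekSNI drives crypto/tls just far enough to parse the ClientHello,
// records the Server Name Indication, and aborts the handshake.
// A real proxy would also capture the bytes consumed from conn
// (e.g. with an io.TeeReader) so it could replay them to a backend.
func peekSNI(conn net.Conn) (string, error) {
	var sni string
	cfg := &tls.Config{
		// GetConfigForClient fires as soon as the ClientHello is parsed.
		GetConfigForClient: func(hello *tls.ClientHelloInfo) (*tls.Config, error) {
			sni = hello.ServerName
			return nil, errors.New("done peeking")
		},
	}
	_ = tls.Server(readOnlyConn{conn}, cfg).Handshake() // expected to fail
	if sni == "" {
		return "", errors.New("client sent no SNI")
	}
	return sni, nil
}
```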
On the first day of the conference, a problem overview and design document were released outlining a way to expand the possibilities of parametric polymorphism for Go 2.0. This has caused a mild stir in the Go world. Parametric polymorphism originally came from functional programming and eventually made its way into Java and other object-oriented languages. In Java, this feature was called generics, and due to some specifics of the Java implementation that the Go designers are committed to avoiding, it made people sad.
Many people are resistant to contracts because they would make the language more complicated (no one denies that this is true). While this is an important consideration to weigh, I’m very interested in the idea of contracts because I think there’s a great expressive power to parametric polymorphism in other languages I’ve dabbled in. It does feel like there are some quirks in the design. For example, the fact that contracts are defined “by example” through a series of arbitrary statements seems like it could encourage people to just throw a bunch of code in a contract and let the compiler figure out what parts are actually relevant to the contract.
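To make the “by example” point concrete, here’s roughly the canonical example from the design draft (reproduced from memory, so treat the details loosely). The contract body is ordinary Go statements, and any type whose values can perform them satisfies the contract:

```go
// A contract is a function-like declaration whose body is example code.
// Any type T for which these statements compile satisfies stringer.
contract stringer(x T) {
	var s string = x.String()
}

// Stringify accepts a slice of any type satisfying stringer.
func Stringify(type T stringer)(s []T) (ret string) {
	for _, v := range s {
		ret += v.String()
	}
	return ret
}
```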
Before the conference, I took part in Peter Bourgon’s wonderful workshop on Domain-Driven Design and Go-Kit. Go-Kit’s pattern for defining transport-agnostic service endpoints feels like a perfect example of what contracts could accomplish and why interfaces aren’t enough.
If you’re not familiar with Go-Kit, it advocates encapsulating a service’s business logic in an Endpoint function with the type signature `func(ctx context.Context, request interface{}) (response interface{}, err error)`, where the first `interface{}` is typically some kind of structured data representing the incoming request and the second `interface{}` is structured data representing the response. This Endpoint is completely agnostic to the service’s transport protocol, which could be JSON over HTTP, gRPC, SOAP, Thrift, Unix domain sockets, or whatever the cool new thing is. For each transport the service supports, you define a DecodeRequestFunc and an EncodeResponseFunc, which are responsible for constructing the request and response values, respectively, for a given Endpoint. For an HTTP service, the DecodeRequestFunc’s type is `func(context.Context, *http.Request) (request interface{}, err error)` and the EncodeResponseFunc’s type is `func(context.Context, http.ResponseWriter, interface{}) error`.
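As a concrete example of the three pieces for an HTTP transport (`LoginService`, `LoginRequest`, and `LoginResponse` are hypothetical business-domain names I made up for illustration, not part of Go-Kit):

```go
package loginsvc

import (
	"context"
	"encoding/json"
	"net/http"

	"github.com/go-kit/kit/endpoint"
)

// LoginService is a hypothetical business-domain interface.
type LoginService interface {
	Login(ctx context.Context, user, password string) (token string, err error)
}

type LoginRequest struct {
	User     string `json:"user"`
	Password string `json:"password"`
}

type LoginResponse struct {
	Token string `json:"token"`
}

func makeLoginEndpoint(svc LoginService) endpoint.Endpoint {
	return func(ctx context.Context, request interface{}) (interface{}, error) {
		// The type assertion is where "interface{}" really means "one
		// specific type": this panics if the paired Decoder returned
		// anything other than a LoginRequest.
		req := request.(LoginRequest)
		token, err := svc.Login(ctx, req.User, req.Password)
		return LoginResponse{Token: token}, err
	}
}

func decodeLoginRequest(ctx context.Context, r *http.Request) (interface{}, error) {
	var req LoginRequest
	err := json.NewDecoder(r.Body).Decode(&req)
	return req, err
}

func encodeLoginResponse(ctx context.Context, w http.ResponseWriter, response interface{}) error {
	return json.NewEncoder(w).Encode(response)
}
```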
In Go-Kit, you create a Server (Go-Kit’s implementation of http.Handler) by combining the Endpoint, the DecodeRequestFunc, and the EncodeResponseFunc. The go-kit Server has a request-handling method that looks roughly like this (with error handling elided):
```go
func (s Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()
	// Error handling elided here for brevity.
	request, _ := s.decode(ctx, r)
	response, _ := s.endpoint(ctx, request)
	s.encode(ctx, w, response)
}
```

Even though the Endpoint’s type signature states that it accepts an `interface{}`, Endpoints generally cannot actually deal with any type. Instead, each Endpoint expects the specific business-domain type that’s returned by the Decoder it’s paired with in the Server. In my experience, Encoders tend to be a bit more permissive about the type they accept than the Endpoint is; I often just `json.NewEncoder(w).Encode` whatever type the Endpoint returns.
With this type of pattern, the current Go compiler can’t guarantee that the developer hasn’t made a mistake by wiring up a Decoder to an Endpoint that expects a different business-domain input, or an Encoder to an Endpoint that returns a different output than the Encoder handles. Personally, I’m willing to plug this gap in my type-safety armor with extra attention in code review and a bit of testing. The nice structure the go-kit pattern provides makes circumventing the type safety worthwhile. And I think as more people continue using Go for large-scale projects, we’ll start seeing more and more patterns emerge that bend the type system to allow more flexibility. You can moralize against this, but I think it’s just reality. I’m interested in contracts because they have the potential to eliminate this unnecessary tradeoff between flexibility and compile-time checking.
Right now the strongest claim the compiler can enforce is that a Server’s Decoder can return anything it wants in its first return value, an Endpoint can accept anything it wants in its second argument, and an Encoder can accept anything it wants in its third argument. But we’d really like to make an additional, more sophisticated stipulation: a single Server’s Decoder has to return the same type that its Endpoint accepts, and whatever type that Endpoint returns should be the only type the Encoder can accept. There is simply no way in the current Go type system to make expressive assertions about how a type’s fields and methods relate to one another! Interfaces only allow polymorphism with respect to a single value.
With the contract design, we could express the go-kit server’s expectations on the type level:
```go
type Server(type Req, Resp) struct {
	ep  func(ctx context.Context, request Req) (Resp, error)
	dec func(ctx context.Context, r *http.Request) (Req, error)
	enc func(ctx context.Context, w http.ResponseWriter, response Resp) error
}
```

(I still need to reread the design document to fully process it, and one thing I’m not totally sure about yet is whether an Endpoint of a type like `func(ctx context.Context, loginRequest LoginRequest) (LoginResponse, error)` and an Encoder of type `func(ctx context.Context, w http.ResponseWriter, resp interface{}) error` would satisfy this contract, or if the Encoder would need to take a concrete type in its third argument.)
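If I’m reading the draft’s instantiation syntax correctly, wiring up the hypothetical login service from earlier would then look something like this, and handing the Server an Encoder for the wrong response type would be a compile error rather than a runtime surprise:

```go
// Type arguments are supplied in parentheses under the draft design.
// loginEndpoint, decodeLogin, and encodeLogin are assumed to have the
// concrete signatures shown in the comments.
var loginServer = Server(LoginRequest, LoginResponse){
	ep:  loginEndpoint, // func(context.Context, LoginRequest) (LoginResponse, error)
	dec: decodeLogin,   // func(context.Context, *http.Request) (LoginRequest, error)
	enc: encodeLogin,   // func(context.Context, http.ResponseWriter, LoginResponse) error
}
```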
People who have already drunk the Kool-Aid on Go, like myself, are more willing than the average developer to sacrifice higher-order functions like map, reduce, and filter in exchange for the language’s simplicity. Usually, Go developers’ justification for omitting higher-order functions is “you don’t really need them,” which is true. But I’ve tried to introduce coworkers to Go and encountered resistance because they reasonably expect their productivity to go down without access to these simple shortcuts. Go’s simplicity is a major selling point, but it’s not its only virtue, and I’m not certain that we should allow a fixation on a precise simplicity benchmark to stifle the language’s growth. As Dave Cheney has argued, if Go does not end up adding support for features other languages take for granted, Go’s advocates will need to make up for that lack by helping potential converts to the language understand why they can live without them.
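For what it’s worth, the draft design would make these shortcuts expressible in plain Go. If I have the syntax right, a generic Map needs no contract at all, since its body only uses operations that work on every type:

```go
// Map applies f to every element of xs. T and U are unconstrained
// type parameters: no contract is needed because the body performs
// only operations that are valid for all types.
func Map(type T, U)(xs []T, f func(T) U) []U {
	ys := make([]U, 0, len(xs))
	for _, x := range xs {
		ys = append(ys, f(x))
	}
	return ys
}
```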
