EMPEX Speaker: Jeff Smith

Andy McCown
Published in empex · May 16, 2018

Jeff will be speaking at EMPEX on May 19th about “Neuroevolution in Elixir.” He is an AI developer, author, and manager. He coined the term reactive machine learning and wrote the definitive text on the topic.

Andy: Hi Jeff — you spoke at our Halloween event a couple years ago, welcome back! Tell us a bit about what you’ve been up to since then.

Jeff: Sure. I like to do a lot of different things. At the time I spoke at the Halloween event, I was managing a team building a conversational AI, working primarily in Scala, and doing explorations outside of work in Elixir to try to understand this new technology and how it could be applied to problems that I care about.

When I left x.ai, I left to start an Elixir-first conversational AI company called John Done with one of my colleagues from x.ai. At that company we raised a bit of money to build a vocal conversational intelligent agent that operated over the telephone, the most widely deployed voice platform in existence, right? So the idea was very similar to x.ai, where we were trying to build a conversational intelligent agent that does real work for users, that takes on the responsibility to go into the world and accomplish things.

We started with Elixir first there really because of our experiences in building high availability systems and needing to be able to iterate rapidly while still adding fairly sophisticated features and integrating with a lot of web systems. Our two main concerns were that we were going to be operating with messaging systems (this was Facebook Messenger or Slack type interaction, or voice interaction) and the telephone system. Both of those are areas where Erlang has a lot of history and is very well designed to suit the challenges of messaging systems and telephony.

We built our POC using Elixir and it was straightforward from there, using a lot of integration with external systems and standard web technologies, which allowed us to get around some of the inherent limitations of Elixir not being the first language people write client libraries in to integrate with their systems. We built bespoke integrations with the AI system and with telephony systems, but always integrating with them over HTTP or websockets, building our own transports, channels and things like that using Phoenix, and that all worked pretty well.
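
As a rough illustration of that kind of Phoenix-based integration, here is a minimal channel sketch; the module, topic, and backend names are hypothetical rather than taken from the John Done codebase.

```elixir
defmodule MyAppWeb.ConversationChannel do
  # Hypothetical sketch: a Phoenix channel backing one conversation per topic.
  use Phoenix.Channel

  # Clients join a topic scoped to their own conversation id.
  def join("conversation:" <> _conversation_id, _params, socket) do
    {:ok, socket}
  end

  # Each inbound utterance goes to some NLU backend (a stand-in here for an HTTP
  # or websocket call to the AI system) and the reply is pushed back down.
  def handle_in("utterance", %{"text" => text}, socket) do
    reply = MyApp.NLU.respond_to(text)
    push(socket, "reply", %{text: reply})
    {:noreply, socket}
  end
end
```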

One of the things that drove that startup is that we were able to get something working that we knew for a fact was incomplete. We were doing these rapid cycles as a small dev team, raising funding, acquiring users, building all this up from the ground, with all sorts of use cases that didn’t work at a given time. We knew processes might fail, but supervision mechanisms would automatically restart things and allow the system to recover, allowing us to resume conversations successfully. So that was…

Andy: No, certainly that’s quite a lot to do.

Jeff: I ended up leaving that company, and I’ve since moved on to managing a development team in a much larger organization building another conversational AI…

Andy: So you’re kind of a serial… [laughing]

Jeff: I’ve made a few AIs, a large proportion of them are conversational AIs. I have kind of a niche. Now I work down the street at a company called IPsoft that builds a conversational AI called Amelia. I’m working on a lot of the same sorts of problems, and in this case I’m managing a large development team. I’m not actually working as a developer myself.

Andy: A little bit different.

Jeff: Yeah, it’s similar to how I functioned at x.ai where my role was to find a way that we could organizationally succeed at building technology that has never really existed before. As part of that I’ve tried to carve out space to do open source work that allows me to explore what I think are important areas in AI development, and I still think that Elixir has a role to play within AI and ML [Machine Learning] that I don’t think is obvious to everyone.

In the world that I live in, where I’m constantly thinking about highly concurrent conversations occurring across a large range of different modalities and on different platforms and languages, whether it’s voice or text, my world’s filled with challenges where I would like to have the sort of tools I had when building on the BEAM.

I think that most ML developers are working without tools that have that ability. I’ve been pressing on that gap in open source, asking how I can open that crack a little bit wider. I feel like I’m a member of two tribes that don’t actually talk very much: the Elixir/Erlang world, and this world of deep learning AI folks building things in Python and C++ who have to deal with problems of concurrency, high availability, distribution, and all those sorts of things.

Andy: When you frame it as a lot of concurrent conversations and processes then yes, it’s easy to see Elixir or Erlang playing a role there, which is absolutely different from the typical image of deep learning guys digging into Python. It’s interesting to hear.

Jeff: Yeah, yeah, I gave a talk last night where I tried to speak to a community that spanned Scala folks, Elixir folks, and ML folks, and I tried to focus on some of these tooling similarities, to bring people together a bit more.

I think there’s a sort of mismatch in focus that the ML community often has, one that kind of ignores this problem: we in ML have always focused on one part of our architecture above others. This is something I talk about in my book, and something I’ve discussed with Sean Owen, who’s really a leading figure in ML architectures, founder of Myrrix, creator of Oryx, a long time Mahout committer, and long time head of data science at Cloudera. He and I have both had this experience, and I talk about it in my first book: ML development teams often don’t think about what to do with an ML model once it’s published to production. Instead we’ve historically spent so much time thinking about “how do we train it,” “how do we actually learn from data,” which is a really important, challenging problem, but it’s not the only one.

We actually want to serve these models, in production, to a large range of users, which leads us into those concerns around concurrency and availability. There’s this sort of misplaced emphasis on training over serving (or inference).

Issues of where in the system we should focus our development have been a big part of my focus in my professional work as an ML engineer. This concern led to my work designing example reference architectures for my book, Reactive Machine Learning Systems. This is what I’m really striving to do now: to build things that I haven’t seen other people build, around really mating up the world of deep learning and bleeding edge ML techniques to those real world needs about what happens after you’ve learned the model. How do you work with AI and a large, active userbase? How do you expose valuable functionality and guard against problems?

Andy: Today, people think AI systems are computerized assistants, digital assistants, like Hey Google, Siri, and Alexa. I would guess those are serving pretty large user bases… I don’t know how much you know about their actual production systems and what they do for…

Jeff: Yeah, so I’d say that some things can definitely be taken as a given. Whether we’re talking about stacks that include a vocal component, like the smart speakers, or things that are more about pure messaging, at that level of language understanding we’re definitely using deep learning throughout the field. It’s been critical to getting commercial grade automatic speech recognition and speech synthesis, giving our speakers the ability to talk. But it’s also absolutely crucial to having any sort of conversational AI, even in text.

Deep learning is at the heart of that, even if it’s not strictly speaking the only technique that solves everything. There are still more niche components, things like conditional random fields, which come up in some sub-problems within NLP [Natural Language Processing]. But DL [deep learning] models are used across the industry, big tech and small tech alike.

Even if you’re a scrappy one-man startup, you’re still probably grabbing some models and running them via TensorFlow, or grabbing something off of GitHub. If you put yourself in the position of someone who’s not Google, presumably you don’t have armies of engineers…

Andy: …most people don’t…

Jeff: …to build these previously unheard-of systems with incredible availability surrounding them, to serve at scale. Some of this is actually really hard to do. You see a simple IPython notebook that trains one model and shows you how to use it. It’s just a terminal session where one user utterance, a string of a sentence, is sent into the model, and it returns a sentence score and extracts out the entities.

That’s not your real world where you’re trying to keep your customers happy. You need to be running a high availability service to do that, and the open source tooling for model serving is pretty weak.
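
To make that gap concrete, even a bare-bones serving endpoint means standing up something like the following sketch, which assumes Plug with Cowboy and Jason, and a hypothetical MyModel.predict/1; a production service would add supervision, batching, and monitoring on top.

```elixir
defmodule ServingRouter do
  # Minimal sketch of an HTTP prediction endpoint; MyModel.predict/1 is hypothetical.
  use Plug.Router

  plug Plug.Parsers, parsers: [:json], json_decoder: Jason
  plug :match
  plug :dispatch

  post "/predict" do
    %{"utterance" => utterance} = conn.body_params
    prediction = MyModel.predict(utterance)
    send_resp(conn, 200, Jason.encode!(prediction))
  end

  match _ do
    send_resp(conn, 404, "not found")
  end
end

# Started under a supervisor, e.g. with Plug.Cowboy:
# {Plug.Cowboy, scheme: :http, plug: ServingRouter, options: [port: 4000]}
```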

There’s more progress from some of the cloud vendors who actually try to support this; there are model serving platforms available from Google, Microsoft and Amazon. And they’re useful, they’re a start down that road. Most ML systems I’ve worked with in the real world are more complex and more baroque than any of the building block uses you’re going to be able to grab from one of these cloud vendors. So even when they take some of that pain off your plate — like maybe they help make model training work a little bit better — there’s all sorts of real work you’re going to need to do that’s your engineering team’s problem. In which case, I feel like the current state of the art in working with deep learning frameworks doesn’t make anyone’s life easier.

The choice of Python as the lingua franca for the user APIs has resulted in pretty poor discoverability of the proper use of parameters when interacting with DL frameworks. Without a static type system I can’t really ask, “What is an appropriate call to this method? Can I only pass in values between 0.0 and 1.0, or could I put in a 1.2?” In fact, in most Python DL APIs you discover that by trying it yourself, or maybe by reading the docs, if there are docs.

Andy: [laughing] …if there are. Right, right…

Jeff: There’s a reason why the bleeding edge looks like this. This isn’t bad engineering; this is moving forward the capabilities of computer science, a human’s ability to work with technology that imitates human intelligence. The bleeding edge has these sorts of properties.

Andy: There’s a reason it’s the bleeding edge.

Jeff: Right, but what can we do? Right? A static type system isn’t the only way to solve that. Another way is to be able to respond to failure, to look at what would happen if we could let it crash. That’s the direction I’ve been trying to plug away at. And I’ve seen some ability to mate up those worlds of uncertainty: what my Python implementation might be able to handle, what it can do, what’s gonna happen half an hour into the training cycle when it hits a value it didn’t expect to see.

Those supervision mechanisms that descend from Erlang and OTP can solve some of those problems. They can give us the ability to continue to achieve the part of the mission which is still achievable: keep learning, keep serving the users, who can keep passing us useful data. This is achievable, but I think this is a technique that is not being widely exploited. I think most folks who are in this situation are trying to hack it together with Docker and Kubernetes, which give you very little ability to reason about these things at the level of application logic, because those are infrastructural tools.
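
A hedged sketch of that let-it-crash approach around a fragile external training run; the worker, the train.py script, and the supervision layout are all hypothetical stand-ins.

```elixir
defmodule Training.Worker do
  # Hypothetical worker: wraps an external Python training run and crashes on failure,
  # leaving recovery to the supervisor rather than to defensive code.
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  def init(opts) do
    send(self(), :train)
    {:ok, opts}
  end

  def handle_info(:train, opts) do
    case System.cmd("python", ["train.py"], stderr_to_stdout: true) do
      {_output, 0} -> {:noreply, opts}
      {output, code} -> exit({:training_failed, code, output}) # let it crash
    end
  end
end

# Somewhere in the application's supervision tree:
children = [Training.Worker]
Supervisor.start_link(children, strategy: :one_for_one)
```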

The fact is, some of these are ML problems, maybe even problems specific to your domain, something about conversational interactions, and you want to be able to handle them in your code, make use of your business rules, and decide how to respond to particular failures. I think Elixir has been a great tool for allowing me personally to explore this, and I would like to see if there are ways to build more general, reusable tools that apply the unique capabilities of that platform to the challenges of DL.
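
Circling back to the “0.0 and 1.0” question above: even without a static type system, that kind of constraint can be made explicit at the call boundary with guards. This is purely illustrative and not from any real framework.

```elixir
defmodule Hyperparams do
  # Illustrative only: the valid range lives in the function head, not in the docs.
  def learning_rate(rate) when is_float(rate) and rate > 0.0 and rate <= 1.0,
    do: {:ok, rate}

  def learning_rate(rate), do: {:error, {:invalid_learning_rate, rate}}
end

Hyperparams.learning_rate(0.01) #=> {:ok, 0.01}
Hyperparams.learning_rate(1.2)  #=> {:error, {:invalid_learning_rate, 1.2}}
```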

Andy: So in working with Elixir recently, are there any kind of new surprises or new elements that you’ve come across; maybe new things in recent releases of the language, or new bugs; surprises that you didn’t expect?

Jeff: On a day to day basis I love the formatter. [laughing] It really makes me happy. Because it’s such a simple little thing but I’m glad we could reach some sort of reasonable agreement on this. There’s a baseline and it keeps us happy. And it’s a somewhat different direction I think than Python and Go both took, and it’s workable for me. I’m glad I have it in my codebase and it’s enforced by CI builds that I set up for myself. That’s great.
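
For reference, one common way to wire that up is a Mix alias that the CI job runs; --check-formatted is the real mix format flag, while the alias name and the rest of the pipeline are just an example.

```elixir
# mix.exs (sketch): expose the alias via `aliases: aliases()` in project/0,
# then have the CI job run `mix ci` so unformatted code fails the build.
defp aliases do
  [
    ci: ["format --check-formatted", "test"]
  ]
end
```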

At the level of higher-level functionality, I personally am still getting my head around the proper way to use tasks. I think they’re pretty core to some of the problems that I like to work on, and I have a lot of experience working with the plague of different task implementations within the Scala community. This is something that, in Scala land, we worked on for a long time, and there are a lot of different competing task implementations with different properties. That community fragmentation, in combination with a very typeful, statically typed workflow, results in a lot of incidental complexity for developers simply trying to abstract computation over time in tasks.

I find the Elixir implementation so far pretty productive, while at the same time I’m still personally working down my learning curve on how to employ it correctly with supervision mechanisms. I would expect that if someone took a look at, for example, the Galápagos Nǎo repo, they could probably file a decent issue or PR to improve the way that I work with tasks and supervision. But I think this is important stuff to do, and this is the harder stuff. I think this is a great reason to adopt toolchains that are working on these problems, because a lot of real world problems that I’m familiar with have the shape of a task requiring supervision of some sort. This is a powerful technique, and it’s great to see what I would call focussed development from the language community on a given implementation that we all agree we want to improve and invest in further. Not that tasks should be rigid, but that we should agree that tasks are tasks and let’s not have 12 types of tasks that don’t interoperate. That’s made me happy.
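
As a rough example of the task-plus-supervision pattern being described, and not code from Galápagos Nǎo itself, here is one way a supervised inference task with a timeout can look; MyModel.predict/1 is a stand-in.

```elixir
# In a real application the Task.Supervisor would live in the supervision tree;
# starting it inline keeps the sketch self-contained.
{:ok, _sup} = Task.Supervisor.start_link(name: Inference.TaskSupervisor)

task =
  Task.Supervisor.async_nolink(Inference.TaskSupervisor, fn ->
    MyModel.predict("set up a call with Andy for Friday")
  end)

# Wait up to five seconds, then shut the task down rather than hanging the caller.
case Task.yield(task, 5_000) || Task.shutdown(task) do
  {:ok, prediction} -> {:ok, prediction}
  {:exit, reason} -> {:error, reason}
  nil -> {:error, :timeout}
end
```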

Andy: Have you played at all with some of the GenStage/Flow stuff, or maybe it doesn’t fit into your use cases?

Jeff: Yeah, that looks interesting to me. I think that’s closer to some of the more data engineering tasks in model learning pipelines and things like that…

Andy: That’s kind of what I was thinking of… if you’re feeding a lot of data into a model it potentially would fit in there but I’m not sure …

Jeff: Yeah, I would say that right now there are use cases within the sorts of ML problems that I work on, and I think it’s something that would be worthwhile to explore. I haven’t gotten a chance to do as much with it as I would like to. I do like the richness of the range of different ways that we can think about our data flow within the Elixir toolchain. I think that makes a great argument for people who are trying to understand how to mate the things which are offline and heavily compute intensive with the online side, which is very concurrency and latency focussed.
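
A hedged sketch of the kind of Flow pipeline that fits that data-engineering side, assuming the flow Hex package plus hypothetical featurize/1 and submit_batch/1 helpers.

```elixir
# Stream raw examples, clean and featurize them in parallel with Flow,
# then hand mini-batches to whatever actually trains or scores the model.
File.stream!("training_data.tsv")
|> Flow.from_enumerable(max_demand: 500)
|> Flow.map(&String.trim/1)
|> Flow.filter(&(&1 != ""))
|> Flow.map(&featurize/1)      # hypothetical: raw line -> feature vector
|> Stream.chunk_every(128)     # Flow implements Enumerable, so Stream/Enum can consume it
|> Enum.each(&submit_batch/1)  # hypothetical: pushes a batch to the trainer
```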

Those are definitely some of the challenges that I encounter in trying to build ML systems with other toolchains like the JVM and Python. And there can be pretty dramatic switches in how much the toolchain supports some of those workflows when you’re using tools that are really focussed on one of those two modes, you know, either offline batch mode or realtime. But I feel like Elixir is starting to show a lot of the properties of what I would call good front-end engineering; modern JS toolchains are going in this direction as well, really thinking about abstractions for data flow that we can use consistently across contexts, so we can worry a little bit less about changing our programming model when we change the context in which we’re actually performing those data transformations.

Andy: You’re playing with Elixir and investigating Elixir; is your team actually using it now in any production?

Jeff: So at IPsoft right now that is not the toolset we’re using. The company’s quite old, and the product is significantly younger, but the focus right now has been on serving very large enterprise customers, so the methodology that we’ve used begins with a lot of classical enterprise Java techniques, which comes with a lot of the limitations that you might imagine.

In particular, one thing that I’ve found to be true both at IPsoft and x.ai, and in talking to other friends with different toolchains, say fullstack JS, and to the folks at Hugging Face [they make a great conversational AI for teens and tweens, it’s a sort of AI friend]: almost everyone’s in this position where we have no alternative but to use the latest and greatest Python DL tools, and then deal with the consequences of trying to incorporate that into a live production application.

And so it doesn’t really matter where you start or what you’re building, this Python issue is becoming pervasive. Very few people are actually working on real solutions allowing us to work across language toolchains and to use tools which allow developers to use the right tool for the right job.

One of the things I hope to talk about at EMPEX is the importance of using open interchange formats that allow us to break down some of these language barriers, because I’m not really an Elixir zealot, or a Scala zealot; I’m a guy who likes to build things. I want to use my full range of capabilities and the entire range of capabilities that the tech community has created. So I’m just trying to find ways to break down walls and build better things, things that can be too hard today for reasons of incidental complexity.

There are folks who are moving in that direction. Two things I’m going to be talking about that open the door to that are 1) Apache MXNet, which is a fairly new DL framework being supported primarily by Amazon right now, and 2) another open source project I’m excited about, ONNX, the Open Neural Network Exchange format. There’s pretty broad cooperation around the industry in trying to get DL frameworks and technologies to interact. So with ONNX, you see Amazon, Facebook, Microsoft, Baidu, Nvidia, and all these other companies collaborating on finding ways to use different DL toolchains and have them pass data back and forth using language-agnostic schemas. ONNX at its simplest level is just a protobuf schema that you can build code against in different languages and different toolchains.
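
At that protobuf level, reading an ONNX model from Elixir can look roughly like the sketch below, assuming you have generated decoder modules from onnx.proto with a protobuf library such as protox or exprotobuf; the exact module and field names depend on the generator you use.

```elixir
# Sketch only: Onnx.ModelProto here stands for whatever module your protobuf
# generator produces for the ONNX ModelProto message.
{:ok, model} =
  "mnist.onnx"
  |> File.read!()
  |> Onnx.ModelProto.decode()

IO.puts("producer: #{model.producer_name}")
IO.puts("graph nodes: #{length(model.graph.node)}")
```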

Getting back to the previous technology, Apache MXNet is a polyglot DL framework. It’s starting off trying to find a way to build DL interfaces not just in Python, but also in Scala, in Julia, in R, in Go, so that we can start to have this world where all developers can become ML developers. That’s the world that I think is definitely happening, though we’re still in the early days of people doing the hard work of opening those doors up. I’ve seen a small amount of activity around opening up ML technology to JS, and that’s the future…

Andy: Everything can be done by some Node module; Node can do anything, right? [laughing]

Jeff: Yeah, it’s all so much harder once you get into a situation like: I want to implement that bleeding edge paper, it just achieved something no one else achieved, it was posted last month, and there’s one reference implementation, it’s in Python, and it uses TensorFlow. We can interoperate across this. These are solvable problems; they’re not easy, but they’re worthwhile because this broadens the community.

This makes it possible for ML and DL to not be this secret priesthood of folks in only one small collection of companies and academic institutions. These are capabilities anyone in the field of CS has the ability to use, and this greater democratization through sharing data and tooling, through simple interop mechanisms, is going to have a major impact on the shape of products we’re going to be able to build in the future, increasingly with small scrappy teams of folks who have bright ideas and just want to run with them.

Andy: Looking at the wide range of submissions we got for talks, there was a lot of interest in your topic, but one of our concerns was “yeah, but is it going to be a lot of Python and a line or two of Elixir to show you how to call all this Python?” This is an Elixir conference, so… this idea of democratizing and opening up access to DL tools and toolchains…

Jeff: Yeah, I think it’s an important topic and after this interview right now, I’m going to go talk to Amazon’s DL team, who are the primary financial sponsors of the Apache MXNet project.

They are the best example of anyone trying to actually do this, to say there’s a world full of developers using a whole bunch of tools for entirely different reasons, many of them good ones. How can we build scalable technology with the resources of Amazon, that embrace the world of developers, not just folks who decided that it’s ok to solve all of these problems in Python?

I think that this idea is not evenly distributed. Not everyone is focussed on this part of how we can move DL forward; but if you look at the things people say we want to build in the future, like intelligent devices, IoT sorts of things, smart cameras… to get to a world where we actually see robots doing real work on a daily basis, we’re gonna have to use a broad range of tools.

These are hard problems. If you talk to folks working on embedded systems, or applications like drones, which need to do things like absolutely stay in the air, you don’t use the same tool for that as you might use for a data collection form on a website. We need development toolchains and workflows that allow us to approach that stuff. Eventually we’re going to get to the point where folks with the relevant skills to solve those domain problems are using tools appropriate to embrace the most powerful capabilities that come out of AI as a field. It’s not a niche, it’s not a specialty. It’s the same as databases: you don’t encounter folks who treat databases as something specialist, “oh no, that’s a different field, other people know stuff about databases, I’m doing mobile.” Everyone does something with databases, everyone can put their data somewhere and get it back, and you have opinions about how to do it well.

Andy: And I think you see that, historically, through a lot of CS fields — at first there are db specialists, other specialists, there are ops teams; then devops comes along, and everyone knows databases… and eventually ML is going to go that way at some point.

Jeff: Yeah. Maybe we won’t all be writing academic papers in our free time.

Andy: Probably not.

Jeff: But we should be able to autonomously learn from the data that comes into our systems, and learn how to use it to make decisions that we can encode. This stuff has been in the works for more than 50 years now. This is an important goal of our field and it’s finally coming to fruition. The maturity of that is something that’s going to benefit us all, in ways that are not all foreseeable now, but it’s important that we fully embrace it as the concern of the whole technological community.

AI’s going to change a lot of things; it’s going to be the most effective solution to a lot of people’s problems, so as a software engineer, as someone who cares about technology, I want to work on that. I want to see how that can help me do things better. As a manager who works in a large organization, I want to understand how I can help others work down that learning curve, master technologies. I absolutely don’t want to feel like there’s a specialist group within a larger team who has this knowledge that you couldn’t possibly get anywhere else; that these are the only people who can solve this particular subproblem. Because this is all still software.

This is all part of our shared responsibility when developing a solution; we can all build ML systems. There’s no guardian at the gate, nothing you have to do first. You can go and write a DL model to recognize handwritten digits. It’s possible, all the tools are out there, and I think we’re only going to make this easier for everyone.

Andy: We’ve covered this a bit, but obviously for EMPEX one of our goals is to enhance, build, and enrich the Elixir community. You’re already in the position of trying to bridge communities and grow them together. What would you like to see worked on, built, or focussed on in the Elixir community?

Jeff: Hmm… yeah, it’s a good question. When I think about Elixir, I think the strengths are so strong… the things that Elixir does well have been pretty much amazing since the first time I saw it. The high productivity of working with mix as a build tool, Phoenix blew me away, and I was really impressed with all the things that were so easy to do and so productive, and on and on, you know, working with messaging systems, and the high availability and supervision stuff is great.

I guess from where I sit, some of the opportunities for us, the ways to improve, are those things that make particular toolchains popular for specific domain problems. I think about a lot of numerical computing use cases. This is an area where we continually develop new tools, new languages, new frameworks, and we will keep doing so forever. It’s not a great story for Erlang and Elixir right now.

Andy: No, it’s not.

Jeff: But I don’t think that it has to stay that way forever. Part of the way that Python took as much of the market share for numerical computing as it did is this thing called Cython, a dialect of Python that compiles down to C and produces extremely efficient native code for numerical computations. Which is interesting, because it means the reason Python is so good for working with data is not really a feature of Python at all. Eventually more sophisticated tools get built on top of it, like NumPy, SciPy, and pandas, then scikit-learn, and TensorFlow, and on and on. There’s this virtuous cycle that’s occurred there.

But numerical computing is a really important problem that occurs not just here in the financial district, where people want to crunch up stocks and bonds and make money — my previous field was bioinformatics, and we did a whole bunch of data crunching there to do things like figure out how cancer works and how humans are different, and to see if we can make life better. There are important numerical computing problems out there and many of them are still quite poorly served by the tools that we have today.

ML is a little bit spoilt for choice as long as you stick to the Python toolchain, but there’s a rich domain of numerical problems where I would love to have a broader range of tools, particularly ones that have all the incredible properties of BEAM technologies, coupled with the high productivity and conceptual coherence of working with Elixir.

That’s something I think about a bit, and there are a few folks who’ve already tried to pave the way in this respect. There’s a bioinformatician somewhere in Europe whose name I forget who built a language called Cuneiform; it’s meant to be bioinformatics glue code built on top of Erlang. What it does is deal with the fact that all these bioinformatics workflows actually use a bunch of command line utilities: a bit of R here, a bit of Perl there, a bit of bash there. It provides a nice way to glue those together in a sort of DSL built on top of Erlang. And this is still the reality for people working in biomedical fields. Their toolchains are so fragmented and they have huge, important datasets. They’re getting great genomic data…

Andy: …twined and duct taped together…

Jeff: Right, that situation makes how the internet works seem elegant by comparison. If you’ve ever looked at why every browser pretends to be Mozilla or something, it’s like that, just times 100, with weird arcane formats, or some guy’s paper from 1997. Those things are pervasive within bioinformatics because so much of it occurs within academia, and the profit motives aren’t there in the same way that they are for something like serving ads.
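
Not Cuneiform itself, but as a sketch of what that style of glue can look like in plain Elixir: chain the command line tools per sample and let the runtime handle the concurrency. The tool and file names here are made up.

```elixir
defmodule Pipeline do
  # Each sample flows through a chain of external tools; any non-zero exit
  # short-circuits with an error for that sample only.
  def process_sample(sample_path) do
    with {_, 0} <- System.cmd("bash", ["qc.sh", sample_path]),
         {_, 0} <- System.cmd("Rscript", ["normalize.R", sample_path]),
         {_, 0} <- System.cmd("perl", ["annotate.pl", sample_path]) do
      {:ok, sample_path}
    else
      {output, code} -> {:error, {sample_path, code, output}}
    end
  end
end

"data/*.fastq"
|> Path.wildcard()
|> Task.async_stream(&Pipeline.process_sample/1, max_concurrency: 8, timeout: :infinity)
|> Enum.each(&IO.inspect/1)
```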

There’s a bunch of exciting, truly unsolved, painful problems that exist, especially if you think about what we do with biomedical knowledge. Well, what we do with biomedical knowledge increasingly is we try to build it into other sorts of solutions. I worked in diagnostic technologies for a long time, and there are a lot of possibilities to build useful computing technologies which solve meaningful biological and medical problems and need to have really good properties: that they run forever, that they fail in knowable and consistent ways, that they do many things at once. Those opportunities are also under-served. I think that’s kind of a whitespace I’d like to see the Elixir community attack more, building on the very strong areas that everyone already knows about in Elixir.

Andy: Good answer. Thank you so much for taking the time to talk to us today, Jeff, and we’re excited to see you speak at EMPEX NY 2018!

I really appreciate Jeff taking the time to talk with me. Please don’t forget to get your ticket to the EMPEX conference, to be held on May 19th in Manhattan. Say hello if you see me!
