AI In Practice: Part II of III


In Part I, we went over how to think about the problem, the data, and the solution when doing AI in practice.

Continuing on, here we’ll touch on some more general topics: the messiness and uncertainty of AI projects, the bigger picture, and learning for AI in practice.

4) The Messiness and Uncertainty of AI Projects

Guiding Principle: Embrace the mess and uncertainty. Work to reduce them.

Embrace the Uncertainty and Mess

The real world is messy. Uncertainty and a tendency towards increased entropy are natural.

The implication is that when doing AI in practice, we’ll find that:

i) The data we have to work with is messy. It’s seldom put on a platter for us. One reason is that, up to that point, the data has typically served only the purpose of running the business. As such, it’s usually not AI-ready off the bat.


Also, over time, changes in business strategies, rules, or ways of doing things manifest themselves in the data, sometimes in non-obvious ways.
For instance, maybe the data shows that people’s behavior in a sales funnel changed at some point in time, but we might learn later that it was due to a change in the funnel itself!

ii) The software systems and/or the business process the AI needs to fit into are messy. Over time, interacting with the real world results in them accruing idiosyncrasies that are necessary for operation.

For example: an e-commerce business might have varying inventory constraints for different products, or might be unable to ship certain products to some user segments. So if we are building a Recommender System, we may need to take that into account as well.
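To make the idea concrete, here’s a hypothetical sketch of applying such business rules on top of raw model output. The catalog, scores, and constraint fields (`stock`, `no_ship_regions`) are all illustrative stand-ins, not part of any real system:

```python
# Hypothetical sketch: applying business constraints to raw model output.
# The catalog, scores, and constraint fields are illustrative stand-ins.

def apply_constraints(raw_scores, user_region, catalog, top_k=3):
    """Filter model recommendations by per-product business rules."""
    eligible = []
    for product_id, score in raw_scores:
        info = catalog[product_id]
        if info["stock"] <= 0:                      # inventory constraint
            continue
        if user_region in info["no_ship_regions"]:  # shipping constraint
            continue
        eligible.append((product_id, score))
    # Keep the top_k highest-scoring products that survived the filters
    eligible.sort(key=lambda pair: pair[1], reverse=True)
    return [product_id for product_id, _ in eligible[:top_k]]

catalog = {
    "p1": {"stock": 5, "no_ship_regions": set()},
    "p2": {"stock": 0, "no_ship_regions": set()},   # out of stock
    "p3": {"stock": 9, "no_ship_regions": {"EU"}},  # can't ship to EU
    "p4": {"stock": 2, "no_ship_regions": set()},
}
raw_scores = [("p1", 0.9), ("p2", 0.8), ("p3", 0.7), ("p4", 0.6)]

print(apply_constraints(raw_scores, user_region="EU", catalog=catalog))
# → ['p1', 'p4']  (p2 is out of stock, p3 can't ship to the EU)
```

The point is that the model’s scores are only one input; the final recommendation also has to respect the system’s operational rules.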

iii) Organizations themselves tend to be messy simply because they involve humans. (We tend to be intuitive and irrational sometimes). Plus, it’s difficult to build a well-oiled-machine of an organization. There are kinks here and there.

For instance, the marketing team and the data team might be siloed such that it’s hard to acquire some data related to the marketing campaigns being run.

All these are, to some extent, inherent in any software endeavor. But for AI systems, there’s the additional uncertainty around the models themselves. We cannot chart the path to completion of a project before we get started. It’s mostly experimental.

The obligatory ML xkcd

At any rate, we need to first embrace the fact that we’ll run up against uncertainty and messiness.

Don’t contribute to the mess

In working to reduce the mess, the first step is to not contribute to it.

For that:

i) Stay organized and be diligent. There have been quite a few times when I failed to document and keep track of nuances related to the project (eg: something a client mentioned, a detail about the data, experimental setup, etc) which came back to bite me later on. These things are under our control. We need to make sure they don’t contribute to the uncertainty and mess.

Eg: Have a well-defined project structure and try to stick to it.

ii) If necessary, lean toward over-communicating. Ideally, we get the communication level just right. But when something is on the border, it’s better to lean towards over-communicating.

Of course, the way we communicate is also important but that’s a whole different topic. In general, we need to think about what we are trying to do (inform, update, ask) and frame our message accordingly. For instance, if we feel that something might be worth mentioning but isn’t critical, we might phrase the update such that there’s no inherent expectation of a reply from the other party.

iii) Make assumptions explicit. We are sure to make tons of assumptions during the course of the project. It’s important to make them explicit so that a) everyone is on the same page as to what assumptions are being made b) there’s a historical trail of how certain decisions were made. It helps in retrospection.

For example: “Currently the way we are modeling the time series for demand assumes that the demands across the outlets follow similar trends”.
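One way to make such an assumption more than a comment is to encode it as an explicit, testable check that fails loudly when the data stops supporting it. The correlation threshold and the demand numbers below are invented for illustration:

```python
# Hypothetical sketch: turning a stated modeling assumption ("demand
# across outlets follows similar trends") into an explicit check.
# The threshold and the data are illustrative, not from a real system.

def pearson_corr(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

outlet_a = [100, 120, 140, 130, 160]  # weekly demand, outlet A
outlet_b = [80, 95, 115, 105, 130]    # weekly demand, outlet B

corr = pearson_corr(outlet_a, outlet_b)
# ASSUMPTION: outlets trend together; revisit the shared model if not.
assert corr > 0.8, f"Outlets no longer trend together (corr={corr:.2f})"
print(f"assumption holds: corr={corr:.2f}")
```

Now the assumption lives in the codebase as well as in the docs, and a future data shift surfaces as a failed check rather than a silently degraded model.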

iv) Finally, measure, measure and measure some more. In general, the numbers are our friends. They help reduce confusion and uncertainty. Eg: Are we doing better than before (on the metric of interest)? How fast does the system run when at max capacity?
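As a minimal sketch of the first question, we can always compare a candidate model against the current baseline on the metric of interest before claiming an improvement. The metric (mean absolute error) and the numbers are made up for the example:

```python
# Illustrative sketch: compare a candidate against the current baseline
# on the metric of interest. The metric (MAE) and numbers are made up.

def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [10, 12, 9, 15]
baseline_pred = [11, 14, 10, 13]   # e.g. "predict last week's demand"
candidate_pred = [10, 13, 9, 14]   # the new model's predictions

baseline_mae = mean_absolute_error(y_true, baseline_pred)
candidate_mae = mean_absolute_error(y_true, candidate_pred)

print(f"baseline MAE:  {baseline_mae:.2f}")   # 1.50
print(f"candidate MAE: {candidate_mae:.2f}")  # 0.50
print("better than baseline?", candidate_mae < baseline_mae)  # True
```

A number like this settles the “are we doing better?” question far faster than a discussion does.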

Be Agile

The MLE Loop. Source: How to deliver on Machine Learning Projects

This can be a post on its own but the takeaway is to focus on iterating, keeping the feedback loop short and shipping intermediate outputs.

But what can be intermediate outputs if we cannot be certain that the models we build will be production-ready?

In “A Manifesto for Agile data science”, Russell Jurney talks about it as the need to “Get meta”:

If we can’t easily ship good product assets on a schedule comparable to developing a normal application, what will we ship? If we don’t ship, we aren’t Agile. To solve this problem, in Agile data science, we “get meta”. The focus is on documenting the analytics process as opposed to the end state or product we are seeking. This lets us be Agile and ship intermediate content as we iteratively climb the data-value pyramid to pursue the critical path to a killer product.

In essence, the results of our experiments, whether successful or not, are our intermediate outputs. Maybe we don’t have a model yet but managed to glean some actionable insight. (“Hey, looks like most of your churn for this segment of users happens in the second month”). That can be valuable in and of itself.

5) The Bigger Picture

Guiding Principle: Focus on the details but keep an eye on the big picture.

The bigger context of the problem

When working on an AI problem, it’s easy to get lost in the details and not take a moment to step back. However, the task at hand is almost never an isolated problem. It can be linked in some ways to other problems that need solving.

Source: How Uber scaled its real-time infrastructure to trillion events per day

Of course, it may not apply to all projects, but it’s worth keeping an eye out for the network of problems. It might result in us gaining a new perspective on why the problem needs solving in the first place, or how we should approach it. Or, we might end up finding some other related problem that we may be able to solve.

Example: we are working on a demand forecasting system but we find, after some quick exploration, that the capability we are building also allows us to predict churn! Could it be a direction to pursue? That could be brought up as a topic for discussion.

AI component as part of the bigger system

The AI component is only one part of a bigger system. Again, it’s easy to fall into a box where we are only looking at the AI component and not the bigger picture of the system. Some questions we might ask:

  • How does it interact and interplay with the other components?
  • Does the component align with the bigger motive?
  • How is it actually used when deployed?

AI component as part of the bigger system of people

Furthermore, there’s a system of people that the AI component needs to fit into. We might build a great AI model, but if no one cares or if it’s not usable in some way, then the problem isn’t solved. Some questions to think about:

  • Who are the stakeholders? What are their motivations?
  • Does the project have buy-in from the right people?
  • Who uses the system? What hoops do they have to go through to use the system?

As Daoud Clarke says in his ML guide:

The biggest risk is that you fail to deliver the project …
The next biggest risk is that you deliver a system that doesn’t get used.

Ensure that there is a buy-in for what you are delivering from as high up in the business as possible, and from those (people) that will actually be using it (the system).

Treat the AI component as software

At the end of the day, the AI component is a piece of software (albeit Software 2.0). As such, we need to treat it like any other piece of software: keep it under version control, test it, review it, monitor it in production, and keep it maintainable.
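Among other things, that means the code around the model can be unit-tested like any other code. As a minimal, hypothetical sketch, the `normalize` function below stands in for a step of a feature pipeline; the function and its expected behavior are invented for illustration:

```python
# Minimal sketch: the feature pipeline feeding a model can be
# unit-tested like any other function. `normalize` is invented
# for illustration.

def normalize(values):
    """Scale a list of numbers to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if lo == hi:                       # guard against a constant column
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Plain-assert "tests"; in a real project these would live in a test
# suite (e.g. pytest) and run in CI on every change.
assert normalize([0, 5, 10]) == [0.0, 0.5, 1.0]
assert normalize([3, 3, 3]) == [0.0, 0.0, 0.0]
print("all checks passed")
```

Tests like these won’t tell us whether the model is good, but they do catch the mundane breakages that otherwise show up as mysterious metric drops.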

6) Human Learning

Guiding Principle: Keep Learning

Set a high learning rate for yourself

A lot of the details around AI in practice are still new. Plus, the field is moving rapidly. As such, we are constantly coming up against new challenges and ways to solve them. To be effective, we need to keep learning and constantly get better. And quickly at that!

Learning quickly can make a stark difference :P

Pick your battles; employ gradient ascent

The truth of the matter is that we cannot learn everything. There are so many things I don’t know and so many things I’d like to get to but alas, there’s only one of me.

As with the uncertainty, we need to acknowledge this and pick our battles. Mostly it means learning what’s needed now for the project and not trying to learn everything about the field before moving forward. A lot of our learning will be on the go.

Also, take your learning into your own hands. Be deliberate about finding areas you want to level up in.

Optimize your learning

For the long run, it’s better to optimize the learning itself, to learn how to learn. That can have a massive compound effect.

Among other things:

  • Favor doing over pretty much everything else. As Emil Wallner says in How To Learn: “Learning is when you use (emphasis mine) something from your memory”.
  • Employ deliberate practice.
  • Consider your goals and align your learning to them. Do you want to lean towards the engineering side of ML or the research side? Or is there some niche you want to build for yourself?

Understand that it’s not only about the technical skills

Finally, stepping back, one thing that sticks out in this article is that a lot of the topics/skills we covered are “non-technical”. Things like: problem understanding, asking the right questions, staying organized, communication, etc.

That’s to emphasize that AI in practice has as much to do with these aspects as the more technical ones. That can be easy to forget.

As Yonatan Zunger says in Hard and Soft skills in Tech:

… everyone needs to understand and value the human skills: after all, you’re building a system which at the very least includes you.

(On the plus side, a lot of these skills are transferable!)

All these aspects we covered don’t need to be tackled by a single person (that would be a mighty big ask); it needs to be a team effort. Having said that, it’s important for everyone involved to keep them in mind.

In theory, there is no difference between theory and practice; but in practice, there is.

Stay tuned. In the last part, we’ll go over some resources on the topic.



Bijay Gurung

Software Engineer. Knows nothing (much). Always looking to learn. https://bglearning.github.io