The AutoArchitect: Keep your AI Architecture Competitive with Open Challenges

Gregory Bramble
IBM Data Science in Practice
5 min read · Dec 16, 2020

This article looks at practical ways to architect automated, general-purpose AI systems and keep them competitive.

The world is moving fast. For those of us working in AI, it sometimes seems to be spinning at warp speed. Every day, more tools emerge to help us do better AI. New libraries are unveiled at a weekly clip, along with new versions, better languages, new approaches, and, with all of that, new APIs. For the reactive, all of this can easily turn into mountains of technical debt. For the agile, however, every day can be a new opportunity to improve and build upon rapidly evolving software frameworks and systems for AI. For many of us, though, “being agile” is a destination to strive towards rather than a clear direction in itself. How is it possible to be agile when there are simply too many new frameworks, ways of working, and just new ‘stuff’ hitting the software landscape every day? We cannot read articles and examine libraries all day; we also have to write code and be productive within the realm of our control.

One unlikely way, which has helped our teams tremendously in my work as AutoAI Architect at IBM Research AI, is to compete. “Compete, you say? What does that have to do with APIs?” Well, as I have learned over the years, competing is indeed quite relevant to building and sustaining quality AI architectures.

Rapid Prototyping++

Photo by Jan Vašek on Jeshoots

Especially in research and development, one of the most popular ways to develop software is rapid prototyping. In rapid prototyping, one researcher, or a small group of researchers, builds a ‘quick and dirty’ implementation of a novel software system. The goal is to prove the technology quickly, perhaps for a research paper or as a proof of concept for a potential product. This method is popular because it is fast, but it often falls short in two major ways. First, the team does not take the time to plan a solid architecture; they are usually racing a deadline, so they use whatever libraries and other components they can find to get the job done quickly. Second, and relatedly, the team often lacks the experience in the given software field to know which standards and libraries are the right ones for the job.

This is where I propose that the team should compete. The team should begin by searching conferences and other competition hosts to see whether there is an open challenge in the domain of their software system. Oftentimes, especially in AI, they will find that there are many. Then, sign up and try to win. After this, something quite surprising will happen. The first shortcoming of rapid prototyping mentioned earlier will often disappear, because the team will engage the competition community to learn the best libraries, and will probably pick up the most cutting-edge APIs from the competition itself. And that is not all. The second shortcoming will also disappear, because the team will quickly learn best practices in its effort to win. By the end of the competition, win, lose, or draw, the team comes out ahead: it will have a competition result, a prototype, and enough knowledge of the software domain to go forth and build a great system.

Real-World Benchmarking

Photo by Austin Distel on Unsplash

Much of a researcher’s time is spent trying other researchers’ approaches in order to understand the state of the art and compare their own ideas to the best ideas in the field. In general, I refer to this practice as benchmarking, though it can take different forms. In benchmarking, a researcher tries to prove to themselves that the system under investigation works. If the subject of the benchmarking is a novel software system, this is important for two major reasons. One, it can mean the difference between being published at a major conference or not. And two, it can mean having the confidence to push forward and try to turn the system into a great product of the future, rather than leave it on the shelf.
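To ground the term, here is a minimal sketch of what everyday, in-house benchmarking can look like. The dataset, models, and metric are illustrative stand-ins (scikit-learn’s digits dataset, a majority-class baseline, and a random forest), not anything specific to the systems discussed in this article:

from sklearn.datasets import load_digits
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Load a small, standard dataset as a stand-in for the team's real evaluation data.
X, y = load_digits(return_X_y=True)

# Candidate systems to compare: a trivial baseline and a stronger model.
candidates = {
    "majority-class baseline": DummyClassifier(strategy="most_frequent"),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# Score every candidate on the same data, splits, and metric so results are comparable.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")

A public challenge plays the same role, except that the comparison set is every other team’s best effort rather than whatever baselines the team happened to choose for itself.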

These issues raise the question: what is the best way to benchmark? Anyone who starts to look will find more benchmarking libraries than they could ever want. Therefore, I propose a different and more focused way to benchmark. Yes, you may have guessed it: compete!

Competing in a challenge relevant to the team’s software domain will produce real-world results, typically against the best researchers in the field. If the competition is well run and reputable, a good result is indisputable evidence that the team’s approach works. This is invaluable, both for validating the approach and, very importantly, for giving the team the confidence to venture forward with the system. The team will learn more than it ever expected, and it will come out of the challenge with newfound knowledge and, along with it, confidence in the system.

Have Fun!

Photo by Element5 Digital on Unsplash

Any job suffers from its share of monotony, and software research can often fall victim to this common ailment. Creating great software leads a researcher to become extremely detail-oriented, logical, and focused on making the software correct and bug-free. These are all great qualities, but sometimes a team needs extra motivation to keep perspective and enjoy the work.

Competitions are extremely fun. The spirit of competition will help motivate and unify the team. The team will quickly discover loads of previously untapped creativity and be impressed by how willing they are to try different ideas. They may have some late nights reminiscent of college days. And perhaps the best part comes last: few things beat the excitement and anticipation of learning the final results. A great result will boost morale and give the team something to stand behind proudly, together.

We live in an agile world. There may not be any single idea, process, or way of working that can help us keep up with it all. That is a reality AI Architects must face and deal with in different ways. One very important way is to compete. Competing in open challenges may be just the thing your team needs to be more productive, and to bring your AI Architecture to the next level.


Gregory is a Research Software Engineer and AutoAI Architect at IBM Research AI in Yorktown Heights, NY.