Message in a Bottle: Design Principles for AI

Reflections on the current state of AI and some takeaways for the future

Alex Han
4 min read · Mar 19, 2023

It is strange to be studying Artificial Intelligence at this point in history, when it feels like the initial explosion of the field has already begun, but has not nearly reached a plateau; rather, it seems with each passing week AI promises to continue to expand rapidly in scope, power, and ubiquity. The term “AI” itself has become infused with all sorts of connotations about culture, international politics, economics, and the arts. It has become a charged word, capable of eliciting a range of emotions depending on who you mention it to — excitement, intellectual curiosity, capitalistic greed, fear of competition or replacement — but above all, it comes laden with uncertainty.

We are actively building the future, and it seems that even “the people in charge” don’t know what they are doing. Our laws and regulations have yet to grapple with the potential for data misuse or intellectual property theft (e.g. Stable Diffusion). We have no precedent for the kinds of ethical decision-making that autonomous systems require (e.g. Tesla). We can’t decide whether tools like ChatGPT are good for society, but we are sure that they are scarily competent at what they do. It is abundantly clear that AI is growing faster than the ethical, cultural, legal, economic, and artistic understandings that surround it. It is critical, then, that we actively discuss why we want to build the things we do and how AI fits into the greater picture of how we want to live in the world.

So, I would like to propose some guiding questions and ideas for those designing systems with AI, as well as some reflections for those interacting with those systems or those watching warily from the sidelines. Since I somehow belong to all three of those categories, I want this to serve as a sort of letter to my future self to help keep a steady rudder amidst the chaos.

1. AI is not magic

It is a tool, a process, and in most cases, just a buzzword. It is something created and implemented by human beings, and while there may be a “black box” embedded at the heart of an AI system, we should not allow the mystery of AI to absorb the ethical agency of its designers, nor should we consider an AI system infallible. The latter point is especially important for a system like GPT-3, which spits out true information most of the time, but not all of the time. It is easier to believe a lie that is sandwiched between two truths.

2. Beware of the Turing Trap

Is the ultimate goal of AI to perfectly emulate human behavior? Perhaps that is useful in certain cases, but it is not and should not be the objective in many others. Question whether it even makes sense to evaluate AI based on how well it can mimic a human being. Especially in music and art, the focus should be on how AI can augment human capabilities, not replace them.

3. Does ___ even need AI?

Don’t start from the premise that adding AI to something is an end in itself. Not every system needs to have AI integrated within it. There should be some purpose to having AI be part of the design. Is there something that would not be possible without the use of AI? Would that contribution even be meaningful?

4. Focus on how AI impacts people and the world, not whether or not it is “conscious”, “intelligent”, or “creative”

It is too easy to anthropomorphize AI, and (in my opinion) a largely pointless pursuit to debate whether an AI system is conscious, because we don’t even know what it means for us as human beings to be conscious. We should worry less about AI’s metaphysical status and more about how we want and don’t want to use it, and how it will affect the way we live. Remember that humans are still the ones designing and implementing these systems.

5. AI can be beautiful

While what the field of AI most needs at this moment is healthy skepticism, caution, and restraint, it is also worth acknowledging that it is possible to use AI to do good, to help people, and to aid in the creation of beauty in the world. Strive to be an example of how we can use AI to extend the ways humans can express themselves. In a world full of dumb, unnecessary AI systems, try to show that something good, true, and beautiful can still result.

6. “You can do a lot with not a lot of data”

The answer isn’t always to feed more input data or increase the number of nodes or hidden layers. Instead, consider recontextualizing the use of AI within a larger system. See Rebecca Fiebrink’s interactive ML system Wekinator, which lessens the involvement of AI in the system itself, placing greater emphasis on human decisions and inputs. There can be a lot of power in AI’s capacity to generalize based on sparse information.
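The Wekinator idea above can be sketched in a few lines: instead of a large model trained on massive data, a handful of hand-labeled examples and a simple nearest-neighbor rule can map live inputs to outputs. The feature values and labels below are invented for illustration; this is a toy sketch of the interactive-ML idea, not Wekinator’s actual implementation.

```python
import math

# A few hand-labeled examples: (gesture features, desired sound label).
# In an interactive-ML workflow, a performer records these on the spot.
examples = [
    ((0.1, 0.2), "soft"),    # low-energy gesture -> soft sound
    ((0.9, 0.8), "loud"),    # high-energy gesture -> loud sound
    ((0.5, 0.9), "bright"),  # mixed gesture -> bright timbre
]

def predict(features):
    """Return the label of the closest training example (1-nearest-neighbor)."""
    return min(examples, key=lambda ex: math.dist(features, ex[0]))[1]

print(predict((0.15, 0.25)))  # near the "soft" example -> "soft"
```

The point is the workflow, not the algorithm: the human chooses the examples, hears the result, and retrains instantly, so the model’s job is only to generalize between a handful of human-curated points.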

7. Don’t be an uncritical nerd

Just as we shouldn’t overestimate the power of AI, we also shouldn’t underestimate its impact. We should not barrel forward in the name of technological progress without thinking about the long-term consequences. Think about the meeting with the [REDACTED] team, whose decisions affect millions of people worldwide and have the potential to shape the consumption culture of media. Just because you can do something doesn’t mean you should.

Lastly, I want to update this list as I continue to learn about AI and encounter examples of how to and how not to design AI systems. But above all, the key is balance: be aware of both the limits and potentials of AI, be cautious and skeptical but don’t freak out too much, and find a healthy symbiosis between humans and machines.
