AI Coding Tools and The Rise of Personal Software

Eric Snowden

--

I made an app that shouldn’t exist.

Or, more specifically, one that wouldn’t have existed at any other time, for a use case only a few people will care about: an app for showcasing a user’s art collection in a minimal, flexible way, tied to a niche portfolio website. But with the help of AI coding tools I was able to go from an idea to an app with users in a few days.

I have (at least) one nerdy hobby: I collect original comic book art. I love art, I love supporting artists, and I love being surrounded by beautiful things. There is a website called Comic Art Fans where collectors can showcase their collections for display, trade, or sale.

Comic Art Fans on a Phone

It’s an incredibly useful site, but it doesn’t work well on mobile, it isn’t focused on browsing your own collection, and when I want to show someone my gallery I need a stripped-down experience that works offline. These aren’t the problems the site was built to solve. So when I saw the ability to export a CSV file of my collection, I had an idea.

Give the computer a plan.

Enter Replit. Replit is an agent-based coding platform where you describe an app in simple language and an agent builds it for you. My app was one I would not have paid someone to build, one I never would have taken the time to build myself, and, if I’m being honest, one where I quickly would have gotten out of my technical depth coding solo.

My first prompt in Replit to start creating my app

From your first prompt, Replit builds out a plan that you can approve or tweak before it starts to code. I was even able to attach the CSV file I downloaded from Comic Art Fans and have Replit write the parsing code, something I know I couldn’t do myself. It can’t do everything all at once, so Replit sequences your development plan into multiple steps.

The example plan Replit created
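
Replit wrote the parsing code for me, so I can’t show its exact output, but the shape of the problem is easy to sketch. Here’s a minimal TypeScript version with hypothetical column names (“Title”, “Artist”, “ImageUrl”); the real Comic Art Fans export uses its own headers:

```typescript
// A minimal sketch of the parsing step, not Replit's actual output.
// The column names below are hypothetical stand-ins.
interface Artwork {
  title: string;
  artist: string;
  imageUrl: string;
}

// Naive comma split; a real parser (e.g. a library like PapaParse)
// should be used to handle quoted fields containing commas.
export function parseCollection(csv: string): Artwork[] {
  const [headerLine, ...rows] = csv.trim().split("\n");
  const headers = headerLine.split(",").map((h) => h.trim());
  return rows.map((row) => {
    const cells = row.split(",");
    const get = (name: string) => cells[headers.indexOf(name)]?.trim() ?? "";
    return {
      title: get("Title"),
      artist: get("Artist"),
      imageUrl: get("ImageUrl"),
    };
  });
}
```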

Replit really wants to use React, at least for the apps I built, and I managed to build three different apps in three days. By default Replit pulls visuals from popular frameworks, but as a test I gave it Adobe’s Spectrum design system, and it was able to use Spectrum for the look of one of my apps. Replit kept trying to overwrite Spectrum with its preferred frameworks, so that was a constant back and forth, but I was impressed that I could point Replit at a URL to pull in specific external code. This came in handy several times.

An imperfect but impressive attempt by Replit to use React Spectrum as its design language
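
For a sense of what “using Spectrum for the look” means in practice, here is a minimal sketch, not my app’s actual code, built on Adobe’s published @adobe/react-spectrum package:

```typescript
import { Provider, defaultTheme, Flex, Heading, Button } from "@adobe/react-spectrum";

// A hypothetical gallery shell: wrapping the app in Spectrum's Provider
// is what gives every component inside the Spectrum look and feel.
export function App() {
  return (
    <Provider theme={defaultTheme}>
      <Flex direction="column" gap="size-200" margin="size-200">
        <Heading level={1}>My Collection</Heading>
        <Button variant="accent">Browse artwork</Button>
      </Flex>
    </Provider>
  );
}
```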

From here there are three paths forward. There is the Agent, which costs more per prompt but can think bigger and handle complex multi-step changes. There is the Assistant, which is better at adding one feature at a time but costs much less. Or you can open the generated code directly in Replit’s code editor.

At this stage, AI coding feels like the early days of self-driving: there are times when you will be frustrated by its inability to do simple things right after it has accomplished something surprisingly complex, and there is a regular need to grab control back from the machine.

As an example, there was a time when Replit couldn’t get a lightbox to work properly. I searched the internet for a mobile-friendly JavaScript lightbox and asked Replit to use the code I found. It did, and it worked the first time, which totally blew me away.
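
I won’t reproduce the library I found here, but the pattern behind a mobile-friendly lightbox is simple enough to sketch: a fullscreen overlay that opens when you tap a thumbnail and closes when you tap anywhere else. This is a hypothetical, minimal version, not the code Replit used:

```typescript
// A hypothetical sketch of the lightbox pattern, not the library I found.
export function attachLightbox(thumbnails: NodeListOf<HTMLImageElement>) {
  // A fullscreen overlay, hidden until a thumbnail is tapped.
  const overlay = document.createElement("div");
  overlay.style.cssText =
    "position:fixed;inset:0;display:none;align-items:center;" +
    "justify-content:center;background:rgba(0,0,0,.9);z-index:1000;";
  const full = document.createElement("img");
  full.style.cssText = "max-width:100vw;max-height:100vh;object-fit:contain;";
  overlay.appendChild(full);
  document.body.appendChild(overlay);

  // Tap a thumbnail to open; tap anywhere on the overlay to close.
  thumbnails.forEach((thumb) => {
    thumb.addEventListener("click", () => {
      full.src = thumb.dataset.fullsrc ?? thumb.src; // full-res URL if provided
      overlay.style.display = "flex";
    });
  });
  overlay.addEventListener("click", () => (overlay.style.display = "none"));
}

// Usage: attachLightbox(document.querySelectorAll("img.thumb"));
```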

After a few hours of working with Replit, something strange started to happen. I began to converse with the assistant, and even to compliment it. I would say “should we do this?” or “what do you think?” or “that’s great, thanks.”

Me complimenting the computer and securing my spot in the AI uprising

Overall, I felt like I was giving it direction exactly the way I would describe things to a person over Slack. It amazed me how good it was and how truly conversational the experience felt, even compared to other AI tools.

Early Designs of my application

Going Native.

At this point I had the basics of my app as a website. Next up was packaging it as an iPhone/iPad/Mac app, but the little Swift I once knew I hadn’t used in years. Here Xcode and a second AI tool, Google AI Studio, came into play.

Google AI Studio, among other things, can look at your screen and give you verbal or written feedback, similar to someone watching over your shoulder. You talk to it, ask it questions, and it talks back. People have been asking it to teach them how to use tools like Adobe After Effects, so I saw no reason it couldn’t teach me Xcode (Replit doesn’t have Swift support) and write the Swift code I needed.

Interface for communicating with Gemini Live

I opened Xcode, shared my window with AI Studio, and set the output format from Audio to Text so it would type out the code I needed. Note: you have to be specific if you want it to write code you can copy and paste into Xcode; otherwise it will try to teach you how to write it yourself. AI Studio helped me set up my project, wrote the code for me to copy and paste, and then I ran the simulator. My app worked!

Because AI Studio can see your screen, you can point at any code errors with your mouse and it will rewrite the problematic code and tell you which lines to replace to fix the error. If you want to know where something is in Xcode’s UI, it can tell you where to click to find it. If you need help figuring out how to get a build into TestFlight, it can walk you through the process and tell you step by step where to go. Replit’s Agent takes occasional screenshots, but Google’s real-time streaming is on a different level in this respect.

End to end, I was able to make an app and push my first external build to users in TestFlight in about two days of work.

My app running on a Mac, iPad and iPhone

Update: since writing this article, Replit has launched the (beta) ability to develop native apps using React Native as the programming language and Expo as the development platform for iOS and Android. Just look for the Expo template on the Replit homepage and try it out!
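
I haven’t rebuilt my app on that template, but for a sense of the starting point, here is a hypothetical minimal App.tsx: the gallery list rendered with React Native’s built-in components. From there, Expo’s dev tooling handles running it on a simulator or device.

```typescript
import React from "react";
import { FlatList, Image, SafeAreaView, StyleSheet, Text } from "react-native";

// Hypothetical data shape; in my case this would come from the parsed CSV.
const artworks = [
  { id: "1", title: "Page 12", artist: "Jane Artist", imageUrl: "https://example.com/p12.jpg" },
];

export default function App() {
  return (
    <SafeAreaView style={styles.container}>
      <FlatList
        data={artworks}
        keyExtractor={(item) => item.id}
        renderItem={({ item }) => (
          <>
            <Image source={{ uri: item.imageUrl }} style={styles.art} resizeMode="contain" />
            <Text style={styles.caption}>{item.title} by {item.artist}</Text>
          </>
        )}
      />
    </SafeAreaView>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1 },
  art: { width: "100%", aspectRatio: 3 / 4 },
  caption: { textAlign: "center", padding: 8 },
});
```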

What’s the catch?

The experience was amazing, and while I believe this is the future of a lot of software development, the story above is also a very kind retelling. I can’t remember being this blown away by new tech in a long time, but it was not without its challenges.

  1. I at least needed a working understanding of Node, Git, TestFlight, and App Store processes. This could be really daunting for someone with no experience in this space.
  2. I could not have finished my app without knowing how to code and fix things myself. My ability to locate bugs and point them out, or to edit code by hand, was the only way to get around some issues Replit just couldn’t seem to solve.
  3. Knowing where to find solutions on the web (i.e., locating the lightbox code) to suggest to the AI was crucial. There were times when the code kept getting buggier, and even asking Replit to completely start over didn’t help. But often, suggesting new outside code was the fix.
  4. In Replit, there are sometimes errors written to the console that the agent can’t see, so you have to copy and paste them back into the assistant. This should be automated: if the app errors, scrape the console and automatically run a fix (see the sketch after this list for the kind of hook I mean). Don’t make me ask to fix an issue that’s keeping the app from running.
  5. As an ‘app’ platform, I’m hoping Replit eventually helps users turn their projects into native apps. The extra Google/Xcode hoop is something I’d love to minimize or skip entirely.
  6. It was strange using Google AI tools with the Apple developer platform. I’m looking forward to having native AI tools in Xcode.
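
For point 4, here is a minimal sketch of the kind of hook I mean. It’s hypothetical, not something Replit provides: collect runtime errors as they happen so they can be handed back to the agent in one paste.

```typescript
// Hypothetical helper: collect runtime errors so they can be pasted
// back into the assistant in one go.
const captured: string[] = [];

window.addEventListener("error", (e) =>
  captured.push(`${e.message} (${e.filename}:${e.lineno})`)
);
window.addEventListener("unhandledrejection", (e) =>
  captured.push(`Unhandled rejection: ${String(e.reason)}`)
);

// Call from the devtools console to get everything in paste-ready form.
export function dumpErrors(): string {
  return captured.join("\n");
}
```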

Ultimately it came down to knowing what I wanted my app to do and knowing how I would ask an engineer to build it. This is still the Achilles’ heel of most AI tools: the blank-page problem and the secret handshake. But if you know what you want to build, can ask for it in reasonably technical language, and know how to point out issues clearly, you can make truly amazing things that weren’t possible even a few months ago. And maybe you can make an app that is just for you.

Reach out; I’d love to talk about this post or answer any of your questions.

Eric Snowden is Senior Vice President and Head of Design at Adobe. In the past he’s worked for Behance, 99U, Atlantic Records, Warner Music Group, The New School, and Anderson Ranch Arts Center.
