GenAI: An Essential Ally in Software Engineering

(Header image: DALL·E 3)

My favorite personal editor, GPT (an AI assistant), polished this article's spelling, grammar, and flow. This, in turn, made my wife, my previous editor, very happy.

In the world of tech and coding, there’s a new ally: Generative AI. Let’s delve into how AI and I teamed up to build an EKG Simulator. It’s a story of how AI has become a trusty sidekick for a seasoned coder, making development faster and more fun.

The Ask

It all began with a simple phone call from a friend. He told me about this program that simulated an EKG rhythm: it was written by his dad 10 years ago and ran exclusively on Windows XP. The inevitable question followed, “How do I convert it to run on Windows 11?” My response was straightforward: you don’t. Instead, I suggested something a bit more ambitious — why not build a website to replicate it?

This suggestion set off a chain reaction of thoughts and ideas. Could GenAI generate an EKG simulator for me? While I've had my fair share of web development experience, my focus has been on business applications, and the idea of building something with HTML Canvas was new to me.

I took some time trying to get ChatGPT and Bard to help me out. After a few hours of experimenting, all I managed to create was a moving line on a canvas. It was pretty cool, but it was still far from what I had in mind. That’s when all the late nights started up again. I had stumbled upon something else to build, something that I thought would only take a handful of hours.

Learning How an EKG/ECG Works

While GenAI wasn't able to create exactly what I needed (even with basic prompt engineering, as I'm no expert in that field), I want to emphasize that I couldn't have built ECG-SIM without it. My journey involved a combination of Bard, Bing Chat, and ChatGPT 3.5 to acquire the necessary knowledge. Initially, I had no understanding of how an EKG worked, and that gap made it hard to create anything meaningful. I could make lines appear that loosely resembled a real rhythm, but only by reproducing pre-canned, well-known rhythms, and that wouldn't be as enjoyable or educational as I had hoped.

So I turned to my AI companions, relying primarily on Bing Chat for its internet searches and up-to-date information, and set out to understand how an EKG actually works. It turned out to be both simple and fascinating. Picture an old-school machine where a strip of paper moves at a steady rate, typically 25 mm per second, while a needle traces the voltage it receives from the leads, creating the lines you see on an EKG. Multiple leads can be used, each with its own needle and voltage tracking. A 12-lead EKG is the gold standard, but for this project only one lead was necessary. The grid on the paper is significant: each box represents a specific amount of time and voltage. A large box measures 0.2 seconds horizontally and 0.5 millivolts vertically, and each small box is 0.04 seconds and 0.1 millivolts.

My aim was precision; I wanted what I built to closely mirror real EKG data. Instead of relying on pre-canned rhythms, I wanted to generate data as if it were coming directly from the EKG leads. My reasoning was simple: if I could generate the correct voltage at the right time intervals, then "all" the HTML canvas had to do was draw the lines.
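To make that concrete, here is a minimal sketch of the idea in TypeScript, assuming the standard strip calibration above (25 mm/s, 10 mm per millivolt) and using illustrative names (VoltageSample, drawStrip) rather than the actual ECG-SIM code:

```typescript
// A minimal sketch of "generate the right voltage at the right time, let the canvas draw lines".
// Names here (VoltageSample, drawStrip) are illustrative, not the actual ECG-SIM code.

type VoltageSample = { time: number; millivolts: number }; // time in seconds from the start of the strip

const PX_PER_MM = 4;    // assumed screen scale: pixels per millimeter of "paper"
const PAPER_SPEED = 25; // standard strip speed: millimeters per second
const GAIN = 10;        // standard calibration: millimeters per millivolt

function drawStrip(
  ctx: CanvasRenderingContext2D,
  samples: VoltageSample[],
  baselineY: number
): void {
  ctx.beginPath();
  samples.forEach((s, i) => {
    const x = s.time * PAPER_SPEED * PX_PER_MM;            // elapsed time -> horizontal distance
    const y = baselineY - s.millivolts * GAIN * PX_PER_MM; // voltage -> vertical deflection (up is negative y)
    if (i === 0) {
      ctx.moveTo(x, y);
    } else {
      ctx.lineTo(x, y);
    }
  });
  ctx.stroke();
}
```

The interesting work lives entirely in producing the samples; the drawing itself stays this simple.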

In the process of crafting custom rhythms, my goal was to simplify the experience for end users. I wanted to spare them the laborious task of manually entering hundreds of voltage values, so I dug deeper into EKG intricacies. An EKG comprises repeating cardiac cycles, each made up of components such as the P wave, PR segment, QRS complex, ST segment, and T wave, sometimes a U wave, and occasionally F waves. It's a complex world, but my approach was to let users define the duration and peak voltage for each part of the cardiac cycle. Initially, this seemed straightforward, but I discovered (almost at the finish line) that duration and peak voltage alone weren't sufficient: the apex of a wave can sit toward the beginning or the end of the wave or segment, not just in the middle. That problem is what ultimately drove me to purchase GPT-4!
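For illustration, the kind of per-cycle configuration I had in mind looks roughly like this; the field names are my own and the real ECG-SIM types may differ:

```typescript
// Hypothetical shape of a rhythm configuration; the real ECG-SIM types may differ.
type WaveSegment = {
  name: "P" | "PR" | "QRS" | "ST" | "T" | "U" | "F";
  duration: number;        // seconds
  peakMillivolts: number;  // signed: negative for downward deflections
  apexPosition?: number;   // 0..1 within the segment; the late addition that forced the curve math described later
};

type CardiacCycleConfig = {
  heartRate: number;       // beats per minute; pads the cycle out to the right overall length
  segments: WaveSegment[]; // e.g. P wave, PR segment, QRS complex, ST segment, T wave
};
```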

How AI Helped Me with the Build

The digital landscape is brimming with articles speculating about how GenAI might eventually replace developers. While that day could come, it's safe to say that today is not that day. There are a couple of quotes that have stuck with me regarding GenAI; I'm not sure who originally said them, but I genuinely believe in their wisdom.

The first asserts that GenAI won't replace people; instead, it will be people using AI who replace the people who don't. This rings true, especially for software engineers: if you're not incorporating GenAI into your workflow, you're at risk of falling behind. GenAI isn't here to build entire applications or design solid architectures; that's still your domain. Its role is to make you more proficient at the work you already do.

The second is a comparison a friend of mine made: GenAI is to software engineers what Photoshop was to photographers. It elevated the skilled and empowered the average, ultimately lowering the barrier to entry. In software engineering, GenAI acts as a bridge, closing the gap between an average developer and today's sought-after 10x developer by providing instant access to knowledge. You still need to excel at troubleshooting, problem-solving, and system architecture, but GenAI can significantly speed up and sharpen each of those.

Is it right every time? No, but as engineers we're no strangers to not getting it right on the first try. We iterate, adjust, and learn from the process. Below, I'll illustrate how GenAI played a crucial role in helping me build the ECG-SIM website.

Creating Content for the Learn Page (C+) — ChatGPT

One of the challenges I encountered was creating the content for the Learn page, which explains how an EKG works. It’s important to note that I had to do a significant amount of prior research and learning to provide the necessary outlines and expected content. GenAI, while helpful, had limitations in generating this content without a solid foundation. I also had to approach it methodically, generating content section by section to ensure accuracy and coherence. While GenAI played a role, this task required substantial manual effort to shape the content effectively, resulting in a grade of C+.

Creating a Quiz (A) — ChatGPT & Bard

I created a quiz based on the learning content, a task that would have been extremely painful to do manually. ChatGPT generated the 50 multiple-choice questions I needed. Of course, this wasn’t accomplished all at once; I broke it down into sets of 10 questions at a time. GenAI deserves an A for its generation capabilities. Additionally, Bard receives an A for its ability to evaluate the questions and ensure there were no duplicates.

To complete this task, I started by requesting the 50 questions from ChatGPT, which it efficiently provided in just two queries. Following that, I defined a TypeScript type and had ChatGPT convert the questions into the desired JSON format, managing it in sets of 10 questions per iteration.
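To give a sense of the target, here is an illustrative version of such a type and the JSON shape; ECG-SIM's actual fields and questions differ:

```typescript
// An illustrative version of the quiz type and target JSON; ECG-SIM's actual fields differ.
type QuizQuestion = {
  question: string;
  choices: string[];   // answer options
  answerIndex: number; // index into choices
};

// ChatGPT's plain-text questions were converted, ten at a time, into JSON like this:
const sampleQuestions: QuizQuestion[] = [
  {
    question: "At the standard paper speed of 25 mm/s, how much time does one large box represent?",
    choices: ["0.04 seconds", "0.2 seconds", "0.5 seconds", "1 second"],
    answerIndex: 1,
  },
];
```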

Generating Cardiac Rhythms (C-) — ChatGPT & Bard

One of the challenges I faced was generating cardiac rhythms based on the rhythm configuration. Both Bard and GPT yielded similar results, earning this task a C- grade. While they were able to generate something, it’s important to note that all but the simplest rhythms turned out to be incorrect.

To provide GenAI with the necessary context for rhythm creation, I invested a substantial amount of time in writing up how to create custom rhythms. Although the AI struggled with this task, it wasn’t a complete loss, as the detailed documentation I created became valuable additional content for the page.

Creating Images (A) — Bing Image Generation & GPT-4 DALL-E

Initially, I relied on Bing Image Generation to create images for the donation page, which proved to be a reliable choice, earning an A grade. However, I later upgraded to GPT-4 with full DALL·E capabilities, and it exceeded my expectations. As a side note, I also used GPT-4 with DALL·E to generate over 80 images for trivia night. I provided the questions, and it astounded me with its ability to create images that represented them without revealing the answers. The results were truly remarkable and deserving of an A++ rating.

Idea Generation (B+) — Bard

When it came to generating ideas for the Contributions page, Bard proved to be a valuable resource, earning a solid B+ grade. I found Bard to be particularly adept at creative idea generation, making it my go-to tool when I needed to brainstorm and come up with compelling content for the Contributions page.

Grammar and Flow Enhancement (A+) — ChatGPT

My spelling and grammar have always been weak points. Over the years, I've had to rely on friends and family to proofread everything I write. While I've never lacked ideas and thoughts, putting them into written form has been a constant struggle. The assistance I've received from ChatGPT has been a game-changer in this regard. It isn't as effortless as one might think; crafting the right prompt and refining the output still takes work. However, ChatGPT has undeniably saved me a substantial amount of time and greatly improved the quality of my writing.

Speeding up Development with Auto-Complete (A) — Copilot

Auto-complete with Copilot has been a game-changer in speeding up development, earning it a solid A grade. This productivity boost is especially pronounced when working with React and basic Node.js code, where it shines with an A+. However, when using Solid.js, it tends to generate React code unless specified otherwise.

I've become so accustomed to Copilot that coding isn't as enjoyable without it. It's akin to the difference between using a feature-rich IDE and a basic notepad for development. While Copilot significantly improves my productivity, there is a concern that I may become overly reliant on it and neglect the "Don't Repeat Yourself" (DRY) principle when it's easier to generate code than to break it out into its own function or module.

Generating Full Components and Large Chunks of Code (C+) — Copilot, ChatGPT, Bard

When it comes to generating full components and handling large chunks of code, tools like Copilot, Bard, and GPT have proven to be helpful, earning them a collective grade of C+. While they assist in generating code, it’s important to note that they are never 100% accurate. Rather than attempting to generate everything at once, it can often be faster to use Copilot’s suggestions as you write the code incrementally. Sometimes, you might find yourself spending more effort trying to explain your intentions to the AI tools rather than writing the code directly. It’s crucial to strike a balance between automation and ensuring meaningful variable and function names in the code.

Providing Insightful One-Liners and TypeScript Discoveries (A+) — Copilot

Copilot deserves an exceptional A+ when it comes to providing insightful one-liners, introducing me to TypeScript syntax I wasn’t aware of, and suggesting alternative ways to accomplish tasks. Copilot consistently surprises me with code snippets that make me say, “Well, that is cool! I didn’t know you could do that.” This feature not only enhances my code but also expands my understanding of TypeScript and programming in general.
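To give a flavor of what I mean, here are a few one-liners of the kind Copilot kept surfacing; these are my own illustrative examples, not its exact suggestions:

```typescript
// My own illustrative examples of the kind of one-liners I mean, not Copilot's exact output.

const segments = [
  { name: "P", duration: 0.08 },
  { name: "QRS", duration: 0.1 },
];

// Build a lookup object from an array in one pass instead of a manual loop.
const byName = Object.fromEntries(segments.map((s) => [s.name, s]));

// Prefer ?? over || so a legitimate 0 isn't treated as "missing".
const configuredApex: number | undefined = undefined;
const apex = configuredApex ?? 0.5;

// Keep literal property types while still checking the object's overall shape (TypeScript 4.9+).
const paper = { speedMmPerSec: 25, gainMmPerMv: 10 } satisfies Record<string, number>;
```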

Troubleshooting and Error Resolution (B) — Copilot, Bard, ChatGPT, Bing Chat

When it comes to troubleshooting errors, Bard, ChatGPT, Bing Chat, and Copilot all perform at a similar level, earning a solid B grade. My go-to choice for general error resolution remains Bing Chat, as it provides quick access to Stack Overflow and other common sites, making it efficient for resolving issues. Bing Chat also proves invaluable when diving into documentation for specific libraries. Copilot, meanwhile, excels at the "this should be working" class of logic errors, though it's less effective with runtime errors. As for CSS styling errors, the challenge persists; none of these tools has managed to offer a definitive solution in that department yet.

Styling Components (C+) — Copilot

When it comes to styling components with Copilot, I'd rate it at a C+ level. While it can be helpful in generating styles, it often overstyles elements, adding unnecessary rules or ignoring the style libraries and global styles you already have in place.

Converting Text Lists to JSON & TypeScript (A+) — ChatGPT & Bard

I frequently needed to convert text lists into JSON objects and TypeScript types. Both Bard and GPT excelled in this task, earning an A+ grade.
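As a hypothetical example of that kind of conversion (not one of the project's actual lists), a plain-text list of rhythms and rates comes back as a typed array in a single prompt:

```typescript
// A hypothetical example of the conversion (not the project's actual lists). Input text like:
//
//   Normal Sinus Rhythm - 60-100 bpm
//   Sinus Bradycardia - below 60 bpm
//
// comes back as a typed array:

type RhythmSummary = { name: string; rate: string };

const rhythms: RhythmSummary[] = [
  { name: "Normal Sinus Rhythm", rate: "60-100 bpm" },
  { name: "Sinus Bradycardia", rate: "below 60 bpm" },
];
```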

Finding JS Libraries (B+) — Bing Chat

When it came to locating JavaScript libraries and comparing their size and features, Bing Chat emerged as my preferred tool, earning a solid B+ grade. Its quick accessibility within the Edge browser made it a reliable resource.

How GPT-4 Saved the Project

Throughout the build I made a point of leveraging GenAI and whichever tool fit the task best, but there came a moment in the project when I wasn't sure I could make things work. During the initial coding phase, I created a function called calculateWaveCurve(). Copilot generated some code for it, and while I found it interesting, I wasn't entirely sure about the mathematics behind it. I ran the code anyway, and to my surprise, it worked perfectly. With that, I continued my work for the evening.

I had been building the project around two basic cardiac rhythms, but that persistent inner voice kept reminding me, "You're being stupid; you should attempt to tackle the sawtooth rhythm." It was akin to the voice I hear when my wife imparts advice and I, regrettably, ignore it, thinking everything will be fine. Yet, as experience has shown, whether it's coding or listening to my better half, that voice often returns to haunt me.

When the project was about 90% complete, I finally tackled those other rhythms, only to be met with a resounding failure. The issue was that I needed to allow adjustments to the apex of each curve, which demanded a fair amount of mathematical wizardry, and unfortunately my knowledge in that realm is sorely lacking.

So I turned to GPT-3.5 and Bard for assistance. Together, we delved into the problem, with me explaining the issue and the desired output. We made several attempts, getting closer each time, but the solution remained elusive. I was on the verge of a major shift: changing the HTML canvas drawing to use Bézier or quadratic curves, even though I knew that doing so would likely result in a less smooth rendering on the canvas. I spent several hours throughout the day brainstorming potential solutions, including whether I could employ multiple canvases and clipping techniques to create the illusion of smooth flow, even if it required a significant overhaul of the project.

Almost on a whim, I decided to make the leap to GPT-4 while simultaneously seeking answers about quadratic curves and the calculations required to make a curve pass through a specific point. The moment I upgraded, I noticed a significant improvement in the quality of the answers. They still weren't perfect, but they were considerably better. After a few more iterations and a back-and-forth of "this is close, but here's what's happening, here are the calculated points, and here's where it falls short of my expectations," something remarkable happened: GPT-4 gave me the right answer!
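For anyone curious, the key piece is a standard property of quadratic Bézier curves: given the start point, the end point, and the point you want the curve to pass through, you can solve for the control point. The sketch below is my reconstruction of that idea, not the exact code GPT-4 produced:

```typescript
// My reconstruction of the idea, not the exact code GPT-4 produced.
// A quadratic Bezier is B(t) = (1-t)^2*P0 + 2(1-t)t*P1 + t^2*P2.
// To make the curve pass through a point M at parameter t, solve for the control point P1:
//   P1 = (M - (1-t)^2*P0 - t^2*P2) / (2*(1-t)*t)

type Point = { x: number; y: number };

function controlPointThrough(p0: Point, through: Point, p2: Point, t: number): Point {
  const a = (1 - t) * (1 - t);
  const b = 2 * (1 - t) * t;
  const c = t * t;
  return {
    x: (through.x - a * p0.x - c * p2.x) / b,
    y: (through.y - a * p0.y - c * p2.y) / b,
  };
}

// Pick t closer to 0 or 1 to push the apex toward the start or end of a wave,
// then draw with ctx.quadraticCurveTo(cp.x, cp.y, p2.x, p2.y).
const cp = controlPointThrough({ x: 0, y: 100 }, { x: 30, y: 20 }, { x: 50, y: 100 }, 0.3);
```

Choosing t near 0 or 1 is what lets the apex sit toward the beginning or end of a wave, which was exactly the adjustment the rhythm configuration needed.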

Now, do I fully comprehend the mathematical intricacies of the solution it generated? Not really, and honestly, I'm not sure it matters. It's somewhat akin to pulling a library into your project that you don't fully understand but that serves its purpose. I labeled that portion of the code as AI-generated and moved on. Without the help of GPT-4, I would have scrapped the project and moved on to something else.

Conclusion

In summary, my journey through this project has highlighted the incredible synergy between GenAI tools like Bard, ChatGPT, Copilot, and Bing Chat, and how they have played a pivotal role in enhancing my productivity as a software engineer. While GenAI may not replace developers, it unquestionably acts as a powerful force multiplier, bridging the gap between the average developer and a highly skilled one. These tools have aided me in tasks ranging from content creation and code generation to troubleshooting and idea generation. They have not only saved me time but have also expanded my horizons by introducing me to new coding techniques and ideas. Most importantly, when I reached a critical juncture where I thought the project might fail, GPT-4 saved the day. This experience has reinforced my belief in the potential of GenAI to elevate the capabilities of software engineers.

Writing these articles takes time and often requires a good amount of coffee. If you found this article helpful, consider buying me a cup of coffee.


Adam Jelinek - Engineering Director | Entrepreneur

I'm a dedicated husband and father, a technology enthusiast, and a leader who thrives on solving complex problems.