Adding Guard Rails when Vibe Coding
Vibe coding took us by storm and it’s here to stay. It will improve over time, but as of now it demands a lot of careful consideration and tiptoeing around. Let’s start with a meme that explains the current problem.
Now clearly this guy saw AI generating code and thought he no longer needed any programmers: the AI would build the entire thing on its own and he would make money from it. Well, he learnt the hard way that, in fact, you still do need software engineers.
Cheers! We still get to keep our jobs. Or at least for now they’re safe.
Anyhow, no matter your background, while vibe coding you need to add some safety measures. Maybe these will be integrated into the tools by default in the near future, but for now we can put up some guard rails that will help us vibe a bit more easily.
Common Potential Issues and How to Prevent Them
Protecting Secrets
AI hallucinates. And it will keep hallucinating. Most AI models have some protection against writing secrets, but we can’t trust that. Funnily enough, I’ve seen AI hallucinate environment secrets right into the code, which is harmless, but we still shouldn’t be pushing those. That being said, our friend here might not exactly know what counts as a secret in the first place. We humans push secrets into our code all the time, and the same thing can happen here.
Precautions
- Make sure to add .env to your .gitignore file. This should usually be the default setup for whatever tool you are using for vibe coding, but just in case, do a double check.
- Enable GitHub push protection. In your repo go to Settings > Code security > Push protection. You might need to enable code security first to be able to see those settings, but you should have them enabled anyway. Having this will stop the AI from being able to push secrets.
- If you are still not convinced, you can add a workflow that fails a pull request when it detects secrets. My recommendation is to use something under an MIT license like GitGuardian’s ggshield (a sketch of such a workflow is shown right after this list).
- You can run GitGuardian in a pre-commit hook as well, since it’s a CLI tool. And you probably should set it up globally, because you should never push secrets to any repo, ever.
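As a sketch of that workflow idea, here is roughly what a GitGuardian scan on pushes and pull requests looks like. The action name and the GITGUARDIAN_API_KEY secret follow GitGuardian’s documented example, but treat the exact version and inputs as assumptions and double-check their docs.

```yaml
# .github/workflows/secret-scan.yml -- sketch based on GitGuardian's
# documented ggshield-action example; verify the action version in their docs.
name: Secret scan
on: [push, pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history, so every new commit gets scanned
      - name: GitGuardian scan
        uses: GitGuardian/ggshield-action@v1
        env:
          GITGUARDIAN_API_KEY: ${{ secrets.GITGUARDIAN_API_KEY }}
```

For the pre-commit side, the same ggshield CLI can install itself as a global hook (ggshield install --mode global, with a GITGUARDIAN_API_KEY available in your environment), so the scan runs in every repo on your machine.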
Dependency Licensing
I know it’s really fun watching AI write your code, automatically find relevant packages or libraries, and create a beautiful-looking UI. Only to find out later that one of the libraries the AI used is under a non-permissive license. Or worse, it used some kind of demo license of a paid product, which can’t be shipped.
This is more likely to happen than not.
Precautions
Set up the OSS Review Toolkit (ORT).
What it does is analyze the licenses of your libraries and their dependencies and generate a report. You can also create a config file with rules and thresholds, so that a pull request fails when the requirements are not met. That way you get a warning and a report on the things that went wrong.
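To make that concrete, here is a minimal sketch of running ORT locally via its Docker image. The image name, subcommands, and flags are taken from the ORT documentation as I recall them, so treat them as assumptions and check the docs for your version; the rules file name is a placeholder.

```bash
# Sketch: analyze dependencies, evaluate them against your own license rules,
# and render a report. Paths and the rules file are placeholders.

# 1. Discover dependencies and their declared licenses.
docker run --rm -v "$PWD":/project ghcr.io/oss-review-toolkit/ort \
  analyze -i /project -o /project/ort/analyzer

# 2. Evaluate the result against a Kotlin-script rules file
#    (e.g. one that forbids GPL or demo/proprietary licenses).
docker run --rm -v "$PWD":/project ghcr.io/oss-review-toolkit/ort \
  evaluate -i /project/ort/analyzer/analyzer-result.yml \
  -o /project/ort/evaluator --rules-file /project/evaluator.rules.kts

# 3. Produce a human-readable report; a failed evaluation (non-zero exit)
#    is what you let break the pull request in CI.
docker run --rm -v "$PWD":/project ghcr.io/oss-review-toolkit/ort \
  report -i /project/ort/evaluator/evaluation-result.yml \
  -o /project/ort/report -f WebApp
```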
In that case you can simply ask your AI assistant to replace the culprit package with an open-source alternative, or to write an implementation of its own. This way you get warnings before things go really wrong, and there is no need to manually verify every line of code.
Code Quality
AI-generated code: how good is it? Usually AI follows proper coding patterns and the result is decent, but it would not be wise to blindly trust that. We need to set up tools to validate it. The good news is that tools for analyzing code quality have been around forever. We know this as static analysis, and there are lots of tools available. One such tool is SonarCloud, which is free for open-source projects. There’s also Qodana.
Anyhow, you should set up your repo with one of these. They too have quality gates that can be configured to fail: we can set conditions and thresholds for what is acceptable and what is not. So when the AI pushes code that doesn’t meet the expected results, we can instruct the AI based on the reports we get from this analysis.
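Here is a minimal sketch of wiring SonarCloud into a GitHub repo, assuming the standard SonarCloud GitHub Action and a sonar-project.properties file in the repo root; the project key and organization are placeholders for your own project.

```yaml
# .github/workflows/sonarcloud.yml -- minimal sketch; check SonarCloud's docs
# for the currently recommended action and version.
name: SonarCloud analysis
on: [push, pull_request]

jobs:
  sonarcloud:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0    # full history for accurate new-code detection
      - uses: SonarSource/sonarcloud-github-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

# sonar-project.properties (placeholder values):
#   sonar.projectKey=my-org_my-vibe-project
#   sonar.organization=my-org
#   sonar.qualitygate.wait=true   # make the check fail when the quality gate fails
```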
It will warn us about critical security hotspots, so we can see where the risks in the code are.
In the attached screenshot we can see that it raises a security concern about using an unsafe random method, which would be bad in terms of security if it were used on user data. In this case it’s a false alarm, as this code only fakes some UI notifications that will eventually be removed. But this is good: I get to decide what is a false alarm and what is not. The AI could just as easily have used the same method somewhere serious.
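The screenshot isn’t reproduced here, but assuming a JavaScript/TypeScript code base, that kind of finding typically looks like the hypothetical snippet below: Math.random() gets flagged as insecure, and anything security-relevant should use a cryptographic source of randomness instead.

```typescript
// Hypothetical illustration of the "unsafe random" hotspot described above,
// not the actual code from the screenshot.
import { randomBytes } from "node:crypto";

// What the AI is likely to write: fine for a fake UI notification id...
const fakeNotificationId = Math.floor(Math.random() * 1_000_000);

// ...but unacceptable for anything security-relevant, e.g. a session token,
// which needs a cryptographically secure random source.
function secureToken(bytes = 32): string {
  return randomBytes(bytes).toString("hex");
}

console.log(fakeNotificationId, secureToken());
```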
And we can check these reports to improve our code base, or simply instruct the AI to fix the code that gets flagged. So basically it becomes a feedback loop: the AI generates code, and the automation tools around it protect the code base and help the architect or lead review what the AI has generated and decide whether it should be merged.
In short, AI isn’t doing black magic. Treat AI the way you would treat an intern, and to guide an intern you need to be knowledgeable enough yourself. Product owners without engineering knowledge might be able to create a system that works, but it probably won’t be secure. If anything, this makes senior developers and architects more valuable than ever. We still need to know what we are doing, and that will probably remain the case for a very long time.
That being said, most of us don’t understand assembly, and that didn’t break the world. Eventually we may not need to understand code at all; everything could be natural-language based. But that future isn’t here today, and it may not be coming very soon either. So when you’re building something, hire someone who knows what they are doing.

