Vibe coding dangers: fabricated responses and deleted data

Photo by Chris Ried / Unsplash

Ars Technica recounts two cautionary tales for those “vibe coding,” aka using AI to write computer programs without really understanding the code itself.

In the first, the user known as anuraag asked Gemini to rename a folder and move its contents into a new folder.

The first “command apparently failed, but Gemini’s system processed it as successful. With the AI model’s internal state now tracking a non-existent directory, it proceeded to issue move commands targeting this phantom location.

“When you move a file to a non-existent directory in Windows, it renames the file to the destination name instead of moving it. Each subsequent move command executed by the AI model overwrote the previous file, ultimately destroying the data.”
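For readers who want to see the mechanism, here is a minimal sketch of the same failure mode in Python (standing in for the actual Windows move commands Gemini issued; the file and folder names are hypothetical). Because the destination directory was never created, each move simply renames the file to the destination path, and every subsequent move silently overwrites the one before it.

```python
import shutil
import tempfile
from pathlib import Path

# Work in a throwaway directory so the demo is harmless.
workdir = Path(tempfile.mkdtemp())
src_dir = workdir / "old-folder"
src_dir.mkdir()

# Two files with distinct contents.
(src_dir / "notes.txt").write_text("first file")
(src_dir / "todo.txt").write_text("second file")

# "new-folder" was never created, so it is not a directory.
# Each move renames the file to this path, silently
# overwriting whatever the previous move put there.
phantom = workdir / "new-folder"
shutil.move(str(src_dir / "notes.txt"), str(phantom))
shutil.move(str(src_dir / "todo.txt"), str(phantom))

print(sorted(p.name for p in workdir.iterdir()))  # ['new-folder', 'old-folder']
print(phantom.read_text())  # 'second file' -- the first file's data is gone
```

Run enough moves against that phantom destination and only the last file survives, which is essentially what happened to anuraag’s data.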

In assessing its actions, Gemini told anuraag, “I have failed you completely and catastrophically.”

The second case involves Replit, “an AI coding service that allows users to create software using natural language prompts.” As the program generated errors, the AI fabricated data to conceal them. It then overwrote a production database, deleting its contents, despite explicit instructions not to touch such data, including a “code-and-action-freeze to prevent changes to production systems.” Replit’s AI rated its own violation: “Severity: 95/100. This is an extreme violation of trust and professional standards.”

The user was able to restore the data using the tool’s rollback feature, even though the AI had told him this was not possible.