Hello, and welcome back to Inc.'s 1 Smart Business Story. If it feels like something is wrong with the software you use every day, you’re probably right. Amazon Web Services has already suffered multiple server outages caused by AI coding bots making misconfigured commits; one even deleted an entire software environment. While some are blaming "vibe coding," the blame likely lies with companies freezing junior developer hiring, piling more work onto senior engineers, and using public users as quality testers. Meanwhile, a "capability reliability gap" means AI-generated code can look flawless until real-world use exposes its cracks. Could all of this be a temporary growing pain that sparks meaningful regulation? Or is it the beginning of a software reliability crisis?

In this article you'll find:

  • What a hammer analogy reveals about how people misuse AI coding tools

  • How a hiring freeze for junior developers is changing software quality

  • Why software reliability issues predate AI — and what's actually new

Everything Feels More Glitchy Right Now and Everyone Is Blaming Vibe Coding. The Real Story Is More Complicated

Software everywhere is getting glitchier. Here’s what’s causing the reliability crisis—and how we might fix it.

You’re not imagining it: Your software, tools, and services are getting glitchier. From Windows 11’s multiple glitches to vibe-coded platforms permeating social media to OpenClaw instances that run wild and end up deleting half your inbox, glitches, errors, and snafus are becoming a common part of our digital lives.

Often the friction from a malformed bit of software is something only you experience. Other times it can take down parts of a massive publicly traded company, as in the case of Amazon Web Services, where AI coding assistants have reportedly taken down its servers at least twice through misconfigured commits. One bot even deleted an entire software environment.

The Amazon incident, alongside countless others (and the anecdotal experiences of individuals noticing that their favorite tools and software just don’t perform in the flawless way they once did), has opened up a conversation among those at the coalface. The question: Will vibe coding, the practice of using tools that produce code from everyday-language prompts, lead us to a promised land of new productivity and software perfection, or fill our world with cruft, badly designed software?

“I like to think of it sometimes through a metaphor,” says Amy J. Ko, professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. “Nobody would say that hammers are making everything worse, but there are a couple of specific reasons for that. Most of the time when people are using a hammer, they know what they’re doing with it.” Ko suggests that people are misusing the AI coding tools available to them like Claude Code, Codex, and others—either because they don’t have the requisite coding skills to identify where the AI tool goes wrong, or they have a misguided belief in AI’s supremacy and skills.

“People are being told that they can trust the output, that it’s amazing, that it’s spectacular, that it will change the labor market and replace everyone’s jobs,” Ko says. (Claude Code’s creator, Boris Cherny, has recently shared that he does little day-to-day coding anymore, instead orchestrating his AI bots to do so.) “That good marketing hype certainly is leaking into some people’s behavior,” Ko explains.

However, not every glitch we’re seeing is likely down to vibe coding, cautions Lilly Ryan, an experienced cybersecurity consultant and software historian based in Australia. “I’d be loath to conflate correlation and causation,” she warns. “Software has always been kind of janky and had bugs, and the fact that we are seeing a lot of these bugs being reported and that they are being patched and fixed is a really positive thing for the ecosystem.”

However, she says that there is a “capability reliability gap” in AI-assisted coding: Systems can look impressive in the moment, but reliability only shows up over time, after varied real-world use. That gap surfaces most often in personal projects. At the organizational level, Ryan reckons that companies are adopting AI coding tools and pushing out updates without full checks, effectively testing what level of quality the public will accept, while regulators and society scramble to keep up with rapid change and ever-greater content saturation.

That’s not necessarily the fault of the coders themselves, but of their bosses, says Ko. “A lot of organizations have paused hiring for junior software developers and put a lot more work on the plates of senior software developers under the premise of being able to use large language model agents,” she explains. That pressure compels senior engineers to cut corners, squeezing more work into the same working day.

Embracing AI in the right way, rather than a scattergun approach, would help alleviate some of those issues, reckon the experts. But it requires an honest conversation about where and when AI use is permissible—and when it’s absolutely not. “What I’m hoping for is that we’re more honest about the different software that we’re writing,” says Christian Kästner, associate professor of software engineering at Carnegie Mellon University.

“There is a lot of software that we can automate, and that doesn’t matter so much, and we don’t need the best quality assurance for this,” he admits. “Then there’s software where we want to be careful, and we want to keep the human in the loop.” Distinguishing between the two could make the difference between AI being a genuinely helpful game changer and the enshittification and AI slop that has already permeated large parts of our lives spreading into yet another corner of them.

But there’s hope on the horizon. The cruft and friction we’re facing now could be a catalyst for change. “In the history of software development, there have always been quality issues, and they have always stemmed from people not attending to quality carefully,” says Ko. “If the public decides that software quality matters, we might actually start putting some regulations in place that require that people who ship software don’t ship defects,” she says. 