AI coding boom drives surge in unreviewed software and security risks
AI coding tools boost productivity but create security risks as companies struggle to review rapidly growing volumes of code.
Artificial intelligence tools designed to speed up software development are delivering results at an unprecedented scale, but industry experts warn that the rapid growth of machine-generated code is creating new operational and security challenges for businesses.
Organisations across the technology sector are now producing code at rates far beyond what traditional review processes were designed to handle. While the technology has improved productivity and shortened development timelines, it has also introduced a backlog of unverified software and raised concerns about vulnerabilities slipping through unnoticed.
Recent reporting highlighted how a financial services firm that adopted the AI-powered coding tool Cursor dramatically increased its output. Monthly production rose from around 25,000 lines of code to approximately 250,000 lines. However, the company soon found itself with a backlog of roughly one million lines of code awaiting review, revealing the limits of existing security and quality assurance systems.
“The sheer amount of code being delivered, and the increase in vulnerabilities, is something they can’t keep up with,” said Joni Klippert, chief executive of StackHawk, a security company working with the firm.
Growing backlog exposes shortage of security expertise
The surge in AI-generated code has intensified demand for specialised professionals who identify flaws and vulnerabilities before software is released. These professionals, known as application security engineers, are tasked with reviewing code and ensuring that it meets security standards.
Industry leaders warn that the supply of such specialists is far from adequate. Joe Sullivan, an adviser to Costanoa Ventures, said the workforce shortage has become a major obstacle for organisations attempting to keep pace with the new coding environment. “There are not enough application security engineers on the planet to satisfy what just American companies need,” he said.
The shortage has implications beyond staffing difficulties. Companies that rely heavily on automated coding tools risk introducing vulnerabilities because human oversight cannot keep pace with the speed at which code is generated. Even minor coding errors can create openings for cyber attacks or cause system failures when deployed at scale.
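To illustrate how a small, plausible-looking mistake of this kind can open an attack path, consider a hypothetical Python snippet in the style a code generator might produce: building an SQL query by interpolating user input into the query string rather than using a parameterized query. The table and function names here are invented for the example.

```python
import sqlite3

def setup_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")
    return conn

# Vulnerable: user input is interpolated straight into the SQL string,
# so crafted input can change the meaning of the query.
def find_user_unsafe(conn, name):
    query = f"SELECT name, role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

# Safer: a parameterized query keeps the input as data, not as SQL.
def find_user_safe(conn, name):
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = setup_db()
payload = "x' OR '1'='1"
leaked = find_user_unsafe(conn, payload)  # returns every row in the table
clean = find_user_safe(conn, payload)     # returns no rows
```

The two functions differ by a single line, which is exactly the kind of subtle flaw that slips past an overloaded review queue.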
Security concerns are also emerging around the environments where AI coding tools operate. In many cases, engineers report that these tools perform more effectively on personal laptops than on tightly controlled corporate systems. As a result, developers are increasingly downloading entire codebases onto individual devices to work more efficiently.
This practice introduces a new layer of risk. If a personal device is lost, stolen, or compromised, sensitive company data could be exposed. For industries that handle financial or customer information, the consequences of such breaches can be severe, both financial and reputational.
Industry turns to AI to fix AI-created problems
Faced with growing code volumes and limited human resources, technology companies are once again turning to artificial intelligence as a potential solution. Several leading firms are developing AI-driven review systems designed to analyse and verify machine-generated code before it reaches production.
Companies including Anthropic, OpenAI, and Cursor are investing in tools that promise to automate code review processes, detect vulnerabilities, and flag unusual patterns that might indicate errors. Cursor has also expanded its capabilities by acquiring a startup focused on automated code review, integrating the technology into its platform.
According to Cursor’s head of engineering, the shift reflects a broader rethinking of software development processes. “The software development factory broke. We’re trying to rearrange the parts in some sense,” the executive said, describing how traditional workflows are struggling to cope with the scale of AI-driven output.
Despite the optimism surrounding automated review systems, some experts caution that artificial intelligence alone may not fully resolve the challenges it has created. They argue that while AI tools can assist in identifying patterns and highlighting potential risks, final verification by human reviewers remains essential before software is deployed in live environments.
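A minimal sketch of the kind of pattern-based check such automated review systems perform is a scan of source code for lines that look like hardcoded credentials. The patterns and function names below are illustrative, not any vendor's actual API.

```python
import re

# Regexes that commonly indicate secrets committed to source code.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*=\s*['\"][^'\"]+['\"]"),
]

def flag_suspect_lines(source: str) -> list:
    """Return (line_number, line) pairs that match a secret pattern."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

snippet = '''\
db_host = "localhost"
api_key = "sk-test-1234"
timeout = 30
'''
findings = flag_suspect_lines(snippet)  # flags only the api_key line
```

Checks like this are fast and scale with code volume, but they only catch what their patterns describe, which is why the experts quoted above still see human verification as the final gate.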
Recent incidents have reinforced these concerns. In one widely reported case, faulty AI-generated code contributed to a major service disruption at a large e-commerce platform. The outage resulted in more than 100,000 lost orders and triggered approximately 1.6 million system errors, illustrating the potential consequences of unchecked automation.
Such events underscore the stakes involved as companies adopt AI-powered development tools. Businesses rely heavily on software systems to manage operations, customer interactions, and financial transactions, meaning even short disruptions can lead to significant losses and damage to customer trust.
Balancing speed with reliability in the AI era
The widespread adoption of AI coding tools reflects a broader trend towards automation in software development. Organisations are under pressure to release new features quickly, respond to customer demands, and remain competitive in fast-moving markets. AI has proven highly effective in accelerating these processes, allowing developers to produce code at speeds previously unattainable.
However, the increase in productivity has revealed weaknesses in existing oversight systems. Many development teams were structured around manual review processes designed for slower workflows. As AI generates code continuously, these systems struggle to maintain quality control without introducing delays.
Experts suggest that companies will need to rethink how they structure software development teams. This may involve expanding security teams, redesigning workflows to prioritise risk management, and implementing hybrid review systems that combine automation with human oversight.
Another emerging challenge is the cultural shift required within engineering teams. Developers accustomed to writing code manually must now learn how to supervise and validate automated outputs effectively. This shift requires new skills, including understanding how AI systems generate code and identifying subtle errors that may not be immediately obvious.
While artificial intelligence remains a powerful tool for innovation, its rapid adoption has created unintended consequences that companies are still learning to manage. The industry now faces the task of balancing speed with reliability, ensuring that technological progress does not outpace the safeguards needed to maintain security and stability.
As AI continues to shape the future of software development, businesses are likely to invest heavily in both automated tools and human expertise. The outcome of this balance will determine whether AI-driven coding ultimately delivers lasting efficiency gains without compromising the safety and integrity of modern digital systems.