Artificial intelligence–powered coding tools have triggered an unprecedented explosion in code production inside companies, creating a new crisis: an overwhelming surplus of code that is difficult to review and manage. Tech and financial firms are now producing far more code than before, but are struggling to audit this massive volume and control its risks.
Details
Real-world cases show that using tools like Cursor increased one company’s output from 25,000 lines of code per month to 250,000, leaving nearly one million lines awaiting review. The acceleration has not only strained development teams but has also spilled over into departments such as marketing and customer support, which must now keep pace.
The most significant shift is that coding is no longer limited to engineers. With tools developed by companies like OpenAI and Anthropic, almost any employee can build software within hours — boosting output while weakening oversight.
At the same time, there are not enough engineers to review this flood of code, especially when it comes to:
- Detecting software bugs
- Identifying security vulnerabilities
- Ensuring compliance and standards
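Parts of that screening can be automated before a human reviewer is involved. As a hedged illustration (not any specific company's tooling), a short Python script using the standard-library `ast` module can flag constructs that commonly deserve closer scrutiny, such as calls to `eval` or `exec`; the list of risky names below is illustrative, not a complete security policy:

```python
import ast

# Function names that commonly warrant closer human review.
# This set is illustrative only, not an exhaustive policy.
RISKY_CALLS = {"eval", "exec", "compile", "__import__"}

def flag_risky_calls(source: str) -> list[tuple[int, str]]:
    """Return (line number, function name) pairs for risky calls in `source`."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # Only direct calls to a bare name, e.g. eval(...), are matched here.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = "x = eval(user_input)\nprint(x)\n"
print(flag_risky_calls(sample))  # [(1, 'eval')]
```

A check like this cannot replace human review, it only prioritizes it: flagged changes go to the front of the queue, which matters when a million lines are waiting.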
This shortage has pushed companies to seek highly experienced engineers who can audit and supervise code rather than simply produce it. Open-source projects have also been affected, seeing a surge of AI-generated contributions, some of which are low-quality or poorly controlled.
Recent data indicates that around 90% of developers now use AI tools in their work, reflecting how quickly the industry is transforming. Meanwhile, tech companies have begun cutting jobs, citing efficiency gains, as projects that once required hundreds of engineers can now be completed with far fewer people.
However, this rapid expansion has introduced new risks, including:
- Difficulty assigning responsibility for AI-generated code errors
- Increased likelihood of security vulnerabilities
- Code being downloaded to personal devices, creating additional security gaps
In response, some companies have imposed strict rules, such as requiring human review for every piece of code, to avoid losing control over what AI systems produce.
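On platforms such as GitHub and GitLab, one common way to enforce a human-review rule is a CODEOWNERS file combined with branch protection that requires an approving review before any merge. A minimal sketch, with the team handle being a placeholder:

```
# CODEOWNERS — routes every changed file to a human review team.
# The '*' pattern matches all paths; the team handle is hypothetical.
*   @example-org/code-reviewers
```

Paired with a required-review branch rule, this means no code, human- or AI-written, reaches the main branch without a named person signing off.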
What’s next?
Companies are moving toward deploying additional AI tools to review AI-generated code itself, in an attempt to manage the growing surplus, even as the entire software development model undergoes a fundamental shift.
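Whatever form those AI reviewers take, companies still need a policy layer that decides when a change may actually merge. A minimal sketch in Python of such a gate, combining automated checks with mandatory human sign-off; every name and threshold here is hypothetical, not a description of any shipping product:

```python
from dataclasses import dataclass

@dataclass
class ChangeReview:
    """Review state for one proposed change; fields are illustrative."""
    automated_checks_passed: bool  # e.g. linters, tests, security scanners
    human_approvals: int           # sign-offs from human reviewers
    lines_changed: int

def may_merge(review: ChangeReview, max_unreviewed_lines: int = 0) -> bool:
    """A change merges only if the machines and at least one human agree.

    `max_unreviewed_lines` is a hypothetical policy knob: the default of 0
    means every changed line requires human review, matching the strictest
    rules some companies have adopted.
    """
    if not review.automated_checks_passed:
        return False
    if review.lines_changed > max_unreviewed_lines and review.human_approvals < 1:
        return False
    return True

print(may_merge(ChangeReview(True, 0, 120)))  # False: no human approval yet
print(may_merge(ChangeReview(True, 1, 120)))  # True: checks passed and a human signed off
```

The design point is that the AI reviewer's verdict feeds the `automated_checks_passed` flag; it accelerates review but never bypasses the human approval requirement.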