As artificial intelligence (AI) continues to progress exponentially, the future of industries like software engineering is looking increasingly automated.
According to TechCrunch, one in four Y Combinator startups is using AI to write 95% of its code, and we are already seeing models like OpenAI's GPT-4.1 aim to let coding models build entire software programs from start to finish.
According to the 2025 UiPath Agentic AI Report, 37% of organizations are already deploying agentic AI and 93% plan to follow. In five years, most forward-thinking organizations will have automation and AI at their core: agents, large language models (LLMs), predictive analytics and seamless application programming interfaces (APIs). Shopify, for example, recently announced that any new hires will essentially have to be better than AI, while Klarna is championing the AI-first workforce.
The move towards agentic AI in software development of course touches all aspects of the coding process — not least of which is the technology’s impact on app security.
It’s true that code is being generated 10x faster than at any other point in time, but still with significant bugs. Remember that with all this automation come new risks.
AI-generated code still isn’t the cleanest, even though it is becoming a huge component of software development. Research has found that almost half of the AI-generated code studied contained bugs that could lead to harmful exploitation.
Can AI really fix what AI breaks?
AI’s integration into DevSecOps – which builds security strategy into the development cycle – is becoming increasingly prevalent. Companies like GitLab and Harness have created AI-based DevSecOps platforms.
However, DeepSource, a San Francisco-based unified DevSecOps platform for securing code, has just announced the launch of three fully autonomous AI agents that promise to save coders hours of work by scanning and fixing code security vulnerabilities.
According to a company statement, the new AI Agents observe key events — like commits made to the code base — apply reasoning to optimize for their security goals, and autonomously take action to proactively keep the organization’s code base secure.
The three AI Agents released have different functions. First, a False-Positive Triage Agent decides whether security issues found in the code are valid, based on the repository’s context, its own memory, and real-world threat intelligence. If an issue is invalid, the agent automatically suppresses it with documented reasoning.
Second, the Common Vulnerabilities and Exposures (CVE) Prioritization Agent triages open-source vulnerabilities based on the repository’s context and re-prioritizes them autonomously — a task AppSec teams currently spend a lot of time doing manually.
And lastly, Autofix™ AI Autopilot puts DeepSource’s existing Autofix™ AI feature on autopilot, learning developer behavior and autonomously creating pull requests with security fixes in the code.
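The observe-reason-act loop these agents share can be sketched roughly as follows. This is a toy illustration, not DeepSource's actual API: the class, its fields, and the triage logic are all invented for the example.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Finding:
    rule: str                       # e.g. "hardcoded-secret"
    path: str
    valid: Optional[bool] = None    # None = not yet triaged

@dataclass
class TriageAgent:
    """Toy observe-reason-act loop: triage new findings, suppressing
    rules that long-term memory has marked as noisy."""
    memory: set = field(default_factory=set)  # rules judged to be false positives

    def observe(self, findings):
        # Observe: pick up findings that haven't been triaged yet.
        return [f for f in findings if f.valid is None]

    def reason(self, finding):
        # Reason: a real agent would weigh repo context and threat
        # intelligence; here we only consult memory.
        return finding.rule not in self.memory

    def act(self, findings):
        # Act: record a verdict on each untriaged finding.
        triaged = []
        for f in self.observe(findings):
            f.valid = self.reason(f)
            triaged.append(f)
        return triaged

agent = TriageAgent(memory={"unused-import"})
findings = [Finding("hardcoded-secret", "app.py"),
            Finding("unused-import", "util.py")]
results = agent.act(findings)
# The remembered-noisy rule is suppressed; the other finding is kept.
```

Teams adding to the agent's long-term memory, as the company describes below, would correspond here to growing the `memory` set.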
It may be counterintuitive to suggest that AI-driven tools can solve an AI-generated problem; however, the LLM-based AI used by code generators and the AI used in this Software Composition Analysis (SCA) tooling are very different in nature.
“Code is no longer being written just by humans. The surge of AI-generated code means 10x more code can now be developed in the same amount of time, and by less experienced developers. But we’re not speeding up our code security practices by that same factor,” says Sanket Saurav, co-founder and CEO of DeepSource. “Real end users will be impacted if companies don’t evolve their tooling to ensure they’re securing this exponentially higher volume of code.”
Code security practices are not keeping up, says Saurav, and teams relying on manual reviews can’t possibly handle 10x the volume.
What’s new about DeepSource’s AI Agents?
According to the company, DeepSource built the new AI agents to run 100% autonomously in the background for each organization. It says this is an industry first, with other companies instead building human-triggered agentic loops.
Their pricing model is different too, charging companies per agent, rather than the more common “per consumption” or “per outcome” model.
The company also noted that the agents “understand the context of the software projects” and reason about their observations based on “their memories and their team’s goals.” Teams can add to the agents’ long-term memory to better align their behavior with team goals.
“We built our AI Agents to be goal-based, and work with hundreds of signals and observations, so we are able to align these agents to act autonomously – rather than follow simple code generation loops,” says Jai Pradeesh, co-founder of DeepSource. “All the traces of our AI Agents are visible to users, so they can see how the agents reason. This can be used by companies to align how the agents behave. Doing this is not possible for generalist AI tools since they lack the code’s context that we see with static analysis.”
DeepSource SCA launch
Along with its AI Agents, DeepSource also announced the simultaneous launch of its Software Composition Analysis (SCA) solution to secure codebases against unsafe, open-source elements.
Such open-source components can make up as much as 90% of an application’s code.
The company said that the new SCA product continuously monitors and fixes the open-source supply chain’s vulnerabilities, eliminating countless hours of manual work for AppSec teams.
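At its core, SCA tooling of this kind matches a project's declared dependencies against a database of known vulnerabilities in open-source packages. A minimal sketch of that matching step follows; the package names, advisory data and CVE identifier are invented for illustration.

```python
# Toy SCA-style check: flag manifest dependencies whose pinned version
# falls inside a known-vulnerable range. Advisory data is fabricated
# for illustration, not a real CVE feed.
ADVISORIES = {
    "examplelib": {"cve": "CVE-0000-0001", "fixed_in": (2, 3, 1)},
}

def parse_version(v):
    """Turn '2.2.0' into a comparable tuple (2, 2, 0)."""
    return tuple(int(x) for x in v.split("."))

def scan(manifest):
    """Return (package, cve, upgrade_to) for each vulnerable pin."""
    hits = []
    for pkg, version in manifest.items():
        advisory = ADVISORIES.get(pkg)
        if advisory and parse_version(version) < advisory["fixed_in"]:
            fixed = ".".join(map(str, advisory["fixed_in"]))
            hits.append((pkg, advisory["cve"], fixed))
    return hits

manifest = {"examplelib": "2.2.0", "otherlib": "1.0.0"}
print(scan(manifest))  # [('examplelib', 'CVE-0000-0001', '2.3.1')]
```

A production SCA product layers far more on top of this core: transitive dependency resolution, reachability analysis, and, in DeepSource's case, automated fixes.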
DeepSource’s new additions aim to make it an all-in-one solution for the AppSec space, joining its existing product offerings, including Static Application Security Testing (SAST), Autofix™ AI, and code quality and code coverage solutions.
In February 2025, DeepSource released Globstar, an open-source project bringing the most cutting-edge code security tooling to the AppSec community, with no restrictions on commercial usage.
The company works with startups, enterprises and organizations including Babbel, Ancestry.com and NASA, helping them secure their development lifecycles via static code analysis and AI.
In 2020, the company announced a $2.6 million seed round. In 2021, it raised another $5 million round from YC, bringing the total capital raised to $7.7 million.