Harness has introduced two innovative security solutions designed to address the unique challenges organizations face when developing software with generative AI and integrating AI models and agents into applications.
The first product, AI Security, is focused on safeguarding AI-native applications during runtime, while the second product, Secure AI Coding, evaluates the code produced by AI coding assistants directly within the developer workflow.
Harness emphasizes that this launch seeks to fill critical gaps in visibility and testing as the adoption of AI technologies continues to grow across application stacks and software engineering teams.
Runtime Focus
AI Security enhances discovery, testing, and protection for AI components accessed through application programming interfaces (APIs). The suite begins with AI Discovery, which is now widely available.
AI Discovery creates a comprehensive inventory of AI-related assets and traffic patterns in a given environment. It identifies calls to large language models, multi-cloud platform servers, AI agents, and the use of third-party generative AI services, including OpenAI and Anthropic. Additionally, it flags runtime issues such as unauthenticated interfaces calling models, weak encryption measures, and the transfer of regulated data to external services.
Harness positions AI Security as an extension of API security, noting that agents and models interact with applications and external services primarily through APIs. The company has also highlighted specific AI-related threats such as prompt injection, model manipulation, and data poisoning, alongside traditional API vulnerabilities.
Two further components of AI Security are currently in beta. AI Testing conducts probes against AI-powered APIs, models, and agents to detect prompt injection, jailbreaks, and data leakage. Harness asserts that these tests can be integrated into continuous integration/continuous delivery (CI/CD) pipelines, ensuring checks occur as part of the release process.
AI Firewall introduces runtime controls specifically for generative AI threats, aligning with the OWASP Top 10 for Large Language Model Applications. It inspects both inputs and outputs, applying filters to block prompt injection attempts and prevent unauthorized data extraction. The Firewall also enforces behavioral guardrails for models and agents, according to the company.
Developer Workflow
Secure AI Coding enhances Harness’s Static Application Security Testing (SAST) capabilities by enabling scanning as code is generated within the integrated development environment. It integrates with AI coding tools such as Cursor, Windsurf, and Claude Code.
This feature highlights issues in real-time while the code is being written, rather than waiting for a later review phase. It can also send flagged code back to the AI assistant for remediation, eliminating the need for developers to switch platforms or conduct a manual scan.
“Security shouldn’t be an afterthought when using AI development tools. Our partnership with Harness initiates vulnerability detection right within the developer workflow, ensuring that all generated code undergoes screening from the very start,” stated Jeff Wang, CEO of Windsurf.
Harness emphasizes that this approach goes beyond simple linting by analyzing how data traverses through an application. It leverages a code property graph to track data flows across the entire codebase rather than just assessing the snippet produced by an assistant. This contextual analysis aids in identifying issues like injection flaws and improper data handling.
Market Context
Survey results referenced by Harness indicate that AI is being widely adopted in both application functionality and coding practices. In its State of AI-Native Application Security report, Harness revealed that 61% of new applications are now AI-driven. Moreover, it noted that security and engineering leaders often find it challenging to identify the models and agents operating in their environments and to understand their behavior in production.
In its State of AI in Software Engineering report, Harness highlighted that 63% of organizations are utilizing AI coding assistants. The company argues that increased code volume and more frequent releases place additional pressure on application security teams. Additionally, AI-generated code often comes in larger commits with less thorough review, which raises the risk of vulnerabilities being introduced into codebases.
Katie Norton, Research Manager for DevSecOps at IDC, noted that the adoption of AI-generated code raises governance issues. “As AI-assisted development becomes standard practice, the potential security ramifications of AI-generated code are turning into a significant blind spot for enterprises. IDC research shows that developers accept nearly 40% of AI-generated code without modification, allowing insecure patterns to spread as organizations accelerate their code production without matching validation and governance mechanisms, thereby widening the divide between development speed and application risk.”
Platform Integration
Harness offers a comprehensive DevSecOps platform that encompasses application security testing and runtime protection. This includes SAST, software composition analysis integrated into development workflows, software supply chain security from build to deployment, as well as tools for reporting security posture, policy, and governance. It also provides web application and API protection for production environments.
Harness aims to relay runtime findings back to developers, rather than relegating issues to a security backlog. The company positions AI Security and Secure AI Coding as enhancements that extend this model to encompass AI-generated code and AI services utilized in production.
Internal Use
Harness reports that it has implemented AI Security internally to map and oversee its own AI ecosystem, which includes calls to external service providers like OpenAI, Vertex AI, and Anthropic.
The company now monitors 111 AI assets and tracks more than 4.76 million API calls monthly. It conducts 2,500 AI testing scans each week and has addressed 92% of the identified issues. Additionally, Harness has successfully blocked 1,140 distinct threat actors who attempted over 14,900 attacks on its AI infrastructure.
AI Discovery is currently available, while AI Testing and AI Firewall are in beta. Secure AI Coding can be accessed as part of Harness SAST, which also includes integrations with the AI coding assistants mentioned.
In conclusion, Harness’s new security offerings are a vital step towards enhancing the safety and integrity of AI-driven applications and coding practices. By integrating these solutions into the developer workflow and runtime processes, organizations can better manage the risks associated with generative AI technologies.