Mitigate the risks identified in the OWASP Top 10 for Large Language Model Applications during the development of LLM applications.
Ensure the integrity and security of LLM supply chains by mitigating risks associated with third-party models, outdated components, licensing issues, and deployment vulnerabilities. Apply robust verification, auditing, and monitoring mechanisms to prevent unauthorized modifications, backdoors, and compromised dependencies that could impact model reliability and security.
To mitigate these risks, organizations must carefully vet data sources and suppliers, ensuring adherence to strict security and access controls. Continuous vulnerability scanning and prompt patching of outdated dependencies prevent exploitation of known flaws. AI red teaming should be employed to evaluate third-party models, verifying them against established benchmarks and trustworthiness criteria. Maintaining comprehensive Software Bills of Materials (SBOMs), including Machine Learning (ML) SBOMs, helps organizations track model components and detect unauthorized modifications. Collaborative development processes, such as model merging and model-handling services, require strict monitoring to prevent the injection of vulnerabilities. On-device LLMs introduce further risks: edge-deployed models need firmware integrity verification, integrity checks, vendor attestation APIs, and encrypted deployment to prevent tampering. Licensing compliance should be ensured by maintaining an inventory of licenses and automating audits to prevent unauthorized data usage.
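The SBOM-based tracking described above can be sketched as a simple integrity check: compare a model artifact's digest against the hash pinned in an ML-SBOM manifest. This is a minimal illustration; the manifest layout (`components` keyed by artifact name with a `sha256` field) is a hypothetical format, not a standard SBOM schema.

```python
import hashlib
import json

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_against_sbom(artifact_path: str, sbom_path: str, component: str) -> bool:
    """Check a model artifact's digest against the hash pinned in an ML-SBOM.

    Assumes a hypothetical manifest format:
    {"components": {"<name>": {"sha256": "<hex digest>"}}}
    """
    with open(sbom_path) as f:
        sbom = json.load(f)
    expected = sbom["components"][component]["sha256"]
    return sha256_of(artifact_path) == expected
```

In practice the pinned hashes would come from a signed SBOM produced at build time (e.g. in CycloneDX or SPDX form), so a mismatch signals an unauthorized modification of the component.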
ID | Operation | Description | Phase | Agent |
---|---|---|---|---|
SSS-02-05-03-01-01 | Vet and audit third-party models and suppliers | Review third-party models, datasets, and licenses to ensure compliance with security standards. Regularly audit supplier security and access controls. | Development | Security team, Procurement team, AI research team |
SSS-02-05-03-01-02 | Maintain an SBOM for model and software components | Use Software Bill of Materials (SBOMs) to track dependencies and detect vulnerabilities in LLM models and associated software components. | Deployment | Security team, Development team |
SSS-02-05-03-01-03 | Perform AI red teaming and model evaluations | Use adversarial testing and AI red teaming techniques to detect model weaknesses, backdoors, and data poisoning before deployment. | Development | AI research team, Security team |
SSS-02-05-03-01-04 | Monitor and secure collaborative model development environments | Apply automated monitoring tools, anomaly detection, and access restrictions to prevent model tampering in shared development environments. | Development | Infrastructure team, Security team |
SSS-02-05-03-01-05 | Encrypt and verify models deployed at the edge | Use encryption, integrity verification, and vendor attestation APIs to protect LLMs deployed on local devices and prevent tampered applications. | Deployment | Infrastructure team, Security team |
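Operation SSS-02-05-03-01-05 calls for integrity verification of edge-deployed models. One common building block is a keyed MAC: the producer tags the model bytes before shipping, and the edge device rejects any model whose tag does not verify. The sketch below uses Python's standard `hmac` module with a shared key; a real deployment would more likely use asymmetric signatures and a vendor attestation API, and the key-distribution step is out of scope here.

```python
import hashlib
import hmac

def sign_model(model_bytes: bytes, key: bytes) -> str:
    """Producer side: compute an HMAC-SHA256 tag to ship alongside the model."""
    return hmac.new(key, model_bytes, hashlib.sha256).hexdigest()

def verify_model(model_bytes: bytes, key: bytes, tag: str) -> bool:
    """Edge side: accept the model only if the tag verifies (detects tampering)."""
    expected = hmac.new(key, model_bytes, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels when comparing tags.
    return hmac.compare_digest(expected, tag)
```

A tampered model fails verification even if only a single byte changed, which is the property the integrity check relies on.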
Industry framework | Academic work | Real-world case |
---|---|---|
Information Security Manual (ISM-1923), OWASP Top 10 for LLM (LLM03:2025) | | |