[ISM] LLM risk mitigation (OWASP top 10):

The OWASP Top 10 for Large Language Model Applications are mitigated in the development of large language model applications.

[OWASP] Protect and prevent prompt injection (SSS-02-05-01)

Ensure that large language models (LLMs) are protected from prompt injection vulnerabilities that can manipulate model behavior, bypass safety protocols, and generate unintended or harmful outputs. Implement input validation, privilege restrictions, and adversarial testing to minimize the risk of direct and indirect prompt injections.

[OWASP] Implement direct/indirect input validation and verification (SSS-02-05-01-01)

Establish strict controls on user input and model processing. Implement structured prompt validation to detect and reject adversarial prompts before they reach the model. Apply content filtering mechanisms, such as semantic analysis and string-based checks, to identify and block malicious inputs. Enforce least privilege access by restricting API tokens and external integrations to only necessary functions. Segregate external and untrusted content to prevent model behavior from being altered by indirect injections. Require human verification for high-risk actions where model outputs could lead to significant decisions. Conduct regular adversarial testing to simulate real-world attacks and continuously update safety protocols. Ensure that multimodal AI models, which handle different data types, have cross-modal security measures in place. Develop clear output formatting guidelines to prevent response manipulation and improve detection of injection attempts.

Operations

ID Operation Description Phase Agent
SSS-02-05-01-01-01 Implement input validation and sanitization Apply filtering mechanisms to detect adversarial prompts and remove potentially harmful instructions before processing. Development Security team, AI engineers
SSS-02-05-01-01-02 Enforce privilege control and access restrictions Limit the model’s access to external APIs and system functionalities by implementing role-based access control (RBAC) and API token segregation. Deployment Security team, Infrastructure team
SSS-02-05-01-01-03 Apply structured output formatting and validation Define expected output formats and use deterministic validation techniques to verify model responses before they are returned to users. Post-deployment AI engineers, Product team
SSS-02-05-01-01-04 Conduct adversarial testing and attack simulations Regularly perform security assessments by simulating real-world attacks to evaluate model vulnerabilities and improve response mechanisms. Post-deployment Security team, Red team, AI engineers
SSS-02-05-01-01-05 Segregate and identify external content sources Clearly label and separate trusted and untrusted content to prevent unauthorized influence on model responses. Development AI engineers, Legal team

References

Industry framework Academic work Real-world case
Information Security Manual (ISM-1923)
OWASP Top 10 for LLM
OWASP Top 10 for LLM (LLM01:2025)

[OWASP] Ensure sensitive information disclosure (SSS-02-05-02)

Prevent the disclosure of sensitive information, including personally identifiable information (PII), financial data, proprietary algorithms, and business-critical data, by implementing strong data protection and access control measures. Ensure that LLM applications use data sanitization, privacy-preserving techniques, and secure system configurations to mitigate risks of unintended data exposure.

[OWASP] Minimize sensitive information disclosure (SSS-02-05-02-01)

To minimize sensitive information disclosure, LLM applications should implement data sanitization techniques to prevent user-provided data from being used in training models. Input validation must be applied to detect and filter out confidential or personally identifiable data before processing. Access controls should follow the principle of least privilege, ensuring that only necessary components have access to sensitive data. Restrict external data sources to prevent runtime data leaks, and use federated learning to decentralize data collection and reduce exposure risks. Differential privacy techniques should be incorporated to obscure identifiable data points, preventing attackers from reconstructing confidential information. System configurations should be secured by limiting access to internal model settings and ensuring misconfigurations do not expose sensitive details. Transparency must be maintained through clear data policies, providing users with control over their data and opt-out mechanisms for training inclusion. Advanced encryption methods such as homomorphic encryption and tokenization should be used to protect data throughout the LLM pipeline.

Operations

ID Operation Description Phase Agent
SSS-02-05-02-01-01 Implement data sanitization and redaction techniques Apply automatic redaction and tokenization methods to remove sensitive information from inputs before processing. Development Security team, AI engineers
SSS-02-05-02-01-02 Enforce strict access control policies Apply least privilege principles and restrict unauthorized access to confidential data through role-based access controls and secure API gateways. Deployment Security team, Legal team
SSS-02-05-02-01-03 Educate users on safe interactions with LLMs Provide guidelines and training sessions to inform users about the risks of inputting sensitive data and offer best practices for secure LLM usage. Post-deployment Training team
SSS-02-05-02-01-04 Utilize privacy-preserving machine learning techniques Apply federated learning and differential privacy mechanisms to ensure that models process data securely while minimizing exposure risks. Development AI research team, Privacy team

References

Industry framework Academic work Real-world case
Information Security Manual (ISM-1923)
OWASP Top 10 for LLM
OWASP Top 10 for LLM (LLM02:2025)

[OWASP] Consider entire LLM supply chain vulnerabilities (SSS-02-05-03)

Ensure the integrity and security of LLM supply chains by mitigating risks associated with third-party models, outdated components, licensing issues, and deployment vulnerabilities. Apply robust verification, auditing, and monitoring mechanisms to prevent unauthorized modifications, backdoors, and compromised dependencies that could impact model reliability and security.

[OWASP] Implement ongoing monitoring and vulnerability checking (SSS-02-05-03-01)

To mitigate these risks, organizations must carefully vet data sources and suppliers, ensuring adherence to strict security and access controls. Implement continuous vulnerability scanning and patch outdated dependencies to prevent exploitation. AI red teaming should be employed to evaluate third-party models, verifying them against established benchmarks and trustworthiness criteria. Maintaining a comprehensive Software Bill of Materials (SBOM) and Machine Learning (ML) SBOMs helps organizations track model components and detect unauthorized modifications. Collaborative development processes, such as model merging and handling services, require strict monitoring to prevent injection of vulnerabilities. Device-based LLMs introduce further risks, necessitating firmware integrity verification and encryption of edge-deployed models. Licensing compliance should be ensured by maintaining an inventory of licenses and automating audits to prevent unauthorized data usage. AI edge models must use integrity checks, vendor attestation APIs, and encrypted deployments to prevent tampering.

Operations

ID Operation Description Phase Agent
SSS-02-05-03-01-01 Vet and audit third-party models and suppliers Review third-party models, datasets, and licenses to ensure compliance with security standards. Regularly audit supplier security and access controls. Development Security team, Procurement team, AI research team
SSS-02-05-03-01-02 Maintain an SBOM for model and software components Use Software Bill of Materials (SBOMs) to track dependencies and detect vulnerabilities in LLM models and associated software components. Deployment Security team, Development team
SSS-02-05-03-01-03 Perform AI red teaming and model evaluations Use adversarial testing and AI red teaming techniques to detect model weaknesses, backdoors, and data poisoning before deployment. Development AI research team, Security team
SSS-02-05-03-01-04 Monitor and secure collaborative model development environments Apply automated monitoring tools, anomaly detection, and access restrictions to prevent model tampering in shared development environments. Development Infrastructure team, Security team
SSS-02-05-03-01-05 Encrypt and verify models deployed at the edge Use encryption, integrity verification, and vendor attestation APIs to protect LLMs deployed on local devices and prevent tampered applications. Deployment Infrastructure team, Security team

References

Industry framework Academic work Real-world case
Information Security Manual (ISM-1923)
OWASP Top 10 for LLM
OWASP Top 10 for LLM (LLM03:2025)

[OWASP] Prevent data and model poisoning (SSS-02-05-04)

Protect LLMs from data poisoning attacks that introduce vulnerabilities, biases, or backdoors into models during pre-training, fine-tuning, or embedding processes. Implement data validation, anomaly detection, and controlled data ingestion to mitigate poisoning risks and ensure model integrity.

[OWASP] Mitigate data poisoning and ensure training data integrity (SSS-02-05-04-01)

Data poisoning occurs when adversaries manipulate training, fine-tuning, or embedding data to introduce vulnerabilities, biases, or backdoors into LLMs. This can compromise model accuracy, lead to biased or toxic outputs, and create sleeper agent behaviors that activate under specific triggers. Attackers may inject harmful content into training data, introduce malware via malicious pickling, or exploit external data sources to manipulate LLM behavior. To mitigate these risks, organizations must implement data tracking and validation using tools like OWASP CycloneDX or ML-BOM to verify data provenance and detect tampering. Anomaly detection should be applied to filter adversarial inputs, and strict sandboxing should be enforced to isolate models from unverified data sources. Organizations should employ data version control (DVC) to track dataset changes and detect manipulation. Continuous model robustness testing using adversarial techniques and federated learning can help identify poisoning attempts. During inference, Retrieval-Augmented Generation (RAG) and grounding techniques should be integrated to reduce the risk of hallucinations. Monitoring training loss and model behavior for unexpected deviations can further detect signs of poisoning.

Operations

ID Operation Description Phase Agent
SSS-02-05-04-01-01 Track data origins and transformations Use ML-BOM or CycloneDX to verify the source and integrity of training and fine-tuning data, preventing poisoned datasets from influencing the model. Development Security team, AI research team
SSS-02-05-04-01-02 Implement anomaly detection on data inputs Apply automated anomaly detection to identify adversarial or manipulated data before it reaches the training pipeline. Development Security team, AI/ML engineers
SSS-02-05-04-01-03 Enforce data version control (DVC) Use data version control to track all dataset changes, ensuring transparency and the ability to roll back to verified datasets. Development Data scientists, Infrastructure team
SSS-02-05-04-01-04 Conduct adversarial robustness testing Perform red team simulations and adversarial model testing to evaluate LLM resilience against poisoning attacks. Development Security team, Red team
SSS-02-05-04-01-05 Integrate retrieval-augmented generation (RAG) for inference security Use RAG-based techniques during inference to limit reliance on potentially poisoned training data and enhance response accuracy. Deployment Development team

References

Industry framework Academic work Real-world case
Information Security Manual (ISM-1923)
OWASP Top 10 for LLM
OWASP Top 10 for LLM (LLM04:2025)

[OWASP] Prevent improper output handling (SSS-02-05-05)

Ensure that outputs generated by LLMs are properly validated, sanitized, and encoded before being passed to downstream systems to prevent security vulnerabilities such as remote code execution, SQL injection, and cross-site scripting (XSS). Adopt a zero-trust approach to model outputs and implement strict security controls, including context-aware output encoding, input validation, and anomaly detection.

[OWASP] Implement strict validation and sanitization for LLM-generated outputs (SSS-02-05-05-01)

Improper output handling occurs when LLM-generated responses are passed to other components without proper validation, sanitization, or security controls. This can lead to critical security vulnerabilities such as XSS, SQL injection, remote code execution (RCE), and privilege escalation. Attackers can manipulate prompts to generate malicious outputs that interact with backend systems, bypass security controls, and execute unintended actions. Some high-risk scenarios include LLM-generated content being injected into system shells, web browsers rendering JavaScript or Markdown without escaping, and LLM-based SQL queries being executed without parameterization. To mitigate these risks, organizations should treat LLM outputs as untrusted data, applying strict input validation, output encoding, and context-aware escaping. Following OWASP ASVS guidelines for input validation ensures that model responses do not trigger undesired executions in different contexts. Content Security Policies (CSP) should be strictly enforced to prevent XSS attacks, and robust logging and monitoring should be implemented to detect anomalies or suspicious behavior in LLM outputs. Using parameterized queries for database interactions further reduces the risk of SQL injection.

Operations

ID Operation Description Phase Agent
SSS-02-05-05-01-01 Validate and sanitize LLM-generated outputs Implement context-aware sanitization and validation checks for all LLM-generated outputs before passing them to downstream applications. Development Security team, Development team
SSS-02-05-05-01-02 Enforce output encoding based on usage context Apply HTML encoding for web content, SQL escaping for database queries, and JavaScript escaping for client-side execution to prevent exploitation. Development Development team
SSS-02-05-05-01-03 Adopt parameterized queries for database interactions Use prepared statements and parameterized queries for all database operations involving LLM outputs to prevent SQL injection. Development Security team, Dvelopment team
SSS-02-05-05-01-04 Implement strict content security policies (CSP) Apply CSP rules to prevent LLM-generated content from executing unintended JavaScript code, reducing XSS risks. Deployment Security team, DevOps team
SSS-02-05-05-01-05 Monitor and log LLM-generated outputs for anomalies Deploy automated anomaly detection and real-time logging to track and respond to suspicious LLM outputs, preventing exploitation attempts. Post-deployment Security team, DevOps team

References

Industry framework Academic work Real-world case
Information Security Manual (ISM-1923)
OWASP Top 10 for LLM
OWASP Top 10 for LLM (LLM05:2025)

[OWASP] Limit and control LLM agency (SSS-02-05-06)

LLM-based systems interact with various extensions, tools, and external systems. Without strict control, excessive functionality, permissions, or autonomy can lead to unintended or damaging actions, including unauthorized data modification or system compromise. Minimizing extension capabilities, enforcing strict permissions, and requiring human approval for critical actions mitigate these risks.

[OWASP] Implement strict controls on LLM extensions, permissions, and autonomous actions (SSS-02-05-06-01)

Limit the functions and extensions available to LLMs to only those essential for their intended operations. Implement principle of least privilege by restricting the permissions granted to extensions and ensuring they do not perform unintended actions. Avoid open-ended extensions that could be manipulated to execute unauthorized commands. Use human-in-the-loop control mechanisms for high-impact actions, requiring user approval before execution. Apply complete mediation by enforcing security checks in downstream systems rather than relying on the LLM to make authorization decisions. Monitor all interactions between LLM agents and external systems, logging actions to detect and respond to anomalies. To further strengthen security, sanitize both inputs and outputs to prevent prompt injection attacks from altering the behavior of LLM-based applications.

Operations

ID Operation Description Phase Agent
SSS-02-05-06-01-01 Minimize extension functionality Restrict extensions and tools to include only the necessary functions required for the LLM’s intended operation. Development Development team, Security team
SSS-02-05-06-01-02 Enforce least privilege for extensions Ensure extensions interact with downstream systems using the minimum permissions necessary, preventing unauthorized actions like data deletion or modification. Deployment Security team, Operation team
SSS-02-05-06-01-03 Require user approval for high-impact actions Implement human-in-the-loop mechanisms that require manual user confirmation for high-risk operations like financial transactions or data deletion. Post-deployment Product team, Security team
SSS-02-05-06-01-04 Implement monitoring and anomaly detection Log and analyze LLM-driven interactions with external systems to detect excessive or unexpected actions. Apply rate-limiting to prevent rapid unauthorized operations. Post-deployment Security team, Operation team
SSS-02-05-06-01-05 Sanitize LLM inputs and outputs Apply OWASP ASVS-based validation and sanitization techniques to prevent prompt injection and command execution vulnerabilities. Development Security team, Operation team

References

Industry framework Academic work Real-world case
Information Security Manual (ISM-1923)
OWASP Top 10 for LLM
OWASP Top 10 for LLM (LLM06:2025)

[OWASP] Prevent system prompt leakage (SSS-02-05-07)

Ensure that system prompts do not contain sensitive information such as API keys, credentials, role structures, or internal rules that could be exploited if leaked. System prompts should not be treated as security controls, nor should they be used to enforce authorization or privilege management. Proper separation of sensitive data from prompts and implementing external guardrails can mitigate risks associated with system prompt leakage.

[OWASP] Implement safeguards to prevent system prompt exposure and enforce security independently (SSS-02-05-07-01)

System prompt leakage occurs when LLM system prompts reveal sensitive details that were never intended for exposure, such as authentication credentials, internal business rules, or security policies. Attackers can extract this information through prompt engineering, reverse engineering techniques, or prompt injection attacks to gain unauthorized access or bypass security controls. To mitigate system prompt leakage risks, organizations should never store sensitive data directly in system prompts and should externalize security-critical configurations. Additionally, avoid relying on system prompts for enforcing strict security behaviors, such as content filtering or authorization, as LLMs can be manipulated into bypassing their own instructions. Instead, independent security guardrails should be implemented outside the model. Critical security controls, such as authorization checks and privilege separation, must be handled independently from the LLM. LLM-based applications should rely on external enforcement mechanisms rather than system prompts for defining access permissions. Where multiple levels of access are required, separate LLM agents should be used, each with only the minimum permissions needed to perform their task. Regularly monitor and test system prompts to ensure they do not inadvertently reveal internal logic, filtering criteria, or business-sensitive information.

Operations

ID Operation Description Phase Agent
SSS-02-05-07-01-01 Separate sensitive data from system prompts Ensure API keys, authentication details, and role-based access controls are not embedded in system prompts, using secure externalized storage instead. Development Security team, DevOps team
SSS-02-05-07-01-02 Implement independent security guardrails Security measures such as access control, privilege checks, and sensitive content filtering must be enforced outside the LLM using deterministic and auditable methods. Deployment Security team
SSS-02-05-07-01-03 Limit system prompt exposure through monitoring and testing Regularly test and audit LLM-generated outputs to detect unexpected disclosures of system prompt details, using automated detection mechanisms. Post-deployment Security team
SSS-02-05-07-01-04 Avoid reliance on system prompts for strict behavior control Implement external mechanisms for enforcing security behavior rather than relying solely on LLM system prompts to enforce rules. Development Security team, AI engineers
SSS-02-05-07-01-05 Use separate agents for different privilege levels Where different levels of access are required, use separate LLM agents with the least privilege necessary for their Developmentated tasks. Deployment Security team, Development team

References

Industry framework Academic work Real-world case
Information Security Manual (ISM-1923)
OWASP Top 10 for LLM
OWASP Top 10 for LLM (LLM07:2025)

[OWASP] Strengthen security of vectors and embeddings (SSS-02-05-08)

Ensure that vectors and embeddings used in LLM-based applications are securely managed, accessed, and validated to prevent unauthorized data access, information leaks, poisoning attacks, and behavioral alterations. Implement robust access controls, data validation mechanisms, and continuous monitoring to mitigate the risks associated with vector-based knowledge retrieval and augmentation techniques.

[OWASP] Implement secure vector and embedding management to prevent exploitation (SSS-02-05-08-01)

Vectors and embeddings play a critical role in retrieval-augmented generation (RAG) systems, enabling LLMs to access external knowledge sources. However, mismanagement of vectors and embeddings can introduce serious vulnerabilities, such as unauthorized data access, embedding inversion attacks, and data poisoning, which can compromise confidentiality, integrity, and trustworthiness of LLM applications. To mitigate these risks, fine-grained access controls must be enforced to prevent unauthorized retrieval of embeddings and to restrict cross-context information leaks in multi-tenant environments. Data validation pipelines should be established to ensure only vetted, trustworthy sources contribute to the knowledge base, preventing manipulation via poisoned data or adversarial inputs. Embedding inversion attacks, where attackers attempt to extract sensitive data from stored embeddings, should be countered with differential privacy techniques and encryption methods to obscure relationships between raw data and vector representations. Logging and monitoring of all retrieval activities should be maintained to detect anomalous patterns, unauthorized access attempts, and unexpected data leakage incidents. In addition, retrieval augmentation should be evaluated for behavioral alterations, as improper tuning can reduce the model’s effectiveness, empathy, or decision-making reliability. Continuous testing and auditing of augmented models should be performed to ensure they retain their intended functionality without introducing biases, conflicting knowledge, or undesirable responses.

Operations

ID Operation Description Phase Agent
SSS-02-05-08-01-01 Enforce strict access control for vector storage and retrieval Implement fine-grained permission controls for vector databases to ensure that users and applications can only access the data relevant to their scope. Prevent unauthorized cross-group access in multi-tenant environments. Development Security team, AI engineers, DevOps team
SSS-02-05-08-01-02 Implement robust data validation and filtering Develop automated pipelines to validate, sanitize, and classify input data before embedding into the vector database. Implement filtering mechanisms to detect hidden adversarial content, such as invisible text-based poisoning attacks. Development AI governance team, Data engineers, Security team
SSS-02-05-08-01-03 Apply differential privacy and encryption to embeddings Use differential privacy techniques to prevent attackers from extracting meaningful data from stored embeddings. Encrypt sensitive vector data to mitigate embedding inversion risks. Deployment Security team, Infrastructure team
SSS-02-05-08-01-04 Monitor embedding retrieval activities for anomalies Maintain detailed, immutable logs of all vector retrievals to detect and respond to suspicious queries or unauthorized access attempts. Implement anomaly detection algorithms to flag unusual embedding interactions. Post-deployment Security team, Operation team
SSS-02-05-08-01-05 Evaluate the impact of retrieval augmentation on model behavior Continuously analyze whether retrieval-augmented knowledge affects the model’s performance, empathy, or decision-making consistency. Adjust augmentation parameters to maintain desired response quality while mitigating unintended alterations. Post-deployment AI governance team, Data engineers, Development team

References

Industry framework Academic work Real-world case
Information Security Manual (ISM-1923)
OWASP Top 10 for LLM
OWASP Top 10 for LLM (LLM08:2025)

[OWASP] Mitigate misinformation (SSS-02-05-09)

Ensure LLM-generated content is accurate, verifiable, and reliable by implementing fact-checking mechanisms, retrieval-augmented generation (RAG), and human oversight. Reduce overreliance on AI-generated content by educating users, integrating validation mechanisms, and designing interfaces that clearly communicate limitations and risks of LLM outputs.

[OWASP] Implement safeguards to detect and prevent misinformation in LLM outputs (SSS-02-05-09-01)

Misinformation in LLM-generated content can lead to security risks, reputational damage, and legal liability when false or misleading information is presented as fact. The primary cause of misinformation is hallucination, where the model fabricates content based on statistical patterns rather than verifiable facts. However, biases in training data, lack of external validation, and improper user reliance further exacerbate the risks. To prevent misinformation, retrieval-augmented generation (RAG) should be used to ensure real-time referencing of accurate, vetted sources. Fine-tuning techniques, such as chain-of-thought prompting and parameter-efficient tuning (PET), can improve LLM response quality and factual consistency. User interfaces and APIs should be designed to clearly indicate when content is AI-generated, and automatic validation mechanisms must be incorporated to cross-check responses against trusted databases. Additionally, secure software development practices should be followed to prevent LLMs from suggesting unsafe code that could introduce vulnerabilities into critical systems. For sensitive applications, human oversight and domain-specific fact-checking must be in place to ensure generated outputs align with expert-verified knowledge. Finally, education and training programs should be provided to raise awareness about LLM limitations, ensuring users apply critical thinking and independent verification when interacting with AI-generated content.

Operations

ID Operation Description Phase Agent
SSS-02-05-09-01-01 Integrate retrieval-augmented generation (RAG) for verified responses Enhance LLM response reliability by retrieving accurate, vetted information from trusted external sources. Implement live referencing to minimize hallucination risks. Development AI engineers, Data science team, Security team
SSS-02-05-09-01-02 Implement cross-verification and human oversight Development fact-checking workflows where human reviewers verify critical or sensitive LLM-generated responses before they are used in decision-making. Deployment Compliance team, Domain experts
SSS-02-05-09-01-03 Develop automatic validation mechanisms for output accuracy Implement automated tools that validate AI-generated content against known facts, scientific databases, and expert-reviewed knowledge. Deployment AI research team, Deveopment team
SSS-02-05-09-01-04 Development user interfaces that communicate AI-generated content limitations Clearly label AI-generated content, add disclaimers about potential inaccuracies, and provide guidance for verifying critical information. Development Development team
SSS-02-05-09-01-05 Train users and developers on misinformation risks Educate users about LLM limitations, the risks of hallucinations, and best practices for verifying AI-generated outputs. Implement domain-specific training for industries such as healthcare, finance, and legal sectors. Post-deployment Training team, Legal team

References

Industry framework Academic work Real-world case
Information Security Manual (ISM-1923)
OWASP Top 10 for LLM
OWASP Top 10 for LLM (LLM09:2025)

[OWASP] Prevent unbounded consumption (SSS-02-05-10)

Implement mechanisms to prevent Large Language Models (LLMs) from being exploited through uncontrolled inputs, excessive queries, or adversarial attacks. Ensure that resource allocation, access controls, rate limiting, and monitoring systems are enforced to mitigate denial of service (DoS), financial exploitation, model theft, and system degradation.

[OWASP] Implement controls to mitigate unbounded consumption risks in LLM applications (SSS-02-05-10-01)

Unbounded consumption vulnerabilities occur when LLM applications allow excessive, uncontrolled resource usage, leading to denial of service (DoS), economic drain, model extraction, and operational degradation. Attackers can manipulate input size, processing complexity, or frequency of queries to exhaust system resources and exploit the model. To mitigate these risks, input validation and rate limiting must be implemented to prevent oversized queries and high-volume API calls. Resource allocation management and automated scaling should be enforced to ensure fair usage and maintain performance under load spikes. Timeouts, throttling, and sandboxing must be utilized to restrict model access to external resources and internal systems, reducing potential attack surfaces. To prevent model extraction, techniques like logit restriction, watermarking, and adversarial robustness training should be applied. Strict access control policies and centralized ML model inventories will ensure that only authorized entities can access and deploy models, minimizing unauthorized use or replication risks. Comprehensive monitoring and anomaly detection will enable rapid response to suspicious activity and emerging threats. By implementing multi-layered defenses, organizations can protect LLM applications from financial exploitation, service degradation, and intellectual property theft while maintaining reliable, controlled operations.

Operations

ID Operation Description Phase Agent
SSS-02-05-10-01-01 Enforce rate limiting and request quotas Implement strict limits on the number of requests per user or API key to prevent excessive queries from overwhelming system resources. Deployment AI engineers, Security team, Infrastructure team
SSS-02-05-10-01-02 Implement input validation and query restrictions Validate input sizes, content, and formatting to prevent oversized or adversarial queries from consuming excessive resources. Development Development team, Security team
SSS-02-05-10-01-03 Monitor resource consumption and detect anomalies Deploy real-time logging and monitoring tools to track computational usage, detect unusual patterns, and prevent excessive resource drains. Post-deployment Operation team, Security team, DevOps team
SSS-02-05-10-01-04 Prevent unauthorized model extraction with watermarking and logging Use watermarking to detect unauthorized use of generated content and log API interactions to monitor for potential model theft attempts. Deployment Security team
SSS-02-05-10-01-05 Enforce privilege-based access controls for model interactions Restrict LLM access using role-based access control (RBAC) and enforce least-privilege policies for API and system-level permissions. Preparation Security team

References

Industry framework Academic work Real-world case
Information Security Manual (ISM-1923)
OWASP Top 10 for LLM
OWASP Top 10 for LLM (LLM10:2025)