[ISM] Evaluation of LLM applications:

Large language model applications evaluate the sentence perplexity of user prompts to detect and mitigate adversarial suffixes designed to assist in the generation of sensitive or harmful content.
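
As a rough illustration of this control, the sketch below scores a prompt's perplexity with an off-the-shelf causal language model and flags prompts whose perplexity exceeds a threshold, which is the typical signature of an appended machine-generated adversarial suffix. The choice of gpt2 as the scoring model and the threshold value are assumptions for illustration, not part of the control.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: GPT-2 as the scoring model; any approved causal LM works.
MODEL_NAME = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sentence_perplexity(text: str) -> float:
    """Return the perplexity of `text` under the scoring model."""
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing the inputs as labels makes the model return the mean
        # cross-entropy over the sequence; exp() of that is perplexity.
        loss = model(input_ids, labels=input_ids).loss
    return float(torch.exp(loss))

# Assumption: threshold tuned on a sample of benign prompts per deployment.
PERPLEXITY_THRESHOLD = 1000.0

def looks_adversarial(prompt: str) -> bool:
    """Flag prompts whose perplexity suggests an appended adversarial suffix."""
    return sentence_perplexity(prompt) > PERPLEXITY_THRESHOLD
```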

[NIST AI RMF] Detect and block harmful LLM content (SSS-02-06-01)

Dangerous, Violent, or Hateful Content: Implement safeguards to detect and block prompts or outputs that promote or contain violent, inciting, radicalizing, or threatening language. Use natural language processing techniques, such as sentiment analysis and toxicity detection, to identify and prevent the generation of content that encourages self-harm, illegal activities, or hateful and stereotypical expressions. Establish mechanisms to control public exposure to this harmful content and ensure compliance with legal and ethical standards.
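
A minimal sketch of the detection step follows, assuming publicly available Hugging Face classifiers (unitary/toxic-bert for toxicity and the pipeline's default sentiment model); the model choices and the threshold are illustrative stand-ins for whatever moderation models the deployment has approved.

```python
from transformers import pipeline

# Assumptions: unitary/toxic-bert and the pipeline's default sentiment model
# are illustrative choices; use whichever classifiers the deployment approves.
toxicity_clf = pipeline("text-classification", model="unitary/toxic-bert")
sentiment_clf = pipeline("sentiment-analysis")

TOXICITY_THRESHOLD = 0.5  # illustrative; tune against policy and test data

def screen_text(text: str) -> dict:
    """Score a prompt or model output against the content safeguards."""
    tox = toxicity_clf(text, truncation=True)[0]    # e.g. {"label": "toxic", "score": 0.97}
    mood = sentiment_clf(text, truncation=True)[0]  # e.g. {"label": "NEGATIVE", "score": 0.88}
    return {
        "toxicity_label": tox["label"],
        "toxicity_score": tox["score"],
        "sentiment": mood["label"],
        # Sentiment alone is a weak signal; the block decision here rests on
        # toxicity, with sentiment kept as supporting context for reviewers.
        "block": tox["score"] >= TOXICITY_THRESHOLD,
    }
```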

[NIST AI RMF] Implement safeguards against harmful AI-generated content (SSS-02-06-01-01)

Develop comprehensive governance policies to mitigate the risks of generating violent, inciting, or hateful content, including clear content moderation standards and response protocols for managing incidents involving dangerous outputs. Screen training datasets rigorously to eliminate harmful biases, stereotypes, and radicalizing material.

Introduce layered safeguards in the content generation pipeline, such as sentiment analysis, classifiers, and toxicity detection, to filter harmful language. Continuously monitor model outputs with automated tools and manual audits to ensure adherence to established safety standards, and implement real-time monitoring so that harmful outputs are detected promptly and content moderation filters block them before they reach users.

Engage external reviewers and diverse stakeholders to identify and address potential biases missed internally, and conduct regular audits of model outputs to verify they do not disproportionately target or disparage specific groups. Align all stakeholders with incident response plans for cases of potentially illegal or harmful content dissemination, update safeguards continually to counter evolving threats, and maintain public-facing response protocols so that any incident is addressed swiftly and transparently. Together, these measures support the ethical and safe deployment of AI systems.
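
The dataset screening step described above could be approximated as follows. The JSON Lines layout with a "text" field, the file paths, and the toxicity classifier are assumptions for illustration; records flagged by the classifier are diverted to an audit file so external reviewers can confirm or overturn the automated decision.

```python
import json
from transformers import pipeline

# Assumptions: training examples are JSON Lines records with a "text" field,
# and the classifier is an illustrative stand-in for the approved one.
classifier = pipeline("text-classification", model="unitary/toxic-bert")
SCREEN_THRESHOLD = 0.5  # illustrative; tune against policy and test data

def screen_dataset(in_path: str, out_path: str, audit_path: str) -> None:
    """Drop records flagged as harmful and write an audit trail for review."""
    with open(in_path) as src, open(out_path, "w") as dst, open(audit_path, "w") as audit:
        for line in src:
            record = json.loads(line)
            result = classifier(record["text"], truncation=True)[0]
            if result["score"] >= SCREEN_THRESHOLD:
                # Flagged records go to the audit file so external reviewers
                # can confirm or overturn the automated decision.
                audit.write(json.dumps({"record": record, "flag": result}) + "\n")
            else:
                dst.write(json.dumps(record) + "\n")
```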

Operations

ID | Operation | Description | Phase | Agent
SSS-02-06-01-01-01 | Implement real-time monitoring and safeguards | Establish mechanisms to detect and block adversarial prompts and harmful content in real time using perplexity evaluation, classifiers, and content moderation filters (see the sketch after this table). | Deployment | Security team, AI governance team
SSS-02-06-01-01-02 | Develop and enforce governance policies | Create comprehensive policies to manage risks, prevent the creation of harmful content, and establish protocols for responding to public exposure incidents. | Preparation | Legal team, Governance team, Development teams
SSS-02-06-01-01-03 | Screen and audit training datasets for bias | Regularly evaluate datasets used for AI model training to identify and remove biased or harmful content that could lead to radicalization, stereotyping, or hateful outputs. | Development | Data engineering team, External reviewers
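
Operation SSS-02-06-01-01-01 can be pictured as a gate wrapped around the generation call. The sketch below (function and logger names are hypothetical) runs a list of check callables, such as the perplexity and toxicity checks sketched earlier, over both the prompt and the output, blocks anything flagged before it reaches the user, and records the incident for the response process.

```python
import json
import logging
from datetime import datetime, timezone
from typing import Callable

logging.basicConfig(level=logging.INFO)
incident_log = logging.getLogger("moderation.incidents")

Check = Callable[[str], bool]  # e.g. looks_adversarial, or a toxicity check
REFUSAL = "This request cannot be processed."

def log_incident(stage: str, check: Check, text: str) -> None:
    """Record a blocked item so teams can follow the incident response plan."""
    incident_log.warning(json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "stage": stage,
        "check": getattr(check, "__name__", repr(check)),
        "excerpt": text[:200],
    }))

def moderation_gate(prompt: str, generate: Callable[[str], str], checks: list[Check]) -> str:
    """Block harmful prompts and outputs before they reach the user."""
    for check in checks:
        if check(prompt):
            log_incident("prompt", check, prompt)
            return REFUSAL
    output = generate(prompt)
    for check in checks:
        if check(output):
            log_incident("output", check, output)
            return REFUSAL
    return output
```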

References

Industry framework
Information Security Manual (ISM-1924)
NIST Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (2.3)
NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0)