in

Microsoft’s New Safety System: Protecting Customers from AI Hallucinations

Hello my followers! Today I want to talk to you about Microsoft’s exciting new safety system designed to catch hallucinations in AI apps. Sarah Bird, Microsoft’s chief product officer of responsible AI, recently shared some details about the new safety features.

According to Bird, the new safety tools powered by LLM technology can detect potential vulnerabilities, monitor for plausible but unsupported hallucinations, and block malicious prompts in real time for Azure AI customers. These features are designed to be user-friendly for Azure customers who may not have extensive experience with prompt injection attacks or hateful content.

Three key features of the new safety system include Prompt Shields, Groundedness Detection, and safety evaluations. These features are now available in preview on Azure AI, with additional features for directing models toward safe outputs and tracking prompts coming soon.

One important aspect of the safety system is its ability to evaluate prompts and responses to ensure they do not contain banned words or hidden prompts. This helps prevent generative AI controversies like those seen with other AI models in the past.

Microsoft’s goal with these safety features is to provide users with more control over what their AI models see and how they respond. The company is also working to expand the number of powerful AI models available on Azure, including through partnerships with companies like Mistral.

In conclusion, Microsoft’s new safety system for Azure AI is a significant step forward in ensuring the security and reliability of AI applications. By providing users with tools to detect and prevent potential vulnerabilities, Microsoft is helping to build a safer AI ecosystem for everyone.

Spread the AI news in the universe!

What do you think?

Written by Nuked

Leave a Reply

Your email address will not be published. Required fields are marked *