AI startup Mistral has launched an API for moderating potentially toxic, or otherwise problematic, text in a range of languages. Pitched as a challenge to OpenAI's comparable tooling, the multilingual moderation API is aimed at growing concerns around AI safety and can detect harmful content across nine categories.
The API powers the moderation service in Mistral's Le Chat assistant. It is backed by a fine-tuned model, Ministral 8B, and can be tailored to specific applications and safety standards. Mistral is releasing two endpoints: one for raw text and one for conversational content.
The French start-up says the system can be adapted to different safety standards, part of a broader push to position itself as a more security-minded alternative to OpenAI and other AI vendors.
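Based on Mistral's description, a call against the raw-text endpoint could look roughly like the sketch below. The SDK method, model alias, and response fields shown here are assumptions drawn from the company's developer tooling and may differ in practice.

```python
# Sketch of a raw-text moderation call, assuming the `mistralai` Python SDK
# exposes `classifiers.moderate` and a `mistral-moderation-latest` model alias.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.classifiers.moderate(
    model="mistral-moderation-latest",
    inputs=["Example user-generated comment to screen before publishing."],
)

# One result is returned per input string, with a score per policy category.
for result in response.results:
    print(result.category_scores)
```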
The Mistral API can classify text into nine different categories: sexual content, hate and discrimination, violence and threats, dangerous and criminal content, self-harm, health, financial, law, and personally identifiable information (PII).
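For the conversational endpoint, the model classifies a reply in the context of the full exchange, and each category comes back with its own score, so developers can set thresholds suited to their application. The following sketch illustrates that pattern; the `moderate_chat` method, the field names, and the 0.5 cutoff are assumptions used for illustration, not confirmed details of the API.

```python
# Sketch of a conversational moderation call with a per-category threshold.
# Method and field names are assumptions based on the `mistralai` SDK.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

THRESHOLD = 0.5  # illustrative cutoff; tune per application and category

response = client.classifiers.moderate_chat(
    model="mistral-moderation-latest",
    inputs=[
        {"role": "user", "content": "Example user prompt"},
        {"role": "assistant", "content": "Example assistant reply"},
    ],
)

# Flag any category whose score exceeds the chosen threshold.
flagged = {
    category: score
    for category, score in response.results[0].category_scores.items()
    if score > THRESHOLD
}
print(flagged or "No categories flagged")
```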
Mistral claims its moderation model is highly accurate, though it also concedes the system is a work in progress. Notably, the company did not publish benchmarks comparing the API's performance with other popular moderation APIs.