shieldtext integrates a customized large language model (LLM) to filter offensive messages in real time. Users can tailor the AI to their own preferences so that it also filters offensive messages that hide behind a double meaning.
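As an illustration only, here is a minimal sketch of how a per-user customized LLM filter could sit in the message path. Everything in it is a hypothetical stand-in, not shieldtext's actual API: `call_llm` is stubbed with a trivial keyword check so the sketch runs, and the prompt wording and `Verdict` type are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    blocked: bool
    reason: str = ""

SYSTEM_PROMPT = (
    "You are a real-time content filter. Reply 'OFFENSIVE: <reason>' or 'OK'. "
    "Watch for double meanings. The recipient's custom rules:\n{rules}"
)

def call_llm(system_prompt: str, message: str) -> str:
    # Placeholder: in production this would be a call to the hosted LLM.
    # A trivial keyword stand-in keeps the sketch runnable end to end.
    return "OFFENSIVE: contains a slur" if "slur" in message.lower() else "OK"

def filter_message(message: str, user_rules: list[str]) -> Verdict:
    """Classify one incoming message against the recipient's custom rules."""
    answer = call_llm(SYSTEM_PROMPT.format(rules="\n".join(user_rules)), message)
    if answer.startswith("OFFENSIVE"):
        return Verdict(blocked=True, reason=answer.partition(":")[2].strip())
    return Verdict(blocked=False)
```

Injecting the recipient's own rules into every prompt is what makes the filter per-user customizable rather than one global policy.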
When an offensive message slips past the AI, the recipient can tell the AI why the message is offensive, and the AI immediately refines its filter from that feedback, learning from its mistakes so that similar messages are caught in the future.
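One simple way to realize this report-and-learn loop is to fold each report into the recipient's rule list, so the very next classification prompt already contains the counter-example. The function below is a hypothetical sketch of that idea, reusing `filter_message` from the previous sketch; the rule phrasing is an assumption.

```python
def report_missed_message(user_rules: list[str], message: str, reason: str) -> None:
    """Append the recipient's explanation as a new rule. Because the filter
    injects user_rules into every prompt, the next classification already
    knows about this kind of message."""
    user_rules.append(f'Block messages similar to "{message}". Reason: {reason}')

# Example: a sarcastic insult slipped through, so the recipient reports it.
rules: list[str] = []
report_missed_message(rules, "Nice work, for someone like you.",
                      "backhanded compliment targeting my background")
```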
Our AI-based technology is especially effective against offensive messages with a double meaning, such as a backhanded compliment that reads as praise on the surface but is an insult in context, which keyword-based filters typically miss.
The AI also scans each user's profile picture and blocks violent or sexually explicit images.
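A sketch of how such an upload check could look, under stated assumptions: `classify_profile_picture` stands in for a vision-model call (stubbed here so the code runs), and the category names in `UNSAFE_LABELS` are invented for illustration.

```python
UNSAFE_LABELS = {"violence", "sexual"}  # assumed category names

def classify_profile_picture(image_bytes: bytes) -> set[str]:
    # Placeholder for a vision-model call returning unsafe-content labels.
    # A real deployment would send the image to a multimodal model.
    return set()  # the stub treats every image as safe

def accept_profile_picture(image_bytes: bytes) -> bool:
    """Reject the upload if the model flags violent or sexual content."""
    return not (classify_profile_picture(image_bytes) & UNSAFE_LABELS)
```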
A smart shadow ban automatically activates the AI filter for users who have been reported for sending toxic messages.
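The sketch below shows one way this trigger could work; the report threshold is an assumed value, not shieldtext's actual policy, and the toxicity verdict is taken as a parameter (it could come from `filter_message` above).

```python
REPORT_THRESHOLD = 3  # assumed value; the real trigger is a product decision

def is_shadow_banned(report_count: int) -> bool:
    """A user becomes shadow-banned once enough toxicity reports accumulate."""
    return report_count >= REPORT_THRESHOLD

def deliver(message: str, sender_report_count: int, looks_toxic: bool) -> bool:
    """Return True if the message reaches the recipient. For shadow-banned
    senders, toxic messages are dropped silently: the sender sees no error
    and keeps believing the message was delivered."""
    if is_shadow_banned(sender_report_count) and looks_toxic:
        return False
    return True
```

Dropping the message silently, rather than showing an error, is what makes the ban a shadow ban: the toxic sender gets no signal to evade.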
The AI handles the entire process: blocking toxic messages, shadow-banning reported users, reviewing profile pictures, and more.