TOXINATOR!

Identify rude, harmful, or hateful comments instantly to keep conversations respectful, safe, and engaging.

CHECK TOXICITY NOW!

THE SCOREBOARD!

MOST TOXIC!

0.00%

LEAST TOXIC!

100%

AVERAGE TOXICITY

0.00%

⚡ SYSTEM SPECS

This system relies on a Bidirectional LSTM (BiLSTM) deep learning model for fast, accurate classification.

KEY PERFORMANCE METRICS:

  • Model Size: Over 6.49 million parameters, enabling the model to capture complex language patterns.
  • Precision: Approximately 92.8% on held-out evaluation data.
  • Core Components: Bidirectional LSTM for deep context understanding, paired with dense layers for final multi-label scoring.
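As a rough sketch of the multi-label scoring step named above — a dense layer followed by a per-label sigmoid, so each label gets an independent probability — using hypothetical labels and toy weights (the real model's label set, weights, and BiLSTM features are not shown here):

```python
import math

# Hypothetical label set for illustration; the page does not list the real labels.
LABELS = ["toxic", "insult", "threat"]

def sigmoid(x):
    # Squashes a real-valued logit into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def multi_label_scores(features, weights, biases):
    """Dense layer + per-label sigmoid: unlike softmax, each label is scored
    independently, so one comment can be both 'toxic' and 'insult' at once."""
    scores = {}
    for label, w, b in zip(LABELS, weights, biases):
        logit = sum(f * wi for f, wi in zip(features, w)) + b
        scores[label] = sigmoid(logit)
    return scores

# Toy pooled BiLSTM output (3 features) and toy per-label weights/biases.
features = [0.8, -0.2, 1.1]
weights = [[1.2, 0.4, 0.9], [0.5, -0.3, 0.7], [-0.6, 0.2, 0.1]]
biases = [-0.5, -1.0, -2.0]

print(multi_label_scores(features, weights, biases))
```

Because the sigmoids are independent, the scores need not sum to 1; a per-label threshold (commonly 0.5) turns them into flags.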

POWERED BY: TensorFlow (BiLSTM) for speed and Gemini for contextual commentary.
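To make the 6.49-million-parameter figure concrete, here is a back-of-the-envelope parameter count for an Embedding → BiLSTM → Dense stack. The vocabulary size, embedding dimension, and LSTM width below are guesses chosen to land near that figure — the page does not state the real hyperparameters:

```python
def lstm_params(units, input_dim):
    # A standard LSTM has 4 gates, each with a kernel over the input,
    # a recurrent kernel over the hidden state, and a bias vector.
    return 4 * (units * input_dim + units * units + units)

# Assumed hyperparameters (not stated on the page).
vocab_size, embed_dim, lstm_units, num_labels = 20_000, 300, 128, 6

embedding = vocab_size * embed_dim                   # lookup table
bilstm = 2 * lstm_params(lstm_units, embed_dim)      # forward + backward LSTMs
dense = (2 * lstm_units) * num_labels + num_labels   # concat of both directions

total = embedding + bilstm + dense
print(f"{total:,} parameters")  # lands near, not exactly at, 6.49M
```

The takeaway: at sizes like these, the embedding table dominates the count, while the BiLSTM itself contributes only a few hundred thousand parameters.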

GitHub: LINK