Here are plain-language definitions for terms you'll see on the dashboard.
A key insight that identifies when a student knows "what" to answer but not "why." It flags cases where a student gives a factually correct answer (high Correctness score) but provides a weak or confusing explanation (low Explanation Quality score).
Explanation Quality: This evaluates how well the student explains their reasoning. It looks for clarity, logical reasoning, and original thought, not just memorized facts.
Correctness: This evaluates whether a student's answer is factually correct when compared against the approved lesson materials.
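As an illustration, the "knows what but not why" flag described above could be computed by comparing the two scores. This is only a minimal sketch: the function name, the 0-to-1 score scale, and both thresholds are assumptions for the example, not the dashboard's actual cut-offs.

```python
def flag_correct_but_confused(correctness, explanation_quality,
                              correctness_min=0.8, explanation_max=0.4):
    """Flag an answer that is factually right but poorly explained.

    Assumes scores are on a 0-1 scale; the threshold values here
    are illustrative, not the dashboard's real configuration.
    """
    # High Correctness combined with low Explanation Quality
    # signals the student knows "what" but not "why".
    return correctness >= correctness_min and explanation_quality <= explanation_max

# Example: a factually correct answer with a weak explanation is flagged.
print(flag_correct_but_confused(0.9, 0.2))  # → True
print(flag_correct_but_confused(0.9, 0.9))  # → False
```

A real implementation would likely tune these thresholds per assignment, but the core idea stays the same: the flag fires only when the two scores diverge in this specific direction.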
AI Bias: If the data used to teach an AI is unfair, incomplete, or reflects human biases, the AI's results will reflect those same biases.
AI Hallucination: Sometimes, AI models confidently state things that are wrong or entirely made up. This is why all answers are grounded in approved school content only.