Writing Architecture
Forget basic spell-check. The RALE writing module runs a 14-step forensic pipeline that deconstructs paragraph structure, isolates cohesive devices, and maps grammatical density against standard grading rubrics.
1. The "Red Pen" Middleware
A core innovation of RALE is our Prompt Hardening and Red Pen middleware. We dynamically inject exhaustive error extraction rules and severity scoring bounds (0.0 to 1.0) into the LLM prompts at runtime. This forces the model to attach a numeric severity and justification to every single correction, eliminating vague feedback like "awkward phrasing."
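The injection step can be sketched as follows. This is a minimal illustration, not RALE's actual middleware: the rule wording, field names, and the `harden_prompt` helper are all assumptions for demonstration.

```python
# Hypothetical sketch of "Red Pen" prompt hardening: extraction rules and
# severity bounds are appended to the base grading prompt at runtime.

SEVERITY_MIN, SEVERITY_MAX = 0.0, 1.0

# Illustrative rules; the real rule set would be far more exhaustive.
EXTRACTION_RULES = [
    "Report every error as an object with 'span', 'taxonomy_code', "
    "'severity', and 'justification' fields.",
    "Severity must be a float in [{lo}, {hi}]; justify the score numerically "
    "(e.g. rubric weight, proportion of the sentence affected).",
    "Never emit free-form commentary such as 'awkward phrasing' without a "
    "taxonomy code and a severity score.",
]

def harden_prompt(base_prompt: str,
                  lo: float = SEVERITY_MIN,
                  hi: float = SEVERITY_MAX) -> str:
    """Inject extraction rules and severity bounds into an LLM prompt."""
    rules = "\n".join(f"- {r.format(lo=lo, hi=hi)}" for r in EXTRACTION_RULES)
    return f"{base_prompt}\n\nERROR EXTRACTION RULES:\n{rules}"

hardened = harden_prompt("Grade the following student essay.")
```

Because the rules travel with every request, the model cannot fall back to unscored, conversational feedback.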
2. Span-Based Deduplication
To prevent cognitive overload for the student, RALE employs strict span-based taxonomy deduplication. If a student writes a poorly constructed sentence, the engine isolates the exact character span and identifies the root pedagogical failure (e.g., `GRAM.AGREEMENT.SUBJ_VERB`), preventing the overlapping, redundant flags that plague generic AI wrappers.
// Deduplication Engine Trace
Span: [142:156] "The datas shows"
Raw Flags: [SPELLING, SUBJ_VERB, PLURALITY]
Resolution: Condense to GRAM.PLURAL.IRREGULAR (Severity: 0.9)
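The trace above can be reproduced with a small sketch of the deduplication logic. The taxonomy codes, priority ordering, and `Flag` structure here are assumptions for illustration; RALE's real taxonomy and resolution rules are richer.

```python
# Illustrative span-based flag deduplication: overlapping flags on the same
# character span collapse to the single highest-priority root cause.
from dataclasses import dataclass

@dataclass
class Flag:
    start: int
    end: int
    code: str
    severity: float

# Lower number = more fundamental pedagogical failure (assumed ordering).
PRIORITY = {
    "GRAM.PLURAL.IRREGULAR": 0,
    "GRAM.AGREEMENT.SUBJ_VERB": 1,
    "SPELLING": 2,
}

def overlaps(a: Flag, b: Flag) -> bool:
    return a.start < b.end and b.start < a.end

def deduplicate(flags: list[Flag]) -> list[Flag]:
    """Keep one flag per overlapping span: the root cause wins."""
    kept: list[Flag] = []
    for f in sorted(flags, key=lambda f: PRIORITY.get(f.code, 99)):
        if not any(overlaps(f, k) for k in kept):
            kept.append(f)
    return kept

# Mirrors the trace: three raw flags on span [142:156] "The datas shows".
raw = [
    Flag(142, 156, "SPELLING", 0.4),
    Flag(142, 156, "GRAM.AGREEMENT.SUBJ_VERB", 0.7),
    Flag(142, 156, "GRAM.PLURAL.IRREGULAR", 0.9),
]
resolved = deduplicate(raw)
```

Sorting by priority before the overlap check guarantees the student sees the root failure rather than three symptoms of the same mistake.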
3. Intelligent RAG & Document Modeling
Writing assessments aren't generated in a vacuum. By leveraging Retrieval-Augmented Generation (RAG), RALE compares the student's essay against the exact source materials, teacher instructions, and historical class trends. This provides deep context, allowing the system to determine whether an omission was a stylistic choice or a fundamental failure to answer the prompt.
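The retrieval step can be sketched in miniature. The keyword-overlap scorer, corpus names, and sample documents below are stand-ins chosen for readability; a production RAG system would use embedding similarity over a vector index.

```python
# Minimal RAG retrieval sketch: rank grading-context documents by lexical
# overlap with the essay, then pass the top-k into the assessment prompt.

def score(query: str, doc: str) -> int:
    """Crude relevance score: count of shared lowercase tokens."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(essay: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Return the names of the k corpus entries most similar to the essay."""
    ranked = sorted(corpus, key=lambda name: score(essay, corpus[name]),
                    reverse=True)
    return ranked[:k]

# Hypothetical grading corpus for one assignment.
corpus = {
    "source_reading": "Photosynthesis converts light energy into chemical energy.",
    "teacher_instructions": "Explain photosynthesis and cite the assigned reading.",
    "class_trends": "Students often confuse respiration with photosynthesis.",
}
context = retrieve("My essay explains how photosynthesis converts light energy.",
                   corpus)
```

With the retrieved context in the prompt, the grader can check the essay against what was actually assigned instead of judging it in isolation.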