## Overview
Instructor: Dr. Yan Tang, Associate Professor of Engineering
Yan Tang teaches Engineering Dynamics, a core sophomore‑level course required for most engineering majors. In parallel with her teaching, Yan’s National Science Foundation-sponsored research focuses on deliberate practice and construct‑based assessment: designing evaluations that isolate and measure specific skills rather than rewarding only full problem completion.
Motivated by both pedagogical and research goals, Yan adopted Pensive to support a redesigned assessment workflow that emphasizes structured reasoning, iterative practice, and careful alignment between learning objectives and grading criteria.
## Instructional and Research Context
Yan approaches assessment through the lens of educational measurement. Rather than treating numerical computation as the primary outcome in sophomore engineering courses, she prioritizes students’ ability to correctly formulate governing equations and represent physical systems.
> In my view, calculation is not the primary construct we should be measuring at this level. Once students can apply principles and set up equations correctly, the core problem‑solving work is done.
>
> — Yan Tang
This philosophy led Yan to design highly structured assignments in which students are asked to identify variables, write equations, and represent systems in constrained formats. While this structure serves pedagogical goals, it also creates opportunities to apply AI‑assisted grading.
## Assessment Redesign and Workflow
Prior to using Pensive, Yan employed a two‑stage assessment process in which students corrected graded work by hand. In Fall 2025, she redesigned this workflow.
Each unit assessment now consists of two sequential problems:
- An initial problem set graded by course graders.
- A second, similar problem set completed after students review solutions.
This design allows Yan to directly evaluate whether students have consolidated targeted skills rather than memorized solutions. Pensive supported this approach by enabling rapid duplication of assignments, reuse of rubrics, and selective automation where appropriate.
## Deliberate Use of AI‑Assisted Grading
A defining feature of Yan’s implementation is her **intentional** use of the AI toolset.
Through experimentation, Yan and her team identified which components of assignments could be reliably graded by Pensive alone and which required human judgment alongside the AI. Pensive worked best when student responses were unambiguous, but its tools also helped with questions that allowed more variability, as long as a human made the final decisions on scoring and feedback.
> Pensive works very well once the structure is in place. But part of responsible use is knowing where human judgment is still essential.
>
> — Yan Tang
As a result, graders focused their effort on conceptual representations while Pensive handled routine equation checks, leading to substantial workload reductions without compromising assessment validity.
## Impact on Grading and Instruction
For the first time, Yan reports that **graders were able to complete their work without weekend grading**. The time savings allowed her and her instructional staff to focus on consistency and student support rather than simply returning scores to students.
Beyond efficiency, Pensive influenced Yan’s instructional design decisions. The constraints required for reliable AI grading prompted deeper reflection on how assessments were designed, how variables were defined, and how learning objectives were aligned with grading criteria.
> This forced us to be more precise about what we are actually measuring. That clarity benefits both students and instructors.
>
> — Yan Tang
## Contribution to the Scholarship of Teaching
Yan views her experience with Pensive as part of an ongoing shift in her research agenda toward engineering education. She is preparing an academic paper documenting lessons learned about structured assessment, selective automation, and the boundary conditions of AI‑assisted grading in engineering education.
Rather than treating Pensive’s AI as a general solution, Yan views it as a methodological tool whose effectiveness depends on careful alignment among assignment design, rubrics, and instructional intent.
## Conclusion
For Yan Tang, Pensive serves as an essential platform for enabling research‑informed assessment design. By combining deliberate practice, structured problem formats, and selective automation, she has developed an approach to grading that supports both pedagogical rigor and instructional sustainability.
Her experience illustrates a broader lesson for educational AI: meaningful impact emerges not from full automation, but from **thoughtful integration into clearly articulated teaching and research goals**.