Projects

Take a look at the innovative works from our partners!

Explore projects

Ongoing Projects

Louisiana Department of Education & CMU

To realize the full potential of through-year assessments, it is necessary to develop models and measures that can provide information about student progress throughout the year, predict end-of-year achievement, and remain sensitive to intervention and individual student differences. Toward this goal, the Louisiana Department of Education (LDOE) will partner with Carnegie Mellon University experts in student learning models and measurement of learning, Dr. Paulo Carvalho and Dr. Ken Koedinger, to develop an initial series of models. Together, they will recruit a postdoc with relevant expertise to work directly with LDOE, using existing data and assessments to develop and validate the models under the supervision of Drs. Carvalho and Koedinger.

South Carolina Department of Education & EA

The state of South Carolina seeks to create a first-of-its-kind, best-in-class research and policy analysis database and to execute a corresponding research and policy analysis agenda. Leveraging the technology and infrastructure of Ed-Fi, the database will house annual and midyear data to understand key elements of the student experience (e.g., enrollment, demographics, program participation, course-taking) and student outcomes (e.g., grades, test scores, attendance, retention, graduation).

Proposed Projects

Workforce Readiness Amongst Students

by Nia Dowell

Our proposed projects focus on investigating ways to assess mastery of skills and evaluate the best teaching practices to equip the future workforce with the necessary skill sets. In addition, we seek to identify differences in CPS interaction dynamics across gender and ethnic lines, modalities (virtual vs. in-person), and group compositions. Finally, we aim to explore how these differences affect group and individual outcomes, such as sense of belonging, self-efficacy, STEM retention, and team performance, and whether these outcomes can be improved. Taken together, these research questions will begin to tackle the current gap in students' workforce readiness.

Translating Assessments to Actionable Data

by Danielle McNamara

This project will leverage cutting-edge advances in machine learning and natural language processing to help state agencies improve their standards-aligned assessments of writing and reading comprehension, particularly with respect to their relevance to teachers for instructional planning: providing actionable feedback regarding students' assets and needs. The embedded postdoctoral scholar and the Science of Learning and Educational Technology (SoLET) Laboratory, directed by Danielle McNamara, will work with the state agency to collaboratively design novel solutions for translating its standards-aligned assessments of writing and reading into actionable information.

Improving Teacher Assessments

by Justin Reich

Teacher assessment plays a vital role in initial licensure, but the current suite of tools for assessing teachers is frustratingly limited. We propose exploring ways to fill this gap with digital clinical simulations that provide simple, inexpensive methods to evaluate teacher practice. Teacher Moments is a web-based platform for digital clinical simulations: with simple narrative storytelling and opportunities for candidates to use their voice to articulate how they would solve teaching dilemmas, Teacher Moments can provide reliable evidence on teaching competencies at scale and at reasonable cost. We look forward to partnering with states interested in exploring alternative approaches to assessment in teacher licensing.

Better Understanding Open-Ended Responses

by Xu Wang

Formative assessments provide valuable data for teachers to make instructional decisions and help students actively manage their progress and learning. Multiple-choice questions (MCQs) and free-text open-ended questions are typically employed as formative assessments. While MCQs have the benefit of easy grading and visualization of student answers, they cannot reveal the diverse student ideas and reasoning that lie beyond the answer options. Open-ended tasks and free-text submissions, on the other hand, may elicit students' perspectives more comprehensively, though analyzing such responses requires laborious work from instructors. In this project, we explore interactive clustering approaches powered by natural language processing that support teachers in visualizing and clustering students' open-ended textual responses. Unlike purely AI-driven methods, we explore human-AI collaborative techniques that allow teachers to define rules and modify AI outcomes.
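To make the human-AI collaborative idea concrete, here is a minimal illustrative sketch (not the project's actual system) of the teacher-defined-rules step: each rule maps a cluster label to a set of keywords, responses are grouped by the first rule they match, and anything unmatched is set aside for the teacher to review and turn into new rules. All labels, keywords, and responses below are hypothetical examples.

```python
def cluster_responses(responses, rules):
    """Assign each response to the first rule whose keywords it mentions.

    rules: dict mapping a cluster label to a set of keywords (teacher-defined).
    Responses matching no rule land in an "unclustered" bucket, which the
    teacher can inspect to refine existing rules or add new ones.
    """
    clusters = {label: [] for label in rules}
    clusters["unclustered"] = []
    for text in responses:
        words = set(text.lower().split())
        for label, keywords in rules.items():
            if words & keywords:  # response mentions at least one keyword
                clusters[label].append(text)
                break
        else:
            clusters["unclustered"].append(text)
    return clusters

# Hypothetical short answers to "Why do seasons change?"
responses = [
    "the earth tilts on its axis",
    "we are closer to the sun in summer",
    "because of the tilt of the axis",
]
rules = {
    "axial tilt": {"tilt", "tilts", "axis"},
    "distance misconception": {"closer", "distance"},
}
grouped = cluster_responses(responses, rules)
```

In a full system, an NLP model would propose the initial clusters and the teacher's rules would override or reshape them; this sketch shows only the rule-application half of that loop.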

Interested in our work?

Get Involved