By Emily Hodge, Rachel Garver, and Drew Gitomer 


In the wake of New Jersey’s 2022 legislative removal of the edTPA as a required performance-based assessment for teacher candidates, educator preparation programs (EPPs) have gained newfound freedom in their performance assessments. How are programs exercising this new discretion? What types of performance assessments are they choosing? Are these assessments similar to the edTPA or quite distinct?

Overall, looking across New Jersey, programs are taking a range of approaches, but we see several common trends.

The first is the use of clinical observation ratings as part of a multi-element performance assessment or, in a few cases, serving as the performance assessment itself. Rubrics used for clinical observations vary—one group of New Jersey EPPs is using the Clinical Competency Inventory (CCI), a tool developed and collaboratively revised by a set of EPPs. Another group of EPPs is using a version of Charlotte Danielson’s Framework for Teaching rubric, with the idea that this prepares candidates for how they might be evaluated by districts as new teachers. A few other EPPs use in-house observation rating tools that they have developed based on research or collaborative input from stakeholders.

The second trend is that programs often rely on performance assessments that predate the edTPA—assessments that may have been used for accreditation but not as a culminating performance assessment. Programs using observations have typically drawn on observation practices already in place, aggregating across observations from university supervisors and/or cooperating teachers to determine a candidate’s readiness for licensure.

Several programs have long had candidates complete portfolios demonstrating their attainment of the New Jersey Professional Standards for Teachers. In one case, this portfolio remains outside of the mandated performance assessment, but in others, the portfolio is now included as part of the performance assessment. Similarly, apart from the edTPA tasks, many programs already required candidates to reflect on their planning, instruction, and assessment, including artifacts such as lesson plans, student work, and sometimes video-recorded instruction—elements that have now moved into a performance assessment.

Although less common, some programs are developing new assessment components, often to be used in combination with existing assessments (e.g., a new action research project combined with existing observation ratings; a new candidate reflection on diversity, equity, and inclusion combined with observations and video analysis). Overall, however, many programs are using or adapting existing performance assessments for this new summative purpose.

The third trend we have noted in many programs is their decision to uphold the legacy of the edTPA in their performance assessment. Many described a new performance assessment as preserving elements of the edTPA that they found helpful while eliminating aspects that were not. Some EPPs are continuing to ask candidates to video record classroom instruction, a practice institutionalized as a requirement by the edTPA. Programs are adapting and streamlining edTPA prompts for student reflection, developing a consistent task and set of instructions across licensure areas (rather than edTPA’s subject-specific requirements), and/or reducing the number of lessons that need to be video recorded as part of the performance assessment.

Based on our interviews with EPPs, programs seem to appreciate having the flexibility to use other program assessments as the performance assessment, or to adapt aspects of the edTPA in ways that make it a better experience for candidates. In future blog posts, we will continue to examine the rationales that EPPs have offered for these decisions about performance assessment, as well as the implications of their decisions about the scoring and measurement components of these assessments.