The Question Candidates Don't Ask

Most candidates preparing for the EPPP focus entirely on content: which domains to study, which resources to use, how many questions to practice. Far fewer stop to ask a more fundamental question: what does my doctoral program actually predict about my likelihood of passing?

The research literature has some answers — and they're more nuanced than the conventional wisdom suggests.

What Ortiz & Callahan (2015) Found

The most rigorous large-scale study of EPPP predictors is Ortiz & Callahan (2015), published in Training and Education in Professional Psychology. Drawing on a nationwide sample, the study examined which training variables actually predict EPPP performance, and its findings challenged some common assumptions:

  • Not associated with EPPP scores: Clinical hours accumulated during training, research productivity (publications, presentations), internship site prestige
  • Associated with EPPP scores: GRE-Q (quantitative reasoning) scores

The authors' interpretation: the EPPP may measure general academic aptitude and test-taking ability more than the clinical competence specifically developed through training.

This finding has significant implications. If clinical hours don't predict EPPP performance, then candidates from programs with heavy clinical emphasis are not at an inherent advantage — and candidates from research-intensive programs may be at an advantage for reasons that have more to do with academic skill than clinical preparation.

The APA Accreditation Factor

APA-accredited programs consistently produce higher average EPPP pass rates than non-accredited programs. This pattern is documented across multiple data sources and is pronounced enough that at least one state has made it the basis for policy change.

In early 2025, the Nevada Board of Psychological Examiners enacted a regulatory change eliminating the EPPP Part 2 (Skills) requirement specifically for graduates of APA-accredited doctoral programs. EPPP Part 1 (Knowledge) remains required for all candidates. This is the first state to draw a formal regulatory distinction based on accreditation status — a signal that the accreditation differential is recognized at the policy level, not just in the academic literature.

Why does accreditation predict better outcomes? Several mechanisms are plausible: more rigorous standardized curricula, better alignment with exam content domains, stronger preparation in research methods and statistics (which aligns with the GRE-Q finding), and self-selection of academically stronger candidates into APA programs.

PhD vs. PsyD: What the Data Shows

PhD programs, which emphasize research training, tend to produce higher average EPPP scores than PsyD programs. This gap has been documented in the literature and is consistent with the Ortiz & Callahan finding that GRE-Q (a measure of quantitative/research reasoning) predicts EPPP performance.

But this requires careful interpretation:

  • The gap reflects averages, not a ceiling or a floor. Many PsyD candidates pass the EPPP.
  • PsyD programs tend to produce more diverse candidate pools, which intersects with documented racial disparities in EPPP performance (Sharpless, 2019) in ways that make direct comparisons complex.
  • The EPPP's correlation with GRE-Q rather than clinical hours raises questions about what the degree type gap actually measures — program quality, candidate selection, or exam design.

The practical takeaway: if you graduated from a PsyD program, you are not at an insurmountable disadvantage. You may need to invest more deliberately in the test-taking skills and quantitative reasoning strategies that the exam rewards — but those are learnable.

The Saldaña et al. (2024) Finding

A 2024 study by Saldaña, Callahan, and Cox in Training and Education in Professional Psychology found that EPPP scores contain more construct-irrelevant variance than construct-relevant variance — that is, factors unrelated to clinical competence influence scores more than clinical competence itself does. The study also found that Hispanic/Latino candidates were systematically disadvantaged even after controlling for other factors.

This adds a layer to the program-type question: program type differences in EPPP outcomes may partly reflect differences in how well various programs prepare candidates to navigate construct-irrelevant item features, rather than differences in actual clinical competence.

What This Means for Preparation Strategy

The research points to a clear strategic implication: preparation should target the exam's actual structure, not just its content.

If the EPPP rewards academic test-taking skills and quantitative reasoning — as the GRE-Q correlation suggests — then candidates benefit from explicit training in how questions are constructed, how distractors are designed, and how to stay alert to qualifiers (words like "best," "first," "most likely") under time pressure. These are skills that can be developed regardless of your program background.

Candidates from programs with less research emphasis can close the gap through deliberate practice on the test-construction elements of the exam. Candidates from research-intensive programs shouldn't assume their academic advantage will carry them through without specific EPPP preparation.

The exam rewards a specific kind of cognitive flexibility — the ability to apply clinical knowledge through the lens of standardized item construction. That skill doesn't come automatically from either clinical hours or research training. It has to be built deliberately.

Sources

  • Ortiz, S.O., & Callahan, J.L. (2015). EPPP performance and predictors: A nationwide examination. Training and Education in Professional Psychology, 9(1). DOI: 10.1037/tep0000068
  • Sharpless, B.A., & Barber, J.P. (2013). Predictors of EPPP performance. Training and Education in Professional Psychology.
  • Sharpless, B.A. (2019). Racial and ethnic differences in EPPP performance. Training and Education in Professional Psychology, 13(4).
  • Saldaña, Callahan, & Cox (2024). Training and Education in Professional Psychology.
  • Nevada Board of Psychological Examiners regulatory change, effective 2025. NRS 641 / NAC 641.