The Evidence Gap CTE Has Been Avoiding
Career and technical education has enjoyed a renaissance over the past decade. Enrollment is up, bipartisan support is strong, and industry partnerships have become a standard talking point in every school district’s strategic plan. But beneath the enthusiasm lies an uncomfortable truth: the field has remarkably little rigorous evidence connecting the industry credentials students earn in high school to what actually happens to them after graduation.
That’s not a minor gap. It’s the gap. CTE programs across the country — and especially in Pennsylvania — have built entire accountability frameworks around credential attainment rates. The Perkins Act measures program quality partly by how many students earn industry certifications. State report cards celebrate the numbers. Districts tout them in press releases. But the underlying question — do these credentials actually lead to better jobs, higher earnings, or more college success? — has mostly been answered with anecdote and assumption.
The IES-funded study now underway in Pennsylvania is designed to change that. Titled High School CTE Credentials and Postsecondary Outcomes in Pennsylvania: Implementation, Impact, and Cost, the research project is tracking up to 14,000 CTE completers from the class of 2026. It combines administrative records from the Pennsylvania Department of Education with baseline surveys of graduating seniors and a planned follow-up survey in 2028. The ambition is straightforward: produce peer-reviewed causal estimates of what credentials actually do for students in the years immediately after high school.
For a field that has operated on conviction more than evidence, this study represents something close to a reckoning.
What the Study Actually Measures
The research design matters here, because not all CTE studies are created equal. Many existing studies rely on correlational data — comparing CTE students to non-CTE students and attributing differences to program participation. That approach is plagued by selection bias: students who self-select into CTE are already different from those who don’t, making it impossible to isolate the effect of the program itself.
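To make the stakes concrete, here is a deliberately rigged simulation in Python (all numbers invented) in which CTE participation has zero true effect, yet a naive comparison still reports a large earnings gap, because the same underlying trait drives both enrollment and earnings:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Illustration only: students with higher baseline "job readiness" are more
# likely to enroll in CTE, and readiness independently raises earnings.
# The true causal effect of CTE is set to zero in this toy model.
readiness = rng.normal(0, 1, n)
enrolls_cte = rng.random(n) < 1 / (1 + np.exp(-readiness))  # self-selection
earnings = 30_000 + 5_000 * readiness + rng.normal(0, 3_000, n)

# A naive correlational comparison attributes the readiness gap to CTE.
naive_gap = earnings[enrolls_cte].mean() - earnings[~enrolls_cte].mean()
print(f"Naive CTE 'effect': ${naive_gap:,.0f}")  # thousands of dollars, all bias
```

Every dollar of that reported gap is selection, not program effect, which is exactly the trap much of the existing CTE literature falls into.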
This IES study aims higher. By combining longitudinal tracking of a defined cohort with administrative data on postsecondary enrollment, employment, and earnings, the researchers can build a more credible picture of credential effects. The inclusion of a cost analysis — examining per-student expenditures across different program models — adds a dimension that most CTE research ignores entirely: return on investment.
The study also includes a qualitative component: site visits at 12 high schools selected to represent diverse CTE program types across Pennsylvania. Researchers will interview administrators, CTE instructors, students, and local employers. These case studies won’t produce statistical generalizations, but they should capture something the numbers can’t — how credential programs actually work on the ground, where implementation quality varies enormously from one school to the next.
The combination of quantitative impact estimates, cost data, and implementation case studies makes this one of the most comprehensive CTE credential studies ever attempted at the state level.
Why Philadelphia Is the Critical Test Case
The School District of Philadelphia runs one of the largest urban CTE operations in the country. Students earn roughly 3,000 industry certifications annually across 43 occupational programs. According to testimony from Children First PA, 89% of students in Perkins-supported programs earn industry-recognized credentials — a number the district frequently cites as evidence of program quality.
But here’s what Philadelphia doesn’t have: rigorous local evidence that those 3,000 annual credentials translate into better labor market outcomes. The district knows students are getting certified. It doesn’t systematically know whether those certifications lead to jobs in the relevant industry, higher wages than non-credentialed peers earn, or smoother transitions into postsecondary education.
That information gap has real consequences. When a district allocates limited resources across dozens of CTE program areas — from health sciences to automotive technology to culinary arts — it’s making bets about which programs deliver the most value. Without outcome data tied to specific credentials, those bets are informed by tradition, employer relationships, and institutional inertia rather than evidence.
The IES study should begin to fill that gap. If the findings show that certain credentials — say, healthcare certifications — produce strong employment outcomes while others show weak returns, Philadelphia’s CTE leadership faces a clear imperative to rebalance its program portfolio. That’s the kind of hard conversation that data makes possible and anecdote never will.
The 2028 follow-up timeline means the most actionable findings are still two years away. But the baseline survey happening now is already capturing data that could reveal important patterns — which students are earning which credentials, how those choices correlate with demographic factors, and whether program access is equitable across the district.
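Districts don’t need to wait on the researchers to look at that last question. As a minimal sketch, assuming a district holds student-level records with a program field, a credential outcome, and a demographic category (every column name and value below is a placeholder), an equity check can start as a simple rate table:

```python
import pandas as pd

# Placeholder records; real work would pull from district administrative data.
students = pd.DataFrame({
    "program":    ["health_sci", "automotive", "health_sci", "culinary",
                   "automotive", "culinary", "health_sci", "automotive"],
    "credential": [True, False, True, True, True, False, True, False],
    "group":      ["A", "B", "B", "A", "A", "B", "A", "B"],
})

# Credential attainment rate by program and demographic group. Large gaps
# within the same program flag a potential access or support problem.
rates = (students
         .groupby(["program", "group"])["credential"]
         .mean()
         .unstack("group"))
print(rates)
```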
What CTE Leaders Should Do While Waiting for Results
The study’s findings won’t arrive until after the 2028 follow-up survey is complete and analyzed, but CTE leaders don’t need to wait passively. Several productive actions are available now.
First, district CTE directors should audit their current credential offerings against the study’s framework. Understanding which credentials are being tracked — and why — can help programs prepare for the eventual findings rather than being caught off guard. If a program’s signature credential isn’t part of the study’s analysis, that’s worth knowing now.
Second, the cost-analysis component should prompt every CTC to examine its own per-credential spending. Even before the study publishes its numbers, local cost accounting can reveal inefficient spending patterns — test fees for credentials with low pass rates, expensive equipment for programs with declining enrollment, or credential offerings that haven’t been updated to reflect current industry standards. A back-of-the-envelope sketch of this kind of accounting follows this list.
Third, the employer focus groups embedded in the study design signal an important methodological point: credentials are only valuable if employers recognize and reward them. CTE programs that haven’t recently validated their credential menu with local employers should do so independently — not wait for the study to reveal a disconnect.
Fourth, Philadelphia’s CTE community should engage directly with the research team where possible. The study includes site visits at 12 schools, and Philadelphia’s scale and diversity make it a natural candidate for inclusion. Schools that participate will have earlier access to findings and more influence on how the research captures their program’s reality.
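On the second point, the per-credential math is simple enough to run today. Here is a back-of-the-envelope sketch in Python; every program name and dollar figure is hypothetical:

```python
# Toy cost accounting a CTC could adapt. All figures are invented placeholders.
programs = {
    #  name         (annual test fees, other direct costs, tests taken, passes)
    "health_sci":   (12_000,  40_000, 150, 120),
    "automotive":   (18_000, 110_000,  90,  36),
    "culinary":     ( 6_000,  55_000,  70,  56),
}

for name, (fees, other, taken, passed) in programs.items():
    pass_rate = passed / taken
    cost_per_credential = (fees + other) / passed  # divide by credentials EARNED
    flag = "  <-- review" if pass_rate < 0.5 else ""
    print(f"{name:12s} pass rate {pass_rate:5.0%}  "
          f"cost per earned credential ${cost_per_credential:,.0f}{flag}")
```

Dividing by credentials earned rather than tests taken is the point: a low pass rate quietly inflates the true cost of every credential a program actually produces.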
The Good, the Bad, and What’s Best
The good: This study is exactly what CTE policy has needed for years — rigorous, longitudinal, state-level evidence on credential effectiveness that goes beyond enrollment counts and completion rates. The inclusion of cost analysis is a bonus that could transform how districts talk about CTE funding.
The bad: The timeline is long. Meaningful findings won’t emerge until 2028 at the earliest, meaning current programming decisions will continue to be made without this evidence. There’s also a risk that the study captures a cohort disrupted by unusual circumstances — pandemic recovery, labor market volatility — in ways that limit the generalizability of findings.
What’s best: CTE leaders should treat this study as both a resource and a prompt. Use the wait time to build internal data capacity, audit credential offerings against employer demand, and prepare for the possibility that some cherished programs may not survive evidentiary scrutiny. The districts that engage with this research proactively — rather than defensively — will be best positioned to act on its findings.
✅ Recommended: Engage Proactively with the IES Credential Study
CTE programs in Philadelphia and across Pennsylvania should embrace this study as a tool for improvement, not a threat to existing programs. The districts that audit their credential portfolios now, validate employer recognition independently, and build data systems capable of tracking outcomes will be ready to act when the findings land. Those that wait for the results to arrive before thinking about implications will be two years behind.

