We use non-experimental data from a large panel of schools and districts in Indiana to evaluate the impacts of math curricula on student achievement. Using matching methods, we obtain causal estimates of curriculum effects at a small fraction of what it would cost to produce experimental estimates. Furthermore, external validity concerns that are particularly salient in experimental curriculum evaluations suggest that our non-experimental estimates may be preferred. In the short term, we find large differences in effectiveness across some math curricula. However, as with many other educational inputs, the effects of math curricula do not persist over time. Across curriculum adoption cycles, publishers that produce less effective curricula in one cycle do not lose market share in the next. One explanation for this result is the dearth of information available to administrators about curricular effectiveness.