Fragment Grammars (FG) are a computational framework for studying the related problems of linguistic productivity and reuse (O'Donnell, 2011; O'Donnell et al., 2009). They treat the question of which structures should be computed on the fly and which should be retrieved from memory as a problem of optimal Bayesian inference. Elsewhere, we show how the model correctly captures adult patterns of productivity and reuse in English derivational morphology and verbal inflection (O'Donnell, 2011; O'Donnell et al., 2009). Here we evaluate the model's ability to capture U-shaped development in the English past tense. We compare its performance to several competing computational models and show that only FG accurately captures key aspects of the empirical data, including low but substantial rates of overregularization and gradual improvement in use of the regular rule (Hoeffner, 1996; Marcus et al., 1992). We discuss the relationship of this work to earlier rule-based, connectionist, and dual-mechanism accounts.