Judgmental detection of changes in time series is a ubiquitous task. Previous research has shown that human observers are often relatively poor at detecting change, especially when the series are serially dependent (autocorrelated). We present two experiments in which participants were asked to judge the occurrence of changes in time series with varying levels of autocorrelation. Results show that autocorrelation increases the difficulty of discriminating change from no change, and that observers respond to this increased difficulty by biasing their decisions towards change. This results in increased false alarm rates, while leaving hit rates relatively intact. We present a rational (Bayesian) model of change detection and compare it to two heuristic models that ignore autocorrelation in the series. Participants appeared to rely on a simple heuristic, in which they first visually match a change function to a series, and then determine whether the putative change exceeds the variability in the data.
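The paper's own model specification is not reproduced here. As an illustrative sketch only, the following code shows the general logic of rational change detection under autocorrelation: it simulates an AR(1) series with an optional step change in mean, and computes the Bayesian log odds of "change somewhere" versus "no change" using the exact AR(1) likelihood. All parameter values, the uniform prior over candidate change points, and the function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def simulate_ar1(n, phi, sigma, change_at=None, delta=0.0, rng=None):
    # AR(1) noise around a mean level that may step by `delta` at `change_at`.
    rng = np.random.default_rng(rng)
    mean = np.zeros(n)
    if change_at is not None:
        mean[change_at:] = delta
    e = rng.normal(0.0, sigma, n)
    x = np.zeros(n)
    # Draw the first point from the stationary distribution.
    x[0] = mean[0] + e[0] / np.sqrt(1.0 - phi**2)
    for t in range(1, n):
        x[t] = mean[t] + phi * (x[t - 1] - mean[t - 1]) + e[t]
    return x

def loglik_ar1(x, mean, phi, sigma):
    # Exact Gaussian AR(1) log-likelihood of deviations from `mean`.
    d = x - mean
    n = len(d)
    ll = (-0.5 * np.log(2 * np.pi * sigma**2 / (1 - phi**2))
          - d[0]**2 * (1 - phi**2) / (2 * sigma**2))
    resid = d[1:] - phi * d[:-1]
    ll += (-0.5 * (n - 1) * np.log(2 * np.pi * sigma**2)
           - np.sum(resid**2) / (2 * sigma**2))
    return ll

def change_logodds(x, phi, sigma, deltas, points):
    # Log odds of a step change vs. no change, with a uniform prior
    # over candidate change points and step sizes (an assumption here).
    n = len(x)
    ll0 = loglik_ar1(x, np.zeros(n), phi, sigma)
    lls = []
    for k in points:
        for d in deltas:
            m = np.zeros(n)
            m[k:] = d
            lls.append(loglik_ar1(x, m, phi, sigma))
    lls = np.array(lls)
    m = lls.max()
    ll1 = m + np.log(np.mean(np.exp(lls - m)))  # log of averaged likelihoods
    return ll1 - ll0

# Demo with assumed parameters: one series with a genuine change, one without.
phi, sigma = 0.6, 1.0
x_change = simulate_ar1(100, phi, sigma, change_at=50, delta=3.0, rng=1)
x_flat = simulate_ar1(100, phi, sigma, rng=2)
deltas = (-3.0, -1.5, 1.5, 3.0)
points = range(10, 90)
lo_change = change_logodds(x_change, phi, sigma, deltas, points)
lo_flat = change_logodds(x_flat, phi, sigma, deltas, points)
```

A heuristic observer can be mimicked by passing `phi=0` to `change_logodds` while the data remain autocorrelated; because slow AR(1) drifts then look like genuine level shifts, this tends to inflate the evidence for change on no-change series, consistent with the elevated false alarm rates reported above.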