In this paper we compare several mechanisms for using distributional statistics to derive word class information. We contrast three different ways of computing statistics over independent left and right neighbours with the notion of a frequent frame. We also investigate the roles that utterance boundaries as context items, and the weighting of frequency information, play in the successful simulation of the noun-verb asymmetry. It is argued that independent contexts classify items more accurately than frequent frames, a difference that is more pronounced for larger input sets. Frequent frames classify a larger number of items, but do so with lower accuracy. Utterance boundaries are useful for the development of a noun category, particularly at intermediate levels of frequency sensitivity.
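The two kinds of context statistic contrasted above can be illustrated with a minimal sketch. This is not the simulation reported in the paper, only an assumed toy implementation: a frequent frame records the joint (left, right) pair surrounding a target word, whereas independent contexts count left and right neighbours separately; the symbol "#" stands in for an utterance boundary treated as a context item.

```python
from collections import Counter

# Toy corpus: each utterance is a list of tokens (hypothetical data).
corpus = [
    ["the", "dog", "runs"],
    ["the", "cat", "runs"],
    ["a", "dog", "sleeps"],
]

frames = Counter()        # frequent frames: joint (left, right) pairs
left_ctx = Counter()      # independent left-neighbour counts
right_ctx = Counter()     # independent right-neighbour counts
frame_members = {}        # words attested inside each frame

for utt in corpus:
    # "#" marks utterance boundaries so they can serve as context items.
    padded = ["#"] + utt + ["#"]
    for i in range(1, len(padded) - 1):
        w, l, r = padded[i], padded[i - 1], padded[i + 1]
        frames[(l, r)] += 1                          # frame l_x_r
        left_ctx[(l, w)] += 1                        # left neighbour only
        right_ctx[(w, r)] += 1                       # right neighbour only
        frame_members.setdefault((l, r), set()).add(w)

# The frame the_x_runs groups "dog" and "cat" into one candidate class.
print(frame_members[("the", "runs")])
```

On this toy input the frame ("the", "runs") clusters the two nouns together, while the independent counts also credit "dog" with the left context "a", illustrating why independent contexts generalise over more distributional evidence than the stricter joint frames.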