Lexical dependencies abound in natural language: words tend to follow particular words or word categories. However, artificial language learning experiments exploring word segmentation have so far lacked such structure. In the present study, we explore whether simple inter-word dependencies influence the word segmentation performance of adult learners. We use a continuous testing paradigm instead of an experiment-final test battery to reveal the trajectory of learning and to allow detailed comparison with three computational models of word segmentation. Adult performance on languages with dependencies is equal to or lower than performance on languages without them. All of the models tested perform worse on languages with dependencies, though a novel particle filter-based lexical segmentation model produces learning curves most similar to those of human subjects.