Human infants learn word meanings through interaction with their environment. Individual learning scenarios can be ambiguous because several words and several candidate meanings are present at once. One way to overcome this ambiguity is cross-situational learning (XSL), in which information is accumulated across multiple learning trials. Experimental studies of human XSL have shown that cognitive constraints, such as attention and memory limitations, reduce human performance relative to computer models that can store all available information. In this paper, we model human performance with a novel computational XSL algorithm, FAMM (Familiarity preference, Associative learning, Mutual exclusivity, Memory decay), which combines four main components motivated by experimental research. The model is evaluated on a number of earlier XSL experiments that probe different aspects of learning. FAMM is shown to provide a better fit to the behavioral data than the earlier model of Kachergis et al. (2012).
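The abstract names FAMM's four components but does not give its update equations, so the toy learner below is only a generic illustration of how such mechanisms might be combined in a cross-situational model, not the paper's actual algorithm. The class name, the specific update rule, and the parameter values (`decay`, `lr`, `fam`) are all assumptions made for this sketch.

```python
import numpy as np

class ToyXSLLearner:
    """Illustrative cross-situational learner (NOT the FAMM model itself):
    associative learning with memory decay, a mutual-exclusivity bias, and
    a familiarity preference, each shown in its simplest plausible form."""

    def __init__(self, n_words, n_objects, decay=0.95, lr=1.0, fam=0.5):
        self.A = np.zeros((n_words, n_objects))  # word-object association strengths
        self.decay = decay  # memory decay: associations fade each trial
        self.lr = lr        # associative learning rate
        self.fam = fam      # weight of the familiarity preference

    def trial(self, words, objects):
        """One ambiguous learning trial: several words and objects co-occur."""
        self.A *= self.decay  # memory decay applied to all stored associations
        for w in words:
            for o in objects:
                # Mutual exclusivity: discount objects already claimed by
                # other words (large competing column mass -> small update).
                me_bias = 1.0 / (1.0 + self.A[:, o].sum() - self.A[w, o])
                # Familiarity preference: already-known pairs attract
                # extra attention and are strengthened faster.
                familiarity = 1.0 + self.fam * self.A[w, o]
                self.A[w, o] += self.lr * me_bias * familiarity

    def guess(self, word):
        """Referent choice at test: pick the strongest association."""
        return int(np.argmax(self.A[word]))

# Demo: three words, three objects; each trial shows two word-object pairs,
# so no single trial is unambiguous, but the statistics across trials are.
learner = ToyXSLLearner(n_words=3, n_objects=3)
schedule = [([0, 1], [0, 1]), ([1, 2], [1, 2]), ([0, 2], [0, 2])]
for _ in range(5):
    for words, objects in schedule:
        learner.trial(words, objects)
print([learner.guess(w) for w in range(3)])  # each word maps to its object
```

In this toy setting the cross-situational statistics single out the correct referent, so after a few passes through the schedule the learner maps each word to its matching object; the decay and familiarity terms are the knobs one would fit to behavioral data.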