You nailed it, Tullus!
It's also important to note that running massively parallel simulations of cognitive models is a paradigm shift for the field, in part because large-scale resources were (and are) difficult to come by for most researchers. These models were not developed with performance and optimization in mind, but rather, as Tullus pointed out, to leverage pre-existing libraries and knowledge. Furthermore, some models depend on cognitive architectures (e.g., ACT-R) written in a specific language and developed over decades, and the cost of porting such an architecture from its native language to another far outweighs any potential benefits. Still, some ports do exist (see http://jactr.org/).
A lot of our models are written in Lisp (using ACT-R), a language which since its inception has been closely linked to the artificial intelligence research community. In our case, the Lisp models depend on "specialized libraries" in ACT-R. But we also have a good number of models that leverage tools provided by a specific language, such as the statistical libraries in Matlab and R. Recently, the modeling and simulation community, as well as the A.I. community, have moved toward Python as one of their primary languages (again, for the reasons Tullus outlined).
Our system also supports compiled C applications, Matlab (run on locally licensed computers), R, and Java, but ultimately the modeler's background (e.g., academic affiliation, advisor) and specific needs determine which language they program in. Our goal from the start was to minimize the barrier to entry and thereby support as many cognitive researchers and communities as possible.