How do we use the motion of animate objects to make inferences about their intentions? We investigate this question using displays containing several autonomous, independently programmed agents moving about the screen and interacting with one another. Each agent behaves according to its own program, controlled by a small number of parameters that define its "personality." We probe subjects' impressions of the similarities among the agents' behaviors, and then use multidimensional scaling to recover the subjective parameters defining the mental space of agent types. The most important variable turns out to be one that determines how an agent reacts to a nearby agent at one critical distance. A follow-up experiment suggests that variation along this parameter helps modulate a higher-level percept of how "hostile" or "friendly" the agents appear to be.
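The analysis step named above can be sketched in code: classical (Torgerson) multidimensional scaling takes a matrix of pairwise dissimilarities and recovers low-dimensional coordinates whose distances approximate them. The agent "personality" parameters and the dissimilarity matrix below are synthetic stand-ins, not the paper's actual stimuli or subject data; this is a minimal illustration of the technique, not the authors' analysis pipeline.

```python
import numpy as np

def classical_mds(D, k=2):
    """Recover k-dimensional coordinates from a dissimilarity matrix D
    via classical (Torgerson) multidimensional scaling."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]      # keep the top-k dimensions
    scale = np.sqrt(np.clip(vals[idx], 0, None))
    return vecs[:, idx] * scale           # n x k recovered coordinates

# Hypothetical 2-D "personality" parameters for six agents (ground truth).
rng = np.random.default_rng(0)
params = rng.uniform(size=(6, 2))

# Pairwise dissimilarities standing in for subjects' similarity judgments.
D = np.linalg.norm(params[:, None, :] - params[None, :, :], axis=-1)

coords = classical_mds(D, k=2)

# For exact Euclidean dissimilarities, the recovered inter-point distances
# match the originals (the configuration is unique up to rotation/reflection).
D_hat = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
print(np.allclose(D, D_hat, atol=1e-6))  # True
```

With real similarity judgments the dissimilarities are noisy and non-Euclidean, so the fit is approximate and the recovered axes must then be interpreted, e.g. by relating them back to the generating parameters.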