Until we roll out a new interface that links our jobs to overarching project descriptions, here is some information on our current projects (note: some have been described previously in other threads):
When humans switch among tasks, they are temporarily less proficient when returning to a task after a distraction. A person who performs a task (A), switches to another task (B), and then returns to the first (A) performs at a lower level than they did originally, or even than the level they reached on task B. This is referred to as an N-2 repetition cost ('N' being the current task). To understand this phenomenon, our project aims to match human performance data with a cognitive model, varying parameters such as the scale of inhibition, the decay of inhibition over time, and base-level learning. The goal, as with all cognitive modeling, is to gain a greater understanding of the inner workings of the human mind by reproducing human results artificially.
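As a rough illustration of the kind of model being fit, here is a toy sketch in which leaving a task inhibits it and that inhibition decays over time. All names and parameter values here are invented for illustration; this is not the project's actual model.

```python
# Toy backward-inhibition sketch: leaving a task inhibits it, and the
# residual inhibition (which decays each step) slows a later return.
# INHIBITION, DECAY, and BASE_RT are invented illustrative parameters.
INHIBITION = 1.0   # inhibition applied to a task when it is left
DECAY = 0.5        # per-step decay of residual inhibition
BASE_RT = 500.0    # baseline response time in ms

def run_sequence(tasks):
    """Return simulated response times for a sequence of task labels."""
    inhibition = {}   # residual inhibition per task
    rts = []
    for task in tasks:
        # Returning to a still-inhibited task costs extra time.
        rts.append(BASE_RT + 100.0 * inhibition.get(task, 0.0))
        # All residual inhibition decays by one step.
        inhibition = {t: v * DECAY for t, v in inhibition.items()}
        # Leaving the current task inhibits it.
        inhibition[task] = inhibition.get(task, 0.0) + INHIBITION
    return rts

aba = run_sequence(["A", "B", "A"])  # return to a recently left task
cba = run_sequence(["C", "B", "A"])  # control sequence
cost = aba[-1] - cba[-1]             # positive: the N-2 repetition cost
```

Fitting would then mean adjusting parameters like DECAY until the simulated cost matches the cost observed in human response times.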
The adaptive control of eye movements in reading
We are interested in understanding how people move their eyes while reading text, as a function of both their individual cognitive constraints and their reading goals. We do so in the framework of bounded optimal control -- we assume that people adjust their behavior to maximize some payoff under the joint constraints of their individual limitations and their tasks.
Under this assumption, we ask two questions: first, what cognitive constraints are necessary for certain properties of reading behavior to be near-optimal? That is, why do people choose the eye movement strategies they do in the service of reading? And second, what are the sequences of information processing actions that underlie the behavioral strategies we see?
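To make the bounded-optimal framing concrete, here is a toy sketch in which the best saccade strategy falls out of maximizing a payoff under an individual constraint. The payoff function, the "acuity" constraint, and all numbers are invented for illustration and are not part of the actual project.

```python
# Toy bounded-optimal-control sketch: the chosen strategy is whatever
# maximizes payoff given an individual constraint. The payoff function
# and the acuity parameter are invented for illustration.

def payoff(saccade_len, acuity):
    """Reading-speed reward minus a comprehension penalty: longer
    saccades cover more words per fixation but, past the reader's
    acuity limit, increasingly risk missing words."""
    speed = saccade_len                               # words per fixation
    miss_prob = max(0.0, (saccade_len - acuity) / 10.0)
    return speed * (1.0 - miss_prob)

def best_strategy(acuity, strategies=range(1, 11)):
    """The 'bounded optimal' strategy for a given individual constraint."""
    return max(strategies, key=lambda s: payoff(s, acuity))

# A reader with lower acuity should optimally make shorter saccades.
short = best_strategy(acuity=2)
long = best_strategy(acuity=6)
```

The same logic, with a realistic payoff and realistic constraints, lets one ask whether observed eye movement strategies are near-optimal for the individual reader.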
This is a model that tries to detect whether a visual target (for example, a red X) is present among a few non-targets (for example, green Xs and red Os). The model can move visual attention and its eyes around the display to decide whether the target is present, and then makes a decision. MindModeling allows us to search the parameter space to answer questions like 'how sure should the model be about the display before it moves its eyes?'
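A toy sketch of that threshold question follows. The evidence-accumulation scheme and all numbers are invented for illustration, not taken from the actual model; the point is only the shape of the parameter search.

```python
import random

# Toy sketch: the model samples noisy evidence about the fixated item
# and commits (and could then move its eyes) once accumulated evidence
# crosses a confidence threshold. Drift and noise values are invented.
random.seed(0)

def inspect_item(is_target, threshold):
    """Accumulate noisy evidence; return (decision, samples taken)."""
    evidence, n = 0.0, 0
    while abs(evidence) < threshold:
        n += 1
        drift = 0.3 if is_target else -0.3
        evidence += drift + random.gauss(0.0, 1.0)
    return evidence > 0, n

def sweep(thresholds, trials=500):
    """Grid-search the threshold, as MindModeling does at larger scale."""
    results = {}
    for th in thresholds:
        correct = samples = 0
        for _ in range(trials):
            is_target = random.random() < 0.5
            decision, n = inspect_item(is_target, th)
            correct += (decision == is_target)
            samples += n
        results[th] = (correct / trials, samples / trials)
    return results

# A higher threshold means more samples per item (slower) but fewer errors.
results = sweep([1.0, 3.0])
```

Searching this space over many parameter combinations is exactly the kind of embarrassingly parallel job that distributed volunteer computing handles well.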
Integrated Learning Models (ILM) is a computational framework integrating multiple learning and decision mechanisms that are commonly found in the psychological literature. At the core of the framework are Associative Learning, Reinforcement Learning, and Chunk Learning mechanisms.
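For a flavor of one of those ingredients, here is a textbook reinforcement-learning (value-update) sketch on an invented two-choice task. This is a generic illustration of the mechanism class, not ILM's actual implementation, and all parameters and payoffs are made up.

```python
import random

# Generic reinforcement-learning sketch: an action's value estimate is
# nudged toward each observed reward. Task, payoffs, and parameters
# are invented; ILM's real mechanisms are richer than this.
random.seed(1)
ALPHA, EPSILON = 0.1, 0.1              # learning rate, exploration rate
q = {"left": 0.0, "right": 0.0}        # learned action values

def choose():
    if random.random() < EPSILON:      # occasionally explore
        return random.choice(list(q))
    return max(q, key=q.get)           # otherwise exploit the best value

for _ in range(1000):
    action = choose()
    # 'right' pays off 80% of the time, 'left' 20% (invented task).
    p = 0.8 if action == "right" else 0.2
    reward = 1.0 if random.random() < p else 0.0
    q[action] += ALPHA * (reward - q[action])   # value update

# After training, the learned values track the true payoff rates.
```

A framework like ILM combines this kind of mechanism with associative and chunk learning so that the same model can draw on whichever mechanism the task demands.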
Models of successive and simultaneous tasks
We are building models of successive and simultaneous vigilance tasks, and obtaining and correlating measures of cerebral blood flow.