Message boards : Science : Three wishes: more info on research?

Author Message
Profile Hoelder1in
Joined: 19 Jan 13
Posts: 12
Credit: 1,230,073
RAC: 0
Message 2477 - Posted: 15 Feb 2013, 5:32:24 UTC

Dear MindModeling team,

It would be great if we were given a bit more feedback on the kind of research all of those jobs we run on our computers are trying to accomplish. I have done most of my many years of BOINCing at rosetta@home so far, where the researchers have the nice habit of briefly introducing themselves in the forum and explaining - in layperson's terms - what their research is about. It would be great if we could have something similar over here.

Also, if I perhaps have another wish free, would it be possible to update the MindModeling Wiki page? At the moment the Presentations and Publications section doesn't contain any links to presentations or publications, and none of the links to the feeder projects seems to work. I understand that MindModeling@home is a combined effort of several institutions, but at the moment we don't even know which institution the jobs running on our computers come from.

It's usually three free wishes, isn't it? So my third one is that MindModeling@home will eventually show up on a page like this one: http://boinc.berkeley.edu/wiki/Publications_by_BOINC_projects.

Thanks, -H.

Profile Tom
Volunteer moderator
Joined: 23 Jun 08
Posts: 490
Credit: 238,767
RAC: 0
Message 2478 - Posted: 15 Feb 2013, 16:16:38 UTC - in response to Message 2477.

One of our research scientists intends to discuss his research in the forums very soon, so look out for that. More generally, we plan to roll out a slightly new front page within the next 3-4 weeks that will link each job to its related scientific project, contributor, and related/produced papers. We haven't locked down any designs, but I believe this will cover most of your wishes.

Mike@UMich
Project scientist
Joined: 7 Feb 13
Posts: 3
Credit: 0
RAC: 0
Message 2479 - Posted: 15 Feb 2013, 17:33:33 UTC - in response to Message 2477.

Hi Hoelder1in -

I think Tom's talking about me. My name is Mike Shvartsman and I am a Ph.D. student at the University of Michigan. I've been using MindModeling for only a few weeks, but I've been meaning to post and engage with the volunteers a bit more, since I remember being on the volunteer side of BOINC projects in high school and college (now all my spare cycles go to my own research :)).

My research is trying to understand how people move their eyes when they read -- so any job you see that starts with readingSim is mine. You'd think it would be a simple problem -- look at a word, understand what it means, integrate it into your understanding of the sentence, move on.

The first tricky part is that it takes time for your motor system to start moving the eyes even after you've decided to move them, so we think that people anticipate starting the eye movement process before they have a full understanding of the word they're looking at.
The second tricky part is that the information coming in is imperfect for a variety of reasons, so stuff that you see later in the sentence can make you change your mind about what you thought you saw earlier.
The third tricky part is that people can trade speed against accuracy in their reading (think skimming vs. reading for deep content), so there's no standard fixed reading strategy.

So we're trying to build a simulator that reads like people do and can read differently depending on whether it's more focused on reading quickly or on really getting more accurate information out.
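
If it helps to make those ingredients concrete, here is a minimal toy sketch in Python -- made-up numbers, not the actual readingSim code -- showing how the three tricky parts interact: noisy glimpses of the word accumulate, the eye movement is committed to early because information keeps arriving during the motor delay, and a single threshold trades speed against accuracy.

    import random

    MOTOR_DELAY = 3   # timesteps between deciding to move and the eyes moving
    NOISE = 0.3       # probability that a glimpse of the current word misleads

    def read_sentence(words, threshold):
        # threshold trades speed (low) against accuracy (high) -- tricky part 3
        result = []
        for word in words:
            confidence, t, move_at = 0.0, 0, None
            while move_at is None or t < move_at:
                # tricky part 2: each glimpse of the word is imperfect
                confidence += 0.25 if random.random() > NOISE else -0.25
                t += 1
                # tricky part 1: commit to the eye movement early, counting on
                # the information that keeps arriving during the motor delay
                if move_at is None and confidence + 0.25 * MOTOR_DELAY >= threshold:
                    move_at = t + MOTOR_DELAY
            result.append((word, round(confidence, 2), t))
        return result

    print(read_sentence("the cat sat on the mat".split(), threshold=0.5))  # skimming
    print(read_sentence("the cat sat on the mat".split(), threshold=2.5))  # careful

Lowering the threshold commits to each eye movement almost immediately (fast, error-prone skimming); raising it waits for more evidence on every word (slow, careful reading).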

I'm going to make some time in the next few weeks to post a longer overview of what we're doing, and link to some publications we have on the topic (not from MindModeling work, but from previous HPCs we ran things on). I'll also try to give you a basic idea of what my MindModeling workflow is (and why you don't often get enormous multi-month jobs that a lot of other BOINC projects get), and take questions.

Mike.

Profile Hoelder1in
Joined: 19 Jan 13
Posts: 12
Credit: 1,230,073
RAC: 0
Message 2480 - Posted: 16 Feb 2013, 5:54:56 UTC - in response to Message 2478.

Tom, wow, this sounds great. I am looking forward to the improvements you described.

Mike, thanks for the feedback! It's great to know some of the people behind the project. I just had a look at your web page, and I guess I have two questions for now.

Somewhere, I believe on your profile page, you mention that you are doing your work "in the context of a broader cognitive architecture". Would that be any of the more widely known ones like, say, ACT-R? Presumably you would be using a different one since your jobs on MindModeling@home are not ACT-R jobs?

Another thing which puzzles me is this: you say "people anticipate starting the eye movement before they have a full understanding of the word". But presumably your programs don't have the complexity and computing power which the brain employs to actually understand written text? So how can your model, without doing the real thing, the actual comprehension of the text, know how long it takes for the brain to understand a word or phrase?

I don't want to keep you from your work for too long with my questions, though. I am at any rate looking forward to the other information you said you would provide. -H.

Mike@UMich
Project scientist
Joined: 7 Feb 13
Posts: 3
Credit: 0
RAC: 0
Message 2482 - Posted: 17 Feb 2013, 19:39:27 UTC - in response to Message 2480.

...you mention that you are doing your work "in the context of a broader cognitive architecture". Would that be any of the more widely known ones like, say, ACT-R?

I don't think my website actually says this, but it sounds like the kind of thing it would say, so let me respond anyway. A particular behavior we see can be a function of some underlying cognitive process of interest (e.g. "short words are read quicker because the word recognition mechanism recognizes them faster"), or of an interaction between that cognitive process, other cognitive processes, and the output mechanisms we observe (e.g. "short words are read quicker because more of their information is close to the higher-resolution center of vision"). If the latter is true, then a model of word recognition (the process of interest) should be embedded in a model of all the other stuff (memory system, motor control system, etc.). When I talk about architecture, that second part is usually what I mean, and when I talk about working "in the context of a broader cognitive architecture" I mean caring about whether the explanation I provide is of the first kind (process of interest alone) or the second kind (process plus all the other stuff). For more on this, take a look at work by Allen Newell (esp. http://homepages.rpi.edu/~grayw/favorites/papers/Newell73_20-Qs.pdf).
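
As a hypothetical toy sketch of that distinction (my own made-up numbers, not our model): the same word recognition process yields different observed behavior once it's embedded among the other systems that sit between it and the measurements.

    def recognition_time(word):
        # the process of interest alone: short words are recognized faster
        return 100 + 30 * len(word)          # ms, made-up parameters

    def observed_reading_time(word, eccentricity):
        # the same process embedded in a crude "architecture": visual acuity
        # and motor overhead also shape the behavior we actually measure
        acuity_penalty = 20 * eccentricity   # letters far from fixation are blurry
        saccade_overhead = 150               # fixed cost of programming an eye movement
        return recognition_time(word) + acuity_penalty + saccade_overhead

    # identical recognition demands, different observed times, purely because
    # of where the word falls relative to the high-resolution center of vision
    print(observed_reading_time("cat", eccentricity=0))   # 340 ms
    print(observed_reading_time("cat", eccentricity=4))   # 420 ms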

Presumably you would be using a different one since your jobs on MindModeling@home are not ACT-R jobs?

When people talk about ACT-R, they can mean one of two things: ACT-R the theory of human cognition, and ACT-R the piece of software implementing that theory (plus a bunch of other stuff needed to run simulations and otherwise do science with the theory). ACT-R jobs on MindModeling use ACT-R the piece of software (I think the standard implementation from CMU, written in Lisp), and implicitly adopt ACT-R the theory. You can work on ACT-R the theory but not use ACT-R the piece of Lisp software -- for example, there are Java and Python implementations, so someone could run Python ACT-R as a Python job on MindModeling. You can also work on ACT-R the theory and not use a full ACT-R suite at all -- for example, if I care about the computational efficiency of the memory system in ACT-R, I might want to run it separately from the full suite.
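
As a sketch of that last point, here is ACT-R's base-level learning equation for declarative memory computed on its own in Python, with illustrative inputs and without any of the Lisp suite:

    import math

    def base_level_activation(uses, now, d=0.5):
        # B_i = ln( sum_j (now - t_j)^(-d) ): activation rises with practice
        # and decays with time since each past use (d is the decay parameter)
        return math.log(sum((now - t) ** (-d) for t in uses))

    # a chunk used often and recently is more active than a stale one
    print(base_level_activation([1.0, 5.0, 9.0], now=10.0))  # ~0.58
    print(base_level_activation([1.0], now=10.0))            # ~-1.10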

As far as our model goes -- there are some assumptions/claims it shares with ACT-R (for example, our motor control system is inspired by EPIC's, which is what ACT-R's motor system is also inspired by). There are some it doesn't share (for example, our model is not a production rule system). And there are some about which it makes no claims or assumptions (for example, our model makes no claims about how memory retrieval works).

Another thing which puzzles me is this: you say "people anticipate starting the eye movement before they have a full understanding of the word". But presumably your programs don't have the complexity and computing power which the brain employs to actually understand written text? So how can your model, without doing the real thing, the actual comprehension of the text, know how long it takes for the brain to understand a word or phrase?

The short answer is, I don't need to know how long it takes to understand a word or phrase -- I only need to know that from the point I decide to move the eyes, I'll get more information before the eyes actually move. If I'm trying to hit some level of understanding/accuracy by the time the eyes actually move, I will decide to move the eyes before I've hit that level (because I will get more information and hit the desired level by the time the eyes move). When I post more info about the model, that should make this clearer.
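
A tiny numeric version of that answer, with made-up numbers: you never need a model of full comprehension, only the rate at which information arrives and the length of the motor delay.

    rate = 1            # units of understanding gained per ms of looking (made up)
    motor_delay = 120   # ms from deciding to move the eyes to the eyes moving
    target = 180        # understanding wanted by the time the eyes actually move

    understanding, t = 0, 0
    # decide as soon as the understanding we'll have *when the eyes move*
    # (current level plus what accrues during the delay) reaches the target
    while understanding + rate * motor_delay < target:
        t += 1
        understanding += rate
    print(f"decide at t={t} ms with understanding={understanding}; "
          f"by t={t + motor_delay} ms it will be {understanding + rate * motor_delay}")

Here the decision comes at t=60 ms, only a third of the way to the target, precisely because the remaining two thirds arrive during the 120 ms motor delay.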

Profile Jose [Team Musketeers]
Joined: 31 Jul 12
Posts: 2
Credit: 104,142
RAC: 0
Message 2483 - Posted: 17 Feb 2013, 21:35:50 UTC

Thanks Tom & Mike!!
You will have my machine's time to support your projects. :-)))))

Profile Hoelder1in
Joined: 19 Jan 13
Posts: 12
Credit: 1,230,073
RAC: 0
Message 2485 - Posted: 18 Feb 2013, 11:25:47 UTC - in response to Message 2482.

Mike, thanks a lot for the detailed and interesting answers. Some things I was puzzling over have become clearer to me now. Will keep my computers crunching at 100% for mm@home...

The sentence about the "broader cognitive architecture" is from your department profile page, btw. ;)

StandardbredHorse
Joined: 11 Nov 11
Posts: 1
Credit: 8,075
RAC: 0
Message 2499 - Posted: 13 Mar 2013, 9:09:21 UTC - in response to Message 2479.

My research is trying to understand how people move their eyes when they read -- so any job you see that starts with readingSim is mine. [...]

I'm going to make some time in the next few weeks to post a longer overview of what we're doing [...] and take questions.


I saw another tricky article - actually, I believe it was a news documentary recently; I have to rack my own brain here. Detecting eye movements of any sort is cool, but much of the world reads in other directions: some Asian scripts run top to bottom, and many languages run right to left, including Arabic and Hebrew - both the ancient form and the one re-introduced to the world, after thousands of years as a dead language, by the Zionist movement (unfortunately - but I'll refrain from injecting politics despite my team's orientation towards that; RDFRS, the Richard Dawkins Foundation for Reason and Science, for those interested). Long story short, given limited budget resources (though you're located in MI, where you may be able to get a decent sample of many of the aforementioned ethnic/religious/language groups), does your project differentiate right-to-left, left-to-right, and top-down eye movements as a parameter aside from complexity, etc., or is that part of the acknowledged limitations of the study?

Thanks, if you're still here - I tried to keep the question as short and understandable as possible, which is hard for me - brevity's not my strong point, lol.

Take care and best of luck analysing,

StandardbredHorse
(founder)
Richard Dawkins Foundation for Reason and Science

Mike@UMich
Project scientist
Joined: 7 Feb 13
Posts: 3
Credit: 0
RAC: 0
Message 2500 - Posted: 13 Mar 2013, 15:10:59 UTC - in response to Message 2499.

Short of end-of-line wrap-arounds, reading very much proceeds along one axis (as you noted, left-to-right, right-to-left, or top-down), so we haven't worried much about eye movements in two dimensions. I currently work exclusively with American English speakers.

Getting non-English speakers into the lab is typically much harder than getting English speakers in, for a variety of reasons. One of my colleagues works with bilingual Spanish speakers, and there's a lot of legwork in finding them, convincing them to come to the lab, finding money to compensate them, etc. And that's with what's probably the most dominant minority language group in the country. There are also more scientific issues -- with monolingual English speakers in the U.S., you can be reasonably sure that they've had similar language backgrounds in terms of amount of exposure and overall language knowledge. With bilinguals, there are additional confounds in terms of which language they learned first, which language they use more, and so on.

This does create a problem -- much of what we know about the mind comes from studies of upper-middle-class, 18-22 year old, English-speaking undergrads. There are some efforts to mitigate this as the field begins to realize that some so-called "universals" are more population-specific than we thought. For example, a group out in CA has actually loaded their experimental EEG gear into a big bus and takes it to a variety of communities, and there's another researcher who's been looking at how basic perceptual effects (like the Müller-Lyer illusion) behave cross-culturally. Details of both escape me, but I can dig them up.

Profile Hoelder1in
Joined: 19 Jan 13
Posts: 12
Credit: 1,230,073
RAC: 0
Message 2551 - Posted: 21 Jun 2013, 6:53:21 UTC - in response to Message 2478.
Last modified: 21 Jun 2013, 6:54:16 UTC

...we plan to roll out a slightly new front page within the next 3-4 weeks that will link each job to its related scientific project, contributor, and related/produced papers. We haven't locked down any designs, but I believe this will cover most of your wishes.


Hi Tom, so are the improvements to the front page you told me about still in the works? It seems we are again running some jobs from a research line that hasn't yet been discussed in the forum. Getting to know, at least on a very basic level, what you guys are working on (the science outreach part of BOINC) is actually my main motivation for participating... Keep up the good work, -H.

Profile Tom
Volunteer moderator
Joined: 23 Jun 08
Posts: 490
Credit: 238,767
RAC: 0
Message 2563 - Posted: 9 Jul 2013, 13:43:45 UTC - in response to Message 2551.
Last modified: 9 Jul 2013, 13:44:52 UTC

Although we've primarily been addressing other issues related to our backend engine over the last few months, I do have progress to report on this front. As of last week, we have a developer working full-time on associating project descriptions with each job. I'm very confident we'll have something deployed in the next few weeks, so keep your eyes open.

Profile Hoelder1in
Joined: 19 Jan 13
Posts: 12
Credit: 1,230,073
RAC: 0
Message 2566 - Posted: 11 Jul 2013, 5:25:29 UTC - in response to Message 2563.

I am delighted to hear that - thanks a lot! And I understand, of course, that working on the backend is more important for you. I will look out for the home page updates...

Profile Tom
Volunteer moderator
Joined: 23 Jun 08
Posts: 490
Credit: 238,767
RAC: 0
Message 2595 - Posted: 12 Aug 2013, 13:42:25 UTC

A quick update: we currently have the backend infrastructure in place for linking running jobs/workunits with research projects, but we're going to deploy it along with other UI updates in a much larger push. This will be part of our longer-term goal of moving MindModeling from beta to production.

Profile Hoelder1in
Joined: 19 Jan 13
Posts: 12
Credit: 1,230,073
RAC: 0
Message 2597 - Posted: 17 Aug 2013, 7:21:47 UTC - in response to Message 2595.

Thanks Tom. It's good to hear that things are moving forward... -H.
