Two case studies are presented that concern the assessment of scientific discourse in undergraduates' research papers. An assessment methodology was developed that is capable of tracking and evaluating the level and kinds of changes that result from students' participation in laboratory experiences. An upper-level performance limit was established by analyzing journal articles written by the students' faculty mentors. Students were compared with mentors in terms of the frequency of use of higher-level discourse functions (e.g., stating a hypothesis) and lower-level discourse functions (e.g., stating background information), as well as with respect to the syntactic complexity of their respective sentence constructions. In self-reports of research knowledge and skills, students express gains that are not evident in their papers, suggesting that the written form poses specific challenges. We consider prospects for automating the assessment of students' research papers by using electronic means to assist in identifying and enumerating the types and frequencies of discourse functions in these papers.