Initial Publication Date: April 3, 2012

Assessment of Undergraduate Research

By Jill Singer, SUNY - Buffalo State and Dave Mogk, Montana State University, Bozeman

Jump down to: Group Discussions to Explore Student and Mentor Reactions | Evaluation Instruments to Assess Student Gains and Facilitate Student-Mentor Structured Conversations | Research Skill Development (RSD) framework | Undergraduate Research Student Self-Assessment (URSSA) | Electronic Portfolios to Measure Student Gains | Case Studies | Supporting Resources

Students and their mentors at the 2011 Student Research and Creativity Celebration at SUNY-Buffalo State. Image provided by J. Singer, SUNY-Buffalo State.
Undergraduate research programs can be evaluated to assess student gains and mentor experiences and to determine the overall success and impact of the program. Faculty who mentor student researchers and coordinators who run larger research programs often collect formative evaluation data to inform decisions about improving the program, as well as summative evaluation data to document the program's impact and success. A range of instruments and methodologies is available, from those that are largely perception based to those that strive to anchor perceptions to observations and behaviors. Some methodologies facilitate structured conversations between students and mentors so that each has an opportunity to share experiences and to help students explore their strengths and weaknesses. While smaller-scale programs may not have the resources to hire an external/independent evaluator, there are many benefits to partnering with an evaluator: the data are collected by someone independent of the research activity, and they can be analyzed and reported using appropriate statistical measures. An evaluator can also offer advice about strategies for further program improvement.

Documenting the impact of an undergraduate research experience begins with identifying the desired student learning outcomes and program goals. The next step is identifying the instruments and methodology for measuring progress toward these outcomes. This page offers information about available instruments and methodologies for evaluating undergraduate research and assessing student gains and mentor experiences. Case studies are also provided to illustrate how others have evaluated their research programs.

Assessment Instruments and Methods

Group Discussions to Explore Student and Mentor Reactions

Used by David Mogk

The main purpose of the group discussion is to provide an in-depth exploration of student and mentor reactions to the research program and their experiences. If you plan to conduct group discussions with student researchers and faculty mentors at the end of a research experience (academic year or summer), consider using some or all of the candidate student and mentor questions found at the end of this section.

The discussions should be facilitated by an evaluator or a facilitator experienced in conducting focus groups. The role of the facilitator is to raise issues and ask questions that the students and mentors can address, ensure that everyone gets a chance to speak, keep the conversation focused so it does not wander into irrelevant areas, and ensure that all of the topics of interest are covered in the time allowed. Although the discussion leader may take notes, it is recommended that a recorder be present to capture as much of the conversation as possible; direct quotes are especially useful. The recorder should not participate in the discussions. Following the discussion, the recorder should code the student and mentor remarks into discrete categories and prepare a summary of the responses organized according to those categories. This draft should be shared with the facilitator to check against their notes, and the summary revised as needed. Items that could be coded can be found in Table 5 (Lopatto, 2004), Table 1 (Hunter et al., 2006), and Tables 2 and 4 (Seymour et al., 2004); see 'Supporting Resources' for links to these articles.
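As a concrete illustration of the coding step, the short Python sketch below tallies remarks that a recorder has already assigned to categories and pulls a representative quote for each. The category names and quotes are hypothetical placeholders, not items from the coding schemes in the cited tables.

    from collections import defaultdict

    # Hypothetical coded remarks: (category, direct quote) pairs assigned by the
    # recorder. Category names are illustrative, not the published coding schemes.
    coded_remarks = [
        ("personal/professional gains", "I feel much more confident presenting my results."),
        ("thinking like a scientist", "I learned that experiments rarely work the first time."),
        ("personal/professional gains", "My mentor treated me like a colleague."),
        ("career clarification", "This summer convinced me to apply to graduate school."),
    ]

    # Group quotes by category so the summary can report counts and examples.
    by_category = defaultdict(list)
    for category, quote in coded_remarks:
        by_category[category].append(quote)

    # Summarize by category, most frequent first, with one representative quote.
    for category, quotes in sorted(by_category.items(), key=lambda kv: -len(kv[1])):
        print(f"{category}: {len(quotes)} remark(s)")
        print(f'  e.g., "{quotes[0]}"')

A tally like this is only a starting point; the written summary should preserve the nuance of the full quotes within each category.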


Evaluation Instruments to Assess Student Gains and Facilitate Student-Mentor Structured Conversations

Developed by Daniel Weiler and Jill Singer

Used by Jill Singer (refer to the Buffalo State case study below for more information)

A methodology for measuring student learning and related student outcomes has been developed at SUNY Buffalo State. The purposes of the evaluation are to obtain a reliable assessment of the program's impact on participating students and to provide students with information that helps them assess their academic strengths and weaknesses. Working with faculty from a wide range of disciplines (including arts, humanities, and social sciences, as well as STEM faculty), the evaluation team selected 11 student outcomes to measure: communication, creativity, autonomy, ability to deal with obstacles, practice and process of inquiry, nature of disciplinary knowledge, critical thinking and problem solving, understanding ethical conduct, intellectual development, culture of scholarship, and content knowledge skills/methodology. A detailed rubric describes the specific components of interest for each outcome, and faculty mentors assess students on each component using a five-point scale. Students evaluate their own progress using the same instrument and meet with the faculty mentor to compare assessments as a way to sharpen their self-knowledge. A range of complementary instruments and procedures rounds out the evaluation. A preliminary version of the methodology was field-tested with a small number of faculty mentors and students during the summer of 2007, and a refined evaluation has been implemented since 2008. The surveys can be found on the Undergraduate Research web page from Buffalo State College and include:

  • The student survey, which is completed by the student before the research experience begins. This survey is designed to help mentors understand the student researcher's views, expectations, interests, knowledge, and skills. It serves as the basis for completing the Pre-Research Assessment Survey.
  • The Pre-Research Assessment Survey (student and mentor versions). This survey is completed by the student and mentor at the beginning of the summer research program and is intended to help define the pre-research baseline measure.
  • Mid-Research Assessment Survey (student and mentor versions). The same rubric used in the pre-research survey is completed again at the midpoint of the program.
  • Post-Research Assessment Survey (student and mentor versions). This is completed at the end of the summer. Changes in scores across the pre-, mid-, and post-research surveys help determine growth (or the absence thereof) and the impact of the summer research program (see the sketch following this list).
  • A student online journal designed to help document the experience.
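The comparison of scores lends itself to simple tabulation. The Python sketch below is a minimal illustration of how pre- and post-research rubric scores on the five-point scale might be compared; the scores are invented, and only three of the eleven outcomes are shown.

    # Minimal sketch comparing pre- and post-research rubric scores on the
    # five-point scale. The scores are invented, and only three of the eleven
    # Buffalo State outcomes are shown.
    pre_scores = {
        "communication":     {"student": 2, "mentor": 2},
        "autonomy":          {"student": 3, "mentor": 2},
        "critical thinking": {"student": 3, "mentor": 3},
    }
    post_scores = {
        "communication":     {"student": 4, "mentor": 4},
        "autonomy":          {"student": 4, "mentor": 4},
        "critical thinking": {"student": 4, "mentor": 3},
    }

    for outcome in pre_scores:
        for rater in ("student", "mentor"):
            gain = post_scores[outcome][rater] - pre_scores[outcome][rater]
            print(f"{outcome} ({rater}): {pre_scores[outcome][rater]} -> "
                  f"{post_scores[outcome][rater]}, gain {gain:+d}")
        # A large student-mentor gap at either time point can seed the
        # structured conversation between student and mentor.
        gap = post_scores[outcome]["student"] - post_scores[outcome]["mentor"]
        print(f"  post-research student-mentor gap: {gap:+d}")

In practice, an evaluator would aggregate such gains across students and apply appropriate statistical measures rather than reading individual deltas in isolation.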

Research Skill Development (RSD) framework

Student performing field work on the Buffalo River. Image courtesy of J. Singer, SUNY-Buffalo State.
Developed by J. Willison and K. O'Regan at The University of Adelaide.

The framework and other resources are available at The Research Skill Development (RSD) homepage, hosted by the University of Adelaide.

The Framework considers six aspects of research skills (listed below). Courses are developed to give students opportunities to move from Level I to Level V, with increasing student autonomy at each successive level: Levels I, II, and III are structured experiences, while Levels IV and V provide open inquiry (a sketch of how activities might be mapped onto the framework follows the list). The RSD website provides an example of how the framework has been used in a human biology course.

  1. Students embark on inquiry and determine a need for knowledge/understanding
  2. Students find/generate needed information/data using appropriate methodology
  3. Students critically evaluate information or data and the process to find or generate the information/data
  4. Students organize information collected or generated and manage the research process
  5. Students synthesize, analyze, and apply new knowledge
  6. Students communicate knowledge and the processes used to generate it, with an awareness of ethical, social and cultural issues
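The mapping of course activities onto aspects and autonomy levels can be made explicit with simple bookkeeping. The Python sketch below is a hypothetical illustration using assumed activity names; it is not part of the RSD materials themselves.

    # Hypothetical bookkeeping that maps course activities onto the RSD
    # framework. Aspect numbers refer to the six aspects listed above; autonomy
    # levels run from 1 (Level I) to 5 (Level V), with Levels I-III structured
    # and Levels IV-V open inquiry.
    STRUCTURED_LEVELS = {1, 2, 3}

    activities = [
        {"name": "guided literature search",      "aspect": 2, "level": 1},
        {"name": "critique of an assigned paper", "aspect": 3, "level": 2},
        {"name": "student-designed field study",  "aspect": 1, "level": 5},
    ]

    for activity in activities:
        kind = "structured" if activity["level"] in STRUCTURED_LEVELS else "open inquiry"
        print(f'{activity["name"]}: aspect {activity["aspect"]}, '
              f'level {activity["level"]} ({kind})')

Laying out a course this way makes it easy to check that students encounter all six aspects and that autonomy actually increases as the course progresses.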

Undergraduate Research Student Self-Assessment (URSSA)

Developed by Anne-Barrie Hunter, Timothy Weston, Sandra Laursen, and Heather Thiry, University of Colorado, Boulder.

The Undergraduate Research Student Self-Assessment (URSSA) is an online survey instrument used to evaluate student outcomes of research experiences in the sciences. URSSA is hosted by salgsite.org (SALG – Student Assessment of their Learning Gains – is a survey instrument for undergraduate course assessment). URSSA supports collection of information about what students gain, or do not gain, from participating in undergraduate research in the sciences. A set of core items is fixed and cannot be changed, but users can otherwise customize an existing survey. URSSA is designed to measure:

  • Personal/professional gains, such as gains in confidence and establishing collegial relationships with faculty and peers
  • Intellectual gains, including the application of knowledge and critical thinking skills to research work
  • Gains in professional socialization, such as changes in students' attitudes and behaviors that indicate adoption of professional norms
  • Gains in various skills (communication skills, technical skills, computer skills, etc.)
  • Enhanced preparation for graduate school and the workplace
  • Gains in career clarification, confirmation and refinement

Electronic Portfolios to Measure Student Gains

Developed by Kathryn Wilson, J. Singh, A. Stamatoplos, E. Rubens, and J. Gosney, Indiana University-Purdue University Indianapolis, Mary Crowe, University of North Carolina at Greensboro, D. Dimaculangan, Winthrop University, F. Levy and R. Pyles, East Tennessee State University, and M. Zrull, Appalachian State University

The electronic portfolio (ePort) is an evaluation tool for examining student research products before and after a research experience. The criteria used in ePort to assess student intellectual growth are:

  • Core communication and quantitative skills
  • Critical thinking
  • Integration and application of knowledge

In addition to uploading examples of research products, students use an evaluation tool to rate their research skills; mentors also use this tool to rate student products. Other surveys developed as part of ePort collect information about the student's relationship with their mentor and demographic information. Information about this NSF-funded project can be found online, including the article at http://www.cur.org/assets/1/7/spring09wilson.pdf.

Case Studies

Case Study 1 (SUNY Buffalo State) - Developing Instruments to Evaluate a Summer Research Program

Student performing field work in El Salvador. Image courtesy of J. Singer, SUNY-Buffalo State.
The summer research program at SUNY – Buffalo State accounts for a major portion of the operational budget of Buffalo State's Office of Undergraduate Research. After determining that existing instruments were not adequate for our purposes, because most were designed to assess laboratory science research experiences, we established a process and timeline that would result in instruments and an evaluation protocol usable across all academic disciplines. We initiated this process by holding a two-day evaluation workshop in June 2006 led by Daniel Weiler (Daniel Weiler Associates).



Supporting Resources