General Education Evaluation Tools and Rubric

Each criterion below is described at four levels of development: Initial, Emerging, Developed, and Highly Developed.
GE Outcomes
  Initial: GE learning outcomes have not yet been developed for the entire GE program; there may be one or two common ones, e.g., writing and critical thinking.
  Emerging: Learning outcomes have been developed for the entire GE program, but the list is problematic, e.g., too long or too short, unconnected to the mission, or including values that cannot be assessed.
  Developed: Outcomes are well organized, assessable, and focused on the most important knowledge, skills, and values of GE. Work to define levels of performance is beginning.
  Highly Developed: Outcomes are reasonable, appropriate, and assessable. Explicit criteria, such as rubrics, are available for assessing student learning. Exemplars of student performance are specified at varying levels for each outcome.
Curriculum Alignment with Outcomes
  Initial: No clear relationship between the outcomes and the GE curriculum. Students may not have the opportunity to develop each outcome adequately.
  Emerging: Students appear to have opportunities to develop each outcome. A curriculum map shows opportunities to acquire the outcomes, but the sequencing and frequency of those opportunities may be problematic.
  Developed: The curriculum is explicitly designed to provide opportunities for students to develop increasing sophistication with respect to each outcome. A curriculum map shows “beginning,” “intermediate,” and “advanced” treatment of outcomes.
  Highly Developed: Curriculum, pedagogy, grading, and advising are explicitly aligned with GE outcomes. The curriculum map and rubrics are well known and consistently used. Co-curricular activities are viewed as resources for GE learning and aligned with GE outcomes.
Assessment Planning
  Initial: No formal plan for assessing each GE outcome. No coordinator or committee takes responsibility for the program or the implementation of its assessment plan.
  Emerging: GE assessment relies on short-term planning, such as selecting which outcome(s) to assess in the current year. Interpretation and use of findings are implicit rather than planned or funded. No individual or committee is in charge.
  Developed: The campus has a reasonable, multi-year assessment plan that identifies when each outcome will be assessed. The plan addresses the use of findings for improvement. A coordinator or committee is charged to oversee assessment.
  Highly Developed: The campus has a fully articulated, sustainable, multi-year assessment plan that describes when and how each outcome will be assessed. A coordinator or committee leads review and revision of the plan as needed. The campus uses some form of comparative data (e.g., its own past record, aspirational goals, external benchmarking).
Assessment Implementation
  Initial: It is not clear that potentially valid evidence for each GE outcome is collected, and/or individual reviewers use idiosyncratic criteria to assess student work.
  Emerging: Appropriate evidence is collected, and there is some discussion of relevant criteria for assessing each outcome, but reviewers of student work are not yet calibrated to apply the criteria in the same way, and faculty do not check for inter-rater reliability.
  Developed: Appropriate evidence is collected; faculty use explicit criteria, such as rubrics, to assess student attainment of each outcome. Reviewers of student work are calibrated to apply assessment criteria in the same way, and faculty routinely check for inter-rater reliability.
  Highly Developed: Assessment criteria, such as rubrics, have been pilot-tested and refined, and they are typically shared with students. Reviewers are calibrated, with high inter-rater reliability. Comparative data are used when interpreting results and deciding on changes for improvement.
Use of Results
  Initial: Results for GE outcomes are collected but not discussed; there is little or no collective use of findings. Students are unaware of and/or uninvolved in the process.
  Emerging: Results are collected and discussed by relevant faculty, and they are used occasionally to improve the GE program. Students are vaguely aware of the outcomes and the assessments intended to improve their learning.
  Developed: Results for each outcome are collected, discussed by relevant faculty, and regularly used to improve the program. Students are very aware of, and engaged in, the improvement of their learning.
  Highly Developed: Relevant faculty routinely discuss results, plan improvements, secure necessary resources, and implement changes. They may collaborate with others to improve the program. Follow-up studies confirm that changes have improved learning.

Guidelines for Using the General Education Rubric

For the fullest picture of an institution’s accomplishments, reviews of written materials should be augmented with interviews at the time of the visit. Discussion validates that the reality matches the written record.

Dimensions of the Rubric:

  1. GE Outcomes. The GE learning outcomes consist of the most important knowledge, skills, and values students learn in the GE program. There is no strict rule concerning the optimum number of outcomes, and quality is more important than quantity. Do not confuse learning processes (e.g., completing a science lab) with learning outcomes (what is learned in the science lab, such as the ability to apply the scientific method). Outcome statements specify what students do to demonstrate their learning. Criteria for assessing student work are usually specified in rubrics, and faculty identify examples of varying levels of student performance, such as work that does not meet expectations, work that meets expectations, and work that exceeds expectations.
    Questions: Is the list of outcomes reasonable and appropriate? Do the outcomes express how students can demonstrate learning? Have faculty agreed on explicit criteria, such as rubrics, for assessing each outcome? Do they have exemplars of work representing different levels of mastery for each outcome?
  2. Curriculum Alignment. Students cannot be held responsible for mastering learning outcomes without a GE program that is explicitly designed to develop those outcomes. This design is often summarized in a curriculum map: a matrix that shows the relationship between courses and learning outcomes (see the curriculum-map sketch after this list). Pedagogy and grading aligned with outcomes encourage student growth and give students feedback on their development. Relevant academic support and student services can also be designed to support development of the learning outcomes, since learning occurs outside the classroom as well as within it.
    Questions: Is the GE curriculum explicitly aligned with program outcomes? Do faculty select effective pedagogies and use grading to promote learning? Are support services explicitly aligned to promote student development of GE learning outcomes?
  3. Assessment Planning. Explicit, sustainable plans for assessing each GE outcome need to be developed. Each outcome does not need to be assessed every year, but the plan should cycle through the outcomes over a reasonable period of time, such as the period for program review cycles. Experience and feedback from external reviewers can guide plan revision.
    Questions: Does the campus have a GE assessment plan? Does the plan clarify when, how, and how often each outcome will be assessed? Will all outcomes be assessed over a reasonable period of time? Is the plan sustainable? Supported by appropriate resources? Are plans revised, as needed, based on experience and feedback from external reviewers? Does the plan include collection of comparative data?
  4. Assessment Implementation. Assessment requires the collection of valid evidence that is based on agreed-upon criteria identifying work that meets or exceeds expectations. These criteria are usually specified in rubrics. Well-qualified judges should reach the same conclusions about a student’s achievement of a learning outcome, demonstrating inter-rater reliability. If two judges independently assess a set of materials, their ratings can be correlated and discrepancies between their scores examined; data are reliable if the correlation is high and/or the discrepancies are small (see the reliability sketch after this list). Raters generally are calibrated (“normed”) to increase reliability. Calibration usually involves a training session in which raters apply rubrics to preselected examples of student work that vary in quality, then reach consensus about the rating each example should receive. The purpose is to ensure that all raters apply the criteria in the same way, so that each student’s product would receive the same score regardless of rater.
    Questions: Do GE assessment studies systematically collect valid evidence for each targeted outcome? Do faculty use agreed-upon criteria, such as rubrics, for assessing the evidence for each outcome? Do they share the criteria with their students? Are those who assess student work calibrated in the use of assessment criteria? Does the campus routinely document high inter-rater reliability? Do faculty pilot-test and refine their assessment processes? Do they take external benchmarking (comparison) data into account when interpreting results?
  5. Use of Results. Assessment is a process designed to monitor and improve learning. Faculty can reflect on results for each outcome and decide whether they are acceptable or disappointing. If results do not meet faculty standards, faculty (and others, such as student affairs personnel, librarians, and tutors) can determine what changes should be made, e.g., in pedagogy, curriculum, student support, or faculty support.
    Questions: Do faculty collect assessment results, discuss them, and reach conclusions about student achievement? Do they develop explicit plans to improve student learning? Do they implement those plans? Do they have a history of securing necessary resources to support this implementation? Do they collaborate with other campus professionals to improve student learning? Do follow-up studies confirm that changes have improved learning?
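To make the curriculum map in Dimension 2 concrete, here is a minimal sketch in Python, assuming a simple matrix of courses against outcomes with “beginning,” “intermediate,” and “advanced” levels; all course numbers, outcome names, and thresholds are hypothetical illustrations, not taken from any particular GE program.

```python
# Minimal sketch of a curriculum map: each course lists the GE outcomes
# it addresses, marked "B"/"I"/"A" for beginning, intermediate, and
# advanced treatment. All names below are hypothetical examples.
curriculum_map = {
    "ENGL 101": {"Written Communication": "B", "Critical Thinking": "B"},
    "PHIL 210": {"Critical Thinking": "I", "Ethical Reasoning": "B"},
    "BIOL 150": {"Quantitative Reasoning": "B", "Scientific Inquiry": "I"},
    "ENGL 305": {"Written Communication": "A", "Critical Thinking": "A"},
}

outcomes = [
    "Written Communication",
    "Critical Thinking",
    "Quantitative Reasoning",
    "Scientific Inquiry",
    "Ethical Reasoning",
]

# Flag the gaps the rubric warns about: outcomes with no advanced-level
# treatment, or too few opportunities overall (thresholds are arbitrary).
for outcome in outcomes:
    levels = [m[outcome] for m in curriculum_map.values() if outcome in m]
    if not levels:
        print(f"{outcome}: not addressed anywhere in the curriculum")
    elif "A" not in levels:
        print(f"{outcome}: no advanced-level treatment ({levels})")
    elif len(levels) < 2:
        print(f"{outcome}: only {len(levels)} opportunity to develop it")
```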
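Likewise, the correlation-and-discrepancy check described in Dimension 4 amounts to a short calculation. The sketch below, using invented scores from two raters on a 1–4 rubric, shows one way to compute it; a campus might instead use a packaged statistic such as Cohen's kappa.

```python
# Two raters' independent scores for the same ten student papers on a
# 1-4 rubric. The scores are invented for illustration only.
rater_a = [4, 3, 3, 2, 4, 1, 2, 3, 4, 2]
rater_b = [4, 3, 2, 2, 4, 1, 3, 3, 4, 2]

n = len(rater_a)
mean_a = sum(rater_a) / n
mean_b = sum(rater_b) / n

# Pearson correlation between the raters: values near 1.0 mean the
# raters rank the papers consistently.
cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(rater_a, rater_b))
var_a = sum((a - mean_a) ** 2 for a in rater_a)
var_b = sum((b - mean_b) ** 2 for b in rater_b)
correlation = cov / (var_a * var_b) ** 0.5

# Exact-agreement rate and largest discrepancy: a high correlation can
# hide a systematic offset, so both views are worth examining.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n
max_gap = max(abs(a - b) for a, b in zip(rater_a, rater_b))

print(f"Pearson r: {correlation:.2f}")
print(f"Exact agreement: {agreement:.0%}")
print(f"Largest discrepancy: {max_gap} rubric point(s)")
```

In a calibration (“norming”) session, raters whose scores diverge on particular papers would discuss those discrepant cases and reach consensus before scoring the full set.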
Rev 8/16/13
