Educational Evaluation and Instructional Design (ID)

  • Industrial Revolution’s impact

    Prior to 1900: The Industrial Revolution, spanning from approximately the mid-18th century through the mid-19th century, led to major societal reforms, including changes in education.
  • Impact of science on education

    1900s-1930s: Aspects of scientific management are applied to education and industry, with a focus on systemization, standardization, and efficiency.
  • Origin of formative evaluation?

    Early 1920s: Potential origin of formative evaluation, when researchers used a variety of techniques to assess the effectiveness of an instructional film.
  • Tyler’s impact on education

    1930-1945: Ralph W. Tyler’s work on educational evaluation (a term he coined) and testing has a wide impact on education.
  • Formative evaluation beginnings in instruction

    1930s-1950s: Processes similar to formative evaluation (under different names) are used in many instructional projects, especially educational films.
  • Evaluating instructional materials

    1940s-1950s: Educators such as Arthur Lumsdaine, Mark May, and C. R. Carpenter describe procedures for evaluating instructional materials that are still in their formative stages.
  • Tyler describes formative evaluation in education

    1942: Ralph W. Tyler is one of the first individuals to describe the formative role of evaluation activities in educational programs.
  • Period of educational expansion

    1946-1957: A time of major expansion of educational offerings, personnel, and facilities, and of rapid growth in standardized testing.
  • Tyler defines evaluation

    1950: Ralph W. Tyler defines evaluation as “The process of determining to what extent the educational objectives are actually being realized.”
  • Bloom’s Taxonomy

    1956: Benjamin Bloom publishes the Taxonomy of Educational Objectives, in which Evaluation is identified as the highest level of the cognitive domain. See also http://www.krummefamily.org/guides/bloom.html
  • Space Race impact

    1958-1972: After the Soviet launch of Sputnik in 1957, there are calls in the United States for evaluations of large-scale, government-funded curriculum development projects.
  • Kirkpatrick model

    1959: Donald Kirkpatrick proposes a four-level model for evaluating training programs. The four levels are reaction, learning, behavior, and results.
  • Programmed Instruction

    1960s: Programmed instructional materials are tested while they are still being developed.
  • Evaluation in ID models

    1960s-1980s: Analysis, design, production, evaluation, and revision steps are included in virtually all ID models created during these three decades.
  • Cronbach defines evaluation

    1963: Lee J. Cronbach defines evaluation as “the collection and use of information to make decisions about an educational program. The program may be a set of instructional materials distributed nationally, the instructional activities of a single school, or the educational experiences of a single pupil…Course improvement: deciding what instructional materials and methods are satisfactory and where change is needed.”
  • Elementary and Secondary Education Act

    1965: The Elementary and Secondary Education Act (ESEA) is passed in the United States, leading to a significant shift in the objects of educational evaluation: from students to projects, programs, and instructional materials.
  • Formative and summative evaluation

    1967: Michael Scriven coins the terms “formative evaluation” and “summative evaluation,” distinguishing two major roles or functions of evaluation: formative evaluation serves the improvement of an ongoing activity, program, person, product, etc.; summative evaluation is used for accountability, certification, or selection.
  • Stake’s Countenance Model

    1967: Robert E. Stake shares his Countenance Model of evaluation, which suggests that two sets of information be collected regarding the evaluated object: descriptive and judgmental. The evaluation process should include (a) describing a program, (b) reporting the description to relevant audiences, (c) obtaining and analyzing their judgments, and (d) reporting the analyzed judgments back to the audiences.
  • Markle and evaluation

    1967: Susan Markle asserts that there is a lack of rigor in testing processes. She prescribes detailed procedures for evaluating instructional materials both during and after the development process.
  • CIPP Model

    1971: The CIPP evaluation model is developed by the Phi Delta Kappa Commission on Evaluation (D. L. Stufflebeam, W. J. Foley, W. J. Gephart, E. G. Guba, H. D. Hammond, H. O. Merriman, and M. M. Provus). It divides evaluation into four distinct strategies: Context evaluation, Input evaluation, Process evaluation, and Product evaluation.
  • Discrepancy Evaluation Model

    1971: Malcolm M. Provus proposes the Discrepancy Evaluation Model (DEM), a five-step evaluation process that includes (a) clarifying the program design, (b) assessing the implementation of the program, (c) assessing its interim results, (d) assessing its long-term results, and (e) assessing its costs and benefits.
  • IDI includes evaluation

    1971: The Instructional Development Institute (IDI), a teacher training package created by the National Special Media Institute, can be considered as having three major phases: analysis, design, and evaluation.
  • Stufflebeam clarifies evaluation

    1972: Daniel L. Stufflebeam distinguishes between proactive evaluation, which is intended to serve decision making, and retroactive evaluation, which serves accountability.
  • Alkin defines evaluation

    1972: Marvin C. Alkin defines evaluation in instruction as “the process of ascertaining the decision areas of concern, selecting appropriate information, and collecting and analyzing information in order to report summary data useful to decision-makers in selecting among alternatives”.
  • Goal-free evaluation

    1973: Michael Scriven advocates including goal-free evaluation in education, rather than focusing only on “writing tests and digesting data,” to help foster student success. This hybrid of formative and summative approaches “may be a realistic picture of what usually happens in supposedly more standardized situations.”
  • Evaluation as a profession

    1973-1980s: Evaluation emerges as a professional field that is related to, but distinct from, the fields of research and testing.
  • Joint Committee on Standards for Educational Evaluation

    1975: The Joint Committee on Standards for Educational Evaluation is created as a coalition of major professional organizations concerned with the quality of evaluation.
  • Responsive Evaluation Model

    1975: Robert E. Stake shares his Responsive Evaluation Model, which suggests a continuing “conversation” between the evaluator and all other parties associated with the evaluand. Stake specifies 12 steps of dynamic interaction between the evaluator and the audiences in the process of conducting an evaluation.
  • Dick: More formative evaluation

    1980: Walter Dick recommends that the use of formative evaluation receive more attention in instructional research.
  • Carey & Carey model

    1980: James O. Carey and Lou M. Carey offer a two-phase process for instructional materials selection that involves formative evaluation.
  • Gerlach & Ely model

    1980: Vernon S. Gerlach and Donald P. Ely present a classroom-oriented ID model that involves a mix of linear and concurrent development activities, including an evaluation of learner performance.
  • JCSEE defines evaluation

    1981: The Joint Committee on Standards for Educational Evaluation defines evaluation as “the systematic investigation of the worth or merit of some object.” They suggest 30 standards for evaluators, divided into four major groups: utility, feasibility, propriety, and accuracy.
  • Guba & Lincoln model

    1981: Egon G. Guba and Yvonna S. Lincoln’s model of evaluation suggests the evaluator collect five kinds of information: (a) descriptive information regarding the evaluation object, its setting, and its surrounding conditions; (b) information responsive to the concerns of relevant audiences; (c) information about relevant issues; (d) information about values; and (e) information about standards relevant to worth and merit assessments.
  • Evaluation: a mix of approaches

    1981: Gary D. Borich and Ronald P. Jemelka recommend that instructional evaluation not choose among decision-oriented, applied-research, value-oriented, and systems-oriented definitions, but instead mix these as appropriate to the context.
  • Evaluating instructional software

    1990: Robert A. Reiser and Walter Dick share a new model for evaluating instructional software that focuses on the extent to which students learn the skills a software package is intended to teach.
  • 5 professional evaluation organizations

    1990: Approximately five major professional evaluation organizations exist.
  • Constructivistic evaluation

    1991: David H. Jonassen argues that constructivistic learning is better judged by goal-free evaluation methods which take context into consideration and function as “more of a self-analysis and meta-cognitive tool.”
  • Evaluating an evaluation model

    1992: Barbara J. Gill, Walter Dick, Robert A. Reiser, and Jane E. Zahner evaluate Reiser and Dick’s 1990 model for software evaluation, which “emphasizes the collection of student performance data to determine the extent to which students learn the knowledge or skills a software package intends to teach.” The evaluation finds that teachers should be responsible for implementing the model, collecting student data, and sharing the data with other teachers.
  • Evaluating interactive multimedia

    1992: Thomas C. Reeves defines interactive multimedia (IMM) as “a computerized database that allows users to access information in multiple forms, including text, graphics, video, and audio.” The effectiveness of IMM is constrained by the design of the user interface and the motivation and expertise of the users, and therefore it needs to be evaluated in context. Reeves suggests using formative experimentation, where a pedagogical goal is set and the process taken to reach the goal is observed.
  • ID model highlighting formative evaluation

    1992: Lynn McAlpine suggests a model of instructional design that is based on real-life practice and highlights formative evaluation. The model comprises Needs Analyses, Goals/Purposes, Instructional Strategies, Learner Characteristics, Elements Specific to Context, Instructional Materials and Resources, Content/Task Analyses, Evaluation of Learning, Learning Objectives, and Constraints; these components cycle recursively around formative evaluation at the center of the model.
  • Evaluation in Instructional Technology

    1994: Barbara B. Seels and Rita C. Richey identify Evaluation as one of the domains of the field of Instructional Technology. (The other domains are Design, Development, Utilization, and Management; all domains interact with theory and practice.)
  • Developmental evaluation

    1994: Michael Quinn Patton introduces the term “developmental evaluation,” which he defines, in part, as follows: “Evaluation processes and activities that support program, project, product, personnel and/or organizational development (usually the latter). The evaluator is part of a team whose members collaborate to conceptualize, design, and test new approaches in a long-term, on-going process of continuous improvement, adaptation, and intentional change.”
  • ASSURE model

    1999: Robert Heinich, Michael Molenda, James D. Russell, and Sharon E. Smaldino present a classroom-oriented ID model, with the acronym of ASSURE. ASSURE stands for Analyze learners, State objectives, Select media and materials, Utilize media and materials, Require learner participation, and Evaluate and revise.
  • PIE model

    2000: Timothy J. Newby, Donald Stepich, James Lehman, and James D. Russell present a classroom-oriented ID model, PIE, in a book written primarily for pre-service teachers. Planning, Implementing, and Evaluating are the three phases of the PIE model.
  • Morrison, Ross, & Kemp model

    2001: Gary R. Morrison, Steven M. Ross, and Jerrold E. Kemp present a classroom-oriented ID model with a focus on curriculum planning. ID is viewed as a continuous cycle encircled by formative evaluation and revision, and on a larger scale, surrounded by ongoing confirmative evaluation, planning, implementation, summative evaluation, project management, and support services.
  • NCLB Act

    2001: The No Child Left Behind (NCLB) Act is passed in the United States. It is based on the idea that setting high standards and establishing measurable goals (assessed through standardized testing) can improve individual students’ outcomes.
  • Evaluating e-learning

    2002: Thomas C. Reeves argues that viewing e-learning in terms of outcomes, rather than just assessments of student learning, will lead to higher levels of evaluation. Reeves suggests relevant questions to ask about these outcomes and recommends that the e-learning industry invest more in the evaluation of its products.
  • ID model with core of evaluation

    2004: Caroline Crawford suggests the use of the Eternal, Synergistic Design Model, which has Evaluation and Feedback at its core, as a representation of the non-linear nature of the instructional design process.
  • >50 professional evaluation organizations

    2006: Over 50 major professional evaluation organizations exist worldwide.
  • Race to the Top

    2009: The Race to the Top Assessment Program is authorized in the United States. The program aims to provide funding to consortia of states to develop assessments that are valid, support and inform instruction, provide accurate information about what students know and can do, and measure student achievement against standards designed to ensure that all students gain the knowledge and skills needed to succeed in college and the workplace.