This posting shows similarities between the wording of the following article and the wording in the works of other, earlier published authors:
Agbetsiafa, Douglas (2010) “Evaluating Effective Teaching in College Level Economics Using Student Rating of Instruction: A Factor Analytic Approach.” Journal of College Teaching and Learning 7(5): 57-66.
Douglas Agbetsiafa is Professor of Economics and Chair of the Economics Area at Indiana University South Bend (IUSB). He holds a PhD in Economics from the University of Notre Dame. That PhD thesis also contains wording similar to the works of earlier published authors.
Note that the UCLA references appear on a website (http://www.oid.ucla.edu/publications/evalofinstruction/index.html), but they were also originally available as a PDF with a 2006 publication date: http://www.oid.ucla.edu/publications/evalofinstruction/evalguide
Agbetsiafa article, page 57:
As a result, the
demands for increasing student enrollments, the pressure to satisfy the
students’ desires for higher grades, and using student evaluations of faculty
performance or student evaluations of teaching effectiveness have become
increasingly common on college campuses across the nation.
Compare this to:
As the result, the demands for increasing student enrollments,
the pressure to satisfy the students’ desires for higher grades, and using
student evaluations of faculty performance (SEFP) or student evaluations of
teaching effectiveness (SETE) have become increasingly common on college
campuses across the nation.
Agbetsiafa article, page 57:
Colleges
and universities use student evaluations to assess quality of instruction or
other aspects of a course. The data generated by these instruments assist an
instructor to improve instruction or a course (Worley and Casavant, 1995;
Boice, 1990-91). Administrators and tenure and promotion committees often use
the data to assist in making tenure and promotion decisions. Administrators may
rely on the data for helping make annual performance and salary decisions.
These data are also used to help provide evidence of teaching excellence when
faculty are nominated for teaching awards. Given the ways that student
evaluations data are used in colleges and universities, it is necessary that
the data derived from these instruments are valid and serve as reliable
measures of quality teaching and course development.
Compare this to:
Student
evaluation of teaching (SET) instruments are commonly used in higher education
to assess quality of instruction or other aspects of a course. The data
generated by SETs can be used to assist an instructor to improve instruction or
a course (Worley and Casavant, 1995; Boice, 1990-91). Administrators and tenure
and promotion committees often use the data to make tenure and promotion
decisions. Administrators may rely on SET data for helping make annual
performance and salary decisions. SET data are also used to help provide
evidence of teaching excellence when faculty are nominated for teaching awards.
Given the ways that SET data are used in higher education, it is imperative
that SETs be valid and reliable measures of quality teaching and course
development.
[page 28]
Agbetsiafa article, page 58:
There
continues to be robust debate and discussion about the findings of this
extensive body of research and what conclusions can be drawn about student
evaluations of teaching and their use. According to Algozzine et al, 2004,
student evaluation of teaching is a very complex and controversial issue with
inconsistent research findings (p.138), while Kulik, 2001 argues that some
studies on student evaluation of teaching SETs are “conflicting, confusing, and
inconclusive” (p.10). Nevertheless, Kulik agrees with other studies that show
that these evaluations are reliable, and valid measures of teaching
effectiveness (Centra, 2003; Marsh, 1987; Penny, 2003; Spoorens and Martelman,
2006).
Compare this to:
There
continues to be robust debate and discussion about the findings of this
extensive body of research and what conclusions can be drawn about SETs and
their use. Algozzine et al (2004) state that “Student evaluation of teaching is
a very complex and controversial issue with inconsistent research findings” (p.138)
while Kulik (2001) states that some studies on SETs are “conflicting,
confusing, and inconclusive” (p.10). Nevertheless, Kulik concludes by agreeing
with other studies that claim to show that SETs are reliable, can be validly
used as a measure of teaching effectiveness and are useful in improving
teaching (Centra, 2003; Marsh, 1987; Penny, 2003; Spoorens and Martelman,
2006).
Agbetsiafa article, page 58:
Research
shows that students tend to take teaching evaluations more seriously than
faculty and institutional members commonly believe. Students are more willing
to participate and offer meaningful feedback when they believe and can see that
their input is being considered and incorporated by their instructors and the institution.
In general, however, students do not perceive that their feedback is often
used. Some studies show that students place most value on evaluations for
formative purposes, but research also indicates that students believe their
input should be considered for summative purposes. Students would like to see
more specific items related to teaching effectiveness on student evaluation of
teaching instruments (Sojka & Deeter-Schmetz, 2002; Chen & Hoshower,
2003).
Compare this to:
Research shows that students tend to take teaching evaluations more
seriously than faculty and institutional members commonly believe.
Students are more willing to participate and offer meaningful feedback when
they believe and can see that their input is being considered and incorporated
by their instructors and the institution. In general, however, students
do not perceive that their feedback is being used. Some studies show that
students place most value on evaluations for formative purposes, but research
also indicates that students believe their input should be considered for
summative purposes. Students would like to see more specific items
related to teaching effectiveness on student evaluation of teaching
instruments. (Sojka & Deeter-Schmetz, 2002; Chen & Hoshower,
2003)
Agbetsiafa article, page 58:
Research
also shows that faculty often believes that students do not take evaluations
seriously and that ratings encourage grade leniency. Nonetheless, most faculty
do pay attention to student feedback. Further, when evaluations are used for
formative purposes, instructors show a high degree of motivation to improve
their teaching based on student input. Studies have emerged showing how
institutions and individual faculty members have begun using evaluations,
consultations, and portfolios to improve instruction qualitatively. When
faculty are well informed about the purposes of evaluation, much of their
anxiety dissipates and willingness to learn from student feedback increases
(Sojka, Gupta, & Deeter-Schmetz, 2002; Hativa, 1995; Gallagher, 2000; Bain,
2004).
Compare this to:
…
faculty often
believe that students do not take evaluations seriously and that ratings
encourage grade leniency. Nonetheless, most faculty do pay attention to student
feedback. Further, when evaluations are used for formative purposes,
instructors show a high degree of motivation to improve their teaching based on
student input. Studies have emerged showing how institutions and individual
faculty members have begun using evaluations, consultations, and portfolios to
qualitatively improve instruction. When faculty are well informed about the
purposes of evaluation, much of their anxiety dissipates and willingness to
learn from student feedback increases (Sojka, Gupta, & Deeter-Schmetz,
2002; Hativa, 1995; Gallagher, 2000; Bain, 2004).
Agbetsiafa article, page 58:
Teaching
evaluations are commonly considered for summative purposes, including tenure,
merit increase, retention for non-tenured faculty, promotion, and course
assignment decisions. While research generally agrees that teaching evaluations
offer an effective and meaningful way to inform these decisions, often such
data are misused, misinterpreted, or overused. Some institutions use student
ratings data as the sole criterion for evaluating teaching effectiveness, and
these institutions often use only global items on student ratings forms to
construct their evaluation. Such misuse can breed distrust between faculty and
administrators, resentment on the part of instructors for evaluations, and
hinder other formative uses of these data.
Compare this to:
Teaching
evaluations are commonly considered for summative purposes, including tenure,
merit increase, retention for non- tenured faculty, promotion, and course
assignment decisions. While research generally agrees that teaching evaluations
can be used in an effective and meaningful way to inform these decisions, often
such data are misused, misinterpreted, or overused. Some institutions use
student ratings data as the sole criterion for evaluating teaching
effectiveness, and moreover, these institutions often use only global items on
student ratings forms to construct their evaluation. Such misuse can breed
distrust between faculty and administrators, resentment on the part of
instructors for evaluations, and hinder other formative uses of these data.
Agbetsiafa article, page 58:
Using
evaluations to inform instructors of their teaching effectiveness and to aid
them in improving or enhancing their teaching constitute the formative purposes
of teaching evaluations. When used to inform teaching practices, specific
dimensions of teaching must be identified and focused upon in order to bring about
change. Research indicates that evaluations are most effective in improving
teaching when faculty members understand and value the importance of such
processes, and an institutional and departmental culture that supports and
respects teaching is evident.
Compare this to:
Using evaluations to inform instructors of their teaching
effectiveness and to aid them in improving or enhancing their teaching
constitute the formative purposes of teaching evaluations. When used to
inform teaching practices, specific dimensions of teaching must be identified
and focused upon in order to bring about change. Research indicates that
evaluations are most effective in improving teaching when faculty members
understand and value the importance of such processes, and an institutional and
departmental culture that supports and respects teaching is evident.
Agbetsiafa article, page 58:
Evaluation
systems for formative purposes often encompass more than just student ratings
of teacher effectiveness. Several colleges and universities have begun using
portfolios, peer observation, self-review, and more qualitative approaches to
improve teaching. Similarly, recent establishment of faculty development
centers on many campuses reveals a trend toward investing in the formative uses
of evaluations. See, Hobson & Talbot, 2001; Hoyt & Pallett, 1999;
Theall & Franklin, 2001; Kulik, 2001; Gallagher, 2000; Johnson & Ryan,
2000; Hativa, 1995; Bain, 2004.
Compare this to:
Evaluation systems for formative purposes often encompass more than
just student ratings of teacher effectiveness. Institutions have begun
using portfolios, peer observation, self-review, and more qualitative
approaches to improve teaching. Similarly, recent establishment of
faculty development centers on many campuses reveals a trend toward investing
in the formative uses of evaluations. (Hobson & Talbot, 2001; Hoyt
& Pallett, 1999; Theall & Franklin, 2001; Kulik, 2001; Gallagher, 2000;
Johnson & Ryan, 2000; Hativa, 1995; Bain, 2004)
Agbetsiafa article, page 59:
Some
of the myths about the usefulness of student ratings begin from faulty research
studies, conflicting findings within the research literature, or reluctance on
the part of some administrators and faculty to evaluate and be evaluated,
respectively. Some common myths include students are not able to make informed
and consistent judgments about their instructors; student ratings are
essentially a popularity contest; students cannot make accurate judgments
unless they have been away from the course for a while; student ratings are
negatively related to student learning; student ratings are based upon expected
grade in course.
Compare this to:
Many myths exist about the usefulness of student ratings. Some
of these myths originate from faulty research studies, conflicting findings
within the research literature, or reluctance on the part of administrators and
faculty to evaluate and be evaluated, respectively. Some common myths of
student evaluations of teaching include: students are not able to make informed
and consistent judgments about their instructors; student ratings are
essentially a popularity contest; students cannot make accurate judgments
unless they have been away from the course for a while; student ratings are negatively
related to student learning; student ratings are based upon expected grade in
course.
Agbetsiafa article, page 59:
While
these myths have been adequately disproved by research, some criticisms of
student-ratings of teaching have been long-standing and not resolved. These
criticisms center on issues of validity and reliability, and factors that may
bias teaching evaluations, including, student, course, and instructor
characteristics. For more discussion of these, see Hobson & Talbot, 2001;
Aleamoni, 1999; Theall & Franklin, 2001; Kulik, 2001; McKeachie, 2006;
Bain, 2004.
Compare this to:
While the above myths have been adequately disproved by research, some
criticisms of SET’s have been long-standing and not resolved. These
criticisms center on issues of validity and reliability, and factors that may
bias teaching evaluations, including, student, course, and instructor
characteristics. (Hobson & Talbot, 2001; Aleamoni, 1999; Theall &
Franklin, 2001; Kulik, 2001; McKeachie, 2006; Bain, 2004)
Agbetsiafa article, page 59:
Reliability
refers to the consistency of ratings among different raters and the stability
of such ratings over time. Studies by Hobson & Talbot, 2001; Aleamoni,
1999; Marsh & Roche, 1997 conclude that student ratings of teaching show an
acceptable level of consistency, or inter-rater reliability, given a class size
of at least 15. The level of consistency among raters increases as class size
increases.
Compare this to:
Reliability refers to the consistency of ratings among different
raters and also the stability of such ratings over time. Research has
shown that student ratings show an acceptable level of consistency, or
inter-rater reliability, given a class size of at least 15. The level of
consistency among raters increases as class size increases.
Agbetsiafa article, page 59:
However,
other researchers like Aleamoni, 1999; Theall & Franklin, 2001; Marsh &
Roche, 1997; D’Apollonia & Abrami, 1997; and McKeachie, 1997 and critics of
student-ratings have suggested numerous factors which may bias student ratings
of teacher effectiveness including: class size, grade leniency, instructor
personality, gender, course workload, time that class meets, and type of class,
including the academic discipline and required/elective status of class. For
each of these factors, research has been somewhat inconclusive, with some
studies asserting a positive, negative, or no relationship between the
variables. Understanding the potential relationships, however, colleges,
universities, and researchers have begun controlling for certain student and
course characteristics before examining student ratings.
Compare this to:
Researchers and critics of SET’s have suggested numerous factors which
may bias student ratings of teacher effectiveness including: class size, grade
leniency, instructor personality, gender, course workload, time that class
meets, and type of class, including the academic discipline and
required/elective status of class. For each of these factors, research
has been somewhat inconclusive, with some studies asserting a positive, negative,
or null relationship between variables. Understanding the potential
relationships, however, institutions and researchers have begun controlling for
certain student and course characteristics before examining student ratings.
(Aleamoni, 1999; Theall & Franklin, 2001; Marsh & Roche, 1997;
d’Apollonia & Abrami, 1997; McKeachie, 1997)
Agbetsiafa article, page 59:
Student
rating forms generally contain both global and overall rating items and
specific items, which assess specific aspects of the instructor and course.
Research on the value of both of these types of items is mixed. Some correctly
argue that teaching is multi-dimensional and therefore requires specific items
in accurately evaluating different aspects of instruction. Others show that
when specific items are factor analyzed, they essentially reduce to one or two
items that are global in nature. Studies also reveal that responses on specific
and global items are highly correlated.
Compare this to:
Student rating forms generally contain both global (or overall rating)
items and specific items, which assess specific aspects of the instructor and
course. Research is split on the value of both of these types of
items. Some argue that teaching is multi-dimensional and therefore
requires specific items to accurately assess different facets of
teaching. Others show that when specific items are factor analyzed, they
essentially reduce down to one or two items that are global in nature.
Studies also reveal that responses on specific and global items are highly
correlated.
Agbetsiafa article, page 59:
With
regard to the uses of these types of items, researchers warn against making
summative decisions based solely on ratings on global items. In addition,
formative purposes seem better informed by having data on specific areas that
faculty can target in order to improve their teaching. For more discussion see,
Gallager, 2000; D’Apollonia & Abrami, 1997; Young & Shaw, 1999; and
Bain, 2004.
Compare this to:
With regard to the uses of these types of items, researchers warn
against making summative decisions based solely on ratings on global
items. In addition, formative purposes seem better informed by having
data on specific areas that faculty can target in order to improve their
teaching. (Gallager, 2000; d’Apollonia & Abrami, 1997; Young & Shaw,
1999; Bain, 2004)
Agbetsiafa article, page 59:
Broadly,
factor analysis enables the definition of an underlying or latent structure in
a data matrix or data set. It facilitates the analysis of the structure of the
interrelationships (correlations) among a large number of variables by defining
a set of common underlying dimensions, usually called factors. Thus, it is
possible to reorient the data so that the first few dimensions account for as
much of the available information as possible. If there is much (or any)
redundancy in the data set, then it is possible to account for most of the
information in the original data with a considerably reduced number of
dimensions.
Compare this to:
Broadly,
factor analysis, or more particularly in this case, principal components
analysis, enables the definition of an underlying or latent structure in a data
matrix or data set. It facilitates the analysis of the structure of the
interrelationships (correlations) among a large number of variables by defining
a set of common underlying dimensions, usually called factors. Thus, it is
possible to reorientate the data so that the first few dimensions account for
as much of the available information as possible. If there is much (or any)
redundancy in the data set, then it is possible to account for the most of the
information in the original data with a considerably reduced number of
dimensions.
[page 52]
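As an aside, the dimensionality reduction that both passages describe can be illustrated with a minimal sketch in Python (assuming NumPy is available; the synthetic rating data and all variable names below are illustrative, not drawn from either publication):

import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ratings": 200 respondents answer 10 items that are in fact
# driven by only 2 latent dimensions plus noise, i.e. the data are redundant.
latent = rng.normal(size=(200, 2))
loadings = rng.normal(size=(2, 10))
X = latent @ loadings + 0.1 * rng.normal(size=(200, 10))

# Principal components analysis: eigendecompose the covariance matrix of
# the centred data and sort the eigenvalues in descending order.
Xc = X - X.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]

# Proportion of the total variance accounted for by each dimension.
print(eigvals / eigvals.sum())

Because the ten items are redundant, the first two eigenvalues dominate, which is exactly the "considerably reduced number of dimensions" that both texts refer to.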