Assessment is an integral strand of curriculum design, like the many threads of a fine fabric. When assessment design is not carefully woven into the planning process, it acts like a pulled thread, marring the overall product. Like textile work, assessment design requires multiple steps. Struyven, Dochy and Janssens (2005) argue that assessment must be embedded into the curriculum with precision and thought to direct, guide and gauge student learning.
Assessment can take place at many different levels: subject, course, program, department or institution. It can also range in scale from a small sample of students embedded within a single course to a department, an institution or, rarely, students from several institutions. Assessment is commonly divided into two types, formative and summative (Garrison & Ehringhaus, 2007). A third, less common option is process assessment.
Process assessment is less familiar than formative and summative assessment but has just as much value in the educational arena. It is built around projects. First, a project is identified. Within the project, milestones, goals, objectives and activities are determined, including due dates. Lastly, the product and its associated costs are evaluated. Mastery is gauged by how well schedules were met, products were completed and cost estimates were kept within set boundaries (MIT Teaching and Learning Laboratory, 2010).
At every level of education, whether in a course, a department or an institution, assessment must be embedded in the learning process. By using a variety of formative, summative and process assessments, students are drawn into the assessment design, which becomes part of the larger picture of creating an active learning environment (Shang, Shi & Chen, 2001). The resulting data also provide a clearer picture of learners' acquisition of knowledge. Shuler (2010) lists the methods commonly used in data collection: archival data, observations, surveys/questionnaires, focus groups and interviews. Using a variety of methods increases the validity and reliability of the assessment and allows a broader polling of students from a variety of backgrounds.
“Archival data are data that already exist that have been collected by someone other than your agency” (California Department of Health Services, 1998, para. 1). Archival data are best used in longitudinal studies, especially when looking at trends in education. They are easy to obtain, cost effective and quantitative. There are drawbacks, however: working with archival data can become expensive, and poorly maintained databases often cause problems (Shuler, 2010). This method can also include modified archival data, in which the researcher supplements the archive with data they have collected themselves.
Observation is used often within education. It can capture the details of any observed or pre-determined behavior. Its benefits are that it occurs in a natural setting and can record the circumstances surrounding the behavior. It allows the researcher to discover additional behaviors, unexpected consequences and patterns, and it offers the added benefit of observing interaction between participants. Despite these strengths there are weaknesses: the data are qualitative, which makes gauging responses subjective and increases the time needed to evaluate them (Cornell Laboratories, 2007).
Surveys and questionnaires are generally lumped into the same category. They are best used to obtain a large amount of information from a large sample. When administered anonymously, they are most likely to elicit honest responses, since subjects are not influenced by their peers. The data are relatively easy to acquire and, because the questionnaires are formatted similarly, easy to assimilate. One of the greatest difficulties in this method is creating a reliable instrument. The data may also be difficult to interpret if the questions were not given multiple-choice options (Shuler, 2010).
Gibbs (1997) defines focus groups as research groups organized to gain information, views and experiences about a presented topic. They have also been described as a collective activity, social event or interaction between selected individuals. Their benefits include providing data quickly and at relatively low cost. The researcher can modify the discussion topic to explore unanticipated issues, gain insight into the participants' ways of thinking and examine the interaction between participants. Yet there are drawbacks: even a trained researcher runs the risk of giving cues to the participants, and there is a tendency to over-generalize from the qualitative data produced.
Interviews are the last type of data collection mentioned by Shuler (2010). Their greatest benefit over the other options is the insight they offer into individuals who may feel intimidated in other social interactions; they also make it possible to explore sensitive topics. Like the other methods, interviews have weaknesses. The chief one is the time it takes to interview participants individually, which makes the method expensive. Additionally, interviews produce qualitative data that can be over-generalized by the researcher.
As data collection ends, data analysis begins, with the aim of reaching an adequate interpretation. This process requires efficiency, dedication and a systematic approach. The methods of analysis vary with the type of data, qualitative or quantitative. For qualitative data, taxonomies and rubrics can be created for group discussions, interviews or focus groups. The data can also be summarized by counting the frequency of comments or by noting changes in behavior or circumstances within the study group. This kind of summary, termed frequency data, supplements the nature of qualitative results (MIT Teaching and Learning Laboratory, 2010).
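As a minimal sketch of frequency data, the snippet below tallies how often each coded theme appears in a set of focus-group comments. The theme codes and their labels are hypothetical, standing in for whatever coding scheme a researcher applies during transcription:

```python
from collections import Counter

# Hypothetical coded focus-group comments: each comment has been
# tagged with a theme code by the researcher during transcription.
coded_comments = [
    "feedback", "workload", "feedback", "clarity",
    "workload", "feedback", "clarity", "feedback",
]

# Tally how often each theme appears across the discussion.
frequencies = Counter(coded_comments)

# Report themes from most to least frequent.
for theme, count in frequencies.most_common():
    print(f"{theme}: {count}")
```

The resulting counts give the qualitative discussion a simple quantitative summary that can be compared across sessions or groups.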
Quantitative data are generally examined mathematically, and the results are disseminated through statistical means. The results show what is typical or atypical throughout the data; they can show degrees of difference or relationships between variables, and they can describe the likelihood that the results hold for a population rather than occurring only by chance in the sample group (Mertler & Charles, 2008). The analysis of quantitative data tends to be more complex than that of qualitative data in terms of statistical results (MIT Teaching and Learning Laboratory, 2010).
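A brief sketch of the descriptive side of such an analysis, using Python's standard library. The assessment scores are hypothetical; the point is only to illustrate "what is typical" (central tendency) and "degrees of difference" (spread):

```python
import statistics

# Hypothetical assessment scores from a sample of eight students.
scores = [72, 85, 90, 68, 77, 95, 81, 88]

# What is typical: measures of central tendency.
mean = statistics.mean(scores)
median = statistics.median(scores)

# Degree of difference: how spread out the scores are
# (sample standard deviation, since this is a sample, not a population).
stdev = statistics.stdev(scores)

print(f"mean={mean:.1f}, median={median:.1f}, stdev={stdev:.1f}")
```

Inferential questions, such as whether results generalize beyond the sample rather than occurring by chance, require further tests beyond these descriptive measures.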
Disseminating the data and findings is an essential component of research and assessment. Despite the misconception, long held by educators and researchers, that the options for doing so are limited, the information can be made public through a diversity of means (McKinney, 2007): presentations, lectures, publications, productions, performances, portfolios and websites (MIT Teaching and Learning Laboratory, 2010). In an age of continued developments in technology, the methods of making research available are only increasing.
McKinney (2007) concludes that going public is now more than simply an announcement of research; it is “the shift from lecture and presentation to involvement, collaboration, and cooperation” (p. 84). Just as educators are moving away from the one-way communication of lecturing, research is following a parallel path toward collaboration. Faculty members can support and encourage effective teaching and learning by offering their expertise to others, whether through peer review or through a willingness to contribute what they have learned and observed.
The essential work of creating assessments within courses, departments and institutions does not need to be a complicated endeavor; it simply needs to be well considered and thoughtfully planned. Assessment design models have been created to assist educators in developing assessment plans, and these models can be used as guides (Vendlinski, Niemi, Wang & Monempour, 2008).
Effective assessment design increases the probability of reliable and valid results. In elementary education, it enhances the teacher's instructional effectiveness by providing a snapshot of student learning, and within higher education it has the potential to do the same. These assessments allow the instructor or administrator to recognize where students stand in their acquisition of skills or knowledge. With this information, teachers can tailor their instructional methods to promote student learning and thereby student success (Popham, 2008).
Assessment plays many roles in education, and instructors and administrators use it for a variety of reasons: to gauge student motivation, to create feedback and learning opportunities for both teachers and students, to assign grades and to assure quality within a program. Often a single assessment serves several of these purposes at once. An appropriate and effective assessment method must be chosen to get an accurate view of a student's learning style and prior knowledge, to determine mastery of current content and to evaluate the overall success or effectiveness of a course.
Cornell Laboratories. (2007). Observational data. Retrieved from: http://www.avianknowledge.net/content/about/observational-data
Garrison, C. & Ehringhaus, M. (2007). Formative and summative assessments in the classroom. Retrieved from: http://www.nmsa.org/Publications/WebExclusive/Assessment/tabid/1120/Default.aspx
Gibbs, A. (1997). Focus groups. Retrieved from: http://sru.soc.surrey.ac.uk/SRU19.html
McKinney, K. (2007). Enhancing learning through the scholarship of teaching and learning. San Francisco: Jossey-Bass.
Mertler, C. & Charles, C. (2008). Introduction to educational research (6th ed.). San Francisco: Allyn and Bacon.
MIT Teaching and Learning Laboratory. (2010). Assessment and evaluation. Retrieved from: http://web.mit.edu/tll/assessment-evaluation/types.html
Popham, J. (2008). Classroom assessment: What teachers need to know (5th ed.). San Francisco: Allyn and Bacon.
Shang, Y., Shi, H., & Chen, S. (2001). An intelligent distributed environment for active learning. Journal on Educational Resources in Computing, 1(2), 34-52.
Shuler, L. (2010). Guide to data collections methods. Retrieved from: http://web.mit.edu/tll/assessment-evaluation/ae-datacollection-methods-lols.pdf
Struyven, K., Dochy, F., & Janssens, S. (2005). Students’ perceptions about evaluation and assessment in higher education: A review. Assessment and Evaluation in Higher Education, 30(4), 325-341.
Vendlinski, T., Niemi, D., Wang, J., & Monempour, S. (2008). Improving formative assessment practice with educational information technology. Retrieved from: http://www.cse.ucla.edu/products/reports/R739.pdf
By Tracy Atkinson
Tracy Atkinson, mother of six, lives in the Midwest with her husband. She is a teacher, having taught elementary school to higher education, holding degrees in elementary education and a master’s in higher education. Her passion is researching, studying and investigating the attributes related to self-directed learners. She has published several titles, including Calais: The Annals of the Hidden, Lemosa: The Annals of the Hidden, Book Two, Rachel’s 8 and Securing Your Tent. She is currently working on a non-fiction text exploring the attributes of self-directed learners: The Five Characteristics of Self-directed Learners.