Thursday, June 22, 2017
Some great advice:
I came away from the conference with the following takeaways about how these other disciplines are using assessment:
- They use assessment data to improve student learning at both the individual and macro levels. They are less focused on using assessments to “sort” students along a curve for grading purposes. Driven in part by their accreditors, the sciences use assessment data to help individual students recognize their weaknesses and, by graduation, get up to the level expected for eventual licensure, sometimes through remediation. They also use assessment data to drive curricular and teaching reform.
- They focus on the validity and reliability of their summative assessments. This is probably not surprising, since scientists are trained in the scientific method. They are also, by nature, accepting of data and statistics. They utilize item analysis reports (see bullet #3) and rubrics (for essays) to ensure that their assessments are effective and that their grading is reliable. Assessments are reused and improved over time, so a lot of effort is put into exam security.
- They utilize item analysis data reports to improve their assessments over time. Item analysis reports show statistics like a KR-20 score, which estimates the internal-consistency reliability of the exam as a whole, and point biserial coefficients, which indicate how well each individual question discriminates between stronger and weaker test takers. (A minimal illustration of both statistics appears after this list.) These reports can be generated by most scoring systems, such as Scantron and ExamSoft.
- They utilize multiple formative assessments in courses.
- They collect a lot of data on students.
- They cooperate and share assessments across sections and professors. It is not uncommon for there to be a single, departmentally approved exam for a particular course. Professors teaching multiple sections of a course collaborate on writing the exam against a common set of learning outcomes.
- They categorize and tag questions to track student progress and to assist with programmatic assessment. (In law, this could work as follows. Questions could be tagged to programmatic learning outcomes [such as knowledge of the law] and to a content outline [e.g., in Torts, a question could be tagged as referring to Battery]; see the second sketch after this list.) This allows them to generate reports that show how students perform over time on a particular outcome or topic.
- They debrief assessments with students, using the results to help students learn how to improve, even after the course is over. Here, categorization of questions is important.
- They utilize technology, such as ExamSoft, to make all of this data analysis and reporting possible.
- They have trained assessment professionals to assist with the entire process. Many schools have assessment departments or offices that can set up assessments and reports. Should we rethink the role of faculty support staff? Should we have faculty assistants move away from traditional secretarial functions and toward assisting faculty with assessments? What training would be required?
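
To give a rough sense of what the item analysis statistics in bullet #3 measure, here is a minimal Python sketch that computes a KR-20 score and a point biserial coefficient from a small, made-up matrix of scored answers (rows are students, columns are questions; 1 = correct, 0 = incorrect). This is just the textbook arithmetic for illustration, not how Scantron or ExamSoft actually implements its reports.

```python
# Illustrative item-analysis arithmetic on invented data.

def kr20(responses):
    """Kuder-Richardson Formula 20: internal-consistency reliability of the whole exam."""
    n_students = len(responses)
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]
    mean_total = sum(totals) / n_students
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_students
    # p = proportion answering each item correctly; q = 1 - p
    pq_sum = 0.0
    for j in range(n_items):
        p = sum(row[j] for row in responses) / n_students
        pq_sum += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - pq_sum / var_total)

def point_biserial(responses, item):
    """Correlation between getting one item right and the total score:
    a rough measure of how well that item discriminates."""
    n = len(responses)
    totals = [sum(row) for row in responses]
    correct = [row[item] for row in responses]
    p = sum(correct) / n
    mean_total = sum(totals) / n
    sd_total = (sum((t - mean_total) ** 2 for t in totals) / n) ** 0.5
    mean_correct = sum(t for t, c in zip(totals, correct) if c) / sum(correct)
    return (mean_correct - mean_total) / sd_total * (p / (1 - p)) ** 0.5

# Made-up scores for five students on four questions.
scores = [
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
]
print("KR-20:", round(kr20(scores), 3))
print("Point biserial for question 1:", round(point_biserial(scores, 0), 3))
```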
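
And to make the tagging idea concrete, here is a small, hypothetical sketch of how questions tagged with an outcome and a topic could be rolled up into per-tag performance reports. The question bank, tags, and student scores are invented for illustration, not drawn from any actual system.

```python
from collections import defaultdict

# Hypothetical question bank: each question carries an outcome tag and a topic tag.
questions = {
    "Q1": {"outcome": "Knowledge of the law", "topic": "Battery"},
    "Q2": {"outcome": "Knowledge of the law", "topic": "Negligence"},
    "Q3": {"outcome": "Legal analysis", "topic": "Battery"},
}

# Hypothetical scored responses: student -> question -> 1 (correct) / 0 (incorrect).
responses = {
    "Student A": {"Q1": 1, "Q2": 0, "Q3": 1},
    "Student B": {"Q1": 1, "Q2": 1, "Q3": 0},
}

def report_by(tag_field):
    """Average percent correct for each value of the given tag (outcome or topic)."""
    totals = defaultdict(lambda: [0, 0])  # tag value -> [correct, attempted]
    for answers in responses.values():
        for qid, score in answers.items():
            tag = questions[qid][tag_field]
            totals[tag][0] += score
            totals[tag][1] += 1
    return {tag: round(100 * correct / attempted, 1)
            for tag, (correct, attempted) in totals.items()}

print(report_by("outcome"))  # {'Knowledge of the law': 75.0, 'Legal analysis': 50.0}
print(report_by("topic"))    # {'Battery': 75.0, 'Negligence': 50.0}
```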