Wednesday, January 13, 2010
All of us in academic support want to know what is working and what is not. It is in our very nature as ASP'ers to want to improve our work so that we can help our students more effectively.
There are multiple ways to evaluate our programs: questionnaires/surveys, suggestion boxes, unsolicited e-mails and notes, academic improvement of students with whom we work, bar passage improvement (for those who do bar prep), attendance at programs. Some of these are qualitative, and some are quantitative. Some might use both indicators.
I agree that numerical data can be helpful. However, in nearly 25 years of working in higher education, I have never been won over by the view that numbers alone tell the story. Why do I take that view? Let me give you some examples:
- Our work matters one student at a time. I do more than just parrot study advice. I listen. I encourage. I challenge. I confront. I pick up the pieces. I search for what will lead to that student's success. I rejoice in breakthroughs. A number cannot evaluate those encounters.
- Rarely is my work a "one hit wonder" phenomenon. My work is based on assessment of issues, individual plans for improvement, monitoring of progress, and relationships with students. One appointment as a number means little in that on-going process.
- A student who achieves a modest grade increment may be succeeding at a level that was a mountain climb away from the student's starting point. Another student may "knock the ball out of the park" because the problems were study management issues rather than difficulty with legal analysis. Both students have succeeded given their abilities and their issues.
- Ultimately, my students must want to succeed by doing the hard work that is needed in law school. I cannot impact their success with strategies and techniques if they choose not to implement them. A small increase (or even a decrease) is not about me. Likewise, I cannot take all the credit for major successes.
- Academic success is a work in progress. One semester's improvement may not show the full story. I have students who have worked with me over a series of semesters. By the time we are done, their efforts have maximized their success with A's and B's rather than their initial D's and F's.
- Students vary in the number of appointments that they need to improve. One student may need four or five appointments during the semester to achieve more effective and efficient time management. Another student might be able to implement techniques fairly quickly.
- Some of the ways we help students may not be easily calculated. For example, I send out weekly e-mails of study tips to all law students. I know that students benefit from the advice even though they might never attend a workshop or ask for an appointment. I get enough informal feedback to know the service makes an impact. But, a question on a survey about "how often do you read the weekly study tips e-mails" or "how many times have you implemented a technique from the e-mails" would probably garner little useful (or accurate) information.
- Numbers do not necessarily reflect quality. If I held 2,000 30-minute appointments instead of 1,000 1-hour appointments in 12 months, would it really reflect anything positive? I doubt it. In some cases, it would merely indicate that I saw the same student twice to cover the material that I used to cover in one longer appointment. It would often mean that I was cutting corners on assistance to the student (handouts that do not match the individual problems; a canned speech on a topic that did not consider individual learning styles; a lecture rather than a discussion).
- Numbers in bar success can monitor trends when a full bar study is done (that is, one including those who passed as well as those who failed). We can find out lots of quantitative indicators: bar grades, 1L GPA, bar courses taken, and so forth. That information is helpful. But only by talking to each person individually would we ever truly know what accounted for success or failure for that graduate beyond mere numbers: techniques that worked; bar review courses used; any medical or personal obstacles; diligence in study; timing of study; work while studying.
- Four students at a workshop who delve deeply into strategies and ask questions focusing on their issues with the topic may make the workshop hugely successful. Thirty students at a workshop who barely pay attention and do not really want to be there may produce little improvement.
So, I would say we should look for both quantitative and qualitative measures when we seek to find out how we can improve our programs. (Amy Jarmon)