KC Wednesday: Department-wide Assessments
This week’s KC Wednesday post comes to us from the AER KC and is written by Ryan J. Keytack, College House Dean at the University of Pennsylvania and Region II’s representative to the KC.
Reflecting on my first department-wide assessment project.
Assessment provides a culture of evidence. Evidence then changes the culture. And we begin again. Or at least that is how I felt after a recent project came to fruition. I had the opportunity to coordinate the training and assessment of all 200+ graduate associates (GAs) and resident advisors (RAs) in the College House system. For the sake of this blog I will simply refer to this group as RAGA (pronounced “rah-gah”).
Learning is hard to capture when it comes to RAGA training – especially in a system where each House is designed to be unique and may do the work differently. But we captured it. It was a shift from seeking satisfaction to asking better questions. I learned a few lessons along the way and decided to share them with you.
Spend the time developing good student learning outcomes (SLOs). Engaging faculty and staff in discussions about SLOs can be difficult. For some, the “ABCD method” – audience, behavior, condition, and degree of proficiency – is a foreign language. Providing a workshop and an opportunity to practice writing SLOs set a group tone. It led to weeks of conversation and fine-tuning of those outcomes. Here’s an example from one of the PhD students on the committee: “By attending the Safe Zone training (C), RAGAs (A) will be able to define common LGBTQ terminology and gain an understanding of the typical needs of LGBTQ university students (B) so that they are able to speak sensitively and accurately with residents about LGBTQ issues (D).” Follow-up questions asked staff to identify certain terms and connect them to sample conversations held in residence. Mission accomplished.
Get other departments on board with change. After agreeing on SLOs, go door to door with the plan. I visited all campus partners who led a major session in our training. We discussed our unit’s goals for each session by looking at the SLOs together. Asking the departments to share any feedback or additions allowed for a negotiation of content. The end result was a more collaborative approach to presenting and assessing the material. Many sessions that fell in the “this is how it’s always been done” category received a mutual overhaul.
Use a pre/post-test method. Our best data collection involved RAGA responses to training on critical and non-critical incidents. Questions were asked at three stages – before any related training sessions, after the content-based session, and after the practice role scenarios. Each stage showed a dramatic increase in knowledge gained: RAGAs were answering questions with more confidence (rated on a scale) and more accuracy (“what do you do when…”) after each stage. The session needs some tweaking in the future, and we know exactly where to go next.
Give everyone a chance to provide feedback and review the data together. Enough said.
None of this is news, but for me it was huge. I am certainly a novice in this area, yet I found that our students were learning exactly what they were supposed to be learning from our training. There remains work to be done, but I’m fortunate to work in an environment supportive of change. Thanks for reflecting with me. I wish you all the best in your assessment journeys. Oh, and if you were reading in the hopes of finding a great resource, here’s one: http://www.learningoutcomesassessment.org/