Harnessing Passion to Improve Learning: Building Communities of Practice to Assess General Education
Co-authors: Robyne Elder and Bethany Alden-Rivers
Keywords: Learning Improvement; General Education Assessment; Assessment methods; Communities of Practice
Central to effective institutional assessment is the notion of community. Questions such as “How can a campus develop a culture of assessment?” and “How can we make the assessment process more meaningful?” point to the importance of community. Drawing on Lave and Wenger’s classic Communities of Practice framework and examples from other institutions, Lindenwood University transitioned from a wholesale approach to assessing general education outcomes to a communities of practice model for assessment. This blogpost outlines lessons learned and provides a generic protocol for adopting a communities of practice approach at your own institution.
General education assessment at Lindenwood University
Like many other higher education institutions, Lindenwood University has several institutional learning outcomes (ILOs) that reflect macro-level priorities for student achievement. A key challenge for assessing institutional learning outcomes is how to collect and analyze data in a systematic, reliable manner that leads to actionable insight. This blogpost outlines how Lindenwood University adopted a “communities of practice” model for general education assessment that enhanced the reliability of the assessment data and promoted a focused approach to continuous improvement.
The former assessment process
Institutional learning outcomes were implemented at Lindenwood in 2016. These ILOs are mapped to academic programs, where they are addressed during annual program-level assessment and are also assessed within the institution’s general education program. To date, general education assessment at Lindenwood has involved each instructor scoring a key assignment within their course using a general education assessment rubric. This process happens within the learning management system, which allows the university’s Assessment Office to collect and analyze these data quite easily.
This process, although seemingly streamlined, presents some challenges. First, given the large number of general education courses and sections, many instructors participate in the institutional assessment. While this can help to create a culture of assessment, it also introduces high variability in how the rubric criteria are interpreted and how the artifacts are scored. In other words, this process produces data that are unreliable for the purposes of drawing institution-level conclusions. Further, instructors often score key assignments in isolation, working individually to assess each assignment against the rubric for a given ILO, without collaboration or conversation with colleagues scoring the same assignments. Yet this type of collaboration is essential to improving student learning and promoting faculty development.
Second, to enhance the reliability of these data, each of the general education course instructors would need to have frequent training and norming opportunities to reduce the variability in their approaches to scoring. The Assessment Office does not have the ability to scale its operations to provide this support to several hundred instructors each year.
Third, the data collected through this process were difficult for academic schools and departments to analyze. Although the Assessment Office provided a dashboard that could be filtered by multiple variables, the data represented such a range of courses and topics that instructors were left wondering, “So, how can I use these data as actionable insight?”
A new approach: Communities of Practice
Given the challenges of the former ILO assessment process, the Assessment Office piloted a “communities of practice” model that drew on the expertise of a small group of colleagues who shared a particular interest and passion for each ILO.
Image: Community of Practice for Written Communication
Rather than scoring artifacts from each course and section, the community of practice scored a stratified sample of student artifacts from the most relevant general education courses. And, rather than assessing all the ILOs each semester, the Assessment Office set up an ILO assessment schedule, whereby one community of practice met each semester to score a sample of artifacts focusing on one ILO.
Case Study: Institutional Assessment of Written Communication using a Communities of Practice Model
Lindenwood University has an ILO that claims “Lindenwood graduates are effective writers.” A community of practice for written communication was formed in Fall 2019 to pilot this new approach to institutional assessment.
A stratified sample of 100 assignments from ENGL 170 (Composition II) was collected from the learning management system and anonymized. Of these, 99 were deemed usable. This course was considered a helpful source of artifacts, since nearly all students are required to take it and it is usually completed after the first semester of coursework.
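As an illustration, a proportional stratified sample like the one described above can be drawn with a few lines of Python. This is only a sketch: the grouping by course section and the artifact structure are assumptions for the example, not a description of Lindenwood’s actual pipeline.

```python
import random
from collections import defaultdict

def stratified_sample(artifacts, stratum_key, sample_size, seed=42):
    """Draw a sample in which each stratum (e.g., course section)
    contributes artifacts in proportion to its share of the population."""
    # Group the population into strata.
    strata = defaultdict(list)
    for artifact in artifacts:
        strata[stratum_key(artifact)].append(artifact)

    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    total = len(artifacts)
    sample = []
    for group in strata.values():
        # Each stratum's quota is proportional to its size (rounded).
        quota = round(len(group) * sample_size / total)
        sample.extend(rng.sample(group, min(quota, len(group))))
    return sample

# Hypothetical population: 300 artifacts from section A, 200 from section B.
population = [{"section": "A"}] * 300 + [{"section": "B"}] * 200
drawn = stratified_sample(population, lambda a: a["section"], sample_size=50)
```

With these hypothetical numbers, section A contributes 30 artifacts and section B contributes 20, preserving the 60/40 split of the population in the sample.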
Prior to the community of practice meeting, the Assessment Office facilitated a Rubric Development Workshop with faculty from the English Department. Starting with the AAC&U VALUE Rubric, faculty tested the rubric on three assignments. This process led to a revised rubric for institutional assessment of written communication.
Each of the 99 artifacts was scored by two members of the community of practice using the newly developed rubric. In 11 percent of the cases, the gap between the scores was greater than one, so a third scorer was used. A measure of central tendency (i.e. mean, median, or mode, depending on the situation) was used as the final score for each artifact.
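The scoring and adjudication rule described above can be sketched in Python. Note the hedge: the post says a measure of central tendency was chosen “depending on the situation,” so the choice here of the mean for two agreeing scores and the median once a third rater is added is one plausible reading, not the documented rule.

```python
from statistics import median

def final_score(ratings, gap_threshold=1):
    """Combine rubric scores for one artifact.

    Returns (score, needs_third_rater). With two scores, a gap greater
    than the threshold flags the artifact for a third scorer; otherwise
    the mean is used. With three scores, the median is used (an assumed
    convention, since the source allows mean, median, or mode).
    """
    if len(ratings) == 2:
        first, second = ratings
        if abs(first - second) > gap_threshold:
            return None, True  # adjudication needed: recruit a third scorer
        return (first + second) / 2, False
    return median(ratings), False
```

For example, `final_score([3, 4])` yields a final score of 3.5, `final_score([2, 4])` flags the artifact for a third rater, and `final_score([2, 4, 3])` resolves to the median, 3.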
(Image: Screenshot of data collection showing two or three ratings per artifact)
(Image: Findings from the Community of Practice for Written Communication, Fall 2019, n=99)
Closing the loop
Members of the community of practice for written communication visited various stakeholder groups (e.g., the Assessment Committee, the English Department, Deans Council) to share the data and to gather reactions to these insights. Several themes emerged from these conversations that have now informed an Institutional Learning Improvement Plan for Written Communication. A report describing the community of practice process, results, and themes was then written and distributed to the entire campus. The community of practice continues in the current semester with a new ILO as its focus. Student focus groups will take place to provide input into the newly developed rubric and the Focused Learning Improvement Project (FLIP).
(Image: Closing the Loop Data Workshop)
Robyne Elder, Director of General Education Assessment
Robyne Elder is an Assistant Professor in the Educational Leadership Department at Lindenwood University and editor of the Journal of Educational Leadership in Action. She has been a teacher and instructional leader in the field of education for 19 years. Her passion for assessment and for working with faculty to continuously improve general education at the university level led her to her current role as Director of General Education Assessment at Lindenwood University.
Bethany Alden-Rivers, Associate Vice President for Institutional Effectiveness
Bethany Alden-Rivers serves as Associate Vice President for Institutional Effectiveness and Chief Assessment Officer for Lindenwood University. Prior to coming to Lindenwood in 2019, she served in leadership roles at several other US and UK institutions. She has taught in higher education for 18 years, working in three different countries. Her research spans a range of topics, including flexible learning, social innovation, and epistemological development in higher education.