


Bethany Alden-Rivers, Associate Vice President, Institutional Effectiveness

Mark Arant, Provost and Senior Vice President, Academic Affairs



This paper draws on extensive literature and sector practices from the last four decades to present a case for rethinking academic program review as a more meaningful process. The paper outlines a new set of guiding principles and a possible process for academic program review at Lindenwood University.


One of the most pressing issues facing higher education in the United States is the quality of academic programming (Bresciani, 2006; Bok, 2013; Goff, 2017; Tandberg and Martin, 2019). The process of evaluating the quality of academic programs is central to the vitality of any university in terms of its ability to increase revenue, decrease expenses, enhance the quality of the student learning experience, and strengthen its reputation (Dickeson, 2010). Although program review processes have been in place in North American universities for more than 80 years (cf. Conrad and Wilson, 1985; Skolnik, 1989), it is only within the last 45 years that the quality assurance of academic programs has become more “prominent, organized, and influential” (Skolnik, 1989, p. 627). And it has only been in the last two decades that program review has become a topic of interest and debate within the higher education sector (Halpern, 2013).

Commonly, universities dedicate considerable time and resources to reviewing and reporting the quality of their academic programs in order to satisfy the requirements of states, federal agencies, and accrediting bodies (Subramony, Wallace, and Zack, 2015; Brown et al., 2017). In an era of financial constraints and what some scholars describe as a state of “constant crisis” (Kretovics and Eckert, 2020), the program review process is perceived by many as a mechanism for justifying the existence of a program, or, in other words, as “survival of the fittest” and “punitive” (Patton et al., 2009, p. 3). Furthermore, when program review processes are carried out in isolation from other institutional effectiveness activities, such as strategic planning, budgeting, and assessment of learning, there is less potential for the program review process to be a catalyst for positive change (Barak and Mets, 1995).

However, academic program review, if designed and facilitated well, holds the potential to be “one of the most powerful and effective tools to shape and reshape an institution” (Jayachandran et al., 2019, p. 54). Effective program review can positively impact the student experience (Pascarella and Terenzini, 2005) and can be a cornerstone of a comprehensive framework for quality assurance (see Parvin, 2019). According to Bok (2006), “[t]hough the process of program review may not be perfect . . . program review, when thoughtfully carried out, is more reliable than hunches or personal opinions” (p. 320).


Academic program review is a process of gathering and analyzing information about a specific academic program for the purposes of guiding the broader activities of the institution. Patton et al. (2009) described the process as a “miniature accreditation self-study within a designated area of campus” that helps universities take stock, celebrate successes, and plan for the future (p. 8). A more specific definition is offered by Brown University (2012):

The purpose of academic program review is to improve the quality of academic units individually and the university as a whole. Academic review provides an opportunity for each academic unit to reflect, self-assess, and plan; they generate in-depth communication between the unit and the university administration, thus offering a vehicle to inform planning and decision-making… By stimulating program planning and encouraging strategic development, academic program reviews can be a central mechanism to advance the University mission. (p. 4)

Just as there is no generally accepted definition of academic quality assurance (cf. Harvey and Green, 1993; Biggs, 2001; Newton, 2002; Kleijnen et al., 2013; Goff, 2017), there is no universally accepted definition of academic program review (cf. Feikema, 2016; Tandberg and Martin, 2019). However, three common features of program review are seen across the sector: 1) an internal, faculty-driven self-study; 2) an external evaluation carried out by a peer or committee from another institution; and 3) a comprehensive evaluation of the two studies, resulting in an action plan. Most program reviews are carried out over a six- to 12-month span, but some can take much longer (Hanover Research, 2012, pp. 2-3).


Although the literature suggests a general consensus around the importance of academic program review, there is no agreement on how college and university leaders should approach this process (Badalyan, 2012; Feikema, 2016). Therefore, myriad approaches to program review are seen across the higher education sector, and these are often designed to serve a particular purpose.


Strategic program prioritization generally involves a data-informed review of an institution’s portfolio of academic programs that focuses on market share and market growth. Through categorizing, ranking, rating, or weighting certain attributes of a program, the review usually leads to recommendations to sustain, stop, or enhance each academic program in the portfolio (Fannin and Saran, 2017). Dickeson (2010) is commonly used as a reference for this approach; he suggested 10 factors or dimensions to be considered when evaluating a program, which reflect internal and external demand; quality of inputs, processes, and outcomes; and revenue, costs, and impact. Other models for strategic program prioritization promote the use of the Boston Consulting Group’s (BCG) two-factor business model, which categorizes programs as “Stars”, “Cash Cows”, “Question Marks”, and “Dogs” (e.g., Debrecht and Levas, 2014). Still others have proposed the use of the General Electric-McKinsey model (or Industry Attractiveness-Business Competitiveness model), which measures criteria across two dimensions: competitive capabilities and industry attractiveness (see Hax and Majluf, 1983; Wells and Wells, 2011; Udo-Imeh, Edet, and Anani, 2012).


Cyclical program reviews are the most common approach across the higher education sector. Cyclical program review includes a data-informed, reflective self-study, which is usually completed by the program faculty and reviewed by the department chairperson and dean, before it is reviewed by both internal and external reviewers. Cyclical program reviews also include an action planning element, which prompts continuous improvements during the years between reviews. This type of program review is generally expected by regional accreditors to take place on a regular basis and to be integrated into the broader system of planning and budgeting (see Higher Learning Commission, 2020).

Although there are common elements, there is no universally accepted process for cyclical program review. Authors have suggested processes to include core components. For example, Jayachandran et al. (2019) stated that an effective program evaluation should include a curriculum review, an evaluation of teaching and learning, an evaluation of resources, and an assessment of performance against quality indicators. These authors also suggested that most program reviews, at a minimum, require data on program demand, program resources, program efficiency, and program outcomes.


Systematic program review is an annual program review process that is less formal than a cyclical program review. Shambaugh (2017) describes systematic program review as a “semi-formal means to proactively involve higher education faculty, staff, students and administrators in analyzing and making decisions about the future of their programs”. Shambaugh proposes that this type of frequent program review can be used to ask more informal questions, such as “Who are our students?” and “What changes need to be made or what gaps exist in our programs, gaps that students need?” Similarly, Ludvik (2019) suggested that program review can be a lever toward closing equity gaps in student learning and achievement.


The discourse around program review reveals conflicting viewpoints and approaches. On the one hand, program review has been used, historically, as a way to defend or assure the quality of an academic program. On the other hand, program review also holds the potential to catapult faculty into a transformative mode, through problem-solving, continuous improvement, and innovation. The marriage of these two views is what Biggs (2001) characterized as “enhancing quality”. The literature points to two emerging methods for embracing problem-solving and innovation in the program review process.


Over the past 30 years, universities across the world have benefited from applying principles of total quality management (TQM) that were previously only used within industry (Balzer et al., 2016). Processes such as Kaizen, Six Sigma, and Lean have allowed institutions to become more “flexible, flat, and fast” (Zimmerman, 1991, p. 10), thereby becoming more responsive to the demands of the higher education marketplace (Balzer et al., 2016). Findings from a systematic literature review by Cudney et al. (2020) suggested that Six Sigma and Lean can be applied effectively toward the improvement of teaching methods, administrative processes, and other aspects of higher education.

Balzer (2020) presents a working definition of “Lean Higher Education (LHE)”:

Lean Higher Education (LHE) is a problem-solving framework used to increase the value and performance of university processes. Grounded in the principles of continuous improvement and respect for people, the successful application of LHE will meet the expectations of those served by the processes, engage and develop employees who deliver the processes, and enhance efficiency and effectiveness of the university. (p. 16)

Central to LHE are Rapid Improvement Events (RIEs), which are facilitated problem-solving activities that lead participants through a series of ideation, solution development, planning, and implementation steps. The University of North Alabama and the University of York are examples of institutions that have embraced RIEs as part of their process improvement approach (Key-Mathews and Fadden, 2019).


Recently, the notion of using design thinking for program evaluation has emerged within the literature. Design thinking is a creative problem-solving methodology that focuses on the people who will use or benefit from a solution. Generally speaking, design thinking follows an iterative five-step model of developing insights, problematizing, ideating, prototyping, and testing (IDEO-U, 2020).

Boyko-Head (2019) documents a process for testing design thinking for academic program reviews, and proposes this as a possible approach for overcoming traditional challenges related to program review, such as a lack of buy-in and engagement. Boyko-Head notes that through this process, the team was able to generate and nurture stakeholder participation, envision more actionable opportunities for innovation and renewal, provide faculty with professional development, enhance clarity and transparency around the process, and create reproducible tools for implementing program review findings.


The literature suggests there are common challenges for faculty who engage in program review, including limited time and resources available to carry out a quality evaluation, lack of expertise or assistance with the process, skepticism about the benefits of program review, and concern about burdening students with requests to complete surveys to support the process (Bresciani, 2006; Germaine et al., 2013). Additionally, Bresciani (2006) points to a lack of shared understanding:

Faculty and practitioners do not understand the purpose, goals, or intended outcomes of the activity or why they are being asked to participate. (p. 18)

Importantly, the literature suggests there is a significant challenge in integrating academic program review with planning and budgeting. Feikema (2016) stated that while there is some literature on the program review process and on implementing program review, very little is known about how program review influences institutional planning, budgeting, and decision making (see Barak and Mets, 1995). Furthermore, the literature suggests that when university leaders do not consider academic program review during the strategic planning process, the value of program review is further undermined. This appears to be particularly relevant when external market forces influence the strategic planning process without taking into account the findings from program review (see Ahmad, Farley, and Naidoo, 2012).


Currently, each academic program at Lindenwood University completes a comprehensive program review every seven years. The process begins in the spring semester of calendar year 1 when the department faculty members meet to begin the self-study process.

  • The draft self-study report is submitted to the School/Division Curriculum Committee (SDCC), School Dean and System Provost in fall of year 1, and is revised and finalized before the end of the fall term.

  • The Program Chair and System Provost identify an appropriate external reviewer for the program who is invited to visit campus to review the program in the spring of calendar year 2.

  • The reviewer’s report is submitted in the summer of year 2 and the department’s response to the review is submitted to the SDCC, School Dean, and System Provost in the fall of year 2.

  • The Academic Program Advisory Committee (APAC) reviews all of the documents associated with each review and makes recommendations for the program.

  • These recommendations are framed as action items on which the program faculty are expected to report by the winter of year 3.

Appendix A provides a detailed outline of what the current self-study report includes for each academic program undergoing program review.


Despite federal, regional, and local agencies promoting program review as an important and influential process, and despite widespread acknowledgement of its inherent potential to be a highly effective mechanism, the literature presents a compelling argument that academic program review is, by and large, poorly conceived, poorly communicated, undervalued, and detached from the broader systems of planning and budgeting. Additionally, there is no common definition or process for academic program review for institutions to adopt. Many of the commonly cited challenges to effective academic program review, such as lack of time, lack of training, and lack of integration into planning and budgeting, are also challenges for Lindenwood University.

Therefore, the problem for Lindenwood University, as well as most other institutions, is how to develop a process that gives meaning to academic program review for all stakeholders involved.


Within learning organizations, processes of evaluation and continuous improvement that aim to assure the quality of service (or, in the case of universities, the student experience) are considered inherently purposeful. In learning organizations, the notion of ‘quality assurance’ is characterized as moving beyond demonstrating quality toward a transformative position of quality enhancement (Biggs, 2001; Goff, 2017). As Lindenwood University competes on the basis of academic quality, it is essential to harness the purposefulness and potential of academic program review as a mechanism for transformation.


Simply put, meaningfulness means having purpose and value. When considered in terms of academic program review, the notion of meaningfulness becomes more complex because of the many stakeholders affected by this process. Table 1 outlines the purpose and value of academic program review for six key stakeholder groups.

Table 1. Purpose and value of academic program review by stakeholder group


Regional Accreditor (HLC): Program review provides evidence of systematic, continuous improvement of academic programming and evidence of an integrated approach between the quality assurance of academic programs and institutional planning and budgeting.

Institution: Program review provides a systematic process to provide assurance that the institution’s academic programming is relevant to the mission, suitable for its target market, and fit for purpose.

Academic School/Department: Program review provides a mechanism to assess the effectiveness of academic programs within a school or department by identifying what is working well and what is needed to improve. Program review is an opportunity to formally request support and resources for maintaining program effectiveness.

Program Faculty: Program review is an opportunity to receive and review data related to a program and to demonstrate the effectiveness of the curriculum and teaching. Through program review, faculty have the opportunity to outline ideas for continuous improvement and to request support and additional resources.

Students: Program review is a process that evaluates the effectiveness of the academic program in achieving its goals and in supporting the mission of the institution. Program review, therefore, can have a significant impact on the student experience, even if students are not aware of this process. By engaging in this process, students can provide important insights and contributions toward continuous improvement.

Community: Program review is a process to assure employers and other members of the community that the academic programming is relevant to the workforce and societal needs of the 21st century. By engaging in the process, employers, alumni, and other community members can provide insight into how well the program is meeting the needs of the community.



Table 2. Guiding principles toward meaningful academic program review



Figure 1. A proposed seven-year process for academic program review at Lindenwood University


Ahmad, A. R., Farley, A., and Naidoo, M. (2012) ‘Funding crisis in higher education institutions: Rationale for change’, Asian Economic and Financial Review, vol. 2, no. 4, pp. 562-576.

Badalyan, A. Y. (2012) Program review institutionalization as an indicator of institutional effectiveness in the California community colleges, Doctoral Dissertation, Available online at: [accessed on August 10, 2020].

Balzer, W.K. (2020) Lean Higher Education: Increasing the Value and Performance of University Processes.

Balzer, W.K., Francis, D.E., Krehbiel, T.C. and Shea, N. (2016) ‘A review and perspective on Lean in higher education’, Quality Assurance in Education.

Barak, R. J. and Mets, L. A. (1995) Using Academic Program Review, San Francisco, Jossey-Bass.

Bash, K. C. (2015) ‘Evolution of program review and integration of strategic planning’, in 2015 Collection of Papers: Accreditation and Planning, Higher Learning Commission, Available online at: [accessed on August 15, 2017].

Bers, T. H. (2004) ‘Assessment at the program level’, New Directions for Community Colleges, no. 126 (Summer), pp. 43-52.

Biggs, J. (2001) ‘The reflective institution: Assuring and enhancing the quality of teaching and learning’, Higher Education, vol. 41, pp. 221-238.

Bok, D. (2006) Our Underachieving Colleges: A Candid Look at How Much Students Learn and Why They Should Be Learning More, Princeton, Princeton University Press.

Bok, D. (2013) Higher Education in America, Princeton, Princeton University Press.

Boyko-Head, C. (2019) Designing4Engagement: Design Thinking in the Program Quality Review Process, Blogpost available online at:

Bresciani, M. J. (2006) Outcomes-Based Academic and Co-Curricular Program Review: A Compilation of Institutional Good Practices, Sterling, VA, Stylus.

Brown, J., Kurzweil, M. and Pritchett, W. (2017) Quality Assurance in US Higher Education.

Brown University (2012) Academic Program Review Guidelines and Procedures, Available online at: [accessed on August 15, 2017].

Conrad, C.F. and Wilson, R.F. (1985) Academic Program Reviews: Institutional Approaches, Expectations, and Controversies, ASHE-ERIC Higher Education Report No. 5.

Cudney, E.A., Venuthurumilli, S.S.J., Materla, T. and Antony, J. (2020) ‘Systematic review of Lean and Six Sigma approaches in higher education’, Total Quality Management and Business Excellence, vol. 31, nos. 3-4, pp. 231-244.

Debrecht, D. and Levas, M. N. (2014) ‘Using the Boston Consulting Group Portfolio Matrix to Analyze Management of a Business Undergraduate Student Program at a Small Liberal Arts University’, Journal of Higher Education Theory and Practice, vol. 14, no. 3.

Dickeson, R. C. (2010) Prioritizing Academic Programs and Services: Reallocating Resources to Achieve Strategic Balance, San Francisco, Jossey-Bass.

Faculty Senate Southeast Missouri State University (2011) Bill 11-A-3 Academic Program Review Procedures (Approved), Available online at: [accessed on August 15, 2017].

Fannin, W. and Saran, A. (2017) ‘Strategic academic program prioritization: in theory and practice’, International Journal of Business and Public Administration, vol. 14, no. 1.

Feikema, J.L. (2016) Best practices for academic program review in large post-secondary institutions: A modified Delphi approach, Doctoral Dissertation, Available from ProQuest Dissertations and Theses Database (UMI No. 10107002).

Gentemann, K. M., Fletcher, J. J., and Potter, D. L. (1994) ‘Refocusing the academic program review on student learning: The role of assessment’, New Directions for Institutional Research, no. 84, pp. 31-46.

Goff, L. (2017) ‘University administrators’ conceptions of quality and approaches to quality assurance’, Higher Education, vol. 74, no. 1, pp.179-195.

Groen, J.F. (2017) ‘Engaging in Enhancement: Implications of Participatory Approaches in Higher Education Quality Assurance’ Collected Essays on Learning and Teaching, 10, pp.89-100.

Halpern, D. F. (2013) ‘A is for assessment: The other scarlet letter’, Teaching of Psychology, vol 40, no. 4, pp. 358–362.

Hanover Research (2012) Best Practices in Academic Program Review, Washington, D.C., Hanover Research.

Harvey, L. and Green, D. (1993) ‘Defining Quality’, Assessment and Evaluation in Higher Education, vol. 18, pp. 9-34.

Hax, A. C. and Majluf, N. (1983) ‘The use of the growth-share matrix in strategic planning’, Interfaces, vol. 13, no. 1, pp. 46-60.

Higher Learning Commission (2020) HLC Policy: Criteria for Accreditation, Available online at: [accessed on: August 10, 2020].

IDEO-U (2020) ‘What is design thinking?’, Blogpost available online at [accessed on August 10, 2020].

Key-Mathews, L. and Fadden, J. B. (2019) ‘Cultural Change Through Rapid Improvement Events: Five Successful Case Studies’, 5th International Conference on Lean Six Sigma for Higher Education, Heriot-Watt University, June 24-25, 2019.

Kleijnen, J., Dolmans, D., Willems, J., and Van Hout, J. (2013) ‘Teachers’ conceptions of quality and organisational values in higher education: compliance or enhancement?’, Assessment & Evaluation in Higher Education, vol. 38, no. 2, pp. 152–166.

Kretovics, M.A. and Eckert, E. (2020) Business Practices in Higher Education: A Guide for Today's Administrators, New York, Routledge.

Ludvik, M.J.B. (2019) Outcomes-based program review: Closing achievement gaps in and outside the classroom with alignment to predictive analytics and performance metrics, Sterling, VA, Stylus.

Missouri Code of State Regulations (1996) Division 10 Commissioner of Higher Education, Chapter 4 Submission of Academic Information, Data and New Programs, Available online at: [accessed on: August 15, 2017].

Missouri Department of Higher Education (2011) Statewide Academic Program Review: Report to the Governor, Available online at: [accessed on: August 15, 2017].

Newton, J. (2002) ‘Views from below: Academics coping with quality’, Quality in Higher Education, vol. 8, no. 1, pp. 39-61.

Newton, J. (2010) ‘A tale of two “qualities”: Reflections on the quality revolution in higher education’, Quality in Higher Education, vol. 16., no. 1, pp. 51-53.

Parvin, A. (2019) ‘Leadership and management in quality assurance: insights from the context of Khulna University, Bangladesh’, Higher Education, vol. 77. no. 4, pp.739-756.

Pascarella, E. T. and Terenzini, P. T. (2005) How College Affects Students: A Third Decade of Research, Vol. 2, San Francisco, Jossey-Bass.

Patton, J. et al. (2009) Program Review: Setting a Standard, The Academic Senate for California Community Colleges, Available online at: [accessed on: August 15, 2017].

Sadler, D.R. (2017) ‘Academic achievement standards and quality assurance’, Quality in Higher Education, vol. 23, no. 2, pp.81-99.

Shambaugh, N. (2017) ‘Ongoing and systematic academic program review’, Handbook of research on administration, policy, and leadership in higher education, pp.141-156.

Skolnik, M. (1989) ‘How academic program review can foster intellectual conformity and stifle diversity of thought and method’, Journal of Higher Education, vol. 60, no. 6, pp. 619-643.

Spooner, M. (2019) Performance-based funding in higher education, CAUT Education Review.

Steyn, C., Davies, C. and Sambo, A. (2019) ‘Eliciting student feedback for course development: the application of a qualitative course evaluation tool among business research students’, Assessment and Evaluation in Higher Education, vol. 44, no. 1, pp.11-24.

St. John’s University (2014) Academic Program Review: Guidelines and Procedures, Available online at: [accessed on: August 15, 2017].

Subramony, R., Wallace, S., and Zack, C. (2015) ‘Toward an aligned accreditation and program review process: Institutional case study’, in 2015 Collection of Papers: Accreditation and Planning, Higher Learning Commission, Available online at: [accessed on August 15, 2017].

Taleb, A., Namoun, A. and Benaida, M. (2016) ‘A holistic quality assurance framework to acquire national and international accreditation: The case of Saudi Arabia’

Tandberg, D.A. and Martin, R.R. (2019) Quality Assurance and Improvement in Higher Education: The Role of the States, State Higher Education Executive Officers Association, Available online at: [accessed on August 10, 2020].

UC Berkeley (2015) Guide for the Review of Existing Instructional Programs, Available online at: [accessed on August 15, 2017].

University of Northampton (2017) Periodic Subject Review Handbook, Available online at: [accessed on August 15, 2017].

Udo-Imeh, P. T., Edet, W. F., and Anani, R. B. (2012) ‘Portfolio Analysis Models: A Review’, European Journal of Business and Management, vol. 4, no. 18.

Wells, R. and Wells, C. (2011) ‘Academic program portfolio model for universities: Guiding strategic decisions and resource allocations’, Research in Higher Education, pp. 1-19.



Appendix A [as of May 2020]

The program self-study report includes the following sections, with data provided to the program representatives by the university’s Institutional Research Office.

I. Status of the Discipline – brief description of the national status of the discipline, including emerging issues and trends relevant to higher education.
  a. Regional/national trends in student enrollment*
  b. Employment opportunities for graduates*
II. Program
  a. Brief overview of the program – Major vs. General Education, if applicable
  b. Mission statement for the program; reference its relationship to the university mission
  c. Goals and objectives of the program with regard to teaching, scholarship, and service; assessment of the program with respect to these aspects.
  d. Program Learning Outcomes and a curriculum map indicating which Learning Outcomes are addressed by each required course (Appendix A)
  e. Brief summary of degree requirements (Appendix B, complete list of degree requirements)
  f. Community college articulation
  g. Involvement of advisory board(s), if applicable
III. Program Evaluation
  a. Briefly describe the means of assessing student learning outcomes and recent improvements based on the results of such assessment.
  b. If applicable, provide a brief analysis of the grade patterns of courses with high D/F/W rates* and an action plan for student improvement in these areas.
  c. Online course and program offerings
    i. Comparison of student performance – on-ground vs. online*
  d. Quantitative indicators – 7-year trends*
    i. Number of students majoring in the program
    ii. Student-credit hours (upper vs. lower division vs. grad)
    iii. Number of sections offered (upper vs. lower division vs. grad)
    iv. Average class size (upper vs. lower division vs. grad)
    v. Percentage of courses taught by adjunct instructors (upper vs. lower division vs. grad)
  e. Involvement of students in “high impact practices” such as individual research, community service, study away/abroad, or internships**
  f. Planned changes in curriculum
  g. Options for new majors, minors, or emphasis areas, if applicable
  h. Assessment of graduate level courses against Lindenwood University’s Course Level Rigor Standards
IV. Students
  a. Academic profile – ACT, high school GPA, transfer GPA – FTFTF vs. Transfer students (undergrad only)*; Undergraduate cumulative GPA (grad only)*
  b. Retention in the major – internal vs. external transfers (undergrad only)*
  c. Graduates per year*
  d. Terms to degree completion – FTFTF vs. Transfer students (undergrad only)*
  e. Summary of most recent assessment report (Appendix C, full report)
  f. Outcomes information on graduates*
  g. Summary of student survey results (Appendix D, full report*)
V. Faculty
  a. Brief summary of qualifications and experience of Full-time Faculty and any adjunct instructors who teach key majors courses (Appendix E, 2-3 page CV of each faculty member described)
  b. Teaching productivity and activities designed to enhance teaching and the curriculum
  c. Summary of student course evaluations for key courses, by course rather than instructor
  d. Average number of advisees per faculty advisor – majors vs. non-majors*
  e. Scholarly productivity
  f. Service, including university committees, student groups, and work with K-12 schools
  g. Professional development
VI. Facilities and resources
  a. Classrooms and laboratories
  b. Equipment
  c. Space
  d. Library
  e. Support personnel
VII. SWOT – Strengths, Weaknesses (internal factors), Opportunities, and Threats (external factors)
VIII. Vision and plans for the future of the program
  a. Provide a vision statement of the program in 7 years assuming no additional financial investment beyond maintaining current resource levels.
    i. Actions to be completed within the next 3 years
    ii. Actions to be completed within the next 7 years
  b. Provide a vision statement of the program in 7 years assuming that additional financial investments are made beyond current resource levels.
    i. Actions to be completed within the next 3 years & associated major expenditures
    ii. Actions to be completed within the next 7 years & associated major expenditures
IX. Program faculty recommendations for improvement
  a. Changes that are within the control of the program & school
  b. Changes that require action at the Dean, Provost, or higher levels
