
Department Assessment Plan

Assessment is a requirement for University accreditation from the Southern Association of Colleges and Schools Commission on Colleges (SACSCOC), abbreviated “SACS.” Assessment is a process by which a department collects data designed to measure the effectiveness of its academic programs, analyzes that data, and makes adjustments to its programs in order to improve their effectiveness.

The University as a whole is accredited on a ten-year cycle. In the spring of 2013, departments were given guidelines and asked to formulate assessment plans that would meet SACS requirements. There should be one plan for each degree program. For the Mathematics Department, that means the BS, MM, MS, and Ph.D. programs. For each degree program, we formulated 3-5 “Learner Outcomes” that we want our students to achieve by the end of their degree program, and we developed means to assess how well our students have achieved these desired Learner Outcomes. Each Learner Outcome must have a “Direct Assessment,” meaning data collected directly from student work, not indirect data such as acceptance to graduate schools or being hired for postdoctoral positions. Each year we analyze the results of some of these assessments (on a rotating basis) and take action based on this analysis. Annually we submit an Assessment Report which consists of the following for each Learner Outcome that is scheduled to be assessed that year: (i) Learner Outcome, (ii) Assessment Method, (iii) Assessment Results and Analysis, and (iv) Action Taken.

Our original assessment plan was formulated on short notice in the spring of 2013 by the Department Head. It was then submitted for approval, first through the Dean’s Office (primarily handled by Associate Dean R. J. Hinde) and then through a group of university administrators specifically dedicated to compliance with SACS accreditation requirements, headed by Mary Albrecht. The Head’s plan received initial approval in spring 2013, with a short timeline for implementation designed to yield the first year’s report in the summer or fall of 2013.

In the summer of 2013, the Head asked Professor Frazier to oversee the Mathematics Department’s assessment efforts, and Dr. Frazier accepted the role of “Assessment Czar.” Responsibilities of the position include making changes to our assessment plans as required by the assessment administration, making sure that the required data collection takes place, tabulating and presenting the data to the department, facilitating the department discussion leading to the programmatic changes that are recorded in each year’s assessment report, and writing each year’s report. The Math Department’s first reports were submitted in July 2013. The administration requested modifications to the reports for the MM, MS, and Ph.D. programs in the fall of 2013, and these reports were submitted in final form in October 2013. For 2014, the Mathematics Department reports were submitted earlier, by the end of May 2014, as requested because of a critical SACS university site visit scheduled for the fall of 2014. The usual schedule in the future is expected to be that reports will be due at the end of the summer. It is hoped that after the first few years, assessment procedures will become sufficiently established that they can be handled by the Associate Heads, so that the position of Assessment Czar can be retired.

For the BS degree, we have formulated 5 Learner Outcomes. Four of these (roughly: calculus competency, ability to apply math to real-world problems, ability to solve multi-step problems, and breadth of problem-solving skills) are measured annually via the Major Field Tests in mathematics created by the Educational Testing Service (ETS). Graduating senior mathematics majors are asked to take this exam. The ETS provides us with individual results and with student averages in each of the areas corresponding to our Learner Outcomes, compared and ranked relative to other participating departments nationally. The fifth Learner Outcome (logic and proof skills) is measured via a sampling of student final examination papers for Math 307, our course introducing math majors to proofs. Rubrics for scoring these papers were written and included in our 2014 Assessment Report. Our greatest difficulty with assessment of our BS degree has been getting a large enough sample of students to take the Major Field Test, because it is not currently linked to any grade or requirement. Students were given a gift card as an incentive, but participation was low. In spring 2014 we asked instructors of senior-level courses to provide incentives in each class. The number of students taking the Major Field Test increased, but not to the desired level. We have since taken steps to make the Major Field Test a graduation requirement for math majors, but it will be another year before that requirement takes effect. Actions taken based on assessment data include adding an Honors version of Math 231 (elementary differential equations) and a course coordinator for Math 141 (Calculus I).

The MM program is a program for teachers in Tennessee high schools to improve their content knowledge of mathematics. This program is currently transitioning from an on-campus program to a fully online one. There are 3-5 graduates in a typical year. The original plan was to base assessment primarily on a survey of graduates to see if these teachers found what they learned in the MM program helpful in their teaching. Assessment administrators determined that such a measurement is indirect and that a direct Learner Outcome assessment procedure is also required. In summer 2014 the Graduate Director David Anderson and the Assessment Czar analyzed MM student performance on the mandatory overall MM final exam, using various questions chosen to reflect different Learner Outcomes. This process constitutes a direct assessment method for these Learner Outcomes. The results of the first year’s scoring will serve as baseline data for comparison with students in future years. There will be special emphasis on comparing current scores, from students who were in the on-campus program, with future scores of students receiving the online delivery.

Since assessment is supposed to measure the success of the overall program in achieving its objectives by the time of students’ graduation, assessment procedures for the MS and Ph.D. programs focused on Master’s and Ph.D. theses. Especially in the Ph.D. program, the thesis is the ultimate goal and measure of the success of the program. Assessment is done through questionnaires completed by members of the thesis committee for each graduate. There are 3 Learner Outcomes for the MS program (roughly: specialized knowledge, ability to communicate mathematics in writing, and ability to communicate mathematics orally). The Ph.D. Learner Outcomes include these 3 plus one requiring the ability to produce significant original ideas. Rubrics were written and distributed to the faculty to define the categories “outstanding,” “good,” “acceptable,” “not acceptable,” and “poor” for each survey question, as required by the university assessment administration. Actions taken based on analysis of the data obtained from these surveys include enhancing the seminar requirements for graduate students and adding summer problem-solving sessions related to the graduate preliminary exams.

Although the initial emphasis in assessment has been on our degree programs, the assessment administration has begun a procedure for assessment of our GenEd courses, in anticipation of an expansion of SACS assessment to the GenEd level. Malissa Peery, the Mathematics Department Lower Division Chair, has been in charge of GenEd assessment. The first report on GenEd assessment was submitted in May 2014. In the first year, representative questions from exams in Math 113, 115, 123, 141, and 151 were collected. Two samples of student work from each section of each of these courses (there were only 4 sections of 151, but the other courses ranged from 18 to 23 sections) were collected and scored according to agreed-upon rubrics. These data were intended to supply a baseline for comparison with future years. However, it was felt that a more systematic process for collecting data was needed, and initial efforts are focusing on finding better data collection procedures.