Quantitative Reasoning Assessment Report
Overview and Audience
During the Fall 2014 semester, the UHH Congress Assessment Committee administered a quantitative reasoning assessment to multiple freshman-level classes. Given that most freshmen enroll in a math class during their first semester, per FREGAS, the committee chose to administer the assessment across all math courses numbered 205 and below. Administering the assessment exclusively in math courses also eliminated the possibility of any student taking it more than once. Math 205 is Calculus I, whose enrollment is about half freshmen; all other courses were 100-level math classes. The 21 courses chosen represented approximately 60% STEM-track and 40% non-STEM students. The same assessment was then administered in the Spring 2015 semester to juniors and seniors across most majors. Each major was asked to choose an appropriate course for the assessment, one required for the major and generally attended by juniors and seniors. Obviously, this was not a longitudinal assessment.
Student Learning Outcomes Assessed
The committee wished to assess both the “analysis” and “visual representation” aspects of the QR rubric, including critical thinking skills. Analysis and visual representation were chosen over computation because the study was intended for all students, not just STEM majors (who are generally more adept at computation), and because visualization and analysis incorporate a higher degree of critical thinking. The committee wanted to know:
- Can students interpret quantitative information represented graphically? (Visual)
- Can students extrapolate such quantitative information to solve related problems that one might encounter in everyday life? (Visual and Critical Thinking)
- Are our graduating students more proficient at quantitative critical thinking and visual representations than incoming freshmen?
This was a direct true/false assessment, administered in class without prior notice; students were allotted 15–20 minutes. The QR Assessment Committee provided the assessments to the instructors, who administered them in their classes and returned them to the committee for scoring and analysis. Most of the freshman assessments were administered in the same week, which was possible since all were given in the same department. The senior-level assessments were administered at various points throughout the Spring semester, according to the instructors’ preferences.
The assessment mechanism was developed by the math department, primarily through the collaboration of the QR Assessment Chair and the Mathematics Department Chair. It was decided, in accordance with the QR Core Competency Rubric, that scores would be interpreted as 0 = “Beginning,” 1 = “Emerging,” 2 = “Competent,” and 3 = “Advanced.” Ideally, we wanted to see a high percentage of students earning a score of 2 or 3.
The QR Assessment included two graphs, representing the price of gold and silver, respectively, and three related statements. Students were asked to choose which of the statements could reasonably be concluded from the graphs; as such, this was essentially a three-problem true/false assessment. The assessment mechanism and the QR Rubric are attached to this report. Two of the questions required the use of only one graph, while one question required a comparison between results from both graphs.
Statement 1 simply required students to interpret the meaning of points on a single graph. It required students to identify from the gold graph that the price of gold at the beginning of 2008 was approximately the same as at the beginning of 2009. This skill is repeatedly covered in Common Core 9th-grade Algebra I, and again throughout 11th-grade Algebra II.
Students who missed Statement 1 demonstrate a disturbing deficiency in interpreting simple graphs, a skill commonly needed in everyday life. It could also point to a reading deficiency.
Statement 2 was intended as the most difficult problem. It requires students to compare values across two different graphs, and to use analysis and critical thinking to determine how profit is made when buying and selling items. The statement describes two individuals purchasing the same monetary value of gold and silver, respectively, at the same instant, and selling their purchases years later, again at the same instant. Each purchases $1000 worth of metal at the beginning of 2002 and sells it at the beginning of 2007. The statement then claims that the one who purchased silver would receive more money at the time of the sale. The unit for gold is grams, while the unit for silver is ounces, and the prices listed for silver are lower (per ounce) than those for gold (even though gold sells by the gram). These facts are inconsequential, but unwary students with limited critical thinking skills may have been misled by them. Determining who receives the most at the time of the sale simply requires determining which price, gold or silver, increased by the larger percentage. According to the graphs, the gold was purchased at approximately $9–10/gram and sold at $20/gram. Thus, the $1000 turned into $2000 (assuming a $10 purchase price) or as much as $2222.22 (assuming a $9 purchase price, which yields a growth factor of 20/9 ≈ 2.22). The silver was purchased at a cost of $5/ounce and sold for at least $12.50/ounce, a growth factor of at least 2.5. Thus, the statement is true; the silver grows from $1000 to at least $2500, which is more than the proceeds from the gold.
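The arithmetic for Statement 2 can be verified with a short script; the purchase and sale prices below are the approximate values read from the graphs, as described above:

```python
# Statement 2: compare the growth of $1000 of gold vs. $1000 of silver.
# Prices are approximate readings from the assessment's graphs.
def sale_value(investment, buy_price, sell_price):
    """Value of an investment after the unit price moves from buy_price to sell_price."""
    return investment * sell_price / buy_price

gold_low  = sale_value(1000, 10, 20)    # gold bought at $10/gram, sold at $20/gram
gold_high = sale_value(1000, 9, 20)     # gold bought at $9/gram,  sold at $20/gram
silver    = sale_value(1000, 5, 12.50)  # silver bought at $5/oz,  sold at $12.50/oz

print(gold_low, round(gold_high, 2), silver)  # 2000.0 2222.22 2500.0
assert silver > gold_high  # silver yields more, so the statement is true
```

Note that the units never enter the calculation: only the ratio of sale price to purchase price matters, which is exactly the point the statement was designed to test.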
Students missing Statement 2 demonstrate a deficiency in analysis and critical thinking skills. Assuming students have the basic skills associated with Statement 1 (i.e., that they can read information off the graphs), this statement requires them to interpret the visual results in the context of re-selling their purchase at a later date. Common sense reveals that if the price doubles, the amount you receive at the sale also doubles. Similarly, the most profit is earned when the purchase price increases by the highest percentage. In the real world, this concept is demonstrated by the following: suppose two stocks both went up by $5 one day. Is that a lot? If the price at the beginning of the day was only $5, the value doubled in one day! If the price at the beginning of the day was $600 (e.g., Apple stock before its 2014 stock split), then the change represents less than a 1% increase. It is important for students to be able to interpret the consequences of a statement like: “Oh my, the stock went up by $5.”
Statement 3: This statement is similar to Statement 2, but instead of reading information off two separate graphs, the student is asked to gauge the gain across two different parts of the gold graph. Gold from 2002 to 2008 increased to over 2.5 times its initial value, rising from less than $10/gram to more than $25/gram; thus, a purchase of $1000 yields over $2500 at the time of the sale. Gold purchased in 2010 and sold in 2012 only rose from approximately $35/gram to $55/gram, less than double. Thus, it is not true that the latter purchase yields more than the former.
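The comparison in Statement 3 reduces to comparing growth factors across the two time windows; a minimal check, using the approximate prices read from the gold graph above:

```python
# Statement 3: two purchase windows on the SAME gold graph.
def growth_factor(buy_price, sell_price):
    """Ratio of sale price to purchase price: what each dollar invested becomes."""
    return sell_price / buy_price

early = growth_factor(10, 25)  # bought early 2002 (~$10/gram), sold early 2008 (~$25/gram)
late  = growth_factor(35, 55)  # bought 2010 (~$35/gram), sold 2012 (~$55/gram)

print(1000 * early, round(1000 * late, 2))  # 2500.0 1571.43
assert late < early  # the later purchase yields less, so the statement is false
```

The later window has higher prices throughout, yet the smaller growth factor; equating a higher selling price with higher profit is exactly the error the statement was designed to catch.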
Students missing this statement most likely equated a higher selling price with higher profit. Profit is determined by percentage increase, not by the units or the selling price. Without the skills assessed here, students are at a definite disadvantage when dealing with quantitative information represented graphically.
The results were analyzed as follows:
- The mean, standard deviation, and median score for each question were computed for each class standing. The standard deviations are not included here; they ranged between 0.83 and 0.95, which is not surprising and is not very useful for an assessment with only four possible outcomes.
- The mean, median, and mode were also computed for the assessment as a whole, disaggregated by class standing, and in the aggregate.
- The number and percentage of students scoring 0, 1, 2, and 3 were also computed, disaggregated by class standing, as well as for Freshmen/Sophomores and Juniors/Seniors.
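As a sketch, the statistics listed above can be reproduced with Python's standard library; the scores below are hypothetical placeholders, not the actual assessment data:

```python
from statistics import mean, median, mode, pstdev
from collections import Counter

# Hypothetical aggregate scores (0-3), one per student; the real data are not reproduced here.
scores = [0, 1, 1, 2, 2, 2, 3, 1, 0, 2]

print(f"mean={mean(scores):.2f} median={median(scores)} "
      f"mode={mode(scores)} sd={pstdev(scores):.2f}")

# Count and percent of students at each possible score, as in the tables above.
counts = Counter(scores)
for s in range(4):
    print(s, counts[s], f"{100 * counts[s] / len(scores):.0f}%")
```

The same tabulation, run separately on each class standing's scores, yields the disaggregated tables.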
The full results are summarized below. Two-thirds (308 out of 466) of the students taking the assessment in freshman courses were freshmen and sophomores, and 98% (402 out of 410) of the students taking the assessment in their major courses were juniors and seniors. The results below do not entirely combine the two assessments (i.e., juniors and seniors in the freshman courses are excluded, as are freshmen and sophomores in the major courses), since it is possible that some seniors, for example, took both assessments.
Assessment Results by Class Standing and in the Aggregate
Count and Percent of Students with each Possible Aggregate Score
Analysis of Results
It is difficult to put a positive spin on such results, particularly in light of the fact that if a student were to flip a coin for each question, the expected score would be 1.5. The Junior Class was the only class with a mean higher than 1.5, and it also had the smallest representation. The results of Question 1 reveal that most students can read basic information from a graph (i.e., this positively answers our first question), and that students tend to improve this skill as they progress from Freshman standing (positively answering part of our third question). The results of Question 2 appear to indicate that we have our work cut out for us with respect to critical thinking (i.e., this partly answers our second question in the negative). The Junior Class scored notably higher, but none of the classes performed as well as simply flipping a coin. The results of Question 3 are particularly disturbing in terms of the third question we wanted answered, with the Juniors and Seniors noticeably underperforming the Freshmen and Sophomores. However, as indicated by the last column in the “Count” table above, 25% more of the Juniors and Seniors achieved the “Competent” level, answering at least 2 of the 3 questions correctly. This is possibly the most revealing statistic in terms of demonstrating increased critical thinking skills as students progress in their studies, and it answers our third question in the positive.
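The 1.5 coin-flip baseline follows from the binomial expectation for three true/false questions (3 × 0.5 = 1.5); a quick simulation confirms it:

```python
import random

random.seed(0)
trials = 100_000
# Each simulated student guesses on 3 true/false questions,
# getting each one right with probability 1/2.
avg = sum(sum(random.random() < 0.5 for _ in range(3)) for _ in range(trials)) / trials
print(round(avg, 2))  # close to the theoretical expectation of 1.5
assert abs(avg - 1.5) < 0.05
```

Any class mean below 1.5 therefore indicates performance no better than guessing, which is what makes the observed means so concerning.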
Distribution of Results
This report will be posted on the Assessment Web Site, and a summary will be presented to Congress at one of its regular meetings. The results for individual majors were returned to the individual programs but were not made public (to avoid unpleasant comparisons across majors). It is hoped that this information will be of use to the major programs in adjusting their curricula as they move forward. The highest mean score was a two; 8 out of 21 majors had a mean higher than 1.5, and in 8 out of 21 majors more than half of the students reached the “Competent” level. An analysis of each problem, as presented herein, was also included along with the results, to better inform the majors of their strengths and areas needing improvement. The assessment committee is still discussing the nature of the results and how best we might close the loop from an institutional perspective. This item is included in the Assessment Committee agenda for 2015–16.
Program-Specific Reports: