Written Communication 2022-2023

Analysis of Results for Written Communication and Critical Thinking (AY 2022-2023)

The Assessment Support Committee read and evaluated 147 student papers and received data for an additional 130 papers from departments that utilized their own readers, for a combined total of 277 papers (n = 277) representing twenty-four academic programs. Of the 277 papers reviewed, 181 were from courses certified as Writing Intensive, a system graduation requirement administered through Kilohana, the Academic Success Center. This report therefore constitutes a combined academic and student-support (co-curricular) review of quality.

All papers were blind-read by two independent readers, with roughly 20% going to a third reader when the first two scores were more than one point apart. Readers utilized the Written Communication Rubric that has been in use since AY 2013-2014; if a course was certified as Writing Intensive, the same paper was also read using the Writing Intensive Rubric.
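
For clarity, the third-reader trigger can be expressed as a simple check. The sketch below is illustrative only, not the committee's actual process or tooling; it assumes rubric scores are numeric values on the 1-4 scale.

```python
# A minimal sketch of the third-reader trigger described above; illustrative
# only, not the committee's actual tooling. Scores are assumed numeric (1-4).
def needs_third_reader(score_a: float, score_b: float) -> bool:
    # A gap of more than one point between the two independent reads
    # sends the paper to a third reader (about 20% of papers, per above).
    return abs(score_a - score_b) > 1

print(needs_third_reader(3.0, 4.0))  # False: exactly one point apart
print(needs_third_reader(2.0, 3.5))  # True: more than one point apart
```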

Statistical Data (Results)

Results for Written Communication

| Statistic | Line of Reasoning | Organization and Structure | Content | Language/Prose/Syntax |
|---|---|---|---|---|
| Valid (n) | 277 | 277 | 277 | 277 |
| Mean | 3.120 | 3.045 | 3.083 | 3.019 |
| Median | 3.75 | 2.75 | 3.25 | 3.25 |
| Mode | 3 | 3 | 3 | 3 |
| Std. Deviation | 0.6985 | 0.6946 | 0.7009 | 0.6733 |
| Minimum | 1 | 1 | 1 | 1 |
| Maximum | 4 | 4 | 4 | 4 |
| % of students performing below minimum competency (score of less than 3) | 23% (66) | 25% (71) | 27% (75) | 28% (80) |
| % of students "needing improvement" (score of less than 2) | 2% (8) | 2% (8) | 2% (6) | 3% (9) |
| Average for committee readers | 3.036 | 2.942 | 2.955 | 2.979 |
| Average for internal assessment | 3.223 | 3.161 | 3.226 | 3.065 |
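
For transparency about how the descriptive rows above are derived, the following is a minimal sketch using Python's standard statistics module. The scores listed are placeholders; the actual 277 per-paper scores are not reproduced in this report.

```python
from statistics import mean, median, mode, stdev

# Placeholder scores on the 1-4 rubric; the real per-paper data are not
# reproduced here, so this list is hypothetical.
scores = [3, 4, 2, 3, 3, 1, 4, 3]

summary = {
    "Valid (n)": len(scores),
    "Mean": round(mean(scores), 3),
    "Median": median(scores),
    "Mode": mode(scores),
    "Std. Deviation": round(stdev(scores), 4),
    "Minimum": min(scores),
    "Maximum": max(scores),
    "% below minimum competency (<3)": f"{100 * sum(s < 3 for s in scores) / len(scores):.0f}%",
    "% needing improvement (<2)": f"{100 * sum(s < 2 for s in scores) / len(scores):.0f}%",
}
print(summary)
```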


Results for Writing Intensive

| Statistic | Learning of course materials (vocabulary) | Prose/Discourse | Analysis/Insight |
|---|---|---|---|
| Valid (n) | 181 | 181 | 181 |
| Mean | 2.585 | 2.372 | 2.429 |
| Median | 3 | 2.5 | 2.5 |
| Mode | 3 | 2 | 2 |
| Std. Deviation | 0.540 | 0.540 | 0.534 |
| Minimum | 1 | 1 | 1 |
| Maximum | 3 | 3 | 3 |

Comparison of Results for Written Communication

| Academic Year | Line of Reasoning | Organization and Structure | Content | Language/Prose/Syntax |
|---|---|---|---|---|
| 2022-2023 (n = 277) | 3.120 | 3.045 | 3.083 | 3.019 |
| 2017-2018 (n = 232) | 2.868 | 2.829 | 2.834 | 2.873 |
| 2013-2014 (n = 229) | 2.716 | 2.691 | 2.775 | 2.853 |

A comparison of current scores with those from AY 2017-2018 and AY 2013-2014 shows that student performance has been gaining ground, especially in the two categories designated as critical thinking. Only 16 of the 277 papers scored less than "2" in any given column, with 11 of the 16 showing problems in two or more areas. The percentage of students who appear to need remediation is only 2-3%, with more than 95% performing at or around minimal competency. Data for Writing Intensive corroborate student learning via writing.
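
To make the size of those gains concrete, the short sketch below recomputes the change in mean scores directly from the comparison table above; it introduces no new data.

```python
# Illustrative recomputation of the gains shown in the comparison table
# above (no new data; values are copied from the table).
categories = ["Line of Reasoning", "Organization and Structure",
              "Content", "Language/Prose/Syntax"]
means_2022 = [3.120, 3.045, 3.083, 3.019]
means_2017 = [2.868, 2.829, 2.834, 2.873]
means_2013 = [2.716, 2.691, 2.775, 2.853]

for cat, m22, m17, m13 in zip(categories, means_2022, means_2017, means_2013):
    print(f"{cat}: +{m22 - m17:.3f} since 2017-2018, +{m22 - m13:.3f} since 2013-2014")
# Line of Reasoning gains the most over the decade (+0.404),
# followed by Organization and Structure (+0.354).
```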

There was a slight difference between scores from the assessment committee and those from program-generated scoring. This gap reflects the "halo effect," a natural form of bias that arises especially when teachers of the courses participate in assessment. Best practice is to have both types of evaluators undertake assessment because it (1) serves as "external validation" of scoring and helps gauge bias in assessment, and (2) promotes analysis of student work by the teachers who are best positioned to make adjustments in assignments and curriculum (i.e., data-informed "closing of the loop").
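
The size of that gap can be read directly off the Written Communication results table; the sketch below simply recomputes it per category (values copied from the table). The consistently higher internal averages are the halo effect described above.

```python
# Per-category gap between internally generated averages and committee
# averages, using the values from the Written Communication results table.
committee = {"Line of Reasoning": 3.036, "Organization and Structure": 2.942,
             "Content": 2.955, "Language/Prose/Syntax": 2.979}
internal = {"Line of Reasoning": 3.223, "Organization and Structure": 3.161,
            "Content": 3.226, "Language/Prose/Syntax": 3.065}

for category, committee_avg in committee.items():
    gap = internal[category] - committee_avg
    print(f"{category}: internal exceeds committee by {gap:+.3f}")
# Gaps: +0.187, +0.219, +0.271, +0.086
```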

This is welcome news for faculty who have been sharing their assignments as part of the reporting for our accreditation webpage; many have related how they have revised assignments over the years (for example, by adding more specific instructions on the use of thesis statements and line of reasoning).

Nevertheless, while data for Written Communication and Writing Intensive show that students are improving at critical thinking, scores indicate that academic prose may now be the skill posing some challenges for students. More importantly, the Assessment Support Committee reports its first identification of papers suspected of being AI-generated. Around a dozen papers were flagged, run through Turnitin.com, and pulled from the larger batch of artifacts. Readers noted certain characteristics that made such papers questionable (a simple tally of these markers is sketched after the list):

    • Sentence patterns were repetitive and lacked complexity or variety
    • Papers lacked reference to individual experience or knowledge
    • Content was overly generic and generalized and did not necessarily answer the question posed by the assignment
    • Papers lacked a thesis and a conclusion and showed no awareness of the intended audience
    • There was no apparent premise for the writing (no point of view or opinion conveyed)
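
The checklist above can be summarized as a simple tally. The sketch below is purely illustrative: these judgments were made by human readers, the marker strings are paraphrases of the list above, and this is not an automated detector.

```python
# Purely illustrative: the readers' checklist expressed as a tally of
# heuristic markers. The marker names are paraphrases of the list above.
AI_MARKERS = (
    "repetitive, low-variety sentence patterns",
    "no reference to individual experience or knowledge",
    "generic content that does not answer the assignment question",
    "no thesis, conclusion, or audience awareness",
    "no apparent premise, point of view, or opinion",
)

def count_markers(judgments: dict[str, bool]) -> int:
    """Count how many checklist markers a reader judged present."""
    return sum(judgments.get(marker, False) for marker in AI_MARKERS)

# Example: a paper a reader might flag for further review (e.g., Turnitin)
paper = {marker: True for marker in AI_MARKERS[:3]}
print(count_markers(paper))  # -> 3
```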

The flagged papers responded to general topic questions in the Humanities and Social Sciences that are already prone to plagiarism. Members of the Assessment Support Committee are currently working on a longer report on what AI can and cannot do and will present it to Faculty Congress shortly.


Submitted by Seri I. Luangphinith

September 2023

Chair of Assessment Support Committee (AY 2022-2023)

Accreditation Liaison Officer


Undergraduate Programs: