Proving ground: Quantifying quality learning
First came Consumer Reports, then Carfax, then Angie’s List online. Hoping for similar advice about the value of colleges, most students and their families look to the U.S. News & World Report college guide or other rankings. In 2007, amid widespread criticism of U.S. News’s methodologies, many private colleges (including Skidmore) formed the University and College Accountability Network to report their statistics more directly and provide college shoppers with more apples-to-apples comparisons.
Still, neither U.S. News nor UCAN presents much in the way of proof that any college’s facilities, student-to-faculty ratio, endowment per student, or curricular offerings actually work—that is, demonstrably succeed in educating students the way the viewbook promises. For that (and for their own reasons), colleges have been conducting formal assessments of their student learning, especially over the past decade or so.
Can intangibles like liberal learning and intellectual growth be reliably calculated? In a 2006 Washington Monthly article, “Is Our Students Learning?” think-tank manager Kevin Carey described various assessment methods. One is to measure behaviors such as hours spent on class preparation or writing papers, number of books read, and indicators of student-faculty interaction. A standard instrument for this is the National Survey of Student Engagement, in which Skidmore and hundreds of other colleges participate each year. Another method is to track graduates’ career paths and life accomplishments over time. Skidmore administrators are currently exploring the feasibility of conducting such a longitudinal survey, a kind of alumni census, that could rack up a rich store of data over many years. Whatever the methods, Carey concluded, as tuitions rise it’s crucial—and only fair—for colleges to research and disclose their students’ learning outcomes.
Starting in public schools and state colleges as a way to justify their use of tax dollars, assessment for various purposes has now spread throughout academe. Debates continue about its motives and methodologies, and different institutions are doing it differently, but it’s fast becoming standard operating procedure for most colleges.
When Skidmore’s assessment efforts began picking up speed around 2001, partly in response to a call from its accrediting agency, English professor Sarah Goodwin shared the skepticism of several of her faculty colleagues: “Outcomes assessment in a business may be a useful way to put pressure on productivity. But students aren’t products, and neither are the understandings we teach.” Verifying students’ learning is one thing, she said, but “I want to know if their lives are richer for their relationship with great literature—what kind of assessment can measure that?” Eight years later, Goodwin has some pretty firm ideas about what kinds of assessments can (and can’t) illuminate what kinds of growth and learning. And now she’s putting both her skepticism and idealism, together with a clear-eyed practicality, into her role as Skidmore’s assessment coordinator, working with a part-time research director and a steering committee of 12 professors, administrators, and students.
At this point, most of Skidmore’s academic departments are routinely assessing their major programs. In biology, for example, first a faculty committee outlined specific learning goals, from factual knowledge to competence in modes of scientific inquiry. Then they proposed ways to gather evidence, such as committee review of lab reports in 300-level courses, or comparisons of tests administered in intro courses and then in capstone courses. “Assessment starts slowly, like building a snowman,” says Kyle Nichols, chair of geosciences. But even the early layers can provide insights: So far, every geosciences major who has applied to graduate school has been accepted, and senior bio majors, who all take a subset of the standard grad-school exam, are scoring as well as grad-school applicants nationwide.
While she recognizes the public’s interest in documentation of learning outcomes, Goodwin emphasizes that at Skidmore “our impetus isn’t primarily accountability. In fact, at this kind of college it’s students who are most accountable for their learning, and we have to hold them to that. Our main purpose for assessments is to use the results ourselves, to keep improving our programs’ effectiveness.” And the next step is to appraise collegewide learning—for example, critical-thinking skills, integration of multidisciplinary approaches, and imaginative problem-solving regardless of major. As Goodwin puts it, “What should liberally educated people know? What should they be able to do? How should they approach the big questions?”
This fall Goodwin and the steering group, working with the Committee on Educational Policy and Planning, will ask the faculty at large to endorse a list of goals for student learning and development. So far the list is informed by several external sources (including, most importantly for Goodwin, the “Liberal Education and America’s Promise” initiative of the Association of American Colleges and Universities), but it particularly reflects Skidmore’s signature values of creativity, civic responsibility, and interdisciplinary thinking. It calls for knowledge in arts, math, science, humanities, social science, and language; demands cross-cultural fluency, interrogation of one’s own and others’ value systems, and “intellectual humility”; and cites a range of critical-thinking and communication skills.
Just putting those goals into words is a beneficial exercise, Goodwin notes. “When faculty begin articulating what they most want their students to learn, what they’re most passionate about imparting to them, then differences arise. And discussions over disagreements are the most productive.” For bio department chair Corey Freeman-Gallant, “Agreeing on a list of learning goals is the rational way to proceed, and it’s a very honest way, but it’s also difficult.” He adds, “What I find exciting is talking so specifically about where and how to create learning opportunities in our program. We all know learning doesn’t just happen. This is a way for us to be much more intentional about how we make it happen.”
Sociologist Susan Walzer, one of the self-acknowledged resident skeptics on the assessment committee, agrees that defining the learning goals was a productive process. “The committee is a large group, and I think the goals we articulated really benefited from that diversity.” While she believes there’s been more than enough focus on departmental assessment, she says, “this collegewide initiative asks very interesting questions.” She adds, “I have a taste for big questions myself, so I’m comfortable with trying to look at what students gain from their educations on a broad scale. I’d prefer to see us set lofty goals and then have to figure out how to do the measurements, rather than asking overly simple questions just because the data might be easier to code.”
Debating and elucidating the goals is only the beginning. How to evaluate their attainment is even more arguable. But Goodwin isn’t worried. In most collegewide assessments, no one is expecting—or attempting to defend—technical measurements or calculations per se. “I prefer the term ‘evidence,’” says Goodwin. OK, and what about the term “valid”? Here she freely stipulates that “with any knowledge that’s still contested, such as who shot JFK, different camps will favor certain types of evidence and dismiss others. Every discipline has its debates about what constitutes valid evidence—these are some of our most fascinating intellectual problems.” But among the Skidmore faculty, she maintains, “consensus is readily achievable for most of our assessments. English professors tend to agree on what makes a strong essay: clear writing, research, advancing an original thesis, critical thinking, logical use of evidence. Likewise, bio professors know what makes a good lab report or a poor one.” Though nonquantitative, such judgments can be very enlightening, she argues. She likens assessment to grading papers: “It's quite ‘unscientific,’ yet we all agree that it’s worthwhile, especially if it’s used to help chart a path to improving performance.”
In the same vein, Walzer says, “In social science, we recognize the limits of our measurements, but we make the measurements anyway.” And she cautions against over-reliance on “objective,” indirect data-gathering to the exclusion of straightforward questions. She explains, “In the same way that some approaches to assessment deprofessionalize teachers, I think insisting on only assessing students’ work, rather than asking them directly about their experiences, tends to dismiss their judgment and ownership of their educations.” The best practice, she says, is “to triangulate different data sources as much as possible.”
For every desired learning outcome, says Goodwin, “we’d love to demonstrate that 100 percent of our students are getting it. But assuming we fall short of perfection, the urgency in my mind is to figure out why and where we’re missing some students and to change our program so we can get them up to where we want all students to be.” She’s optimistic that as evidence drives adjustments in programs, and as new evidence is gathered to test those adjustments, the opportunity to adopt proven new techniques will get more faculty members more involved in the process. She’d like to see assessment in some form become “a habitual and routine part of teaching throughout the College.”
One area where formal appraisals have already begun, in several pilot projects, is writing across the curriculum. And it’s given participants a taste of the complexities and variations not only in evaluation techniques but in the expectations to be gauged against. One deceptively simple learning goal, Goodwin explains, is “to be able to ‘communicate effectively.’ Well, when we unpack that phrase, we’re looking at not just writing well but also sharing quantitative information, maybe musical or theatrical expression, and visual or graphic communications.” Further, the assessors must determine which sorts of communications are being taught to whom, and when and where in the curriculum. Only after those factors are sorted and weighed, says Goodwin, “can we start to inquire whether our students’ learning is measuring up.”
Over the summer about 30 faculty and staff members participated in workshops to learn about assessment practices, define benchmarks, and plan evidence-gathering for some upcoming projects to be coordinated by interdepartmental working groups. Among the subjects on Goodwin’s near-term list are the First-Year Experience program’s Scribner Seminars, which themselves resulted from assessments of first-year engagement and which so far seem to correlate with improved retention rates. Other plans include assessing intercultural learning across social identities, and then perhaps a “pan-science project” examining science literacy among students in all majors.
Assessment’s work is never done. And for anyone who’s experienced liberal arts education in an intimate residential community, it may seem undoable on some levels. As Goodwin muses, “I see teaching at Skidmore as closely related to the alchemical, Romantic idea of the sublime, in that it seeks to liberate the human spirit in the service of democratic ideals. It can be hard to line up that notion with the idea of value for the price.” Even so, Skidmore intends to find ways to pull it off, not just for public consumption but for its own edification and growth. From critical thinking to evaluation of varied evidence to creative problem-solving, it seems Skidmore’s assessment of liberal learning is itself an exercise in liberal learning.