Tuesday, July 12, 2016

Learning activities with laptops

In The Great Laptop Debates, I have been firmly in the pro-laptop camp. I don’t think it’s a good idea to ban them from classrooms. Many students need them to take notes, with or without an official disability accommodation. It’s a waste of my time to police whether students use electronic devices, pass notes to each other, or roll their eyes. If I were a student, I would leave any class that forced me to shut down my laptop when I wanted to use it: Goodbye, and bless your heart!

Be my guest, you might say, but what about those studies that show (correction, I’d say: claim) that students learn less in the presence of electronic devices? 

Now, I could go on a rant about why I don’t buy those claims and why the evidence is weak, lacks validity, etc. This might be fun – in fact, it is fun: see The Tattooed Professor’s epic but not always accurate rant. But it wouldn’t show you how laptops can actually increase learning. It’s not enough that laptops don’t hurt students if used properly: we have to use them in a way that actually improves learning. 

This is the time for full disclosure: I have not studied laptop use in classrooms in any systematic manner. I have not tried a wide variety of learning activities with laptops, haven’t collected data, haven’t published analyses of those data in SoTL journals. But I can do two things: point to literature and make some suggestions. 

Avanti dilettanti! 

Publications: I am sure that I’ve found only a small portion of all there is, but a good starting point (at least chronologically) is a 2005 issue of New Directions for Teaching and Learning edited by Barbara Weaver and Linda Nilson. The different articles report on intentional laptop use in a variety of classes at Clemson University. Weaver and Nilson’s introduction (pdf) summarizes a number of activities that are explained in more detail in the individual articles of the volume: using laptops to gather immediate responses from students, to conduct in-class online research and source evaluation, to deliver multimedia material such as videos, to engage students in simulations, or to take laptops on field trips for mobile classwork.

Thursday, May 26, 2016

The open-door syllabus

A couple of weeks ago, my CFI colleague Ed Brantmeier and I ran two workshops about the Inclusive Courses review tool that we’ve been working on with Carl Moore, who currently runs the Research Academy for Integrated Learning at UDC. I am calling the survey or whatever it is a “review tool” because we are not sure what it is: partly reflective writing prompts, partly standardized survey, partly rubric. I suspect we’ll develop it in a couple of different directions. Feedback appreciated! (You can get a copy on Academia.edu or by emailing me at my broschax gmail.)

But that’s not the point here. My point is about syllabi. For our workshops, we had focused on a subset of questions in our tool that dealt with the course syllabus, and in preparation, I looked at a bunch of syllabi that I could find online.

Shout-out to all those who post their course syllabi online: Much appreciated! It’s really instructive and humbling to read the syllabi that people write – so much good work out there.

What struck me was that many syllabi read like rulebooks, long lists of instructions and prohibitions that start right on the first page: attendance policies, deadlines, points subtracted for late submissions, margins and font requirements for papers, and so on. Sometimes it seemed that the professor had added another rule whenever there was trouble with something students did the previous semester. (Someone submitted a paper in 28pt font to meet the page requirement? I’ll put a font requirement on the syllabus!) Sometimes it simply looked like students were sinners in the hands of a wrathful professor.

What type of documents are such syllabi? Rulebooks (in fact, some approach booklet length, so rule booklet would be the right term). A code the professor can point to if a student complains about a grade, wants an extension, a makeup exam, or the like. Maybe a contract in which a professor tells the students what they’ll get from the class and what they’ll have to do to get it.

I want my syllabi to be open doors instead of rulebooks or contracts.

Tuesday, May 17, 2016

Hedgehogs, foxes, and academic portfolios

This week I am participating in an academic portfolio institute facilitated by Peter Seldin, Beth Miller, and a number of faculty members at my institution. Good stuff! Together with several colleagues, I am now revising my academic dossier, re-writing my teaching philosophy, description of teaching innovations, summaries of research projects, highlights of my service work, and the like. The point is to find common threads in one’s work, the overall purpose that drives what we do, and use this to guide the next steps of our careers. You can read about it in Seldin and Miller’s book. Since I am starting a new position this summer, as assistant director for career planning at my institution’s faculty development center, I see the institute as a welcome opportunity to think about what’ll come next.

So I’ve been busy writing my way through my past career, working on the common vision behind it all. And it got me thinking, which is a good thing: Does there have to be one overarching vision, a common purpose, a grand plan behind all the different things I’ve been doing? Honest question! I can see the ayes as well as the nays. (And, yes, I know that voice votes use “ayes” and “nos”, but I think this text needs more “y”s.)

Aye: It is good to know that I’m doing what I’m doing for a purpose, and that the purpose is one that I agree with. (Maybe even one of my choice!) It is good to be able to say yes to things because they belong to what I’m doing and no to things because they don’t belong to what I’m doing. It’ll make life easier. Also, knowing my professional purpose will make it easier to decide what the things are that I want to do before retiring (yeah right, retirement!), and to get them done. Maybe I’ll even become a peak performing faculty!!

Nay: This is so not me. I’m an empirical political scientist who has specialized in U.S. courts, and do you know the topic of the article that I recently read and found really exciting? Comparative political theory! On my commute home, I listened to Philippe Jaroussky’s recording of Verlaine songs, and that was, I think, the most important thing I came across today (sorry, Seldin, Miller, and colleagues). Actually, the most important thing wasn’t Jaroussky but something I’m not going to write about beyond the note that it overshadowed everything else. And that’s how life is: it is not streamlined according to purpose or plan; it is multi-centered and whirling apart!

But then. We need meaning. We want to see connections. Scientists value parsimony. Religions seek the unifying vision, force, or deity. We like consistency in our ethics and politics.

Isaiah Berlin, in an essay on Tolstoy, cites the poet Archilochus: “The fox knows many things, but the hedgehog knows one big thing.” (Here is the first section of the essay, which states its premise.) According to Berlin, “hedgehogs” reduce their view of the world to one system, a few central principles, while “foxes” accept the variety, confusion, and contradiction of life. Berlin puts it much more nicely than I can summarize:
For there exists a great chasm between those, on one side, who relate everything to a single central vision, one system, less or more coherent or articulate, in terms of which they understand, think and feel – a single, universal, organising principle in terms of which alone all that they are and say has significance – and, on the other side, those who pursue many ends, often unrelated and even contradictory, connected, if at all, only in some de facto way, for some psychological or physiological cause, related to no moral or aesthetic principle. (p. 2)
Berlin clearly sympathizes with the foxes and distrusts the hedgehogs. Ronald Dworkin, in one of his last books, Justice for Hedgehogs, provides a counterpoint and argues for a liberal perspective that establishes a philosophical foundation connecting ethics, justice, and law.

I don’t really have a conclusion here. No big picture that connects everything and makes meaning here. Sorry!

Sunday, May 15, 2016

Computer use reduces average grade from C- to low C-

The higher ed internets and social medias are positively abuzz these days with the Carter et al. paper on the effects of computer use in introductory econ at West Point. I've finally gotten around to reading the paper, and I like the study! It is extremely well-designed: it is situated in a real-life classroom setting, not a lab like other computer use studies; it uses real-life exams, not some ungraded quiz that students couldn't care less about; it randomly assigns computer use policies in small course sections of large classes that use a common textbook and a common final exam; it compares course sections taught by different instructors and course sections taught by the same instructor; it includes important controls, does robustness checks with different types of standard errors, and covers almost any objection that I could think of. It's exactly how a quantitative, outcome-focused study in the Scholarship of Teaching and Learning should be conducted. It should be required reading in SoTL workshops.

But I don't buy the substantive conclusion drawn by the authors: that they have shown that the use of laptops and/or other electronic devices had a substantive impact on learning. Here is why.

Well, first, here is why not: I don't have a problem with the whole correlation/causation thing in this study. Sure, correlation is not causation, but the authors do a good job (including the use of 2SLS) of excluding alternative causal stories to the extent possible. I also don't have a problem with the conclusion that the authors found a statistically significant effect of computer/electronic device use. I'm as critical of significance tests as anybody who has used Bayesian analysis, but I buy that Carter et al. find an effect that is clearly significant according to econometric practice.

My problem is that the statistically significant effect is small. (Yes, I know that the authors go out of their way to argue that it's large – more on that below.) Carter et al. present various versions of their analysis, most with a pretty consistent effect of around -0.2. Since the dependent variable is standardized, this means that permitting laptop or electronic device use was associated with a ceteris paribus reduction of the final exam grade (multiple choice and short answer portions) by 0.2 of a standard deviation. On a 100-point scale, that standard deviation was 9 points. In other words, the possibility of computer use in class reduced the average exam grade by less than two points out of 100.

Let's put this in grade terms. The average final exam score in the classes under observation was 72 in the multiple choice and short answer portions. Electronics use in the classroom was equivalent to reducing that score to a 70.2.

News flash: computer use in the classroom reduces a C- to a low C-.
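If you want to check the back-of-the-envelope arithmetic yourself, here is a minimal sketch in Python. The effect size, the standard deviation, and the class average are the figures discussed above; the variable names are mine, and nothing here re-analyzes the paper's data.

```python
# Convert the standardized effect into raw exam points.
# Inputs are the figures quoted above from the Carter et al. discussion.
effect_in_sd = -0.2    # estimated effect of permitting laptops, in standard deviations
sd_in_points = 9       # standard deviation of the exam score, on a 100-point scale
mean_score = 72        # average multiple choice / short answer score

effect_in_points = effect_in_sd * sd_in_points       # about -1.8 points
score_with_laptops = mean_score + effect_in_points   # about 70.2 points

print(f"Effect in exam points: {effect_in_points:.1f}")
print(f"Average score when laptops are permitted: {score_with_laptops:.1f}")
```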

Significance is not substance. This looks pretty insubstantial to me. But the authors make the argument that the effect is in fact meaningful, even large, and they do so by comparing it to changes of other factors identified in other studies. The currency of comparison is the standard deviation: Carter et al. point to the fact that "Aaronson, Barrow, and Sander (2007) find that one standard deviation improvement in teacher quality increases test scores by 0.15 standard deviations" in a study of high-school students; as a result, they conclude, permitting laptops in class does more damage than a one-standard-deviation drop in teacher quality. This assumes, of course, that a standard deviation in a West Point intro to econ test is equivalent to a standard deviation of a high school test score. What's the apple and what's the orange here?

There's another aspect of the Carter et al. study that makes me wary of its conclusion that it shows a reduction of learning due to laptops in the classroom. What worries me is the 72-point average that I mentioned above. You put highly motivated young people who were selected on the basis of academic merit and test scores into a small class taught by an expert with a graduate degree – and at the end of the semester the average score is a C-? That's puzzling! But it's not uncommon (I should take another look at my own classes!), and I think, based on my own experience but no actual data analysis, that there are three likely explanations for this.

First, the final exam may not be well-aligned with what was taught in the course. Most frequently, this happens when the exam questions are about material that was not emphasized but mentioned somewhere in the class material, or when exam questions are not about the core of the facts and concepts but about marginal details associated with them. A popular type of such questions asks about the precise value of some number that was used to exemplify a course concept in the textbook. Such exams measure not only to what extent students have understood and can use the course material but also to what extent they can memorize the details. A second reason for such a low final exam score could be that the exam consists of a large number of questions, so that a fair number of students lose points because they don't get through the exam. In that case, the exam measures to what extent students are skilled at working quickly with a particular exam format, in addition to measuring learning. A third possibility is that the instructors aim for a particular grade curve. It could be that they want only a certain percentage of top students to continue in the major, or they may want to identify top students for particular honors or other outstanding treatment.

These reasons are not mutually exclusive and, depending on the goals of the instructor and the educational program, they may be legitimate. But they suggest that the exam scores may have limited measurement validity as indicators of learning. In other words, computer use may (slightly) reduce something, but it is not necessarily learning, or not all learning.

The last point raises an interesting question about an intriguing detail of the Carter et al. study: While computer use in the classroom had a significant effect on multiple choice and short answer scores, this effect was missing with regard to the essay portion of the exam; instead, the instructor effect was much stronger. The authors, in response, dismissed the essay grade as a bad indicator of student learning. Could it be, though, that the opposite was true and that at least some instructors used the essay grade to correct for the low standardized scores of students who, they knew from classroom experience, office hours, and the like, were bad test-takers but had learned a lot?

In any case, I am generally happy about the results found by Carter et al.: Once you control for all the other factors that influence student performance, it barely matters whether instructors permit electronic devices in the classroom or not. Considering the many problems associated with prohibiting computers in the classroom, this is good news! Of course, the results do not tell us how instructors and students can use all the tools, electronic and otherwise, at their disposal to actually increase their learning, but that's the next conversation to be had.