WordPress Adds “Request Feedback” Feature

This is interesting. It looks as if WordPress has added a new feature that allows bloggers to “request feedback” on a post before publishing it. Here’s a description from WordPress’ blog:

When you click on Request Feedback, you can enter email addresses of friends who are willing to help. They’ll receive a special private link to see your draft, where they can leave feedback on your post (see image above). Their feedback will appear in your post’s Request Feedback area when it arrives, so you can make changes to your draft accordingly.

I can see using this in a composition course that’s using student blogs. What would be really neat, though, is if writers could request feedback from multiple reviewers simultaneously, and all the reviewers could see each other’s feedback. It would be more like a group conference that way.


There’s a discussion on the techrhet listserv right now about teaching students to code (as opposed to using WYSIWYG applications like Dreamweaver). Folks who are interested in this might look at a 1999 special issue of Computers and Composition devoted to code (16.3). Also check out these online resources:
  • Susan Delagrange, “When Revision is Redesign” (the section on “Code”), in Kairos — “I want to argue for writing code, working under the hood of what-you-see-is-what-you-get software to more directly effect an interface that not only provides an optimal user experience, but also more precisely fits the design to the rhetorical argument.”
  • Karl Stolley, “Lo-Fi Manifesto” in Kairos — “[A]s teachers, we should actively work to provide students with sustainable, extensible production literacies through open, rhetorically grounded digital practices that emphasize the source in ‘free and open source.’”
  • Charlie Lowe, “The Future of the Book: Time to Learn Some HTML/CSS” on Kairosnews — “HTML will need to be the base format for manuscripts going into a design workflow that results in digital and print versions of a book.”
  • Nancy Kaplan, “Knowing Practice: A More Complex View of New Media Literacy” — “[A]pplications and interfaces must remain visible and accessible to knowledge workers if they are to develop new media literacies.”
  • Lynda R. Stephenson, “Road Trip,” in Kairos — “This webtext has pedagogical and theoretical intentions that allow readers to reflect on why we should learn to code.”
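Lowe’s point about HTML as a base manuscript format is easy to demonstrate: a single hand-authored, semantic HTML source can feed multiple outputs in a workflow. Here’s a minimal sketch of that idea — my own illustration, not from any of the pieces above — using only Python’s standard library to derive a plain-text version from a hand-coded fragment:

```python
from html.parser import HTMLParser

# A hand-authored manuscript fragment: semantic markup, no presentational cruft.
manuscript = """
<article>
  <h1>Why Learn to Code</h1>
  <p>Writing markup by hand keeps the <em>source</em> visible.</p>
</article>
"""

class TextExtractor(HTMLParser):
    """Strips tags to produce a plain-text rendering of the same source."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        text = data.strip()
        if text:  # skip whitespace between tags
            self.parts.append(text)

extractor = TextExtractor()
extractor.feed(manuscript)
plain_text = " ".join(extractor.parts)
print(plain_text)
# → "Why Learn to Code Writing markup by hand keeps the source visible."
```

The same source could just as easily feed a CSS-styled web edition and a print stylesheet — which is exactly the “base format” argument, and the kind of under-the-hood control Delagrange and Stolley are advocating.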

Webinar on “Teaching Writing as an Information Art”

This is a very cool opportunity. Some smart folks will be putting on a free web seminar on “Teaching Writing as an Information Art” on February 28 at 9:00 am PST. Follow the link for more details and how to connect, but here’s how they describe the session itself:

Contemporary writing courses have been taking on computational tools, from word processors to wikis, for over two decades now, and for a large portion of that time, the tools have taken center stage. However, contemporary talk of media “literacies” has changed the place of tools in the classroom — or rather, has reframed the role of language as information. When students begin to study the role of words as tags, metadata, or search optimizing keywords, they are studying not just semantic structures but the logic and rhetoric of the flow of information. This panel discusses the idea of reframing those courses and their lessons under the title of Information Arts.

Sounds great. Hope to “see” you “there.”

Defining Digital Humanities?

I just got out of a department meeting where we were discussing the possibility of creating a new graduate certificate in the “digital humanities.” I think this is a terrific idea, but I have to admit I’m a bit ambivalent about the term “digital humanities,” partly because there’s some dispute over how to define it.

In a recent post on “The Digital Humanities Divide,” Alex Reid examines the CFP for the 2011 Digital Humanities conference, and finds that a

significant part of the digital humanities that is not captured in this call is the humanistic investigation of digital technoculture: no mention of games studies, social media, or mobile technology. In other words, no mention of the significant digital technologies and practices that are transforming human experience on a global scale. No, instead, we’re going to talk about writing software to analyze hundreds of out of print literary texts that no one can even name.

This aspect of the digital humanities is also reflected in the NEH’s recent call for Digital Humanities Start-up Grants. The call itself presents a fairly wide interpretation of “digital humanities,” but looking over the examples of projects that are getting funded (and based on a second-hand account of a conversation with a grant program officer), it seems that their main priority is the activity of

planning and developing prototypes of new digital tools for preserving, analyzing, and making accessible digital resources, including libraries’ and museums’ digital assets

For the record, I don’t have anything against making such tools. However, as Reid points out, it seems odd that digital humanists wouldn’t be focused on “the powerful ways that digital technologies are changing the world.”

So, on the one hand, we have some folks saying there should be “more hackety-hack, less yackety-yack,” but on the other we have Neil Postman’s assertion that “technology education is not a technical subject. It is a branch of the humanities.” I think the tension here is not between digital and analogue, but instead in what we think the humanities is for. Is the point of the digital humanities to develop new tools for doing fairly traditional things with a narrow range of privileged texts, or is it to understand something about what it means to be human in a digital age?

Motives Matter

The California Faculty Association, which represents “professors, lecturers, librarians, counselors and coaches” in the CSU system, has a draft of educational principles titled “Quality Higher Education for the 21st Century.” The principle most relevant for our purposes here is this one:

Quality higher education in the 21st century will incorporate technology in ways that expand opportunity and maintain quality.

This statement stands in opposition to the view that technology — and online education in particular — will help “save vast sums of money.” The drafters of this statement go on to say that

When online technologies are used for higher levels of teaching rather than simply for drill or transfer of information, cost savings quickly evaporate. In fact, many faculty who are proponents of and experts in online education argue that teaching a good online course is more labor-intensive and thus more costly than more traditional formats.

I’m not sure about the comparison here, but I would agree that good teaching — whether with or without technology — is a time-consuming, labor-intensive affair. The forces in the university who think we can package and automate (or outsource) quality instruction via technology are deluding themselves.

Online education is unavoidable. It’s going to happen. But if our primary motive is cost and efficiency, it’s going to suck. If, instead, we do it to increase access and opportunity for students, then it might just work.

Technology Blamed for Violent Rhetoric

In the aftermath of last weekend’s senseless shootings in Arizona, many folks have been quick to blame the tragedy on the violent, incendiary political rhetoric of our times. It’s not hard to find examples of such rhetoric: Giffords’s Republican rival last summer appeared in political ads holding an M-16, and apparently he even invited voters at a campaign event to shoot with him. And then there’s Sarah Palin’s poorly conceived “target map” and Twitter post.

Whether or not the shootings can legitimately be blamed on such rhetoric, I’ll leave to others to debate. Early indications are that the shooter was a demented, unhinged individual, and perhaps didn’t need violent rhetoric to motivate his actions.

What caught my eye today, though, was John McWhorter’s piece in The New Republic blaming the prevalence of violent political rhetoric on technology:

The actual cause of this new national temper is technology and its intersection with how language is used. Language exists in two forms in modern times: speech and writing. Writing is a latterly invention only some thousands of years old, produced and received more slowly than talk. It encourages reflection, extended argument (something almost impossible to convey amidst the overlapping chaos of conversation), and objectivity. Writing is, in the McLuhanesque sense, cool.

According to this theory, the act of writing inherently carries with it a different stance toward language — methodical, deliberate, rational. It is the linguistic equivalent of the slow food movement. Writing provides a kind of firewall against our passions. What technology has done, for McWhorter, is push our use of language back into oral territory, where things are less refined:

It is no accident that the shrillness of political conversation has increased just as broadband and YouTube have become staples of American life. The internet brings us back to the linguistic culture our species arose in — all about speech: live, emotional, unreflective, and punchy. The slogan trumps the argument. Anger, often of hazy provenance but ever cathartic (“I want my country back”), takes fire. All of this is reinforced by the synergy of online “communities” stoking up passions on a scale that snail mail never could.

As you might have guessed, I have a few problems with McWhorter’s theory here. First is the simplistic distinction between speech and writing that he posits. Not all speech is “emotional, unreflective, and punchy,” nor is all writing reflective, extended, and objective. I wouldn’t even say that these are broad tendencies. Instead, there are genres of writing that do indeed privilege the qualities he outlines — specifically the kind of essayistic writing that academics and authors at The New Republic gravitate toward. But it wouldn’t be hard to find examples of “emotional, unreflective, and punchy” rhetoric in written form. Sarah Palin’s tweet is a prime example.


Reading DIY U

Just got finished reading Anya Kamenetz’s DIY U. On the whole, I found it a worthy read, sloppy in places, but also usefully provocative. Its main value, I think, is in the early chapters, where Kamenetz traces out the causes of skyrocketing tuition costs. The upshot: a broken system of state/federal aid and loans plus the costs of “bundled” services that have nothing to do with learning. I agree that both of these things need to be fixed.

Where things get dicey is when Kamenetz starts offering recommendations. She believes that technology will save us all, both by creating more efficiencies at the institutional level and by allowing people to go all “edupunk” by bypassing institutional middle-men and going straight to all the juicy knowledge available online. Inside Higher Ed‘s “Dean Dad” has done a nice job opening up — and critiquing — Kamenetz’s brand of myopic anti-institutionalism.

My main concerns with the book have more to do with pedagogy. Kamenetz too often mistakes the delivery of static content for “learning.” Yes, there are moments when she breaks out of this pattern and advocates for “personal learning networks,” but for the most part DIY U is a love letter to things like MIT’s OpenCourseWare, which collects materials from courses taught at MIT. I’ve got nothing against OCW, but access to course materials is not the same as taking a course. As I wrote in an earlier post, bad pedagogy is bad pedagogy, whether it’s in a classroom or online. The worst f2f college courses involve a professor standing in front of a large lecture hall, monotonously reading from yellowing notes that haven’t been revised in decades, and then providing no opportunity to discuss or practice what’s being “taught.” Having online access to course materials is like having access to those lecture notes, and it would be generous to call that “learning.”

New Technologies Make Bad Teaching Slightly Worse

Okay, I know Al already linked to this piece in the Chronicle about “online learning,” but I thought I’d follow up on it. In case you missed it, there’s an article on the Chronicle of Higher Education website, titled “Video Lectures May Slightly Hurt Student Performance,” which reports on a published study comparing learning outcomes between students who attended live lectures and students who watched the same lectures online. That study was titled “Is it Live or is it Internet? Experimental Estimate of the Effects of Online Instruction on Student Learning,” which may explain why the Chronicle originally titled its write-up “Online Learning May Slightly Hurt Student Performance.”

Why did they change the title? Perhaps it has to do with all the subsequent reader comments to the Chronicle article pointing out the rather obvious fact that comparing outcomes associated with live lectures and video lectures has almost nothing whatsoever to do with “online learning” (I highly recommend reading the comments, which are quite entertaining). What the original study’s authors have “proven” (too generous a term without the scare quotes) is that students who watch lectures online don’t seem to get as much out of them as those who come to face-to-face lectures. Forgive me if, at this point, I can only say “well, duh.”

Technology and Moral Panic

Technology is making us dumb. Or at least that’s the premise of a recent NYT story about “Your Brain on Computers.” Apparently, all of this exposure to computers and gadgets is “rewiring” our brains, making us less able to focus and engage in discrete tasks. According to the article, there’s all sorts of research to back this up.

There are two things worth keeping in mind, though. First, in his op-ed, “Mind Over Mass Media,” Steven Pinker does a nice job picking apart this idea that the brain can be “re-wired.” I’m not a neurologist, but it sounds like it comes down to this: yes, the brain is dynamic, but there are limits to its plasticity. Instead of saying that Twitter, for instance, is “re-wiring” the brain to react to short bursts of information (as opposed to sustained engagement), it might be more proper to say that such emerging media technologies simply connect with other ways in which the brain has always operated. That is, maybe the brain is both a fast-twitch and slow-burn muscle, but there’s just more fast-twitch stuff for it to do these days. (For those of you who were in 708 last spring, this reminds me of Mark Kelly’s presentation on attention.)

The second point, which Pinker also alludes to, is the moral panic underlying all this. The NYT story anecdotally blames technology for all sorts of ills — a potential lost contract, declining school grades, general disconnectedness. Nowhere is it acknowledged that these might have other causes, nor is there any questioning of the values lurking behind them.

I guess I’m just losing patience with deterministic arguments about technology, of both the “gadgets ate my brain” and “computers will save mankind” varieties. Yes, we’re changing, and technology is part of that. But we should be careful making claims about what causes what, and more clearheaded about what is being lost and gained in the process.

The impulse of teachers shouldn’t be to try to put the genie back in the bottle, but instead to prepare our students for the kind of world we’re creating.