PHP Exam Rationale

For one of my two language requirements, I completed an exam in the computational language of PHP.

The purpose of this, in part, was to learn more about server management and about customizing the aesthetic and functional features of WordPress templates, so that I would have some experience managing a WordPress installation in case I'm ever working in a context that doesn't already have one set up for teachers and students to use.

Additionally, over the last few years working as a Hybrid Coordinator at Baruch and as a result of reading for my orals exam, I’ve learned a lot about critical educational technology from the scholarship of people like Audrey Watters (web link), Sean Michael Morris (web link and presentation), Lisa Nakamura (web link), Carmen Kynard (journal database link), Mary Lynn Chambers (link to article abstract), Jesse Stommel (web link), Elizabeth Losh (link to book description), the FemTechNet community (web link), and several others.

So, part of the objective of completing this exam was also to gain an opportunity to foster my own critical transliteracy consciousness and to build a skill that I could one day teach to others.

I wrote about this process here, in a viewable Google Doc. I'm putting this on my site because I think that this was an enormously useful process, and I hope to encourage other PhD students in the humanities to consider gaining some basic fluency in a computational language in order to satisfy or partially satisfy a language requirement. I learned a lot, even though the end result looks pretty basic.

I also benefited greatly from looking at the model that Erin Glass shared with me of her own rationale for learning JavaScript. Here's Erin's website. Thanks, Erin!

Corpus linguistics, cosmopolitan English, and the trickiness of academic “communities”

Over the summer, I had an idea about how word processors (or other proofreading-focused software) could use corpus linguistics — rather than an (arbitrary) racist, classist, imperialist logic that privileges certain sets of conventions. I thought this might allow for a more capacious selection process when the writer was making a decision about which public(s) she invokes as she writes.

My idea was this: the program would come pre-loaded with a bunch of different corpora. Depending on the piece's audience, the author could select the corpus that they wanted to use. The word processor would then draw the author's attention to the places where their language diverged from the most common usages in that corpus.
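To make the idea concrete, here is a rough Python sketch of that flagging logic. The corpus names and texts are invented stand-ins (a real tool would load corpora of millions of words, like COCA); the point is only that "flagging" becomes relative to whichever corpus the writer selects, not to a single privileged standard.

```python
from collections import Counter
import re

def tokenize(text):
    """Split text into lowercase word tokens (keeping apostrophes)."""
    return re.findall(r"[a-z']+", text.lower())

# Hypothetical pre-loaded corpora; real ones would be far larger.
corpora = {
    "academic": "the data suggest that the results indicate significant findings",
    "conversational": "he don't go there anymore she been busy we was talking",
}

def flag_uncommon(draft, corpus_name, min_count=1):
    """Return draft words whose frequency in the chosen corpus falls below min_count."""
    counts = Counter(tokenize(corpora[corpus_name]))
    return [word for word in tokenize(draft) if counts[word] < min_count]

draft = "He don't go there anymore"
# Against the conversational corpus, nothing is flagged;
# against the academic corpus, every word is.
print(flag_uncommon(draft, "conversational"))
print(flag_uncommon(draft, "academic"))
```

The same sentence gets squiggly lines or no squiggly lines depending entirely on which corpus the writer chose, which is the whole point.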

Put more simply, my word processor wouldn’t (necessarily) do this:

A screen shot of my Microsoft Word word processor with sentences that say "He don't go there anymore," and "She see it," and "He been trying," and which underlines pieces of the sentences with a green squiggly line, indicating that these are errors.
This picture is a little blurry, but you might be able to make out that MS Word is putting a green squiggly line underneath verbs that don’t “agree,” according to the conventions of Standard Edited English. The green squiggly lines are communicating that this language is wrong, rather than indicating the larger truth: that language is constructed within social, political, and historical contexts.

In "Multilingual Writers and the Academic Community: Towards a Critical Relationship," Suresh Canagarajah reminds the community of practitioners of English for Academic Purposes (EAP) that discourse is socially constructed, that genres are living rather than fixed, and that very uneven power dynamics mediate what gets acknowledged and what gets labeled as an error, as incoherent, as insufficient. The imaginary corpus-based word processor I wanted to will into existence would partly acknowledge this.

But when dreaming of a corpus-based word processor that would be less fixated on tracking and flagging “errors” (i.e. violations of the conventions of the language of power), I still wasn’t acknowledging that a corpus, itself, is a social construction.

Which texts would we choose? Who decides?

In the case of the COCA (Corpus of Contemporary American English), there are millions of spoken and written texts (you can see what they are here). But even with millions of texts, do we go on majority rule? In this case, doesn’t the language of power still persist, and still perpetuate the status quo?

Let's say that we were going to make a corpus for Comp scholars to consult when they were writing journal articles, so we loaded in every article ever published in a Comp Rhet journal; the corpus could then tell us something about how well (or not) we were adhering to certain conventions.

Deciding on what constitutes a field’s journals is a political choice.

What gets in to a journal (and what doesn’t) directly reflects the habitus of the reviewers.

And, finally, a corpus-based processor would argue, invisibly, that the language of a field of academic practitioners is based on its history. It would not open up sufficient spaces for the language of the future.

Those green squiggly lines would still be showing up to manage what was new, and to keep the status quo exactly where it is.

Back to the drawing board…