Datacloud

“You can be a creative writer using nothing but your imagination.  But you can’t be a technical or proposal writer if you don’t have information — hard cold facts and data to write about.”
–Rob in FL

Let’s just get this out of the way: anyone who references The Flaming Lips in an academic text deserves major props. Seriously though, Johndan Johnson-Eilola’s Datacloud is reminiscent of Jeff Rice, another scholar who uses the hip-hop-as-pomo-writing argument to talk about how postmodern texts are inherently produced through contingent, experimental, and playful means – it’s “whatever.” And what was persuasive in Rice’s “The 1963 Hip-Hop Machine” is just as persuasive to me in Datacloud. It’s the whatever part.

In a nutshell, Datacloud argues that we live in an information-saturated world – “a cloud of data” – that not only requires current writers, producers, and authors to accept that density, but to actually see how their responses to it (i.e., learning and working) – their “inhabiting” of information – lead to rich, meaning-making activity worthy of further theorizing (3). To take it further, J-E employs Stuart Hall (vis-à-vis articulation theory) alongside labor theorist Robert Reich (vis-à-vis symbolic-analytic work) to post, in J-E’s terms, “a job ad for information age cultural workers” (19).

At its meatiest, Datacloud uses the aforementioned theories to analyze computer interfaces; J-E admits the book started with this premise, and a third of the book is dedicated to the analysis. By historicizing the computer, he finds that interfaces have increasingly flattened, emphasizing surface over depth as the computer developed (think of the difference between DOS and Windows or OS X). What’s problematic about this, argues J-E, is that while more learning/work has become increasingly squeezed into a small window (screen, screens, and more screens), our technologies (and theories?) have hardly kept up with the jobs of symbolic-analytic workers, who are expected to experiment, collaborate, analyze vast piles of data, and understand how problems change with different contexts. J-E extends this argument beyond the “interface,” questioning software that emphasizes time (like MS Word) as opposed to software that emphasizes space (like ProTools or maybe Dreamweaver).

I don’t know if I’m really a symbolic-analytic worker, but my hybridized role as admin/instructor/student/consultant lets me bask in interfaces at least 10 hours a day. Because I frequently need to coordinate schedules and documents, it’s not unusual for me to skim through Firefox, Word, Mail, and iCal for a single task. In other words, a 40” monitor probably wouldn’t keep me happy. But after talking to my friend Rob, I really see what J-E is getting at in Datacloud.

Rob, a CNY native, has been a tech writer for over 30 years, but recently migrated to a southern city to work for a company that writes proposals for other companies. While Rob says he has no time to learn DITA or XML, he does use “old technology” (his words) like MS Office, Sharepoint, Visio, VBasic, MS Project, Photoshop, and a bunch of other programs to work through a sea of data, docs, forms, and other info. He says: “I spend 90% of my time on technical fiddling to cobble together ways to cope with information, and 10% actually writing meaningful proposal text.” He gives an example of working with a guy in Europe who wrote Visual Basic macros so that he could add keyword metadata to hundreds of documents for a construction project. But he’s not only a database wrangler: he needs to work with teams to navigate boatloads of data in order to produce arguments that persuade specific audiences.
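I never saw Rob’s actual macros, but to give a feel for the kind of chore he’s describing, here’s a minimal sketch in Python rather than Visual Basic (using the python-docx library; the folder path and keyword string are invented for illustration) that stamps keyword metadata into every .docx file under a project folder:

```python
# Hypothetical stand-in for Rob's macros: write keyword metadata
# into every .docx under a project folder so the documents can be
# found by keyword later. Path and keywords below are made up.
from pathlib import Path

from docx import Document  # pip install python-docx

PROJECT_DIR = Path("proposals/construction")    # invented path
KEYWORDS = "construction; site-plan; 2008-bid"  # invented keywords

def tag_documents(folder: Path, keywords: str) -> int:
    """Stamp the keyword string into each document's core
    properties and return how many files were touched."""
    count = 0
    for path in folder.rglob("*.docx"):
        doc = Document(str(path))
        doc.core_properties.keywords = keywords
        doc.save(str(path))
        count += 1
    return count

if __name__ == "__main__":
    n = tag_documents(PROJECT_DIR, KEYWORDS)
    print(f"Tagged {n} documents with: {KEYWORDS}")
```

Ten minutes of this kind of fiddling, multiplied across hundreds of documents, is exactly the 90% of “technical fiddling to cope with information” Rob is talking about.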

He writes: “All my jobs since I became a tech writer have been stressful because I’m required to somehow produce meaningful, factual proposal text that gives solid reasons why we should win a contract…but without many knowledge resources, historical data or technical information. So I’ve increasingly moved into building knowledge bases so I can have the facts and information I need to write winning proposals.”

One way he’s succeeded is by rearticulating the data so it works for him – no matter who the audience is or what the context. More than anything he’s worked on in 9 years, Rob says he’s proud of designing “dashboard workspaces,” which rearrange awful, dense tables of info (often required by the federal government) into interactive, hyperlinked graphs. This helps his team “actually concentrate on writing, not searching for information,” though Rob himself argues that he’s “increasingly involved in information capture, information indexing, information classifying, and information retrieval design and programming, rather than merely doing ‘technical writing’.”
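I can only guess what Rob’s dashboards actually look like, but as a toy illustration of the move he’s describing – dense table in, clickable graphic out – here’s a sketch using Python’s pandas and Plotly libraries (the column names and figures are invented, not Rob’s data):

```python
# Toy illustration (not Rob's actual dashboards): turn a dense
# requirements table into an interactive HTML chart a team can
# hover over and click instead of scanning rows of cells.
import pandas as pd
import plotly.express as px  # pip install plotly

# Invented stand-in for a dense, federally required table.
table = pd.DataFrame({
    "requirement": ["R-101", "R-102", "R-103", "R-104"],
    "section": ["Scope", "Scope", "Safety", "Schedule"],
    "pages_of_backup": [12, 4, 27, 9],
})

# One bar per requirement, colored by proposal section; hovering
# reveals the exact figures the flat table buried in its cells.
fig = px.bar(
    table,
    x="requirement",
    y="pages_of_backup",
    color="section",
    title="Backup documentation per requirement (invented data)",
)
fig.write_html("dashboard.html")  # open in a browser, click around
```

The point isn’t the chart itself; it’s that the team stops hunting through cells and starts writing.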

The implications for pedagogy in both J-E and Rice are important. I’ve tried to use contingency as a method in WRT 205 (play them Girl Talk!) just as much as I have in a peer-tutoring practicum. That said, I wondered how Rob might read the conclusions in this book, especially the five strategies on page 134. More specifically, what would a more “spatial environment” look like? Who develops them? Workers (as in Rob’s dashboards)? Companies? (Is open source the answer?) And four longish years after this book, are we seeing such trends with things like tag clouds and interactive graphics like this one?