THATCamp SoCal 2012

I spent Friday enjoying the great workshops and conversations at THATCamp SoCal, held at Cal State Fullerton this year (it has been an annual event for a couple of years now). Due to family illness, I was only able to attend the first of the two days.

Morning Session:

After melting my brain with some Zotero API discussion, in which I was clearly in over my casual-Zotero-using head, I moved over to the Project Management session. This kind of session hopping is more than OK at THATCamps – it’s encouraged. As Amanda French reminded us, “vote with your feet”: the objective of attending a THATCamp is to come away with some useful new skills, insights, and connections, and campers are encouraged to go where they will be most productive.

The project management session was great for me. USC’s Tim Stanton was great – clear, concise, and to the point. I can see why he’s good at managing projects. You can see what we covered in the collaboratively authored Google Doc – another common feature of THATCamps. Much of what I took away re-emphasized the value of project management and the need to deal with infrastructural challenges before the work gets started. The questions around how to measure time and compensation don’t get any easier once you’ve secured the grant, and little else can wreak as much havoc on a project timeline as running into staffing issues.

Noon:

We’re in the middle of what’s known as the “Dork Shorts” session, in which each speaker gets five minutes to share a project, talk about an idea, or ask a question. This is a genre of presentation popular within DH circles. Its sibling, the PechaKucha session, is more formal, with 20 or so timed slides. Both are designed to imbue knowledge transfer with fun – something most of our conferences could use more of.

In the course of the Dork Shorts there have been a couple of great projects mentioned that I’d like to share:

Grace Yeh is talking about the Re/Collecting Project, an “ethnic studies memory project of California’s Central Coast” – you can call it Re/Co for short. In the project’s words: “Our aim is to digitally capture and make publicly accessible the rich history of the diverse—yet under-documented—communities of the region, which includes San Luis Obispo and northern Santa Barbara counties. The images, documents, stories, and mementos that we digitize generally cannot be found in any public repositories. Instead, they reside with individuals—in their family albums, in their attics or garages, in their memories. By digitizing these materials, this project will make centrally and publicly available historically significant materials and stories. We thus seek to encourage community and academic research by providing information-rich materials for a regional understanding of these local communities.”

The photo montage that she’s displaying while talking is absolutely gorgeous, and I’m struck by the great connections that she’s made with the local community. When the project holds “Re/Collecting Days,” they invite families and individuals to recollect their stories as well as to participate in collecting their story materials for digital preservation and access. From what I can see here, these events do a beautiful job of integrating oral history, art and visual culture curation, and community-engaged learning. For more information you can follow the jump above and/or contact Grace at (gyeh – at – calpoly – dot – edu).

This next tidbit came as a response to an open query that someone made during his short – basically, he’s looking for a variety of image recognition tools. One particularly good example was cited: The Real Face of White Australia, which uses a face detection script to generate its content. The experimental browser forces a user to think about the paths through the collection and does something different in terms of navigation. It’s a cool project, and it certainly demonstrates face detection techniques that might be leveraged elsewhere.
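For readers curious what a face detection pass of this kind looks like in practice, here is a minimal sketch – my own illustration, not the project’s actual script – using OpenCV’s Haar cascade detector in Python. The file names and output folder are hypothetical placeholders.

```python
# Sketch: detect faces in a scanned record image and save each as a crop.
# Assumes the opencv-python package is installed; paths are hypothetical.
import os
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade_path = os.path.join(cv2.data.haarcascades, "haarcascade_frontalface_default.xml")
detector = cv2.CascadeClassifier(cascade_path)

def extract_faces(image_path, out_dir="faces"):
    """Detect faces in one image and write each detection as a cropped JPEG."""
    os.makedirs(out_dir, exist_ok=True)
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for i, (x, y, w, h) in enumerate(faces):
        crop = image[y:y + h, x:x + w]
        cv2.imwrite(os.path.join(out_dir, f"face_{i}.jpg"), crop)
    return len(faces)

if __name__ == "__main__":
    print(extract_faces("sample_record.jpg"), "face(s) found")
```

Run over a whole folder of digitized records, crops like these are what a browsing interface of that sort could then stitch into a wall of faces.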

Afternoon Session:

We’re in the big room and I’m learning about both Scalar and Text Modeling/Data Mining this afternoon.

Scalar

Matt Delmont’s Nicest Kids In Town Scalar project was already familiar to me (we’re family), but I’d not had the chance to work with the interface myself since it had gone into its more public mode this spring. Scalar is a platform for “born digital, media rich, scholarly publishing” and it comes out of the Alliance for Networking Visual Culture at USC. You can see the video that Craig Dietrich shared with us that introduces the platform and its affordances. While a networked publishing platform that can create a variety of narrative paths might sound daunting, it was dead easy. We made our own little books right there in about half an hour. Sure, we were using some provided content, but in terms of working with the interface and authoring architecture, I’m not sure it gets much easier. What is particularly exciting for me about this ease of use is that it makes it possible to envision bringing students into the authoring process. I was so tickled that I’m exploring using the platform to have my Creating Archives students author a Scalar book this fall.

Text Modeling/Mining

The last session for today is Scott Kleinman’s intro to text modeling and mining. This is a big topic (no pun intended here) for the “Big Data” folks within DH, and Ted Underwood was virtually present as the guru in the room by way of frequent citation.

As Scott pointed out, text modeling can be used for:

improving search and retrieval

text classification (genre)

text clustering (revealing similarities and differences)

topic modeling (revealing meaningful components)

text visualization (rendering imperceptible patterns visible, esp in ‘big data’)

He walked us through the workflow – from tokenization to visualization – and introduced a variety of algorithms that might be used. We also looked at some sample projects, like the Lexomics work at Wheaton and Matthew Jockers’s analysis of 19th-century fiction.
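To make that workflow a bit more concrete, here is a rough sketch of the tokenization-to-topics portion – my own toy illustration under assumed tools (scikit-learn), not Scott’s materials, and the four-line “corpus” is just a placeholder:

```python
# Toy sketch: tokenize -> document-term matrix -> small topic model.
# Assumes scikit-learn is installed; the corpus is a stand-in for real texts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "the whale sank the ship in the storm",
    "the ship sailed from the harbor at dawn",
    "the governor wrote letters about the colony",
    "letters from the colony described the harbor",
]

# Tokenize and build a document-term matrix, dropping common English stopwords.
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(corpus)

# Fit a two-topic LDA model: each "topic" is a distribution over words.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Print the top words per topic -- the step one would then hand to a visualization.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:5]]
    print(f"Topic {i}: {', '.join(top)}")
```

On a real corpus the same pipeline simply scales up: more documents, more topics, and a visualization layer on top of the word-weight tables that the last loop prints.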

Scott, citing @Ted_Underwood, argues that topic modeling is “a way of working backward from texts to find topics that COULD have generated them.” That is, one can use big data to think not only about a large number of texts, but also about the possible cultural contexts out of which those texts arose. We had a brief bit of discussion on issues around certainty and positivism – critiques that seem to lurk at the edges of the more quantitative DH methods. I’m sympathetic to those critiques, given that I’m more interested in understanding interpretation as an act of poiesis, but that hasn’t stopped me from trying to understand more about text modeling and big data – in fact, I like to wonder whether the two are actually harmonious, but that’s for another post.
