Intro to Digital Humanities – Day 5

I am a student of the “Introduction to Digital Humanities” course

Today, I studied “Critical Reflections on Digital Humanities” (lesson 1.5).
I completed the following lessons:

  1. Instructions
  2. DH Timeline
  3. History of Digital Humanities
  4. Critical Code Studies
  5. Digital Humanities and Design
  6. Computation is Not Value Neutral

I made a GIF of the timeline, available at the end of this post.

There were two discussions.

Discussion #1

Think about this timeline.
As you study the items we’ve included on this timeline, we encourage you to raise questions about them and discuss items that you think should be added or removed. You may wish to come back to this throughout the course, but the point we’re trying to make with this timeline is that many different kinds of people, technologies, and other infrastructure have come together to create the practice we know today as digital humanities. Undoubtedly there are many ways to summarize or evaluate the influences that have shaped humanities research. And a timeline is only one way to organize that summary.
We want to hear from you about your impressions regarding the historical events that have led to current digital humanities practice. Which items should be added or removed? Enter your post in the discussion forum below.

My answer (#1) follows.

I would not include all of the current entries; my reasoning contrasts “enabler and multiplier” solutions with particular projects

I regard “writing” as humanity’s all-time greatest invention. No writing, no memory, making it much harder (impossible?) for new generations to fully benefit from their predecessors’ progress. So I agree with the inclusion of the 1440 printing press in the timeline, because it represents a significantly higher level of “memory technology”, not only for recording purposes but also for information diffusion.

The 1800s “humanities” landmark, with its reference to the 15th century, is relevant in the sense that it acknowledges some of the first dedicated and explicit critical studies of human creative works.

Father Roberto Busa’s 1946 project with IBM was unknown to me. I understand its inclusion, but I am hesitant about its relative relevance; I think it ranks much lower on the “scale” than the printing press, for example. One reason is that, at the time, software development was so tied to the hardware itself that there were not even standards for encoding characters – ASCII (a very relevant character-encoding standard) only dates from the early 1960s. This means that software creators could not agree on details as low-level as how to represent an “A” (or any other symbol) in code. The consequence is that whatever tool IBM created for the study, its operation was limited to a very specific IBM machine, requiring highly specialized people to do anything with it. In other words, my perspective is that any digital tool only becomes worth mentioning once its inputs and outputs have reached a more “open”, or at least “standardized”, maturity.
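To make the point about shared encodings concrete, here is a minimal Python sketch (my own illustration, not from the course) showing what a standard like ASCII fixes: a single, universal mapping between characters and numbers that every machine agrees on.

```python
# Under a shared standard like ASCII, every system agrees on the same
# character-to-number mapping, so text written on one machine can be
# read on any other.
for ch in ("A", "B", "a"):
    print(ch, "->", ord(ch))  # ord() returns the character's code point

# Without such a standard (as in 1946), the number 65 stored on one
# machine would not necessarily mean "A" on another.
```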
OCR, at least after standards for character encoding, is understandable in the timeline.

“Situationist International”, which I also learned about from this timeline, gave its contributors room for experimentation worth recording. I will assume other collectives were doing the same but did not achieve the same notoriety.

The 1960s’ Geographical Information Systems are the precursors of my favorite “leisure” software: Google Earth. Google Earth is underappreciated. It is so empowering to be able to virtually travel anywhere on Earth (and beyond!) and learn more from there. Tomlinson’s system was very different, but it was the seed of everything that followed, so I find his contribution highly deserving of the timeline.

“The medium is the message” is certainly not a consensus when interpreted from an importance perspective, but it does apply to much of the communication happening today, across all media. I would NOT include this sentence in the timeline.

Instead of the “first two-node network” (a classification I do not agree with), I would pick the underlying key technology as one of the greatest all-time inventions: “packet switching”. It is about digitizing information and dividing/organizing it into digital packets whose sending/receiving order is NOT relevant, contrary to what happens in analogue conversations. Hence the packets can travel different routes, some longer, some shorter, some readily available, some found inaccessible (in wartime, some physical paths can be destroyed, yet it might still be possible to deliver the packets through alternative infrastructure), and get reassembled at the destination according to metadata carried in each packet: its sequence number. To me, this ranks as high as the printing press.
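The reassembly-by-sequence-number idea described above can be sketched in a few lines of Python. This is a toy illustration under my own assumptions (function names, packet size, and tuple layout are invented for clarity), not an actual network protocol implementation.

```python
import random

def send(message: str, packet_size: int = 4):
    """Split a message into packets, each tagged with a sequence number."""
    return [(seq, message[i:i + packet_size])
            for seq, i in enumerate(range(0, len(message), packet_size))]

def receive(packets):
    """Reassemble the message regardless of arrival order,
    using the sequence-number metadata in each packet."""
    return "".join(payload for _, payload in sorted(packets))

packets = send("packet switching in action")
random.shuffle(packets)  # packets may arrive out of order over different routes
assert receive(packets) == "packet switching in action"  # yet reassemble intact
```

The key design point is that order is restored from metadata inside each packet, not from the order of arrival, which is what lets packets take independent routes.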

The Internet changed, and will keep changing, everything. We all stand on the shoulders of giants, and any technology is only possible as the top layer of a big stack of all the previous supporting technologies; so it can sound unfair to say that the Internet is the most important landmark of them all in the timeline under discussion, but that is how I see it.

I agree with the PC’s place in the timeline. It was the first tool enabling individuals’ access to the Internet.

As with “Situationist International”, specific organizations’ work, no matter how interesting, is only subjectively more or less important than others’, so I would not include MIT’s “Interactive Cinema Group”, the “Index Thomisticus”, or the “Dartmouth Dante Project” in the timeline. If they are to be included, what to say of many of Douglas Engelbart’s projects (a fundamental figure in multimedia thinking and doing)? And what about Theodore Nelson’s hypertext works (he coined many of the hyper* expressions and many related ideas)?
I would instead look to include technologies, even if only conceptualized, such as Vannevar Bush’s “Memex”.

TEI and the WWW are foundations, platforms on which people build; so, as “enablers”, they fit my view of what is most justifiable in the timeline. On the other hand, Google Books and Wikipedia are superb, wonderful projects built on top of “enabler technologies”, but not exactly at the same multiplier level.

I published my answer (#1) as a new post at the following URL:

Discussion #2

Now that you have learned more about some of the critical theory behind digital humanities work, we want to know how your thinking has changed? What topics or ideas surprised you the most? What might have been most relevant to your own research interests?

My answer (#2) follows:

Confirming the growing reach of Digital Humanities, plus two “surprising” situations (to be picky)

The further I proceed in the course, the more I feel the constant opportunities for Digital Humanities studies in today’s world, a growing personal interest in the field, and how the label might even apply to some projects of mine.
The strong multidisciplinarity of the field does not surprise me, nor does its heavy reliance on digital/computational techniques, technologies and tools. In that sense, my thinking has not changed.
I have to be picky, but I can identify two situations which I still need to understand better: one is the relevance given to specific projects (“Corpus Thomisticum”, “Dartmouth Dante” and “The Complete Writings and Pictures of Dante Gabriel Rossetti”); the other is a different perspective I might have on the concept of “scale” and its handling by humans.
Regarding the specific projects, I understand their merits and even their pioneering nature, but I am unconvinced that they are, or were, engines for the advancement of Digital Humanities. I see them as examples of it, not as engines for it. For this reason, I was expecting complementary sentences stating that “there are other examples” and/or clarifying their main contributions.
Regarding the perspective of “scale”, I sometimes perceived the one-sided idea that computational approaches scale up without issue, contrary to humans. In fact, scaling up is a huge computational challenge for non-linear problems, and humans can be surprisingly good at handling massive sets of data, for a mix of reasons. One popular example is how only very recently Artificial Intelligence was able to beat humans at the game of Go.

I published my answer (#2) as a new post at the following URL:

dh_animated_timeline.gif (image/gif)
