DH Evaluation: Do you FEEL it?

I will admit, without shame, that I’m still not 100% sure what digital humanities is supposed to be. To be fair, I know it’s a complex topic that can’t be easily defined by a picture next to some words in the dictionary. But I feel that DH consists of so many ideas, tactics, and technologies that defining it becomes nearly impossible – because how can you define something that is always changing, to the point that by the time a definition has been expressed, it already needs to change again?

I keep this in mind as I look to my DH evaluation project and think to myself, “how the hell am I going to judge this thing?” I can’t help but reflect on an assignment I just completed in my other class with Dr. Heba (Visual Rhetoric, if anyone else here is in that class). That assignment involved reading about “visual social semiotics,” which is essentially a method of analysis for critiquing images in much the same way we critique words and language. I am pretty well versed in editing the written word (and the verbal, which is often harder), so I related a lot to the author’s contention that the visual has earned its place in professional communication. She also spoke a lot about how intricate an image is – the angle, color, focus, space, all the things that make an image strong enough to impart a certain feeling or claim on the reader/observer. Visual social semiotics, even the author admits, is not a perfect analysis method: all the details and combinations that communication requires (such as the combination of images and text) to evoke different responses can hardly be analyzed perfectly – there’s just so much possibility.

That is why, for our DH project evaluation, I intend to look at my selected project from a critical but flexible point of view. Rather than comment on every detail, point out every flaw, and tally every success, I will use a general outline that answers the ultimate question: “Did this project do its job, and do it well?”

The first step in this, for me, is gauging the purpose of the project. This isn’t something I need to hear from the author’s mouth; indeed, we rarely get the chance to converse with the creator of a work anyway. No, the project should speak for itself, and so I will look for purpose in its content – what does this project, as a whole, seek to do? Inform, persuade, entertain? For whom is this project created (as in, what end-game audience, rather than the obvious answer of a professor or authority grading it)? The next step is analyzing execution: does the content bear weight? Is the interface easy to navigate and understand? Is the design effective; is the programming done in a way that draws users in and makes them want to engage with the content? My final step will be a “recommendation” of sorts. From a user’s point of view, I will point out not just the details but the overall feeling that results from my interaction with the project: did I feel like I got something out of it, and would I recommend it to others as a worthy example of DH work?

This last step may be a red flag, as it is very subjective and hard to qualify or quantify. But at the end of the day, most analysis is, to some degree, subjective – in the humanities, at least. Going back to the idea of visual social semiotics, can one really look at an image objectively and still walk away having understood or absorbed that image’s message? Can we look at topics in humanities and comment on them without influence of feeling or experience? No, I don’t think so. And that’s not necessarily a bad thing. To me, DH is where technology and humanity intersect – it is where tangible advancement and intellectual growth walk hand-in-hand. To take the humanity – the feeling – out of it, and make it only about what’s right or wrong, good or bad, in a digital context would be insulting to the field. If we are to embrace DH, we must be willing to accept that the analysis of DH will involve not only a critique of the technical, but the intelligent integration of the experience and passion held by those who participate in DH creation and studies.


4 thoughts on “DH Evaluation: Do you FEEL it?”

  1. Very well said, Liz. I was just saying something like this in my class last night. I’m glad I’m not the only one who feels a bit boggled. I think the aha moment will come at the end, when all of this is over. The proverbial onion of this complex phrase, digital humanities, will not be fully peeled until the very last task we complete.

    I appreciate your breakdown of the evaluation process. I think the main point to discuss is the subjectivity of this. I believe that all evaluation is subjective. We are all viewing this project through a lens we have grown to trust. We will elaborate on that lens and produce a scholarly opinion of someone else’s work. Even with quantitative and qualitative glimpses of support, there lies the subjective demon peering from the shadows. I’ve noticed this more with video game research. Research articles can swing from one conclusion to the next: this is bad… this is good… here’s the proof… this scholar said this…
    These discrepancies reveal our subjective view guiding us toward other views that lean to our own opinions.

    I may be way off on this, but I just went off on a tangent regarding the subjectivity of the academic world. I believe that when we look closely at anything, it is hard to pinpoint a finite way of seeing or defining it.

  2. Liz (and Tonya): I have good news and bad news. The good news is that you’re in good company in not being able to pin down, exactly, what DH is.

    The bad news is that no one else can, either. The precise definition of DH will continue to be a subject of debate for a long time. In fact, because of its inter-disciplinarity, I suspect it will be a lot like how the field of American Studies has worked to define itself over the past 80+ years. People in the field can recognize works and methodologies that seem to fall right in its sweet spot, but there are numerous seminal articles concerning the methods of American Studies, its ethos, and its boundaries. DH may in some ways end up the same way.

    But I digress. The real question is, how should we evaluate something? I like your point, Liz, that we evaluate it on the merits of what’s apparent. Something to think about: should we also consider what is not apparent? For example, why its creators chose one platform over another, or coded it in one language rather than another, or decided to include certain kinds of content at the expense of others?

    1. When it comes to evaluating DH projects, I think it’s important that the individual, the committee, the journal, the department, etc. be consistent with the standards they use to evaluate the project. Are they factoring in the platform? Are they considering the coding language? Did the creators exclude content? All of these elements, in my opinion, need to be considered when individualized standards are created. I think this is important in order to achieve fairness and consistency in the DH world. If I were responsible for evaluating DH projects, I would want some type of rubric to guide me through the evaluation process. I think that the rubric could be based on projects deemed successful in the DH world (though that in itself is subjective). However, this is such a difficult thing to do in a field that is constantly changing. The rubric would have to be very broad, with subgroups that apply to various types of DH projects. But what happens when something is invented that falls outside DH’s invisible box?

      So many interesting things to think about!
