Evaluating Digital Humanities

Evaluating Digital Humanities has proven to be a difficult undertaking for me. As we’ve read about Digital Humanities over the past few weeks, I think we have all seen how ambiguous and nebulous the field is. This is due in part to the field’s relative infancy compared to other realms of the humanities. While that ambiguity and vagueness are freeing, allowing the field to take on many different shapes rather than be tethered to one singular set of standards, they also make legitimizing and evaluating the products of the field an arduous task.


One reason this is such a difficult task, as Dr. Schocket mentioned, is the myriad of Digital Humanities projects that exist. As I began reviewing the lists of DH projects to evaluate, there seemed to be a little bit of everything listed, from things I have familiarity with, like Evernote and Tumblr, to things I had never heard of (but wish I had!), such as Cornell Notes PDF Generator and QuarkXPress. All of these products are unique, are useful to the humanities in a variety of ways, promote and illustrate the scholarship of their creators, and have some connection to politics or to the social power of users and creators (just some of the categories for DH as discussed in Debates in the Digital Humanities). Yet these projects are so widely different that it is difficult to compare them with any objectivity and to evaluate them truly and fairly.

Beyond the range of products eligible for evaluation in the field of DH, the list of criteria these products/projects must meet is equally broad, if not broader. As I mentioned earlier in this post, and in comments on this blog, the abstract nature of Digital Humanities is refreshing to me. I often find myself at odds with my own field of Education, where regulations require abstract concepts to be compartmentalized and restricted by established sets of standards and requirements, which, to me, runs directly counter to the artistic nature of the field. Within Education, as within Digital Humanities, there are too many variables for one structured set of criteria to capture the effectiveness of the work produced within the field. I will concede, however, that there should be some core standards by which these products are evaluated, lest there be no progress or evolution within the field.


Of the examples Dr. Schocket provided of existing criteria for DH evaluation, I found the Modern Language Association and The Journal of Digital Humanities to focus on criteria specific to academia and ideology rather than to the methodology or practicality of the product being evaluated. While I’m sure these standards are useful for evaluating the merits of the scholarship behind DH products in academia, the field of DH presents a unique challenge (one that may or may not exist in other fields, but in my limited experience seems particular to DH): these standards cannot be the only ones applied if a DH project is to be evaluated fairly. If we use book reviews, specifically academic book and journal reviews/evaluations, as a baseline for the structure of our own evaluations, I think the shortcomings of this analogy and its application in this instance quickly become obvious.


As Dr. Schocket mentioned, we’ve accepted a paradigm for how books and scholarly articles should be evaluated: it’s author-centric, with little (if any) regard for the “behind the scenes” work like editing, peer review, and publishing. Instead, we evaluate the merits of the work solely as a construct of the author. Additionally, fewer requirements are placed on evaluations of books and academic journals because we rarely consider the collaborative measures required to make the book or journal a success, nor do we consider the audience for the piece, its practicality, its aesthetic value, or even the longevity of its ingenuity or usefulness. This paradigm does not seem to exist in the field of DH, as the aforementioned criteria are all of extreme importance when evaluating a DH project.


Where the MLA and The Journal of Digital Humanities assert a set of standards that seems to better fit the ideology of DH, the LAIRAH Digital Humanities checklist attempts to assert a set of standards that better matches the methodology of the field, offering a checklist for evaluation that covers content, users, management, and dissemination. All of these categories are more widely applicable to the myriad of DH products, but they still amount to a wealth of criteria not applied to other, more “traditional” humanities scholarship (like books and journals), further illustrating just how difficult it is to evaluate DH products objectively.


As if this weren’t enough cause for uncertainty when I set out to tackle this assignment, the concept of revolution created an even greater gray area. In “Bend Until it Breaks: Digital Humanities and Resistance,” Robin Wharton argues, “The capability inherent in digital humanities for resistance is part of what makes digital humanities ‘humanistic’…it’s what connects the digital humanities to the humanities.” This resistance, so necessary for DH’s connection to the humanities, includes breaking things in order to change and improve them. That means breaking the previously accepted paradigms and standards by which scholarly products are evaluated and then lauded or discredited, a break that would make room for digital projects and their creators and would allow digital humanists to be given the same credit in academia as those in other facets of the humanities. While this embrace of resistance to established standards is wonderful for the field of DH, and for any other fields that elect to follow suit, it creates further issues for evaluating DH projects. For example, if a DH project doesn’t meet all of the established criteria set forth by the varying entities governing and guiding the field, does that mean it should be discredited or viewed as without value to the scholarship of the field? Or does it simply mean that its creator(s) are attempting to break the mold and try something revolutionary, and that they still deserve a fair evaluation, and even praise for their work, despite the fact that it doesn’t fit a set of preconceived standards?

This project is challenging for all of the reasons discussed above and more. I think the field of DH is paving the way for other fields to be more inclusive of scholarly projects that appear in modes not traditionally accepted but still necessary, both in ideology and in methodology. I think the ambiguous definition of the field, its purpose, and the standards on which it is judged are of great benefit to the scholars within the field, as they provide a necessary flexibility for academic inquiry and creation. I believe that as more DH products are created and evaluated against the currently established criteria, with an open mind toward rewriting and evolving those criteria, a new paradigm and set of standards will emerge, one that allows for more objective evaluation while still permitting the necessary creativity and ingenuity within the field.

One thought on “Evaluating Digital Humanities”

  1. Cassie, good points all around.

    As a field (if DH is one), we have a long way to go to come up with a commonly accepted set of criteria for evaluation. I like your pointing out the LAIRAH checklist (http://www.ucl.ac.uk/infostudies/research/circah/lairah/features/), which is a good start. But of course, like any checklist, it has its advantages and disadvantages. On the one hand, it’s a list of characteristics that a project should have; on the other, it doesn’t prod us to think about the comparative merits of projects that have met the criteria in similar ways. Another drawback is that it is very much geared to large projects with significant institutional support. For every big project, there are probably scores of smaller ones on a bootstrap. So this is a very tough nut to crack.
