Hi Everyone,
I’ve had an interesting request from a language faculty member. Do any of you know of any universities and/or colleagues taking a “big data” approach to literary analysis?
Sort of along the lines of what Stanford is doing with its Literary Lab (http://litlab.stanford.edu/) or Google’s Ngram Viewer. This all started with a request to digitize two novels to make them searchable, so that searches within the two novels could show where, and in what context, both authors were treating similar topics. That led to the idea of finding technological means to do the same at a much larger scope: perhaps all literary works published on a particular topic within a 20-50 year time frame, across various genres, along with how frequently such texts were published during that period.
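For the two-novel case, even a simple keyword-in-context (concordance) search gets at the “where and in what context” question. A minimal sketch in Python, assuming the digitized novels are available as plain text (the sample passage and search term below are made up for illustration):

```python
import re

def concordance(text, term, window=40):
    """Return keyword-in-context snippets: each occurrence of `term`
    with up to `window` characters of surrounding context."""
    snippets = []
    for match in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
        start = max(match.start() - window, 0)
        end = min(match.end() + window, len(text))
        snippets.append(text[start:end].replace("\n", " "))
    return snippets

# In practice, load each digitized novel from its text file;
# this short passage just demonstrates the output.
sample = "The sea was calm that night. She watched the sea until dawn."
for snippet in concordance(sample, "sea"):
    print(snippet)
```

Running the same search term against both novels and comparing the snippets side by side is essentially what the faculty member is asking for; tools like NLTK provide a ready-made concordance view along the same lines.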
If any of you have leads on people/places/tools that are starting to do this, or are already doing something similar, I would love to reach out to them.
Thanks,
Adan
--------------
Adan Gallardo
FLRC Manager | Language Technology Specialist
Pomona College | 550 N. Harvard Avenue | Claremont, CA 91711