Texts and Images of Austerity in Britain

A new project about austerity in the UK.

Posted by Devin J. Cornell on Dec 30, 2017

Last October I visited Erlangen, Germany to attend a workshop set up by Dr. Tim Griebel and Prof. Dr. Stefan Evert called “Texts and Images of Austerity in Britain. A Multimodal Multimedia Analysis”. Tim and Stefan are leading this ongoing project, which aims to analyze 20k news articles from The Telegraph and The Guardian published from 2010 up to Brexit. I’m working alongside 21 other researchers with backgrounds in discourse analysis, corpus linguistics, computational linguistics, multimodal analysis, and sociology to explore how the discourse of the two news sources compares and changes over time.


[Image: front pages of The Guardian and The Daily Telegraph covering austerity]

Comparison of front pages of The Guardian and The Daily Telegraph after the Greek people voted to reject the austerity measures demanded by the country’s international creditors, including the IMF.

This event was a great opportunity to learn from experienced scholars of corpus linguistics (CL) and discourse analysis from different countries and academic backgrounds. I found it surprisingly difficult to jump into UK politics without any background, but I learned a lot from listening to the presentations and discussions. I spent last summer learning about Colombian politics and had assumed the language barrier was the most difficult part of that research, but even in English it took me a while to understand the economic factors involved in the political discussions.

It was fascinating to compare populist movements in the UK with those in the US and Colombia that I’m more familiar with. Last year I attended a preconference on populism at the International Communication Association annual meeting in San Diego, and the general consensus was that populism is a set of styles (repertoires?) and moral backdrops (intuitions?) that politicians use to build support for political positions. My observation is that the exact positions these styles are applied to may vary widely by country. An inclination towards formal practice theories of culture leads me to believe that we can compare and contrast contexts by identifying sets of discursive repertoires and frameworks that tap into deeply held morals and emotions. I’ll be presenting some of my work on Colombia, from both theoretical and empirical perspectives and using a combination of interviews and Twitter data, at the 2018 Pacific Sociological Association annual meeting. That work will be shared after the proceedings are posted.

Another big impression was the surprising divide between the CL community and fields like sociology, communications, and the digital humanities that study large text corpora using computational methods. While I’m familiar with topic modeling and LSA approaches, the linguists I met use collocations, POS tagging, stemming, lemmatization, and other non-machine-learning approaches to text analysis. When I expressed my surprise to another workshop attendee, they pointed out that CL’s pushback against machine learning was reflected in a presentation that Dr. Andrew Hardie (also a workshop attendee) gave at the 2017 Corpus Linguistics conference critiquing topic modeling; I wrote a response to that critique here.
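To make that methodological contrast concrete, here is a minimal sketch in Python of the two styles side by side: a CL-style collocation analysis with NLTK and an ML-style topic model with gensim. The toy documents and variable names are my own hypothetical stand-ins, not anything from the project’s actual corpus or pipeline.

```python
# Minimal sketch: CL-style collocations vs. ML-style topic modeling.
# Assumes nltk and gensim are installed; the toy corpus below is hypothetical.
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder
from gensim import corpora, models

docs = [
    "the government announced new austerity measures today",
    "critics argue austerity measures hurt public services",
    "public services face deep cuts under the new budget",
]
tokenized = [doc.split() for doc in docs]

# CL-style: rank bigram collocations by a log-likelihood association score.
# This is counting plus a statistical test, with no model fitting involved.
finder = BigramCollocationFinder.from_words(
    [token for doc in tokenized for token in doc]
)
print(finder.nbest(BigramAssocMeasures().likelihood_ratio, 5))

# ML-style: fit a small LDA topic model over the same documents,
# estimating latent topics rather than computing direct statistics.
dictionary = corpora.Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(doc) for doc in tokenized]
lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary,
                      random_state=0)
print(lda.print_topics())
```

The difference in outputs hints at why the two communities’ standards of evidence differ: the collocation scores are reproducible statistics over observed counts, while the LDA topics are estimates of latent variables that require interpretation.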

This workshop also forced me to consider the contributions that sociology could make to computational text analysis. My thought is that sociologists can help to place the media in a larger societal context by exploring economic, cultural, and organizational factors that affect and are affected by the media. Discourse analysis, the digital humanities, and communications all examine the causes, content, and effects of media on people, but I think sociology has the opportunity to explore this at a collective scale. Cultural analysis could further contribute by looking at the moral undergirding of political rhetoric and how it relates to the construction of the social categories that people understand and navigate.

Overall, this workshop was a great opportunity for me to meet new people and be exposed to totally different approaches to computational text analysis and broader discourse theory. I’m excited to see where these collaborations will lead!

EDIT: the end result was finally published!