Category Archives: seminar

Digital Classicist seminar by MA DH students (Friday July 3)

The Pedagogical Value of Postgraduate Involvement in Digital Humanities Departmental Projects

Francesca Giovannetti, Asmita Jain, Ethan Jean-Marie, Paul Kasay, Emma King, Theologis Strikos, Argula Rublack, Kaijie Ying (King’s College London)

Digital Classicist London & Institute of Classical Studies seminar 2015

Friday July 3rd at 16:30, in Room 212, 26-29 Drury Lane, King’s College London, WC2B 5RL

The SNAP (Standards for Networking Ancient Prosopographies) Project at King’s College London, funded by the Arts and Humanities Research Council (AHRC) under the Digital Transformations big data scheme, seeks to act as a centralized portal for the study of ancient prosopographies. It links together dispersed, heterogeneous prosopographical datasets into a single collection. It will model a simple structure using Web and Linked data technologies to represent relationships between databases and to link from references in primary texts to authoritative lists of persons and names. By doing so it particularly addresses the issue of overlapping data between different prosopographical indexes. It has used as its starting point three large datasets from the classical world – the Lexicon of Greek Personal Names, Trismegistos, and the Prosopographia Imperii Romani – and aims to eventually be a comprehensive focal point for prosopographical information about the ancient world.
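The linking structure described above can be illustrated with a short sketch: person records from separate prosopographical datasets are mapped to a single shared identifier, expressed here as simple subject–predicate–object triples. All URIs, record identifiers and the predicate name below are hypothetical placeholders, not the project's actual vocabulary; a real Linked Data deployment would use an owl:sameAs-style property and a proper RDF serialisation.

```python
# Illustrative sketch of the kind of linking SNAP performs (hypothetical
# URIs and predicate): records for one person in several datasets are all
# tied to one shared identifier, forming a virtual authority file.

def link_records(snap_uri, dataset_records):
    """Return triples asserting that each dataset record refers to the
    same person as the shared SNAP identifier."""
    triples = []
    for record_uri in dataset_records:
        # In real Linked Data this might be an owl:sameAs-style predicate.
        triples.append((snap_uri, "snap:same-as", record_uri))
    return triples

# Hypothetical records for one person appearing in the three datasets.
records = [
    "http://example.org/lgpn/V1-00001",  # Lexicon of Greek Personal Names
    "http://example.org/tm/person/1",    # Trismegistos
    "http://example.org/pir/A0001",      # Prosopographia Imperii Romani
]

triples = link_records("http://example.org/snap/person/1", records)
for s, p, o in triples:
    print(s, p, o)
```

Once such triples exist, a query for the shared SNAP identifier can retrieve every dataset's record for that person, which is how overlapping entries across prosopographical indexes can be reconciled.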

A team of volunteer postgraduate students from the Department of Digital Humanities at King’s College London has been involved in the further development of certain parts of the project, building upon the skills learnt in the department’s Masters degrees. These include coding tasks with Python, RDF and SPARQL queries, improvements to the final HTML pages, and administrative tasks such as communicating and negotiating with potential contributors to expand the dataset.

This initiative provides the students with the opportunity to apply these skills to a large-scale project beyond the usual scope of Masters assignments, and to experience how a team of digital humanists works towards a common objective. This offers a more rounded perspective on how the different components of a digital humanities project interact with and mutually support each other. The talk will analyse the pedagogical value of these initiatives for postgraduate students approaching the world of work or continued academic study.


The seminar will be followed by wine and refreshments.

DH/Classics seminar: Perseus, Open Philology and Greco-Roman studies for the 21st century

Digital Humanities and Classics Research Seminar

Wednesday June 24th, 18:00
Room K3.11, Strand Campus, King’s College London, WC2R 2LS

Professor Gregory Crane
Universität Leipzig and Tufts University
Perseus, Open Philology and Greco-Roman studies for the 21st century


Professor Crane is the Alexander von Humboldt Professor of Digital Humanities at Leipzig, and the Winnick Family Chair of Technology and Entrepreneurship and Professor of Classics at Tufts University. He completed his doctorate in classical philology at Harvard University. From 1985, he was involved in planning the Perseus Project as a co-director and is now its Editor-in-Chief. He has received, among others, the Google Digital Humanities Award 2010 for his work in the field.

DH Seminar: Julianne Nyhan, Investigation of earliest contributions to Humanist

Were Humanists and Digital Humanities always so very different? An investigation of the earliest contributions to Humanist

Julianne Nyhan (University College London)

When: Tuesday March 3rd, 18:15 start
Where: Anatomy Museum, Strand Building 6th Floor, King’s College London, Strand, London WC2R 2LS

Abstract: Until recently the history of Digital Humanities has, with a few notable exceptions (see, for example, relevant entries in the bibliography that I am in the process of compiling), mostly been neglected by the DH community as well as by the mainstream Humanities. Of the many research questions that wait to be addressed, one set pertains to the history of the disciplinary formation of Digital Humanities. What processes, attitudes and circumstances (not to mention knowledge and expertise) conspired, and in what ways, to make it possible for DH to become disciplined in the ways that it has (and not in other ways)? What might answers to such questions contribute to new conversations about the forms that DH might take in the future? Here I will make a first and brief contribution to answering such far-reaching questions by identifying and analysing references to disciplinary identity that occur in conversations conducted via the Humanist Listserv in its inaugural year.

About Dr Nyhan: Dr Julianne Nyhan is Lecturer in Digital Information Studies in the Department of Information Studies, University College London. Her research interests include the history of computing in the Humanities, oral history, and most aspects of digital humanities. Her recent publications include the co-edited Digital Humanities in Practice (Facet 2012) and Digital Humanities: a Reader (Ashgate 2013). She is at work on a book (Springer Verlag 2015) on the history of Digital Humanities, part of the wider Hidden Histories project. Having recently completed a number of interviews with the female keypunch operators who were trained by Roberto Busa in the 1950s and 1960s to work on the Index Thomisticus project, she is also at work on a paper about this. Among other things, she is a member of the Arts and Humanities Research Council (AHRC) Peer Review College, the communications editor of Interdisciplinary Science Reviews, and a member of various other editorial and advisory boards. She tweets @juliannenyhan.

Digital Classicist London 2015 call for papers

The Digital Classicist London seminars provide a forum for research into the ancient world that employs innovative digital and interdisciplinary methods. The seminars are held on Friday afternoons from June to mid-August in the Institute of Classical Studies, Senate House, London, WC1E 7HU.

We are seeking contributions from students as well as established researchers and practitioners. We welcome papers discussing individual projects and their immediate contexts, but also wish to accommodate the broader theoretical considerations of the use of digital methods in the study of the ancient world, including ancient cultures beyond the classical Mediterranean. You should expect a mixed audience of classicists, philologists, historians, archaeologists, information scientists and digital humanists, and take particular care to cater for the presence of graduate students in the audience.

There is a budget to assist with travel to London (usually from within the UK, but we have occasionally been able to assist international presenters to attend).

To submit a proposal for consideration, email an abstract of no more than 500 words to by midnight GMT on March 8th, 2015.

Organised by Gabriel Bodard, Stuart Dunn, Simon Mahony and Charlotte Tupman. Further information and details of past seminars, including several peer-reviewed publications, are available at:

Digital Humanities seminar, spring 2015

The Digital Humanities research seminar will run fortnightly, on Tuesday evenings during term, in the Anatomy Museum on the Strand campus (with a few exceptions, noted in the schedule below; the February 5 seminar, for example, is a lunchtime meeting in the Drury Lane 2nd floor seminar room). We hope to discuss the place of DH within the arts and humanities and within the academy as a whole. All are welcome.

When: 18:15 start (except Feb 5, 12, Mar 10, 20)
Where: Anatomy Museum, Strand Building 6th Floor
King’s College London, Strand London WC2R 2LS (except Feb 5/Mar 10/Mar 20)

January 20, 2015:
Richard Gartner, Giles Greenway, Faith Lawrence, Jennifer Pybus (King’s College London)
Round table: Big Data in the Digital Humanities

February 5 (NB: Thursday, 13:00 start, in 26-29 Drury Lane, room 212):
Clare Hooper (University of Southampton IT Innovation Centre)
Understanding Disciplinary Presence in Interdisciplinary Fields: analysing contributions in the Digital Humanities and Web Science

February 12 (NB: Thursday, 14:00 start, in 26-29 Drury Lane, room 212):
Michael Lesk (Rutgers)
The Convergence of Curation

February 17:
Ségolène Tarte (University of Oxford)
Of Features and Models: A reflexive account of image processing experiences across classics and trauma surgery
(Joint seminar with Classics Department)

March 3:
Julianne Nyhan (University College London)
Were Humanists and Digital Humanities always so very different? An investigation of the earliest contributions to Humanist

March 10 (NB: 17:30 start, in Council Room K2.29):
Irene Polinskaya (KCL), Askold Ivantchik (Bordeaux) & Gabriel Bodard (KCL)
Byzantine Inscriptions of the Northern Black Sea (details)

March 17:
Marilyn Deegan, Simon Tanner (KCL), Sam Rayner (UCL), et alii.
Panel: Future of the Academic Book

March 20 (NB: Friday, 12:30 start, in 26-29 Drury Lane, room 212):
Nicole Coleman (Stanford)
Palladio: Visual Tools for Thinking Through Data

Linking Ancient People, Places, Objects and Texts

Linking Ancient People, Places, Objects and Texts
a round table discussion
Gabriel Bodard (KCL), Daniel Pett (British Museum), Humphrey Southall (Portsmouth), Charlotte Tupman (KCL); with response by Eleanor Robson (UCL)

18:00, Tuesday, December 2nd, 2014
Anatomy Museum, Strand Building 6th Floor
King’s College London, Strand London WC2R 2LS

As classicists and ancient historians have become increasingly reliant on large online research tools over recent years, it has become ever more imperative to find ways of integrating those tools. Linked Open Data (LOD) has the potential to leverage both the connectivity, accessibility and universal standards of the Web, and the power, structure and semantics of relational data. This potential is being exploited by several scholars and projects in the area of ancient world and historical studies. The SNAP:DRGN project is using LOD to bring together many technically varied databases and authority lists of ancient persons into a single virtual authority file; the Pleiades gazetteer and service projects such as Pelagios and PastPlace are creating open vocabularies for historical places and networks of references to them. Museums and other heritage institutions are at the forefront of work to encode semantic archaeological and material culture data, and projects such as Sharing Ancient Wisdoms and the Homer Multitext are developing citation protocols and an ontology for relating texts with variants, translations and influences.

The panel will introduce some of these key projects and concepts, and then the audience will be invited to participate in open discussion of the issues and potentials of Linked Ancient World Data.

Seminar: Text Mining for Digital Humanities

Text Mining for Digital Humanities

Professor Timo Honkela (presented by Tuula Pääkkönen)
National Library of Finland, Helsinki
Tuesday, 11 November 2014, 6.00 pm
Anatomy Museum, Strand Building 6th Floor,
King’s College London, Strand London WC2R 2LS
With the increased availability of texts in electronic form, text mining has become commonplace as an attempt to extract interesting, relevant and/or novel information from text collections in an automatic or semi-automatic manner. Text mining tasks include, for example, categorization, clustering, topic modelling, named entity recognition, taxonomy and conceptual model creation, sentiment analysis, and document summarization. The majority of text mining research has focused on corpora that were born digital. However, for the humanities and social sciences, the digitisation and analysis of originally printed or handwritten documents is essential. These documents may contain a large proportion of OCR errors, which have to be taken into account in the subsequent analytical processes. In this presentation, text mining of historical documents is discussed in some detail. Attention is paid to the methodological challenges caused by the noisy data, and to the future possibilities related to multilinguality and context-sensitive analysis of large collections.
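One of the simplest building blocks behind the tasks listed above (categorization, clustering, document comparison) can be sketched with standard-library Python alone: representing each document as a bag of words and comparing documents by cosine similarity. This is only an illustration of the basic idea; real pipelines, and especially the handling of OCR noise mentioned above, are far more involved.

```python
# Minimal bag-of-words cosine similarity, stdlib only. Toy documents;
# not any particular corpus from the talk.
import math
import re
from collections import Counter

def bag_of_words(text):
    """Tokenise crudely and count word occurrences."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    norm = norm_a * norm_b
    return dot / norm if norm else 0.0

doc1 = "the digitisation of printed documents"
doc2 = "digitisation of handwritten documents"
doc3 = "sentiment analysis of social media"

sim_12 = cosine(bag_of_words(doc1), bag_of_words(doc2))
sim_13 = cosine(bag_of_words(doc1), bag_of_words(doc3))
print(sim_12 > sim_13)  # thematically closer documents score higher
```

Clustering and categorization can then be built on top of such pairwise similarities; OCR noise degrades the word counts themselves, which is why noisy historical corpora require extra normalisation or error-tolerant matching before this step.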
Since the beginning of 2014, Professor Timo Honkela has worked at the Department of Modern Languages, University of Helsinki, and at the National Library of Finland, Center for Preservation and Digitisation, in the area of digital humanities. Before this he was head of the Computational Cognitive Systems research group at Aalto University School of Science. With close to 200 scientific publications, Honkela has long experience in applying statistical machine learning methods to modelling linguistic and socio-cognitive phenomena. Specific examples include leading the development of the GICA method for analysing subjectivity of understanding, an initiating role in the development of the WEBSOM method for visual information retrieval and text mining, and collaboration with Professor George Legrady in creating Pockets Full of Memories, an interactive museum installation. Lesser-known work includes statistical analysis of Shakespeare’s sonnets, historical interviews, and climate conference talks, and analysis of philosophical and religious conceptions.
(Unfortunately, at the last minute Prof Honkela finds himself unable to be with us for his presentation; it will instead be given by his colleague Tuula Pääkkönen.)


PHEME: Computing Veracity in Social Media

(Guest post from Dr Anna Kolliakou, who gave a guest seminar in DDH a few weeks ago. Anna and Robert would be very interested in collaborating with anyone in DH who has interests in their project.)

Computing Veracity in Social Media

From a business and government point of view there is an increasing need to interpret and act upon information from large-volume media such as Twitter, Facebook, and newswire. However, knowledge gathered from online sources and social media comes with a major caveat: it cannot always be trusted. PHEME will investigate models and algorithms for the automatic extraction and verification of four kinds of rumours (uncertain information or speculation, disputed information or controversy, misinformation, and disinformation) and their textual expressions.

Veracity intelligence is an inherently multi-disciplinary problem, which can only be addressed successfully by bringing together currently disjoint research on language technologies, web science, social network analysis, and information visualisation. We are therefore seeking to develop cross-disciplinary social semantic methods for veracity intelligence, drawing on the strengths of these four disciplines. The Department of Digital Humanities, an international leader in the application of technology in the social sciences, was the appropriate platform for researchers from the SLAM Biomedical Research Centre at KCL, one of PHEME’s partners, to present their proposed work in veracity intelligence for mental healthcare, with the aim of developing collaborations with academics interested in social media analysis, NLP and text mining. For more information…

Seminar: June 2, 2014: Robert Stewart and Anna Kolliakou

Social media poses three major computational challenges, dubbed by Gartner the 3Vs of big data: volume, velocity, and variety. PHEME will focus on a fourth crucial but hitherto largely unstudied big data challenge: veracity. The relationship between clinicians and their patients has already been changed by the internet in three waves. First, the provision of pharmaceutical data, diagnostic information and advice from drug companies and health care providers created a new source for self-directed diagnosis. Secondly, co-creation sites like Wikipedia and patient support forums (e.g. PatientsLikeMe) have more recently added a discursive element to the didactic material of the first wave. Thirdly, the social media revolution has acted as an accelerant and magnifier to the second wave.

Prof Robert Stewart and Dr Anna Kolliakou, from the SLAM Biomedical Research Centre at King’s College London, have started the process of re-tooling medical information systems to compete with this new context. This will facilitate practical applications in the healthcare domain, to enable clinicians, public health professionals and health policy makers to analyse high-volume, high-variety, and high-velocity internet content for emerging medically-related patterns, rumours, and other health-related issues. This analysis may in turn be used (i) to develop educational materials for patients and the public, by addressing concerns and misconceptions and (ii) to link to analysis of the electronic health records.

In this seminar, they will discuss the development of four main demonstration studies that aim to:

  1. Identify social media preferences and dislikes about certain medication and treatment options and how these present in clinical records
  2. Monitor the emergence of novel psychoactive substances in social media and identify if and how promptly they appear in clinical records
  3. Explore how mental health stigma arises in social media and presents in clinical records
  4. Ascertain the type of influence social media might have on young people at risk of self-harm or suicide

Paul Caton: Six terms fundamental to modelling transcription

Department of Digital Humanities lunchtime seminar. This is a preview of a paper Paul Caton will deliver at the Digital Humanities conference in Lausanne later this month.

Paul uses the HSM model[1] to understand transcriptions. He defines a series of abstractions, which can help us understand the process of transcription, the objects involved in the process and its agents.


Surface

A physical manifestation of an object which contains marks.


Mark

An alteration on a surface performed by an agent, e.g. scratches, prints, etc. Marks are perceptible by an agent.


Reading

The process by which an agent attempts to discover and establish a type sequence of marks on a surface. Readings can be entirely speculative. Reading assigns token status to marks, and the reading agent must comprehend the concept of writing. A positive result state occurs when an agent assigns token status to at least one mark with certainty greater than 0. A negative result state occurs when no marks are assigned token status by an agent with certainty greater than 0. A zero result state occurs when the agent has no certainty either way as to whether a mark is or is not a token.

Token sequence

A token sequence must contain at least one token. A token sequence is not right or wrong; it just exists. Transcription is dependent on the token sequence produced by the reading. A T token sequence is the result of a process of transcription.


Exemplar

An exemplar is a combination of a surface and marks on which an act of reading is attempted, and is the basis for a transcription. The status of being an exemplar is relative: if one person makes a transcription FOO of exemplar BAR, and another person then makes a transcription BAZ of FOO, then FOO has the status of exemplar with respect to BAZ.


Document

When a positive reading result occurs, a token sequence is identified, and the token sequence is recognised as type, then, and only then, can a surface-mark combination be considered a document. An agent attempts a reading when it is believed that a surface-mark combination is a document, or at least that there is the possibility it might be one.

Paul observes that the process of transcription must involve intention in order to be distinct from reproduction or copying.

A document is not necessary for the act of transcription to occur; only the intention of recognising token sequences from marks is required.
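The three reading result states described above can be formalised in a short sketch. This is an illustrative reconstruction, not part of Caton's paper: the signed-certainty representation and the function below are hypothetical, chosen only to make the positive/negative/zero distinction concrete.

```python
# Hypothetical formalisation of reading result states: for each mark, a
# signed certainty in [-1, 1], where > 0 means "this mark is a token",
# < 0 means "this mark is not a token", and 0 means no certainty either way.
from enum import Enum

class ReadingResult(Enum):
    POSITIVE = "positive"  # at least one mark assigned token status with certainty > 0
    NEGATIVE = "negative"  # no marks assigned token status, judged with certainty > 0
    ZERO = "zero"          # no certainty either way about any mark

def classify_reading(certainties):
    """Classify a reading from per-mark signed certainties."""
    if any(c > 0 for c in certainties):
        return ReadingResult.POSITIVE
    if any(c < 0 for c in certainties):
        return ReadingResult.NEGATIVE
    return ReadingResult.ZERO

print(classify_reading([0.8, 0.0]))    # ReadingResult.POSITIVE
print(classify_reading([-0.5, -0.9]))  # ReadingResult.NEGATIVE
print(classify_reading([0.0, 0.0]))    # ReadingResult.ZERO
```

On this reading of the model, a single confidently identified token is enough to make the result positive, which matches the definition above ("token status to at least one mark with certainty greater than 0").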


Sperberg-McQueen, C. M., Claus Huitfeldt, and Allen Renear (2001). Meaning and interpretation of markup. Markup Languages: Theory & Practice 2.3: 215–234. On the Web at

Huitfeldt, Claus, and C. M. Sperberg-McQueen (2008). What is transcription? Literary & Linguistic Computing 23.3: 295–310.

Caton, Paul (2009). Lost in Transcription: Types, Tokens, and Modality in Document Representation. Paper given at Digital Humanities 2009, University of Maryland, College Park, June 2009.

Sperberg-McQueen, C. M., Claus Huitfeldt, and Yves Marcoux (2009). What is transcription? Part 2. Talk given at Digital Humanities, College Park, Maryland. Slides on the Web at

Huitfeldt, Claus, Yves Marcoux, and C. M. Sperberg-McQueen (2010). Extension of the type/token distinction to document structure. Paper presented at Balisage: The Markup Conference 2010, Montréal, Canada, August 3 – 6, 2010. In Proceedings of Balisage: The Markup Conference 2010. Balisage Series on Markup Technologies, vol. 5. doi:10.4242/BalisageVol5.Huitfeldt01. On the Web at

Caton, Paul (2012). On the Term ‘Text’ in Digital Humanities. Literary & Linguistic Computing. 28.2: 209–220.

Caton, Paul (2013). Pure transcriptional encoding. Paper given at Digital Humanities 2013, Lincoln, Nebraska.

Sperberg-McQueen, C. M., Yves Marcoux, and Claus Huitfeldt (2014).  Transcriptional implicature: a contribution to markup semantics. Paper to be given at Digital Humanities 2014, Lausanne, Switzerland.

  1. Transcription model based on work by Huitfeldt and Sperberg-McQueen (2008) and continued jointly with Marcoux (2009, 2010).  ↩

Digital Classicist CFP (2014)

The Digital Classicist London seminars have since 2006 provided a forum for research into the ancient world that employs digital and other quantitative methods. The seminars, hosted by the Institute of Classical Studies, are on Friday afternoons from June to mid-August in Senate House, London.

We welcome contributions from students as well as from established researchers and practitioners. We welcome high-quality papers discussing individual projects and their immediate context, but also accommodate broader theoretical consideration of the use of digital technology in Classical studies. The content should be of interest both to classicists, ancient historians or archaeologists, and to information specialists or digital humanists, and should have an academic research agenda relevant to at least one of those fields.

There is a budget to assist with travel to London (usually from within the UK, but we have occasionally been able to assist international presenters to attend).

To submit a proposal for consideration, email an abstract of approximately 500 words to by midnight UTC on March 9th, 2014.

Further information and details of past seminars are available at: