Monthly Archives: November 2013

Visualising Macroscopic Deterioration of Parchment and Writing via Multispectral Images (a short report)

On Monday at lunchtime we gathered in the Seminar room to listen to Alejandro Giacometti’s talk about multispectral images and their potential to help recover data from damaged manuscripts.

Before I get into the details of the talk I have to admit I don’t know much about multispectral images, nor about the physical properties of cultural heritage documents. What caught my interest was the point where the spectrum gets broken down into narrower wavelength ranges, and where specific ranges, including infrared and ultraviolet, can help reveal a hidden world we thought destroyed and long gone.

After introducing us to the project, Alejandro showed us how he and his team deteriorated a deaccessioned 1753 parchment manuscript with chemical and mechanical agents to simulate the damage that documents can suffer over time. He then demonstrated how multispectral images, coupled with a custom metric, can help recover information and also suggest what events a manuscript may have gone through.

The differences between images can be impressive. For example, an area partially covered with aniline dye looks blackened to the naked eye, but the text remains readable in images captured at wavelengths towards the infrared. It is essentially as if a filter were applied and the content revealed according to what you were looking for.

Imaged sample of the treated parchment.

Although the results of this experiment are most immediately apparent when you focus on the text, Alejandro’s project was not specifically about finding lost texts, but about creating a register of imaging methods and wavelengths, with a statistical record of how effective each was at recovering text lost to different kinds of damage.
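Just to illustrate the general idea for myself (this is not Alejandro’s actual metric; the band names, data and scoring below are entirely invented), a per-band “effectiveness” score could be as simple as comparing the inked area with the surrounding parchment in each captured band:

```python
import numpy as np

def best_band_for_text(bands, text_mask):
    """Pick the band in which the inked area differs most from the surrounding parchment."""
    scores = {}
    for name, image in bands.items():
        ink = image[text_mask].mean()
        parchment = image[~text_mask].mean()
        scores[name] = abs(parchment - ink)  # a naive contrast score per wavelength band
    return max(scores, key=scores.get)

# Toy data: a visible-light band where the dye hides everything, and a near-infrared
# band where the ink still contrasts with the parchment.
text_mask = np.zeros((4, 4), dtype=bool)
text_mask[1:3, 1:3] = True
bands = {
    "450nm (blue)": np.full((4, 4), 0.1),
    "940nm (near-IR)": np.where(text_mask, 0.2, 0.8),
}
print(best_band_for_text(bands, text_mask))  # -> '940nm (near-IR)'
```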

This is new and early research with plenty of scope for further work. Given the opportunity to keep the study going, Alejandro sees the potential to run more tests and to combine several agents so as to simulate real-life events, broadening and refining the metric. That could give historians a very powerful tool for recovering more data from cultural heritage documents, whether they were damaged by a glass of wine dropped by an absent-minded scribe, by a fire in a library, by their parchment being scraped and reused several times, or simply by the natural course of time.

Next month it will be time for a tougher audience: Alejandro will be presenting in front of his viva committee, so this is our chance to wish him good luck!

Also working on this project:

  • Alberto Campagnolo
  • Lindsay MacDonald
  • Simon Mahony
  • Melissa Terras
  • Stuart Robson
  • Tim Weyrich
  • Adam Gibson

Reporting from MEX 2013

I have recently struggled to get back into writing, whether for lack of time or of the ability to focus on a single subject. That is why it has taken me so long to report back on my experience at MEX last September.

MEX is a conference about mobile user experience that happens periodically in London. As they put it themselves:

MEX is an event focused on user modes as the raw ingredients of digital experience across phones, tablets, PCs, wearables and more. Learn from expert speakers, develop skills & ideas in facilitated creative sessions and gain lifelong access to a network of fellow pioneers.

I applied for a scholarship and, luckily enough, I was granted a place.

The line-up of speakers was pretty impressive and I was looking forward to finding out more about the latest developments in the field.

My goals

Since I have only recently started to approach UX in a more studied way rather than instinctively (Finally!, you say; I agree), I wanted to find out:

  • How much of what I do by instinct is right and how much is wrong
  • How do you sell UX to a client?
  • How much psychology is involved?
  • Tips on how to approach UX

The talks

★★★☆☆

I enjoyed the enthusiasm with which Sofia Svanteson and Per Nordqvist presented their ‘Explore’ talk, on how task-based interactions are becoming limiting: they only work when a user has a specific goal to achieve. James Taplin told us how technology and a focus on UX can help improve the world by promoting ‘Principal Sustainability’. I learnt how much easier the process of learning can be when you get the UX right, as Arun Vasudeva showcased some examples of rich content integration in education.
The interesting concept of co-creation was mentioned by Lennart Andersson, while Ben Pirt showed us how badly UX is implemented in hardware design! Where is the power button?
Jason DaPonte talked about his Sunday Drive app and hinted at the still untapped potential of integrating the UI with existing devices (car navigation systems in this case).
We then learnt that the majority of young users no longer listen to digital music in a ‘traditional’ way, preferring platforms like YouTube, Spotify and Grooveshark, and how this creates an opportunity to develop new designs and offer a different experience, as Brittney Bean introduced us to her new project Songdrop between one joke and the next.

After all this information in one single day I wasn’t sure I could absorb more on day 2, but I did.

I discovered my inner (Forrest) Gump, as James Haliburton put it: how our receptive mode depends on context and spontaneity, and how indeterminate and non-committal it can be.
Amy Huckfield intends to improve the lives of children with ADHD with her research ‘Children with ADHD: The untapped Well of Future Creatives’, using an interactive wristband to help the child re-engage and refocus after losing attention.
Rich Clayton explained how his Travel Time app could help businesses analyse their geographic data affordably, because time can be more important than distance.
And finally Davide “Folletto” Casali told us that 70% of projects fail for lack of user acceptance. We tend to adapt the tools we have, rather than look for the right ones to satisfy our needs, even when developing. To quote Bruno Munari:
“Complicare è semplice. Semplificare è difficile.” (To complicate is easy. To simplify is difficult.)

The ‘Creative sessions’

I didn’t particularly enjoy the Creative Sessions. Attendees were split into groups and were supposed to discuss and explore different topics. I was in the Create group, and to this day I am not sure what our objectives were. Maybe the topic was too broad. We started various interesting conversations on how to define a currency other than money to help potential users (we had chosen students as our example target), but we ended up with a few ideas for a platform that could support this currency exchange rather than an idea for how to enhance creation through user experience.

Conclusions

I find myself using a fairly common approach to user experience design, although I wasn’t aware I was doing it. That means there is a lot of room for improvement, and from now on UX will be a conscious part of my design from the very first sketching stage.

Is UX changing?
Yes, it is. Users are getting ‘smarter’ and UX needs to adapt quickly and continuously. Fairly obvious, but it’s precisely because it’s obvious that it’s easy to forget. It’s not just the devices that are changing; users are too.
People want rich(-er) content and they seek it out, feeling like they are making their own choices; but they do look for guidance, and good UX design can be that guide.

Off topic things I learnt

The th sound is really a challenge for us Italians (I hope Davide “Folletto” Casali won’t get offended by this comment).
But I also learnt that the letter j makes Scandinavians struggle.

Dataset refining and simplified queries

A current project to better understand the evolution of the Celtic language is making use of interactive maps that bring together many different data sets for comparison.

The project takes existing data, which often exists in flat tabular form, and refines it into a more scalable relational database that can be used to perform more complex queries than were previously possible.

For instance, one data set contains a list of burial sites with details including the nature of the burial, the type of pottery and other artifacts found in association with the individuals at the site and, where possible, the gender and approximate age of each individual.

When these data were originally collected, a spreadsheet was used to classify each site, but the entries often followed a free-text format, and it was perhaps not anticipated that this form of recording would be hard to use programmatically in the future. For instance, a typical summary description of the remains found at a site, recorded in a spreadsheet cell, might be:

“adult male burial asso with adult female and baby”.

Whilst this is descriptive and easily understood by a human, it is hard to make use of in a structured query. For this reason much effort has been expended on normalising the spreadsheet data into a flexible relational database form. In the previous example, whilst we still retain one site record, we link this site to three separate buried individuals, each of which in turn has its own strictly controlled categorisations:

  • Individual 1: Adult Male
  • Individual 2: Adult Female
  • Individual 3: Neonate

Each of these individuals may have descriptions of the manner in which they were interred, for example orientation of the long axis of the body, position of the head or the side of the body it is lying on. Additionally, the individual grave goods are associated with the correct individual to aid interpretation. Wherever possible, all such free text fields have been converted into discrete lists of valid options.
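As a rough sketch of this kind of normalisation (the vocabulary and parsing rules below are purely illustrative and not the project’s actual schema), the free-text description above could be split into per-individual records along these lines:

```python
import re

# Illustrative controlled vocabularies for ages and sexes.
AGE_TERMS = {"adult": "Adult", "baby": "Neonate", "child": "Juvenile"}
SEX_TERMS = {"male": "Male", "female": "Female"}

def normalise_burial_description(text):
    """Split a description like 'adult male burial asso with adult female and baby'
    into one controlled record per buried individual."""
    individuals = []
    # Split on the connectives that typically separate individuals in these cells.
    for fragment in re.split(r"\basso with\b|\band\b|,", text.lower()):
        words = fragment.split()
        age = next((AGE_TERMS[w] for w in words if w in AGE_TERMS), None)
        sex = next((SEX_TERMS[w] for w in words if w in SEX_TERMS), None)
        if age or sex:
            individuals.append({"age": age, "sex": sex})
    return individuals

print(normalise_burial_description("adult male burial asso with adult female and baby"))
# [{'age': 'Adult', 'sex': 'Male'}, {'age': 'Adult', 'sex': 'Female'}, {'age': 'Neonate', 'sex': None}]
```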

This structure allows us to use queries like ‘Return all burial sites where there are more than five individuals, all male, regardless of age, found with a Cairn, but not a Pit, where at least one individual is oriented NW-SE’.

This is an admittedly complex example and one that is possibly unlikely to yield results, but it demonstrates the sort of complexity available.

If this sort of query were expressed in SQL, it would be verbose and probably unintelligible to a non-technically minded researcher.
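To give a sense of why, here is a rough sketch of how that example query might look, assuming a much-simplified hypothetical schema (sites, individuals and site_features tables) rather than the project’s real one:

```python
# Hypothetical schema: sites(id, name), individuals(id, site_id, sex, age, orientation),
# site_features(site_id, feature). This only illustrates the verbosity involved.
EXAMPLE_QUERY = """
SELECT s.id, s.name
FROM sites s
JOIN individuals i ON i.site_id = s.id
WHERE s.id IN (SELECT site_id FROM site_features WHERE feature = 'Cairn')
  AND s.id NOT IN (SELECT site_id FROM site_features WHERE feature = 'Pit')
  AND s.id IN (SELECT site_id FROM individuals WHERE orientation = 'NW-SE')
GROUP BY s.id, s.name
HAVING COUNT(i.id) > 5                                       -- more than five individuals
   AND SUM(CASE WHEN i.sex <> 'Male' THEN 1 ELSE 0 END) = 0  -- all of them male
"""
print(EXAMPLE_QUERY)
```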

To help researchers assemble similar queries, the data sets will be carefully pre-indexed to expose the most useful facets of the data. Each record can exhibit none, one or many of the indexed facets. These options are presented in simple checkbox lists that can be used to include or exclude particular facets and to progressively narrow the results.

This screen grab from a functional prototype shows a user requesting burials from the database that are of type “Cist”, have been “Disturbed” and DO NOT include individuals aligned “East to West”.

[Screen grab: facet selection checkboxes]
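A minimal sketch of that include/exclude logic (assuming, purely for illustration, that each record simply carries a set of facet tags, rather than how the real index is built) might look like this:

```python
def filter_by_facets(records, include=(), exclude=()):
    """Keep records that carry every 'include' facet and none of the 'exclude' facets."""
    results = []
    for record in records:
        facets = record["facets"]
        if all(f in facets for f in include) and not any(f in facets for f in exclude):
            results.append(record)
    return results

burials = [
    {"site": "Site A", "facets": {"Cist", "Disturbed", "Aligned NW-SE"}},
    {"site": "Site B", "facets": {"Cist", "Disturbed", "Aligned E-W"}},
    {"site": "Site C", "facets": {"Pit", "Disturbed"}},
]

# Burials of type "Cist" that are "Disturbed", excluding those aligned east to west.
print(filter_by_facets(burials, include=("Cist", "Disturbed"), exclude=("Aligned E-W",)))
# Only Site A matches.
```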

Although these results could be displayed to the user in tabular form, the researcher is presumably interested in their geographic distribution. The results are therefore mapped so that it can be seen where these similar burials are located.

A common problem when the result set is large, or particularly concentrated in a small area, is that map markers tend to overlay each other: the relative density of clusters cannot be easily seen, nor can individual markers be easily identified. To deal with this, the mapped results are clustered and styled so that the researcher can see at a glance where her results are concentrated. Intensity of colour reflects an increasing count in these two example screen grabs (the second being the same result set as the first at a higher zoom level). A useful side effect of this approach is that the web browser does not have to render too many markers, which can impact performance.

[Screen grabs: clustered map results at two zoom levels]
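As a rough illustration of the clustering idea (a simple grid-based approach, which may well differ from the method actually used), markers can be binned into cells whose size shrinks as the zoom level increases:

```python
from collections import Counter

def cluster_markers(points, zoom):
    """Group (lat, lon) points into grid cells whose size depends on the zoom level."""
    cell_size = 10.0 / (2 ** zoom)  # degrees per cell; halves with each zoom step
    cells = Counter((round(lat / cell_size), round(lon / cell_size)) for lat, lon in points)
    # One marker per cell: its centre and the number of points it contains,
    # which can drive the colour intensity of the rendered cluster.
    return [
        {"lat": cy * cell_size, "lon": cx * cell_size, "count": count}
        for (cy, cx), count in cells.items()
    ]

points = [(57.1, -3.2), (57.12, -3.21), (57.5, -4.0), (51.5, -0.1)]
print(cluster_markers(points, zoom=3))
# At this zoom the three northern points merge into one marker with count 3,
# leaving a single-count marker for the southern outlier.
```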

Kiln development: convenience features

Until recently, Kiln has felt to me like a bare-bones developer’s tool, providing a simple but powerful framework for doing whatever XML publishing you might want, without any hand-holding beyond some common elements added on top of Cocoon. That has changed somewhat, to the extent that a new user, completely unfamiliar with XSLT, is now guided in a new project to a site displaying TEI as HTML, complete with faceted search, without touching any code.

This, coupled with the new tutorial, hopefully makes Kiln a much more attractive proposition to those who are not so technically inclined (or who have not been forced to use and learn it by working at DDH!). Now I’ve gone back to adding in elements that aid the developer, and I’ll describe three of those.

Continue reading Kiln development: convenience features

EpiDoc training workshop, Rome

Photo by DAVID ILIFF. License: CC-BY-SA 3.0

At the beginning of October I ran a pre-conference tutorial on EpiDoc markup and tools at the TEI members’ meeting in Rome, co-taught with Ryan Baumann of Duke University. (Tutorial abstract on conference site.) We were hosted in the brand spanking new Vetreria Sciarra building of La Sapienza, on Via dei Volsci.

The first day of the tutorial was focused on EpiDoc recommendations for TEI encoding of epigraphic and papyrological texts. Continue reading EpiDoc training workshop, Rome

Lean UX in DH projects

As the user experience (UX) lead (and often sole UX practitioner) in a small team of software developers I’ve often had to adopt a pragmatic approach to incorporating a UX workflow into our project work. This has involved keeping clear of trendy methodologies and buzzwords, instead focusing on the key outcomes of creating the best possible user experience for the end-users of our products. To do this we follow a user-centred design process wherever possible, undertaking research with our target user groups, designing interfaces that allow them to achieve their goals and testing our designs with “real” users in the form of one-to-one usability testing.

Continue reading Lean UX in DH projects