
EyCon Workshop 2: “Using Visual AI Applied to Digital Archives”— Highlights and Reflections

Devika Mehra (Loughborough University)

In what way does AI (Artificial Intelligence) impact traditional archival practice and theory? What do these new and emergent human-machine tools mean for future research and practice, and for refiguring and widening access to visual archives? These were some of the key questions raised at the second EyCon workshop, ‘Using Visual AI Applied to Digital Archives’, organised by the EyCon project (Early Conflict Photography and Visual AI) and held at the Imperial War Museums (London) on 8-9 June 2023.

The EyCon project is co-funded by the AHRC and the Labex Past in the Present, and is led by Dr Lise Jaillant (Loughborough University) and Dr Julien Schuh (Paris Nanterre University), in partnership with Université Paris Cité and Prof Daniel Foliard. It aims to harness AI-reliant tools to analyse a large corpus of colonial photographs. EyCon’s database will include thousands of historical photographs documenting armed violence, and the project is partnering with a network of archival institutions in France and the UK. This workshop acted as a forum to engage with these pressing questions and with the impact of visual AI techniques on digital archives. While Day 1 had three seventy-five-minute sessions, Day 2 had two practical sessions spread over three hours. This format fostered close engagement with French and UK cultural institutions confronted by the challenges of sensitive digitised archives.

On Day 1, Dr Lise Jaillant (UK PI, EyCon Project; Loughborough University) and Dr Maria Castrillo (Head of Collections Access and Research, Imperial War Museums) welcomed the guests in the morning. The first session then started with two presentations focusing on using computational methods to explore illustrations in digital visual collections. The session opened with the presentation, ‘Getting the Picture: Developing Computational Methods to Interrogate Large Datasets of Historical Book Illustrations,’ by Professor Julia Thomas and Irene Testini from Cardiff University. They introduced the methods of computational analysis used for The Illustration Archive database and their latest project, ‘Finding a Place’. The Layout Parser used to analyse captions for illustrations highlighted the potential of computational methods for working with historical illustrations. They also discussed challenges encountered in their project, such as awkward layouts and the poor quality of digitised images, adding to ongoing debates about working with large corpora of historical data. This was complemented by the second presentation, ‘Using multimodal machine learning to study the visual representation of war,’ by Dr Thomas Smits (Universiteit Antwerpen). Drawing on the example of the Indonesian War of Independence (1945-1949), his presentation explored how multimodal machine learning models can be used to analyse the visual representation of war at scale, and pointed to current and new possibilities for moving from monomodality to multimodality in Digital Humanities research.
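
As a rough illustration of this kind of layout-analysis pipeline (and not the Cardiff team's actual configuration), a minimal sketch using the open-source layoutparser library might pair detected illustrations with the nearest text block below them; the PubLayNet model, the confidence threshold, and the caption heuristic are all assumptions:

```python
# A minimal sketch (not the Cardiff team's configuration) of pairing illustrations
# with nearby captions using the layoutparser library and a PubLayNet model.
import cv2
import layoutparser as lp

image = cv2.imread("page_scan.jpg")[..., ::-1]  # hypothetical digitised page, BGR -> RGB

model = lp.Detectron2LayoutModel(
    "lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config",
    extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.7],
    label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
)
layout = model.detect(image)

figures = [b for b in layout if b.type == "Figure"]
texts = [b for b in layout if b.type == "Text"]

for fig in figures:
    # Candidate captions: text blocks that start below the bottom edge of the figure.
    below = [t for t in texts if t.coordinates[1] > fig.coordinates[3]]
    if below:
        caption = min(below, key=lambda t: t.coordinates[1] - fig.coordinates[3])
        print("Illustration at", fig.coordinates, "-> likely caption at", caption.coordinates)
```

In practice, awkward layouts of the kind discussed above are exactly where such a simple heuristic breaks down.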

After a short break, the next session, on new models of accessing digital archives using visual AI, examined current computational methods and tools used for accessing, analysing, and disseminating visual and audio-visual collections. Eléonore Plantard and Véronique Pontillon-Valedon (ECPAD: Établissement de communication et de production audiovisuelle de la Défense) began this session with their presentation, ‘The New ECPAD Digital Platform: When Workflows Question the Communication of Archives,’ on the different interfaces and software used for archiving sensitive images and handling their dissemination to the public. In making images available to the public, the GLAM sector deals with restrictions on communicability arising from the Heritage and Intellectual Property Codes and the Defence Code. Using their new database model, they demonstrated that new computational methods offer GLAM practitioners ways to handle, preserve, and promote sensitive visual archives.

This was followed by another thought-provoking presentation by Giacomo Alliata, who shared the work he is doing with his colleagues, Yumeng Hou and Sarah Kenderdine, at the Laboratory of Experimental Museology (eM+), EPFL. His presentation, ‘Augmenting the Archival Experience of Embodied Knowledge Through Visual AI: a Computational Framework,’ examined new computational methods and tools to augment moving image archives, focusing on how AI can support new ways of visualising motion- and posture-based performances and on the importance of understanding the body. He demonstrated how new and emergent computational practices can unlock massive audio-visual collections through two use cases: the Prix de Lausanne archive and the Hong Kong Martial Arts Living Archive.

The guests regrouped after lunch for the final session of the day, ‘Visual AI, Digital Archives and GLAM institutions: Challenges and Future Outputs.’ In his talk, ‘Back to basics: why technology alone is not enough to unlock the potential of visual archives,’ Geoff Browell (King’s College London Archives) addressed the challenge of balancing technology and archival expertise to augment the potential of visual archives. Key concerns raised included short-term, theme-based digitisation projects; the need for specialist cultural or historical knowledge to contextualise these collections; communicability restrictions concerning ‘controversial’ collections; and the issue of long-term digital preservation. He pointed out the need to develop long-term partnerships among interdisciplinary communities working with archives and to build sustainable digital humanities projects. Dr Christiane Sibille and Nicole Graf (ETH-Bibliothek, Zurich) carried the discussion forward on the use of machine learning methods and the generation of metadata in cultural heritage institutions in their presentation, ‘Collections as Data in the Context of Visual AI.’ Through their work on the Image Archive, they demonstrated how archivists and volunteers used a crowdsourcing platform and computational tools to create high-quality metadata. Day 1 ended with a tour of the Second World War Galleries organised by the Imperial War Museums team, which focused on the narratives created by museum spaces to help the wider public understand the fraught history of war.

Day Two of the workshop consisted of two practical sessions: one led by Imperial War Museums colleagues Alan Wakefield and Helen Mavin, and the second led by EyCon French team members Dr Julien Schuh and Marina Giardinetti. The practical sessions offered an opportunity for an interactive discussion on war photo archives, sensitivity issues, and computational tools.

Alan Wakefield opened the first session with his presentation on ‘The IWM Photograph Archive & Images of warfare in the British Empire 1900 – 1929.’ He gave a critical overview of the Imperial War Museums collections and used specific examples to highlight how his team works to develop a nuanced understanding of the histories these collections represent. He raised questions about how to reconsider the collections and strengthen their weaker areas in order to tell the stories of every individual who contributed to the war effort but remains hidden within them. This was followed by Helen Mavin’s presentation, ‘Addressing sensitivities at IWM,’ which carried the discussion forward to issues of sensitivity and to reconsidering the metadata and accessibility of photographs and films. Drawing on recent IWM projects with external collaborators (including the ‘Provisional Semantics’ project), she discussed IWM’s collection development and digitisation programmes and the challenges of incorporating multiple perspectives into an evolving digitised national collection.

In the second practical session, ‘Visual AI for the EyCon Project, Sensitive Images, and the EyCon Database,’ Dr Julien Schuh and Marina Giardinetti presented the EyCon database, the visual similarity model, and the tools developed in the context of the EyCon project to make visible conflicts that are rarely present in contemporary visual culture. They introduced the Layout Parser model and the techniques they have used to identify sensitive content and add trigger warnings with the help of AI. They explained how they trained computational tools and then applied them to their historical dataset, which allowed similar items (both images and their captions) to be grouped together. They also demonstrated how the metadata and the visual features of the pictures in the database are combined to enable multimodal search, making it possible to link pictures across different databases.
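
The project's own models and training data are not detailed in this summary, but as a purely hypothetical sketch of how AI-assisted sensitivity flagging could work, an off-the-shelf image-text model can score a scan against descriptive prompts and store the result as a content-warning flag; the model, prompts, and decision rule below are assumptions, not the EyCon method:

```python
# Hypothetical zero-shot sensitivity screen; EyCon's actual approach is not reproduced here.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # joint image/text embedding model

prompts = [
    "a photograph depicting violence or the aftermath of violence",   # sensitive
    "a photograph of everyday scenes with no violent content",        # non-sensitive
]
prompt_emb = model.encode(prompts)

def flag_for_warning(image_path: str) -> bool:
    """Return True when the image scores closer to the 'sensitive' prompt."""
    image_emb = model.encode(Image.open(image_path))
    scores = util.cos_sim(image_emb, prompt_emb)[0]
    return bool(scores[0] > scores[1])

record = {"file": "example_scan.jpg", "caption": "placeholder caption"}
record["content_warning"] = flag_for_warning(record["file"])  # warning shown before display
print(record)
```

Any flag produced this way would still need human review before being presented to users.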

In short, the sessions covered wide-ranging issues concerning the potential and challenges of AI applied to archival practices in GLAM institutions, leading to a reassessment of large visual collections and historical databases.

Workshop Report: Multimodal Visual Similarity Algorithms and Digitized Photo Archives

Jonathan Dentler, Postdoctoral Researcher

Content warning: This post contains images that may be upsetting or distressing and that display violence and the aftermath of violence. They are shown for educational and scholarly purposes, and we ask that readers engage with the material ethically and responsibly.

On May 9th, EyCon team members and institutional partners from the Musée Quai Branly-Jacques Chirac, the Service Historique de la Défense (SHD), and the Établissement de Communication et de Production Audiovisuelle de la Défense (ECPAD) met at the ECPAD facility at the Fort d’Ivry to discuss the team’s exciting research into the possibilities that multimodal visual AI opens up for historical work on photography and visual culture. Daniel Foliard began by introducing the workshop’s objectives. Such tools can help group together similar digital images, for example to trace the circulation of an image across different publications. They can also assist with functions such as object detection and stylistic or formal analysis. Finally, such tools can produce new metadata in a semi-automated manner by, for instance, suggesting probable attributions for variables such as a photograph’s date of production, medium or process, photographer, or place depicted. Visual similarity tools become much more powerful when combined with existing textual metadata as an additional vector in a multimodal fashion, and this is precisely what EyCon project computer science intern Mohamed Salim Aissi has been working on recently. In this workshop, we wanted to demonstrate a proof-of-concept test based on a selection of photos from the EyCon database centered on the atrocities carried out by the Italian Army in Tripoli in the fall of 1911.
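
To make the grouping idea concrete, a minimal sketch with off-the-shelf image embeddings is shown below; the CLIP model, the folder of scans, and the similarity cut-off are illustrative assumptions rather than the EyCon tool itself:

```python
# Illustrative sketch of near-duplicate grouping with off-the-shelf image embeddings.
from pathlib import Path
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")

paths = sorted(Path("scans").glob("*.jpg"))  # hypothetical folder of digitised photographs
embeddings = model.encode([Image.open(p) for p in paths], convert_to_tensor=True)

# Pairs whose embeddings are very close are candidate reproductions of the same
# photograph across different publications or archives.
pairs = util.paraphrase_mining_embeddings(embeddings, top_k=5)
for score, i, j in pairs:
    if float(score) > 0.9:
        print(f"{paths[i].name} <-> {paths[j].name}  similarity={float(score):.2f}")
```

Pairs that clear the threshold are only candidates for the same photograph; as the discussion below makes clear, they still need a historian's verification.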

Figure 1: Pierre Schill, workshop presentation. “The Paradox of the Italo-Turkish War: an abundant production and wide distribution of images, and yet an ‘under-exposed’ conflict.” 

In order to demonstrate some of the possibilities and limitations of these tools, we invited Pierre Schill to share his research on the photographic coverage of the Italo-Turkish war. M. Schill is a researcher who has published on the photos from the war in Libya, and who is currently collaborating with the EyCon team on a forthcoming article on the topic; his book on the subject contains additional context and analysis for these images. The idea was to show how photo historians analyze and interpret historical photos in part by tracing the circulation of images from their production through their reproductions in various formats and publications. This can help establish context by making arguments for probable attributions, as well as show how photographs’ meanings are inflected by editorial choices such as cropping or captioning. (Figure 1) M. Schill began by suggesting that a paradox of the war in Tripoli is that, while it was extensively photographed by newspaper correspondents, it is little remembered in Europe today. In part, he suggested, this is because of the difficulty of identifying camera operators, as well as the diversity of sites and modes of conservation of the photographs. To get a better idea of the conflict and its visual record, it is necessary to draw links between the various archives holding that record in order to make attributions and establish context.

Figure 2: Pierre Schill, workshop presentation. “A scene with two prisoners and three soldiers: two photographs in circulation in the press.”

(Figure 2) Based on his painstaking archival work, M. Schill showed us a number of examples of how he compared photographs in different collections in order to establish that they depicted the same event from different angles, or that the same photographer or persons appear in different scenes. (Figure 3) He also demonstrated that photos have been misattributed; visual cues such as the angle of the sun in different images of the same scene show that it was photographed multiple times, likely by different photographers. 

Figure 3: Pierre Schill, workshop presentation. “Play of shadows and identity: beyond the analogy of the scene (public hanging, Dec. 6, 1911).”

The key question for this workshop was whether it might be possible to produce visual similarity tools capable of capturing this level of nuance, helping to perform tasks such as suggesting probable attributions for photographs. For example, using a multimodal approach, a visual similarity tool could quickly find two instances of the same image in different publications, different formats, and/or different archives. If the image is attributed to a photographer in one instance but not in the other, the text-processing part of the pipeline could detect this attribution and suggest it for the other instance of the image.
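
A sketch of that propagation step, with invented identifiers and a placeholder photographer name, might look like the following once near-duplicate images have already been matched:

```python
# Sketch of attribution propagation across matched instances of the same image.
# Identifiers, the photographer placeholder, and the record structure are invented
# for illustration only.
records = {
    "shd_forbin_012": {"photographer": "Photographer A (placeholder)",
                       "source": "fonds Forbin, SHD"},
    "daily_mirror_reprint": {"photographer": None,
                             "source": "reproduction in a daily newspaper"},
}

def suggest_attribution(matched_ids, records):
    """Propagate a photographer name from attributed instances to unattributed ones,
    flagging the result as a machine suggestion rather than writing it as fact."""
    known = [records[i]["photographer"] for i in matched_ids if records[i]["photographer"]]
    suggestions = {}
    for i in matched_ids:
        if records[i]["photographer"] is None and known:
            suggestions[i] = {"suggested_photographer": known[0],
                              "provenance": "visual-similarity match, unverified"}
    return suggestions

print(suggest_attribution(["shd_forbin_012", "daily_mirror_reprint"], records))
```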

However, there are a number of dangers here. First, without a sufficiently sophisticated tool, this might simply automate misattributions. In the case of the false attribution of the hanging photograph to Gaston Chérau that Pierre Schill shared with us, would a multimodal visual similarity approach be sufficiently nuanced to recognize that the changed angle of the sun indicated that significant time had elapsed between exposures, and that this might be strong evidence for different photographers? If the database with which the algorithm was working included the image from Domenica del Corriere in which the image was credited to “Dott Comboni” (likely an Italian military photographer who took the image later in the day), perhaps it could deliver the correct suggestion. This raises two important points. First, it is important to work with very large amounts of data. Second, and even more crucially, it is essential that the researcher be aware of the limitations of the database with which the machine is working, and that semi-automation of this function in no way means that the suggestions it generates are infallible. A second-order problem also results from the coexistence of these two factors: while working with large data sets is important—indeed, it is central to the appeal of algorithmic approaches—an ever-larger data set also risks obscuring the tool’s limits from the researcher, lending the results a misleading aura of authority.

Following M. Schill’s presentation, Mohamed Salim Aissi presented his new work on a multimodal visual similarity tool, which will be built into EyCon’s public-facing database. We first constructed a limited image database to test the tools. This set included images of loose photographs documenting the Tripoli atrocities from the Forbin fonds at the SHD, as well as instances of those images reproduced in publications such as The Daily Mirror and Excelsior. In order to make sure that the tools could pick out similarities among a much larger set of non-similar photos, we also included the roughly 60,000 images from the fonds Valois, images produced by the French military during the Great War.

(Figures 4-5) M. Aissi explained how visual similarity algorithms and textual similarity algorithms function, translating the technical subtleties into terms that were comprehensible for the non-specialist audience.

Figure 4: Mohamed Salim Aissi. Graphic explanation of layers produced by a visual similarity tool. 

Figure 5: Mohamed Salim Aissi. Graphic explanation of how textual metadata is vectorized to create probabilistic predictions for word order. This is how the textual aspect of the multimodal tool operates.

Figure 6: Mohamed Salim Aissi. Demonstration of multimodal search function for retrieving similar images from a large corpus.

(Figure 6) He then demonstrated the prototype he created, which can be used to search solely on the basis of visual similarity, solely on the basis of text, or with a multimodal approach. For example, a user will be able to query the database using an image and ask for the ten most similar images in the database. Alternatively, the user could enter a query using textual terms and see what photographs are proposed. Finally, the user can search using an image file and add additional vectors to the search on the basis of text that should be associated with the image.
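
A minimal sketch of these three query modes, assuming a shared image-text embedding space and a simple weighted blend of the two query vectors (not the actual prototype), could look like this:

```python
# Sketch of image-only, text-only, and blended multimodal queries over a small corpus.
from pathlib import Path
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")

paths = sorted(Path("scans").glob("*.jpg"))  # hypothetical corpus of digitised photographs
corpus_emb = model.encode([Image.open(p) for p in paths], convert_to_tensor=True)

def search(image_path=None, text=None, alpha=0.5, k=10):
    """Query by image, by text, or by a weighted blend of both embedding vectors."""
    parts = []
    if image_path:
        parts.append(model.encode(Image.open(image_path), convert_to_tensor=True))
    if text:
        parts.append(model.encode(text, convert_to_tensor=True))
    query = alpha * parts[0] + (1 - alpha) * parts[1] if len(parts) == 2 else parts[0]
    hits = util.semantic_search(query, corpus_emb, top_k=k)[0]
    return [(paths[h["corpus_id"]].name, h["score"]) for h in hits]

# e.g. search(image_path="query.jpg", text="public hanging, Tripoli, 1911", alpha=0.6)
```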

Figure 7: Mohamed Salim Aissi. Multimodal vectorization of metadata and resulting clustering of terms and images.

(Figure 7) Using several examples, M. Aissi demonstrated that this combined approach delivered a much higher degree of accuracy in finding different versions of an image across different publications for which cropping or other formal features of the reproduction varied.

Finally, Marina Giardinetti discussed some of the issues of metadata treatment raised by visual similarity tools. Noting that the structure of the metadata in the database is essential for such tools to function well, she explained the EyCon team’s methodology for conserving metadata and its sources in ways that make the production of metadata traceable and allow artificial intelligence tools to function as well as possible. EyCon integrates the source of all metadata attached to an image so that its origin is documented and conserved. The EyCon website will display this metadata, allowing the user to get an idea of some of the variables that the multimodal algorithm used in making its determinations. If an algorithm suggests metadata, this will be noted in the display.
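
A minimal sketch of what such provenance-aware metadata might look like is given below; the field names and values are invented for illustration and do not reproduce the EyCon schema:

```python
# Sketch of provenance-aware metadata: every value carries its source and a flag
# indicating whether it is a machine suggestion.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MetadataField:
    name: str
    value: str
    source: str                       # e.g. "press caption", "archival inventory", "algorithm"
    machine_generated: bool = False   # True when the value is an AI suggestion
    confidence: Optional[float] = None

record = [
    MetadataField("date", "1911-12-06", source="press caption"),
    MetadataField("photographer", "unknown", source="archival inventory"),
    MetadataField("photographer (suggested)", "Photographer A (placeholder)",
                  source="multimodal similarity model",
                  machine_generated=True, confidence=0.72),
]

for f in record:
    flag = " [machine suggestion]" if f.machine_generated else ""
    print(f"{f.name}: {f.value}{flag} | source: {f.source}")
```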

In the closing discussion, the participants considered the potential pitfalls and strengths of such tools. In comparison to many of the AI-powered tools that have come to the forefront of public discussion in the past year or two—notably ChatGPT—a strength of this system is that it clearly shows whether metadata was produced by a human or a machine, and presents at least some ways for the user to get an idea of how the algorithm has arrived at its conclusions. One key associated issue is metadata standards—EyCon is promoting discussion on this issue in France, and encouraging the use of the International Image Interoperability Framework (IIIF) standard. IIIF standardizes the delivery of image files from servers to web displays where they can be used and interacted with. It eases the delivery of richer functionality for image or audio-visual files beyond simple viewing and, crucially, it attaches metadata to the digital object, which makes it useful for preserving context when it comes to archival images.
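
For readers unfamiliar with the standard, a hand-written, minimal IIIF Presentation 3.0 manifest illustrates how descriptive metadata, including a note on whether a value is a machine suggestion, travels with the image; all URLs and values below are placeholders:

```python
# Minimal IIIF Presentation 3.0 manifest built as a Python dict; URLs and metadata
# values are placeholders, not EyCon records.
import json

manifest = {
    "@context": "http://iiif.io/api/presentation/3/context.json",
    "id": "https://example.org/iiif/tripoli-1911/manifest",
    "type": "Manifest",
    "label": {"en": ["Photograph, Tripoli, 1911 (reproduction)"]},
    "metadata": [
        {"label": {"en": ["Date"]}, "value": {"en": ["1911 (from press caption)"]}},
        {"label": {"en": ["Photographer"]},
         "value": {"en": ["Unattributed; similar image credited elsewhere (machine suggestion)"]}},
    ],
    "items": [{
        "id": "https://example.org/iiif/tripoli-1911/canvas/1",
        "type": "Canvas",
        "height": 3000,
        "width": 4000,
        "items": [{
            "id": "https://example.org/iiif/tripoli-1911/page/1",
            "type": "AnnotationPage",
            "items": [{
                "id": "https://example.org/iiif/tripoli-1911/annotation/1",
                "type": "Annotation",
                "motivation": "painting",
                "target": "https://example.org/iiif/tripoli-1911/canvas/1",
                "body": {
                    "id": "https://example.org/images/tripoli-1911.jpg",
                    "type": "Image",
                    "format": "image/jpeg",
                    "height": 3000,
                    "width": 4000,
                },
            }],
        }],
    }],
}

print(json.dumps(manifest, indent=2))
```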

Finally, we discussed the question of design and user experience. From our perspective, the display and design we propose for the public-facing database have several benefits. Because the user has to do some work to define the search parameters, the process is deliberately left slightly clunky: instead of making the experience as seamless and smooth as possible, we wanted to show the system’s artificiality, finding a balance between ergonomics and resistance. Ideally, this should help the user realize that the suggestions made by the AI are not infallible and do not exclude other possibilities that might have been missed because of the limitations of the inputs or of the algorithm. We also discussed the potential uses for researchers: while these tools may help us reanimate images and their circulations, they are ultimately most helpful in accelerating certain time-consuming operations performed by the researcher, allowing them to focus on other tasks requiring higher-order analysis and synthesis. The key challenge for such tools going forward may be finding ways to translate the impenetrable depths of the technological “black box” into terms that the average user can understand, mediating its function such that the user has insight into the variables at play, or the possibilities that might have been missed. If this is not accomplished, there is a very real danger that AI could degenerate from a research tool into a hindrance, obscuring aspects of the analogue archive behind a veneer of computational objectivity.

We would like to thank Véronique Pontillon Valedon, in charge of scientific activities at the ECPAD, for welcoming the workshop participants to the facility at the Fort d’Ivry.