Jonathan Dentler, Postdoctoral Researcher
Content warning: This post contains images that may be upsetting or distressing, depicting violence and its aftermath. They are shown for educational and scholarly purposes, and we ask that the reader engage with the material ethically and responsibly.
On May 9th, EyCon team members and institutional partners from the Musée du Quai Branly-Jacques Chirac, the Service Historique de la Défense (SHD), and the Établissement de Communication et de Production Audiovisuelle de la Défense (ECPAD) met at the ECPAD facility at the Fort d’Ivry to discuss the team’s research into the possibilities that multimodal visual AI opens up for historical research on photography and visual culture. Daniel Foliard began by introducing the workshop’s objectives. Such tools can help group together similar digital images, for example by tracing the circulation of an image across different publications. They can also assist with tasks such as object detection and stylistic or formal analysis. Finally, they can produce new metadata in a semi-automated manner by, for instance, suggesting probable attributions for variables such as a photograph’s date of production, medium or process, photographer, or place depicted. Visual similarity tools become much more powerful when existing textual metadata is combined with the image as an additional vector in a multimodal fashion, and this is precisely what EyCon project computer science intern Mohamed Salim Aissi has been working on recently. In this workshop, we wanted to demonstrate a proof of concept based on a selection of photographs from the EyCon database centered on the atrocities carried out by the Italian Army in Tripoli in the fall of 1911.
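To make the multimodal idea more concrete, the sketch below shows one way an image and its textual metadata can be projected into a shared vector space and blended into a single search vector. It is only an illustration of the principle, assuming a CLIP-style joint embedding model (here the clip-ViT-B-32 checkpoint distributed with the sentence-transformers library); the model, weighting, and file names are placeholders rather than the EyCon tool’s actual implementation.

```python
# Illustrative only: blend an image embedding with the embedding of its textual
# metadata, assuming a CLIP-style model from the sentence-transformers library.
import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")  # joint image/text embedding space

def multimodal_vector(image_path: str, caption: str, alpha: float = 0.5) -> np.ndarray:
    """Combine an image vector and a caption vector into one search vector."""
    img_vec = model.encode(Image.open(image_path), normalize_embeddings=True)
    txt_vec = model.encode(caption, normalize_embeddings=True)
    vec = alpha * img_vec + (1 - alpha) * txt_vec
    return vec / np.linalg.norm(vec)  # unit length, so cosine similarity is a dot product

# Hypothetical example: a print from the Forbin fonds and its press caption.
# query = multimodal_vector("forbin_tripoli_001.jpg", "Execution of rebels, Tripoli, December 1911")
```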
Figure 1: Pierre Schill, workshop presentation. “The Paradox of the Italo-Turkish War: an abundant production and wide distribution of images, and yet an ‘under-exposed’ conflict.”
In order to demonstrate some of the possibilities and limitations of these tools, we invited Pierre Schill to share his research on the photographic coverage of the Italo-Turkish War. M. Schill is a researcher who has published on the photographs of the war in Libya and is currently collaborating with the EyCon team on a forthcoming article on the topic. You can find his book on the subject here, which contains additional context and analysis for these images. The idea was to show how photo historians analyze and interpret historical photographs in part by tracing the circulation of images from their production through their reproductions in various formats and publications. This can help establish context by supporting arguments for probable attributions, as well as show how photographs’ meanings are inflected by editorial choices such as cropping or captioning. (Figure 1) M. Schill began by suggesting that a paradox of the war in Tripoli is that, while it was heavily photographed by newspaper correspondents, it is little remembered in Europe today. In part, he suggested, this is because of the difficulty of identifying camera operators, as well as the dispersal of the photographs across diverse sites and modes of conservation. To get a better idea of the conflict and its visual record, it is necessary to draw links between the various archives holding that record in order to make attributions and establish context.
Figure 2: Pierre Schill, workshop presentation. “A scene with two prisoners and three soldiers: two photographs in circulation in the press.”
(Figure 2) Drawing on his painstaking archival work, M. Schill showed us a number of examples of how he compared photographs in different collections in order to establish that they depicted the same event from different angles, or that the same photographer or persons appeared in different scenes. (Figure 3) He also demonstrated that some photographs have been misattributed: visual cues such as the angle of the sun in different images of the same scene show that it was photographed multiple times, likely by different photographers.
Figure 3: Pierre Schill, workshop presentation. “Play of shadows and identity: beyond the analogy of the scene (public hanging, Dec. 6, 1911).”
The key question for this workshop was whether it might be possible to produce visual similarity tools capable of capturing this level of nuance, helping to perform tasks such as suggesting probable attributions for photographs. For example, using a multimodal approach, a visual similarity tool could quickly find two instances of the same image in different publications, different formats, and/or different archives. If the image is attributed to a photographer in one instance but not in the other, the text-processing component could detect the attribution and suggest it for the unattributed instance.
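As a rough sketch of what such a suggestion step could look like, the snippet below walks through a set of records, looks for near-duplicate vectors, and proposes the known photographer for the unattributed copy while flagging the result as machine-generated. The record structure, threshold, and field names are assumptions for illustration, not the project’s actual schema.

```python
# Hedged sketch: propagate a photographer attribution between near-duplicate images.
import numpy as np

SIMILARITY_THRESHOLD = 0.95  # assumed cutoff for "same image, different reproduction"

def suggest_attributions(records):
    """records: dicts with 'id', a unit-normalised 'vector', and an optional 'photographer'."""
    suggestions = []
    for target in records:
        if target.get("photographer"):
            continue  # already attributed by a human source
        for source in records:
            if source is target or not source.get("photographer"):
                continue
            if float(np.dot(target["vector"], source["vector"])) >= SIMILARITY_THRESHOLD:
                suggestions.append({
                    "target": target["id"],
                    "suggested_photographer": source["photographer"],
                    "evidence": source["id"],
                    "provenance": "machine-suggested",  # shown to the user, never silently saved
                })
    return suggestions
```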
However, there are a number of dangers here. First, without a sufficiently sophisticated tool, this might simply automate misattributions. In the case of the false attribution of the hanging photograph to Gaston Chérau that Pierre Schill shared with us, would a multimodal visual similarity approach be sufficiently nuanced to recognize that the changed angle of the sun indicated that significant time had elapsed between exposures, and that this might be strong evidence for different photographers? If the database with which the algorithm was working included the image from the Domenica del Corriere in which the image was credited to “Dott. Comboni” (likely an Italian military photographer who took the image later in the day), perhaps it could deliver the correct suggestion. This raises two important points. First, it is important to work with very large amounts of data. Second, and even more crucially, it is essential that the researcher be aware of the limitations of the database with which the machine is working, and understand that semi-automation of this function in no way means that the suggestions it generates are infallible. A second-order problem results from the coexistence of these two factors: while working with large data sets is important (indeed, it is central to the appeal of algorithmic approaches), an ever-larger data set also risks obscuring the tool’s limits from the researcher, lending the results a misleading aura of authority.
Following M. Schill’s presentation, Mohamed Salim Aissi presented his new work on a multimodal visual similarity tool, which will be built into EyCon’s public-facing database. We first constructed a limited image database to test the tools. This set included digitized loose photographs documenting the Tripoli atrocities from the Forbin fonds at the SHD, as well as instances of those images reproduced in publications such as The Daily Mirror and Excelsior. To make sure that the tools could pick out similarities within a much larger set of non-similar photographs, we also included the roughly 60,000 images of the fonds Valois, produced by the French military during the Great War.
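For a corpus of this size, the similarity search itself has to be efficient. A minimal sketch of one common approach is given below, assuming the embedding vectors have already been computed and using the FAISS library for nearest-neighbour search; the EyCon implementation may rely on different tooling.

```python
# Sketch of indexing ~60,000 embedding vectors for fast similarity search with FAISS.
import faiss
import numpy as np

def build_index(vectors: np.ndarray) -> faiss.IndexFlatIP:
    """vectors: an (n, d) float32 array of unit-normalised image embeddings."""
    index = faiss.IndexFlatIP(vectors.shape[1])  # inner product = cosine on unit vectors
    index.add(vectors)
    return index

# Retrieving the ten nearest neighbours of a query vector:
# scores, ids = index.search(query.reshape(1, -1).astype("float32"), k=10)
```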
(Figures 4-5) M. Aissi explained how visual similarity algorithms and textual similarity algorithms function, translating the technical subtleties into terms that were comprehensible for the non-specialist audience.
Figure 4: Mohamed Salim Aissi. Graphic explanation of layers produced by a visual similarity tool.
Figure 5: Mohamed Salim Aissi. Graphic explanation of how textual metadata is vectorized to create probabilistic predictions for word order. This is how the textual aspect of the multimodal tool operates.
Figure 6: Mohamed Salim Aissi. Demonstration of multimodal search function for retrieving similar images from a large corpus.
(Figure 6) He then demonstrated the prototype he created, which can be used to search solely on the basis of visual similarity, solely on the basis of text, or with a multimodal approach. For example, a user will be able to query the database with an image and ask for the ten most similar images it contains. Alternatively, the user could enter a query using textual terms and see which photographs are proposed. Finally, the user can search using an image file and add further vectors to the search on the basis of text that should be associated with the image.
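The sketch below illustrates how those three modes might be expressed against an index like the one above. The function names, weighting, and interface are placeholders chosen for clarity; the actual prototype’s interface may differ, but the principle is the same: build a query vector from the image, from the text, or from both.

```python
# Illustrative search helper: query by image, by text, or by both (multimodal).
import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")

def search(index, ids, image_path=None, text=None, alpha=0.5, k=10):
    """Return the k best matches from a FAISS-style index of unit-normalised vectors."""
    if image_path is None and text is None:
        raise ValueError("Provide an image, a text query, or both.")
    parts = []
    if image_path is not None:
        parts.append(alpha * model.encode(Image.open(image_path), normalize_embeddings=True))
    if text is not None:
        weight = (1 - alpha) if image_path is not None else 1.0
        parts.append(weight * model.encode(text, normalize_embeddings=True))
    query = sum(parts)
    query = query / np.linalg.norm(query)
    scores, idx = index.search(query.reshape(1, -1).astype("float32"), k)
    return [(ids[i], float(s)) for i, s in zip(idx[0], scores[0])]
```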
Figure 7: Mohamed Salim Aissi. Multimodal vectorization of metadata and resulting clustering of terms and images.
(Figure 7) Using several examples, M. Aissi demonstrated that this combined approach delivered a much higher degree of accuracy in finding different versions of an image across publications in which cropping or other formal features of the reproduction varied.
Finally, Marina Giardinetti discussed some of the issues of metadata treatment raised by visual similarity tools. Noting that the structure of the metadata in the database is essential for such tools to function well, she explained the EyCon team’s methodology for conserving metadata and its sources in ways that keep its production traceable and allow artificial intelligence tools to function as well as possible. EyCon records the source of all metadata attached to an image so that its origin is documented and conserved. The EyCon website will display this metadata, allowing the user to get an idea of some of the variables the multimodal algorithm used in making its determinations. If an algorithm suggests metadata, this is noted in the display.
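The sketch below shows one way such provenance could be recorded: each metadata field carries its value together with its source and an explicit marker of whether it was produced by a human or suggested by an algorithm. The schema is purely illustrative and is not EyCon’s actual data model.

```python
# Illustrative metadata record with per-field provenance (not EyCon's actual schema).
photo_record = {
    "id": "eycon-000123",  # hypothetical identifier
    "metadata": [
        {
            "field": "title",
            "value": "Scene with two prisoners and three soldiers, Tripoli",
            "source": "original caption, Excelsior, 1911",
            "origin": "human",
        },
        {
            "field": "date",
            "value": "1911-12-06",
            "source": "visual-similarity model, v0.1",
            "origin": "algorithm",  # flagged so the interface can display it as a suggestion
            "confidence": 0.82,
        },
    ],
}
```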
In the closing discussion, the participants considered the potential pitfalls and strengths of such tools. In comparison to many of the AI-powered tools that have come to the forefront of public discussion in the past year or two, notably ChatGPT, a strength of this system is that it clearly shows whether metadata was produced by a human or a machine, and offers the user at least some ways to understand how the algorithm arrived at its conclusions. One key associated issue is metadata standards: EyCon is promoting discussion of this issue in France and encouraging adoption of the International Image Interoperability Framework (IIIF). IIIF standardizes the delivery of image files from servers to web displays where they can be used and interacted with. It eases the delivery of richer functionality for image or audio-visual files beyond simple viewing and, crucially, it attaches metadata to the digital object, which makes it useful for preserving context when it comes to archival images.
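As a concrete illustration of that last point, a IIIF Presentation manifest wraps the image references and the descriptive metadata in a single document. The fragment below, written as a Python dictionary ready to be serialized to JSON, is a deliberately minimal sketch: the identifiers and labels are placeholders, and a real manifest would also contain the Canvas structure pointing at the image service.

```python
# Minimal, illustrative IIIF Presentation 3.0 manifest skeleton (placeholder URLs and labels).
import json

manifest = {
    "@context": "http://iiif.io/api/presentation/3/context.json",
    "id": "https://example.org/iiif/eycon-000123/manifest",  # hypothetical URL
    "type": "Manifest",
    "label": {"en": ["Press photograph, Tripoli, 1911"]},
    "metadata": [
        {"label": {"en": ["Source"]}, "value": {"en": ["Fonds Forbin, SHD"]}},
        {"label": {"en": ["Photographer"]}, "value": {"en": ["Unattributed (machine suggestion pending)"]}},
    ],
    "items": [],  # Canvases referencing the image service would go here
}

print(json.dumps(manifest, indent=2))
```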
Finally, we discussed questions of design and user experience. From our perspective, the display and design we propose for the public-facing database have several benefits. First, the user has to do some work to define the search parameters, which makes the process slightly clunky by design. Rather than making the experience as seamless and smooth as possible, we wanted to expose the system’s artificiality, finding a balance between ergonomics and resistance. Ideally, this should help the user realize that the suggestions made by the AI are not infallible and do not exclude other possibilities that might have been missed because of the limitations of the inputs or of the algorithm. We also discussed the potential uses for researchers: while these tools may help us reanimate images and their circulations, they are ultimately most helpful in accelerating certain time-consuming operations, allowing the researcher to focus on tasks requiring higher-order analysis and synthesis. The key challenge for such tools going forward may be finding ways to translate the impenetrable depths of the technological “black box” into terms that the average user can understand, mediating its function so that the user has insight into the variables at play and the possibilities that might have been missed. If this is not accomplished, there is a very real danger that AI could degenerate from a research tool into a hindrance, obscuring aspects of the analogue archive behind a veneer of computational objectivity.
We would like to thank Véronique Pontillon Valedon, in charge of scientific activities at the ECPAD, for welcoming the workshop participants to the facility at the Fort d’Ivry.