
EyCon: What we are doing with Machine Learning and Computer Vision.

Blog Post by Soumik Mallick

Objective:

Historical photographs held by archival institutions provide a huge source of material for researchers in several domains of computer science, including image processing and computer vision. Photographs have been analyzed in multiple fields, including archaeology, war history, post-phenomenological geography, propaganda research, post-colonial studies, and many more. However, the methodical analysis of very large quantities of photographs is a time-consuming and laborious task that eats into research time. Nowadays, state-of-the-art Machine Learning (ML), Deep Learning (DL) and Computer Vision (CV) algorithms can significantly speed up this process and provide a new viewpoint for researching photo archives.

Within the EyCon project, we are contributing to historical photograph analysis through ML and DL. This takes us one step further towards automatically analyzing historical data and extracting valuable information, which can help historians to find deeper meanings. Currently, we are developing novel ML and DL methods for historical layout analysis and object detection. Our early-stage experimental prototype shows how several research problems in historical photograph analysis can be addressed with these techniques. Our in-house algorithm automatically extracts the layout of, and detects the objects depicted in, historical photographs. The work is still in its early stages and needs more refinement, but it is already showing positive results.

Methods: 

Hand-crafted features suffer from the difficulty of capturing explicit knowledge about the attributes associated with archival photographs, such as newspaper layouts. DL and CV have opened innovative opportunities for computer science researchers to assist the wider research community with automatic tools for analysing and understanding document layouts and objects. These tools can recognize meaningful patterns in historical data that are intrinsically related to human perception, and can assist experts in document layout analysis and in tasks such as object detection in war photography, which is particularly useful for museum and art gallery websites.

Currently, our pipeline has three separate stages: 1) layout parsing with LayoutParser, 2) object detection, and 3) human pose estimation. We will discuss them one by one.

  1. LayoutParser:

DL methods have played a significant role over the last decade in increasing the performance of CV tasks, including historical document and layout analysis. Yet the drawback of common DL approaches is their enormous hunger for annotated data. This hunger is particularly problematic in historical document and layout analysis, since most tasks, like semantic labelling, require experts to label the document images correctly.

Figure 1: LayoutParser framework

In recent years, several DL models and datasets have been developed for layout analysis tasks. Object detection-based methods like Faster R-CNN and Mask R-CNN are now also used for identifying document elements and detecting tables, and recent work has applied Graph Neural Networks to table detection as well.

Nevertheless, a drawback is that those models are usually implemented individually, and there is no unified framework for loading and using them. A variety of document data collections have been created to facilitate the development of DL models. Some examples include PRImA (magazine layouts); PubLayNet (academic paper layouts); TableBank (tables in academic papers); the Newspaper Navigator Dataset (newspaper figure layouts); and HJDataset (historical Japanese document layouts). Models trained on these datasets are already available in the LayoutParser model zoo to support different scenarios.

A model in LayoutParser takes a document image as input and generates a list of rectangular boxes for the target content regions. These models rely on deep convolutional neural networks (CNNs) rather than manually curated rules to identify content regions. The task is formulated as an object detection problem, and state-of-the-art models like Faster R-CNN and Mask R-CNN are used. There are three key components in the data structure, namely the Coordinate system, the TextBlock, and the Layout. They provide different levels of abstraction for the layout data, and a set of APIs (Application Programming Interfaces) is supported for transformations or operations on these classes.

In LayoutParser, the Coordinate system generally supports two kinds of variation; a TextBlock consists of the coordinate information plus extra features like the block text, type, and reading order; and a Layout object is a list of all possible layout elements, including other Layout objects. They all support the same set of transformation and operation APIs for maximum flexibility. LayoutParser currently supports nine different pre-trained models, trained on five different datasets.
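To make these abstractions concrete, here is a minimal sketch of how a Rectangle coordinate, a TextBlock, and a Layout relate to each other in the layoutparser package; the coordinate values, text, and score are made up purely for illustration.

```python
import layoutparser as lp

# A rectangular coordinate: (x_1, y_1, x_2, y_2)
rect = lp.Rectangle(x_1=50, y_1=100, x_2=400, y_2=300)

# A TextBlock wraps a coordinate and adds extra features
# such as the block text, its type, and a confidence score.
block = lp.TextBlock(block=rect, text="An example caption", type="Text", score=0.95)

# A Layout is a list of layout elements sharing the same
# transformation / operation APIs (scale, shift, pad, ...).
layout = lp.Layout([block])
scaled = layout.scale(2)          # apply the same operation to every block
print(scaled[0].coordinates)      # -> (100, 200, 800, 600)
```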

LayoutParser provides a unified interface for existing Optical Character Recognition (OCR) tools, and it supports the Tesseract and Google Cloud Vision OCR engines. Following the authors of the original paper, we have implemented LayoutParser using PyTorch, Python >= 3.6, Detectron2 and CUDA.

LayoutParser uses Detectron2-based pre-trained models like Faster R-CNN, RetinaNet, and Mask R-CNN to detect the layout of an input document. The result is essentially a Layout object consisting of a list of detected blocks.
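As an illustration, the sketch below loads one of the Detectron2-based pre-trained models from the LayoutParser model zoo and runs it on a scanned page. The PubLayNet config, the score threshold, and the file name are examples only, not our fine-tuned EyCon model.

```python
import layoutparser as lp
import cv2

# Load a scanned page (example file name).
image = cv2.imread("scanned_page.jpg")
image = image[..., ::-1]  # OpenCV loads BGR; convert to RGB

# A Detectron2-based Faster R-CNN model trained on PubLayNet,
# taken from the LayoutParser model zoo.
model = lp.Detectron2LayoutModel(
    "lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config",
    extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8],
    label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
)

# The result is a Layout object: a list of detected blocks
# with coordinates, types and confidence scores.
layout = model.detect(image)
lp.draw_box(image, layout, box_width=3)  # quick visual check
```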

Figure 2 (a–c): Sample outputs of LayoutParser

In each detected layout block, you will get the following important information (a minimal access sketch follows the list):

  1. Coordinates of the bounding box (x1, y1, x2, y2) of each detected layout;
  2. The type of the detected layout (i.e. text, image, table, list, or title);
  3. The ID of the detected layout;
  4. The text inside each detected layout;
  5. The confidence score of each detected layout.
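Continuing the earlier sketch, where layout is the Layout object returned by model.detect(), these fields can be read directly from each block; note that the text field is only filled once an OCR agent (e.g. Tesseract) has been run on the block.

```python
# Iterate over the detected blocks and read the fields listed above.
for block in layout:
    x1, y1, x2, y2 = block.coordinates   # 1. bounding-box coordinates
    print(block.type,                    # 2. layout type (Text, Figure, Table, ...)
          block.id,                      # 3. block ID (may be None before indexing)
          block.text,                    # 4. text, once an OCR agent has filled it in
          block.score)                   # 5. detection confidence score
```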

For EyCon, we fine-tuned the LayoutParser model on our own dataset and achieved good results. Fig. 2 shows the output of our LayoutParser.

Historical Object Detection:

Detected objects, when combined with appropriate classes, help determine the context of historical photos as well as the focus of each photographer. A photograph of a chair, for example, may have been taken indoors, whereas photos of horses, boats, cars, or trains may have been taken outdoors. Moreover, the presence of objects can help determine the time of year, and even establish the period for unlabelled photographs. For example, a high number of chairs, ties, uniforms, and people in a photograph is likely to indicate an official event or a group of soldiers, while photos of battle tanks, airplanes, artillery, sandbags, and guns would suggest a battle or near-battle area. Aside from identifying the context of each photo, such analysis can draw out the main focus of a photographer by evaluating which types of objects are most frequently present in their photographs.

Figure 3: The Mask R-CNN framework

The main goal of object detection is to predict a set of bounding boxes and category labels for each object of interest. Modern detectors address this set prediction task in an indirect way, by defining surrogate regression and classification problems on a large set of proposals, anchors, or window centres. Their performance is significantly influenced by post-processing steps that collapse near-duplicate predictions, by the design of the anchor sets, and by the heuristics that assign target boxes to anchors.

For our second stage, we implemented object detection using Mask R-CNN as provided in Detectron2. Mask R-CNN is an overall two-stage procedure that predicts the class and the box in parallel: every candidate object has two outputs, a class label and a bounding-box offset. Building on Faster R-CNN, the first stage adopts a Region Proposal Network (RPN). In the second stage, features are extracted from each candidate box; whereas RoIPool performs a coarse quantization of the pixels, Mask R-CNN uses RoIAlign to preserve pixel-to-pixel alignment.

Mask R-CNN adopts the same two-stage procedure, with an identical first stage (the RPN). In the second stage, in parallel to predicting the class and box offset, Mask R-CNN also outputs a binary mask for each RoI. This is in contrast to most recent systems, where classification depends on mask predictions. It follows the spirit of Fast R-CNN, which applies bounding-box classification and regression in parallel (and thereby largely simplified the multi-stage pipeline of the original R-CNN). For each RoI, a multi-task loss is defined as L = L_cls + L_box + L_mask. The mask representation is valuable because it encodes an input object's spatial layout. Mask R-CNN also features a convolutional backbone for feature extraction (providing the pixel-to-pixel correspondence of convolutions) and a network head for bounding-box recognition, covering both classification and regression.
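The sketch below shows the general pattern for running a pre-trained COCO Mask R-CNN through Detectron2's model zoo; the config name is the standard Detectron2 one and the image path and threshold are illustrative, not our fine-tuned historical-photograph model.

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Standard Detectron2 Mask R-CNN (ResNet-50 + FPN) trained on COCO.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # illustrative confidence threshold

predictor = DefaultPredictor(cfg)
image = cv2.imread("historical_photo.jpg")   # example file name (BGR, as expected)
outputs = predictor(image)

# Per-instance classes, boxes and binary masks (one mask per RoI).
instances = outputs["instances"].to("cpu")
print(instances.pred_classes, instances.pred_boxes, instances.pred_masks.shape)
```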

Figure 3a and 3b: Sample output of object detection

Improved object detection:

The Detection Transformer (DETR) simplifies the detection pipeline by dropping multiple hand-designed components that encode prior knowledge, such as spatial anchors or non-maximum suppression. Unlike most existing detection methods, DETR doesn't require any customized layers and can therefore be reproduced easily in any framework that contains standard CNN and transformer classes. Fig. 4 below shows the object detection architecture with a transformer.

Figure 4: Object detection architecture with transformer

The transformer encoder-decoder infers a fixed-size set of N predictions, in a single pass through the decoder, where N is set to be significantly larger than the typical number of objects in an image.

Next, an important step is finding a bipartite matching between these two sets: we search for a permutation of $N$ elements $\sigma \in \mathfrak{S}_N$ with the lowest cost,

$$\hat{\sigma} = \underset{\sigma \in \mathfrak{S}_N}{\arg\min} \sum_{i=1}^{N} \mathcal{L}_{\mathrm{match}}\big(y_i, \hat{y}_{\sigma(i)}\big),$$

where $\mathcal{L}_{\mathrm{match}}(y_i, \hat{y}_{\sigma(i)})$ is a pair-wise matching cost between the ground truth $y_i$ and the prediction with index $\sigma(i)$. Each ground-truth element is written as $y_i = (c_i, b_i)$, where $c_i$ is the target class label (which may be $\varnothing$, i.e. "no object"), $b_i$ is the normalised bounding box, and $\hat{p}_{\sigma(i)}(c_i)$ denotes the predicted probability of class $c_i$. The matching cost takes into account both the class prediction and the similarity of the predicted and ground-truth boxes:

$$\mathcal{L}_{\mathrm{match}}\big(y_i, \hat{y}_{\sigma(i)}\big) = -\mathbb{1}_{\{c_i \neq \varnothing\}}\, \hat{p}_{\sigma(i)}(c_i) + \mathbb{1}_{\{c_i \neq \varnothing\}}\, \mathcal{L}_{\mathrm{box}}\big(b_i, \hat{b}_{\sigma(i)}\big).$$

Given the optimal assignment $\hat{\sigma}$, we compute the Hungarian loss, which is a linear combination of a negative log-likelihood for class prediction and a box loss:

$$\mathcal{L}_{\mathrm{Hungarian}}(y, \hat{y}) = \sum_{i=1}^{N} \Big[ -\log \hat{p}_{\hat{\sigma}(i)}(c_i) + \mathbb{1}_{\{c_i \neq \varnothing\}}\, \mathcal{L}_{\mathrm{box}}\big(b_i, \hat{b}_{\hat{\sigma}(i)}\big) \Big],$$

where $\mathcal{L}_{\mathrm{box}}$ combines an $\ell_1$ loss and a generalised IoU loss on the predicted boxes.
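For intuition, the bipartite matching itself can be computed with an off-the-shelf Hungarian solver. The sketch below uses random toy predictions and a simplified matching cost (class probability plus an L1 box term only, without the generalised IoU term), purely to illustrate the mechanism.

```python
import torch
from scipy.optimize import linear_sum_assignment

N, num_classes = 5, 92                          # N predictions, incl. "no object" class
pred_logits = torch.randn(N, num_classes)       # toy class predictions
pred_boxes = torch.rand(N, 4)                   # (cx, cy, w, h), normalised
tgt_classes = torch.tensor([3, 17])             # two toy ground-truth objects
tgt_boxes = torch.rand(2, 4)

prob = pred_logits.softmax(-1)
# Simplified matching cost: -p(class) + L1 distance between boxes.
cost_class = -prob[:, tgt_classes]                    # [N, num_targets]
cost_bbox = torch.cdist(pred_boxes, tgt_boxes, p=1)   # [N, num_targets]
cost = cost_class + cost_bbox

# The Hungarian algorithm returns the minimum-cost one-to-one assignment.
pred_idx, tgt_idx = linear_sum_assignment(cost.detach().numpy())
print(list(zip(pred_idx, tgt_idx)))
```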

A conventional CNN backbone (an ImageNet-pretrained ResNet-50 or ResNet-101 with frozen batch norm) learns a 2D representation of the input image. The model then flattens this representation and supplements it with a positional encoding before passing it into a transformer encoder. A transformer decoder takes as input a small, fixed number of learned positional embeddings, called object queries, and additionally attends to the encoder output. Each output embedding of the decoder is passed to a shared feed-forward network (FFN) that predicts either a detection (class and bounding box) or a "no object" class. Fig. 5 shows an output of the object detection we implemented with PyTorch and Detectron2.
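For readers who want to try this architecture directly, the pre-trained DETR model released by the paper's authors can be loaded through torch.hub, as sketched below; this is not our exact setup, and the image path and confidence threshold are illustrative.

```python
import torch
from PIL import Image
import torchvision.transforms as T

# Pre-trained DETR (ResNet-50 backbone) from the authors' repository.
model = torch.hub.load("facebookresearch/detr", "detr_resnet50", pretrained=True)
model.eval()

transform = T.Compose([
    T.Resize(800),
    T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

image = Image.open("historical_photo.jpg").convert("RGB")  # example file name
with torch.no_grad():
    outputs = model(transform(image).unsqueeze(0))

# 100 object queries -> class logits and normalised (cx, cy, w, h) boxes.
probs = outputs["pred_logits"].softmax(-1)[0, :, :-1]  # drop the "no object" column
keep = probs.max(-1).values > 0.7                      # illustrative threshold
print(probs[keep].argmax(-1), outputs["pred_boxes"][0, keep])
```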

Figure 5 (a, b): Example output of our improved object detection

Human pose estimation:

A human pose skeleton denotes the orientation of an individual in a particular format. Fundamentally, it is a set of data points that can be connected to describe an individual’s pose. Each data point in the skeleton can also be called a part or coordinate, or point. A relevant connection between two coordinates is known as a limb or pair. However, it is important to note that not all combinations of data points give rise to relevant pairs.
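To make this concrete, here is a minimal sketch of a skeleton in the widely used COCO keypoint convention (17 parts); the limb pairs listed are the commonly drawn connections and are given only for illustration, not as the exact pairing used by any particular model.

```python
# COCO-style keypoint names (17 parts).
KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

# Limbs / pairs: only these connections are meaningful,
# not every combination of two keypoints.
LIMBS = [
    ("left_shoulder", "right_shoulder"),
    ("left_shoulder", "left_elbow"), ("left_elbow", "left_wrist"),
    ("right_shoulder", "right_elbow"), ("right_elbow", "right_wrist"),
    ("left_shoulder", "left_hip"), ("right_shoulder", "right_hip"),
    ("left_hip", "right_hip"),
    ("left_hip", "left_knee"), ("left_knee", "left_ankle"),
    ("right_hip", "right_knee"), ("right_knee", "right_ankle"),
]

# A pose is then just a mapping from part name to an (x, y) point.
pose = {name: (0.0, 0.0) for name in KEYPOINTS}  # placeholder coordinates
```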

Figure 6 (a–c): Keypoints detected by OpenPose

The OpenPose pipeline is actually fairly simple and straightforward. First, an image is fed into a "two-branch multi-stage" CNN. Two-branch means that the CNN produces two different outputs; multi-stage means that several network stages are stacked one on top of the other, each refining the previous one's predictions.

Figure 7: Architecture of the two-branch multi-stage CNN of OpenPose

Two branches: the top branch, shown in beige, predicts confidence maps for the locations of different body parts, such as the right eye, left eye, and right elbow. The bottom branch, shown in blue, predicts the part affinity fields, which represent the degree of association between different body parts.

Multi-stage: in the first stage the network produces an initial set of detection confidence maps S and a set of part affinity fields L. In each subsequent stage, the predictions from both branches of the previous stage, along with the original image features F, are concatenated and used to produce more refined predictions. Finally, the confidence maps and affinity fields are processed by greedy inference to output the 2D keypoints for all people in the image. We explained OpenPose first because it helps to understand how the 2D keypoints are obtained.
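Since OpenPose itself is usually accessed through its own C++ wrapper, here is a hedged stand-in in pure PyTorch for readers who want to reproduce a similar 2D keypoint step: torchvision's Keypoint R-CNN (a different model from OpenPose) also returns per-person COCO keypoints. The image path and threshold are illustrative.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Keypoint R-CNN pre-trained on COCO person keypoints
# (a stand-in for this step, not the OpenPose network itself).
model = torchvision.models.detection.keypointrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("historical_photo.jpg").convert("RGB")  # example file name
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

# One entry per detected person: 17 keypoints as (x, y, visibility).
for score, keypoints in zip(predictions["scores"], predictions["keypoints"]):
    if score > 0.8:                   # illustrative confidence threshold
        print(keypoints[:, :2])       # 2D keypoint coordinates
```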

Human pose from a single image:

Estimating a 3D human pose from a single image is known to be a severely ill-posed problem, because many different body configurations can have virtually the same projection. A typical solution consists of using discriminative strategies to directly learn mappings from image evidence (e.g., HOG, SIFT) to 3D poses. This has recently been extended to end-to-end mappings using CNNs. To be effective, though, these approaches require large amounts of training images annotated with the ground-truth 3D pose. While obtaining this kind of data is straightforward for 2D poses, even for images 'in the wild' (FLIC or LSP datasets), it requires sophisticated motion capture systems for the 3D case.

Figure 8 (a, b): 3D human pose estimation on historical images

For the 3D body pose parameterization, most approaches use a skeleton with a number N of joints ranging between 14 and 20, represented by a 3N vector of Cartesian coordinates. In order to enforce joint dependency during the 2D-to-3D inference, latent joint representations can be considered, obtained through Kernel Dependency Estimation and autoencoders. To gain depth-scale invariance, we first normalize the vertical coordinates of the projected 2D poses $x_i$ to be within the range $[-1, 1]$; the 3D joint positions $y_i$ are expressed in meters with no further pre-processing. We can then represent both 2D and 3D poses by means of Euclidean Distance Matrices (EDMs). For the 3D pose $y$, $\mathrm{edm}(y)$ is defined as the $N \times N$ matrix whose $(m, n)$ entry is computed as

$$\mathrm{edm}(y)_{m,n} = \left\| y_m - y_n \right\|_2.$$

Similarly, edm(x) is the N × N matrix built from the pairwise distances between normalized 2D joint coordinates. 
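A minimal sketch of the normalization and EDM construction described above, with random joint coordinates standing in for real data; the normalization here is applied per dimension for simplicity, whereas the text above describes normalizing by the vertical extent.

```python
import torch

N = 14                               # number of skeleton joints
x2d = torch.rand(N, 2) * 200         # toy 2D joint coordinates (pixels)
y3d = torch.rand(N, 3)               # toy 3D joint coordinates (metres)

# Normalize the 2D coordinates into [-1, 1] (simplified, per dimension).
x2d = 2 * (x2d - x2d.min(0).values) / (x2d.max(0).values - x2d.min(0).values) - 1

# Euclidean Distance Matrices: edm[m, n] = ||p_m - p_n||_2.
edm_x = torch.cdist(x2d, x2d)        # N x N, input to the regression network
edm_y = torch.cdist(y3d, y3d)        # N x N, regression target
print(edm_x.shape, edm_y.shape)
```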

Retrieving the 3D joint positions $y = [y_1, \ldots, y_N]$ from a potentially noisy distance matrix $\widetilde{\mathrm{edm}}(y)$ estimated by the neural network can be formulated as the following error minimization problem:

$$\hat{y} = \underset{y}{\arg\min} \sum_{m,n} \Big( \left\| y_m - y_n \right\|_2 - \widetilde{\mathrm{edm}}(y)_{m,n} \Big)^2.$$

In other words, 3D human pose estimation can be formulated as a regression between two Euclidean Distance Matrices, one encoding the pairwise distances of the 2D body joints and the other those of the 3D body joints. The regression is carried out by a neural network, and the 3D joint estimates are obtained via Multidimensional Scaling from the predicted 3D Euclidean Distance Matrix. Fig. 8 above shows the outcome of 3D human pose estimation.
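Assuming the network has produced a (possibly noisy) distance matrix, the 3D joint positions can be recovered with an off-the-shelf Multidimensional Scaling solver. This sketch continues the previous one, using scikit-learn's MDS on a perturbed copy of the toy edm_y as a stand-in for the network's prediction; the recovered coordinates are only defined up to a rigid transform and reflection.

```python
import numpy as np
from sklearn.manifold import MDS

# Treat the (noisy) predicted EDM as a precomputed dissimilarity matrix
# and embed the joints back into 3D.
noisy_edm = edm_y.numpy() + np.random.normal(0, 0.01, edm_y.shape)
noisy_edm = (noisy_edm + noisy_edm.T) / 2          # keep it symmetric
np.fill_diagonal(noisy_edm, 0.0)

mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
y3d_recovered = mds.fit_transform(noisy_edm)       # N x 3, up to a rigid transform
print(y3d_recovered.shape)
```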

Implementation:

Both our LayoutParser and object detection are based on PyTorch with Facebook Detectron2. For human pose estimation we used PyTorch. Currently, we are using a single NVIDIA RTX 3000 GPU and CUDA version 11.

Result:

Our work is still at an early stage. While we have achieved good results so far, it needs more refinement.

 Figure 9: Output result of LayoutParser

Figure 10: Output result of object detection

Figure 11: Human pose estimation                   

Difficulty and challenges:

We are facing some challenges with LayoutParser. Historical documents contain many complex layouts that LayoutParser fails to identify. Historical photographs also have poor image quality with high levels of noise, and there can be several kinds of logos, symbols and stamps inside the images and documents, which makes layout parsing on these records very difficult.

Conclusion:

In the future, we aim to add more features to our platform, starting with improving the detection of smaller objects. In an updated version of the model, we will try to add person segmentation, face detection with facial expression analysis, group-level emotion recognition, gender estimation, and weapon detection. We believe these features will provide a deeper understanding and enable further research by historians and cultural heritage organisations. While our work is currently fragmented into different parts, we will soon release our data and code openly on our website through a Python SDK library with an integrated model, so that you can also download our database. We wanted to keep you updated on our progress so far, and to let you know that we are continuously working towards a uniform system for the automated analysis of historical photography.

 References

Cai, Hongping, Qi Wu, Tadeo Corradi, Peter Hall, ‘The Cross-Depiction Problem: Computer Vision Algorithms for Recognising Objects in Artwork and in Photographs’, CVPR (2015).

Carion, Nicolas, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko, ‘End-to-End Object Detection with Transformers’, CVPR (2020).

Castellano, Giovanna, Eufemia Lella, and Gennaro Vessio, ‘Visual Link Retrieval and Knowledge Discovery in Painting Datasets’, Multimedia Tools and Applications, 80 (2021).

Castellano, Giovanna, Giovanni Sansaro, and Gennaro Vessio, ‘Integrating Contextual Knowledge to Visual Features for Fine Art Classification’, arXiv, 2021.

Crowley, Elliot J., and Andrew Zisserman, ‘In Search of Art’, ECCV (2014).

Gonthier, Nicolas, Yann Gousseau, Said Ladjal, Olivier Bonfait, ‘Weakly Supervised Object Detection in Artworks’, ECCV (2018).

He, Kaiming, Georgia Gkioxari, Piotr Dollar, and Ross Girshick, ‘Mask R-CNN’, ICCV (2017).

Moreno-Noguer, Francesc, ‘3D Human Pose Estimation from a Single Image via Distance Matrix Regression’, CVPR (2017).

Cao, Zhe, Tomas Simon, Shih-En Wei, and Yaser Sheikh, ‘Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields’, CVPR (2017).

Liu, Ziming, Guangyu Gao, Lin Sun, Li Fang, ‘IPG-Net: Image Pyramid Guidance Network for Small Object Detection’, CVPR (2020).

Shen, Zejiang, Ruochen Zhang, Melissa Dell, Benjamin Charles Germain Lee, Jacob Carlson, and Weining Li, ‘LayoutParser: A Unified Toolkit for Deep Learning-Based Document Image Analysis’, arXiv (2021).

Strezoski, Gjorgji, and Marcel Worring, ‘OmniArt: Multi-task Deep Learning for Artistic Data Analysis’, CVPR (2017).