
Entity Matching : From Traditional Techniques to Deep Learning Solutions

6 June 2023 | Machine Learning

In today’s world, where data is abundant and diverse, it has become crucial to organize information in a systematic manner. One way to do this is through referencing, which involves indexing products, information, files, objects, buildings, etc., and recording them in a system. The term “referencing” has taken on a new meaning on the internet, where it refers to registering a website on a search engine or directory. This task has now expanded beyond websites and web pages to include images, documents, videos, products, places, and applications.

In the field of Data Science and Engineering, this task is known as “entity matching” or “entity linkage”. It is a vital aspect of Natural Language Processing (NLP) that has gained the attention of researchers worldwide. Entity matching aims first to identify and link records, and then to answer the question of whether two entities refer to the same real-world object or not.

Over time, various techniques have been developed to perform this task. The earliest, traditional EM tools rely solely on text similarity calculations driven by predefined rules. More recently, many techniques have turned to deep learning to provide a more accurate and effective approach to entity matching.

1. Traditional Entity Matching

1.1. Definition

Entity matching, also known as record linkage, was initially introduced as a text similarity calculation problem, where predefined metrics such as cosine similarity, Levenshtein ratio, or Euclidean distance were used to obtain a match. These techniques were widely used until 2016.

To facilitate this process, several Python libraries have been developed, such as the TF-IDF matcher. This package calculates the cosine similarity between two string inputs after converting them into the Term Frequency-Inverse Document Frequency (TF-IDF) representation. This representation method helps in calculating the importance of each word in a document or a string. Additionally, the Fuzzy matcher package can also be used to perform entity matching.
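To make these metrics concrete, here is a minimal, standard-library-only sketch of the two flavors of similarity mentioned above: a cosine similarity over simple term-frequency vectors (a simplified stand-in for a full TF-IDF representation, which would also weight terms by inverse document frequency) and a character-level fuzzy ratio, comparable in spirit to a Levenshtein ratio. The example strings are illustrative.

```python
import math
from collections import Counter
from difflib import SequenceMatcher

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between simple term-frequency vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def fuzzy_ratio(a: str, b: str) -> float:
    """Character-level similarity in [0, 1], akin to a Levenshtein ratio."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

a, b = "Apple Inc.", "Apple Incorporated"
print(cosine_similarity(a, b))  # 0.5: only "apple" is a shared token
print(fuzzy_ratio(a, b))        # higher, thanks to the shared character prefix
```

Note how the two metrics disagree: token-level cosine only credits exact word overlap, while the character-level ratio rewards the long shared prefix. This sensitivity to representation is exactly the weakness discussed next.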

However, these traditional techniques have limitations, as they do not consider the context of the data and rely solely on the textual similarity of the entities. This often results in false positive matches and can lead to inaccurate results.

To overcome these limitations, more advanced techniques have been developed, such as deep learning-based approaches. These techniques take into account the contextual information of the entities, resulting in improved accuracy and efficiency in entity matching.

1.2. Traditional Entity Matching Pipeline

The pipeline for traditional entity matching involves several steps, which can be summarized as follows:

  1. Data Preprocessing: This step involves cleaning and transforming the input data into a standard format that can be used for matching. This step includes tasks such as removing stop words, stemming, and tokenizing.
  2. Feature Extraction: The next step involves extracting features from the preprocessed data, which will be used for matching. The most common features used are text-based features such as term frequency (TF), inverse document frequency (IDF), and TF-IDF. Other features such as n-grams, character-level features, and semantic features can also be used.
  3. Similarity Calculation: The features extracted in the previous step are then used to calculate similarity between the entities. Various similarity measures such as cosine similarity, Levenshtein distance, and Jaccard similarity can be used for this purpose.
  4. Thresholding: Once the similarity scores are calculated, a threshold is set to determine which entities are a match. If the similarity score is above the threshold, the entities are considered a match; otherwise, they are considered non-matches.
  5. Post-processing: The final step involves post-processing the matched entities to remove any duplicates and resolve conflicts.
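The steps above can be sketched end to end in a few lines of standard-library Python. The stop-word list, the example records, and the 0.8 threshold are illustrative choices, and the character-level `SequenceMatcher` ratio stands in for whichever similarity measure step 3 uses in practice.

```python
from difflib import SequenceMatcher

# Illustrative stop words for company names (step 1: preprocessing)
STOP_WORDS = {"the", "a", "of", "inc", "ltd"}

def preprocess(text: str) -> str:
    """Lowercase, strip punctuation, and drop stop words."""
    tokens = [t.strip(".,") for t in text.lower().split()]
    return " ".join(t for t in tokens if t and t not in STOP_WORDS)

def match(records_a, records_b, threshold=0.8):
    """Steps 3-4: score every pair and keep those above the threshold."""
    matches = []
    for a in records_a:
        for b in records_b:
            score = SequenceMatcher(None, preprocess(a), preprocess(b)).ratio()
            if score >= threshold:
                matches.append((a, b, round(score, 2)))
    return matches

print(match(["Apple Inc."], ["Apple", "Microsoft Corp."]))
# [('Apple Inc.', 'Apple', 1.0)] -- "Inc." is normalized away in step 1
```

Lowering the threshold admits more matches at the cost of false positives; tuning it is precisely the thresholding trade-off described in step 4.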

While this pipeline has been successful in many applications, it has limitations, such as not being able to capture the context of the entities and having low accuracy in noisy datasets. More advanced techniques, such as deep learning-based approaches, have been developed to overcome these limitations.

2. Deep Learning Entity Matching Solutions

2.1. Definition

With the rise of deep learning techniques and the advent of pre-trained models, the task of entity matching has become more sophisticated. Deep learning models can now identify patterns in data and create customized distance calculations instead of relying on pre-defined metrics for matching data.

Recent advancements in deep learning for natural language processing have led to the development of methods for performing entity matching tasks more efficiently. Initially, these approaches were introduced for classification tasks but have since been extended to cover the entire entity matching pipeline.

Some popular deep learning-based entity matching solutions include:

  • Deepmatcher: A Python package that uses deep learning to perform entity and text matching. Unlike other deep learning-based approaches, Deepmatcher uses HighwayNets for optimizing entity matching tasks, covering the entire pipeline rather than just classification.
  • Ditto: A Transformer-based pre-trained language model EM tool that converts feature matching into a sequence pair classification problem. Ditto generates highly contextualized embeddings that allow for better language understanding compared to traditional word embedding.
  • Auto-EM: An entity matching tool that leverages pre-trained models and transfer learning to build a fully automated entity matcher. This tool captures name variations and complex structures by utilizing large-scale knowledge bases, character-level, and word-level information. It requires data files from Microsoft’s proprietary knowledge base for DL model training, leveraging entity-type and entity-synonym structures.
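As an illustration of Ditto's approach, the sketch below shows its serialization step, in which each record is flattened into a single token sequence using [COL]/[VAL] tags, and two serialized records are joined into the sequence pair that a pre-trained Transformer then classifies as match or non-match. Only the serialization is shown here; the fine-tuning of the language model itself is omitted, and the example records are made up.

```python
def serialize(record: dict) -> str:
    """Ditto-style serialization: each attribute becomes '[COL] name [VAL] value'."""
    return " ".join(f"[COL] {col} [VAL] {val}" for col, val in record.items())

def make_pair(left: dict, right: dict) -> str:
    """Join two serialized records into one BERT-style sequence pair.

    The [SEP] token marks the boundary between the two entities; the
    fine-tuned language model classifies the whole pair as match/non-match.
    """
    return f"{serialize(left)} [SEP] {serialize(right)}"

left = {"title": "iPhone 14 128GB", "brand": "Apple"}
right = {"title": "Apple iPhone 14 (128 GB)", "brand": "Apple"}
print(make_pair(left, right))
```

Because the model sees column names alongside values, it can learn, for example, that a mismatch in `brand` matters more than word-order differences in `title`, something a flat string-similarity metric cannot express.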

2.2. Deep Learning-Based Entity Matching Pipeline

Entity matching is conventionally viewed as a process of several phases, although there is no generally agreed-upon list of specific steps. The process can, however, be broken down into a chain of sub-tasks that together produce the matching result:

  1. Data pre-processing: This step involves preparing the input data for the deep learning model. The data may be in the form of text, images, or any other data type. The data is cleaned, normalized, and converted into a format suitable for the deep learning model.
  2. Schema matching: In this step, the schema of the datasets is compared to identify the common attributes between them. This step is important to ensure that only relevant attributes are used for matching.
  3. Blocking: In this step, the datasets are divided into blocks based on their attributes. This helps to reduce the number of comparisons required during the matching process.
  4. Record pair comparison: In this step, each pair of records within the same block is compared to calculate their similarity score. Various techniques can be used for record comparison, such as cosine similarity, Levenshtein ratio, or Euclidean distance.
  5. Classification: In this step, a deep learning model is trained to classify record pairs as matches or non-matches based on their similarity scores. The model learns the underlying patterns in the data and generates a model that can accurately match entities.

These steps are often iterative, meaning that the results of one step can be used to inform the next step. For example, the results of the record pair comparison step can be used to train the classification model. The goal of the deep learning EM pipeline is to accurately match entities while minimizing the number of false matches and non-matches.
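Blocking (step 3) is the main lever for keeping this pipeline tractable, since comparing every record against every other is quadratic. The sketch below groups records by a cheap key and only generates candidate pairs within shared blocks; the first-letter key and the example records are deliberately crude, illustrative choices (real systems block on normalized attributes, phonetic codes, or learned embeddings).

```python
from collections import defaultdict
from itertools import product

def block_by_key(records, key_fn):
    """Step 3 (blocking): group records under a cheap, shared key."""
    blocks = defaultdict(list)
    for r in records:
        blocks[key_fn(r)].append(r)
    return blocks

def candidate_pairs(records_a, records_b, key_fn):
    """Step 4 then only compares pairs that fall in the same block."""
    ba = block_by_key(records_a, key_fn)
    bb = block_by_key(records_b, key_fn)
    return [(a, b)
            for key in ba.keys() & bb.keys()
            for a, b in product(ba[key], bb[key])]

# Illustrative blocking key: first letter of the (lowercased) name.
first_letter = lambda r: r["name"][0].lower()
a = [{"name": "Apple Inc."}, {"name": "Basel AG"}]
b = [{"name": "apple incorporated"}, {"name": "Cisco"}]
print(candidate_pairs(a, b, first_letter))
# 1 candidate pair survives instead of the 4 exhaustive comparisons
```

A blocking key trades recall for speed: records that land in different blocks can never be matched, which is why the results of later steps are often fed back to refine the key.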

3. Challenges and Opportunities of Existing Entity Matching Solutions

Traditional Solutions

Limitations:

  • Based on text similarity metrics and predefined rules only, which may not capture the complexity and context of the data. This can lead to inaccurate matching and false positives/negatives.
  • Struggle with matching entities with different representations, such as matching text to images or matching unstructured data to structured data.
  • The burden of feature engineering and rule definition falls on the user, which can be time-consuming and difficult.

Opportunities:

  • Effective for simple matching tasks, such as matching records with similar names or addresses.
  • Useful for matching structured data, such as tables or databases.
  • Can serve as a baseline for comparison with more advanced techniques, such as deep learning-based solutions.
  • Require less computing resources compared to more advanced techniques.

Deep Learning Solutions

Limitations:

  • Require large amounts of labeled data, which can be difficult and time-consuming to acquire.
  • Can be computationally expensive to train and deploy, which may not be feasible for some organizations or projects.
  • The complexity of deep learning models can make them difficult to interpret and understand, which can make it challenging to identify and correct errors.
  • Models may overfit to the training data, which can lead to poor performance on unseen data.

Opportunities:

  • Highly effective for matching complex and unstructured data, such as natural language text, images, and videos.
  • Can learn to identify patterns and features in the data without the need for explicit feature engineering or rule definition.
  • Models can be used for a wide range of entity matching tasks, from simple name and address matching to more complex entity resolution tasks.
  • Can be trained to leverage large-scale knowledge bases or external sources of data to improve matching accuracy.


In conclusion, entity matching, also known as record linkage, is a crucial aspect of Natural Language Processing (NLP) that involves identifying and connecting data to determine whether two entities refer to the same real-world object or not. Traditional entity matching techniques are based on text similarity calculations, which have limitations and often lead to false positives, while deep learning-based approaches have emerged to overcome these limitations. Deep learning-based approaches use contextual information to achieve better accuracy and efficiency, and several popular tools, such as Deepmatcher, Ditto, and Auto-EM, have been developed to perform entity-matching tasks more effectively. Overall, the rise of deep learning techniques and pre-trained models has transformed the field of entity matching and made it more sophisticated.