Talha Munir 🇵🇸


Integrating Relational Databases with Apache Hadoop

Relational databases are extremely common and integrate easily with other information technologies. They are built for structured data, which makes that data easy and efficient to query.

In this short article we will focus on how to connect Hadoop with relational databases, since the two technologies work together very well.

Let's talk about the advantages of connecting Hadoop with an RDBMS. Hadoop is built for batch processing of very large datasets, where the data can arrive in different formats and from various sources. Analysing data at that scale is resource intensive, so instead of running the analysis against the relational database itself, it is far more efficient to move the data to the Hadoop Distributed File System (HDFS) and analyse it there.

Because Hadoop is built around a distributed file system, it lacks some of the functionality a relational database provides, such as indexing, query optimization and random data access.

Hadoop compensates for these drawbacks by efficiently processing massive quantities of unstructured data. Combining the two technologies is therefore highly beneficial for your big data workflows.

Apache Sqoop:

Sqoop is the tool we use to integrate Hadoop with an RDBMS. It connects to relational databases such as MySQL, PostgreSQL or Oracle via JDBC, the standard API for accessing databases from Java. It also works with non-JDBC connectors that allow bulk import or export of data, but with certain limitations (e.g. no support for certain file types).
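As a minimal sketch (the host name, database, credentials and paths below are placeholders, not taken from the article), a typical Sqoop import pulls a table over JDBC into an HDFS directory:

```bash
# Import the "orders" table from MySQL into HDFS over JDBC.
# Hostname, database, credentials and target directory are example values.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username analyst \
  --password-file /user/hadoop/.db-password \
  --table orders \
  --target-dir /data/sales/orders \
  --num-mappers 4
```

--num-mappers controls how many parallel map tasks Sqoop launches against the source database.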

How Sqoop Works:

Sqoop works by executing a query against the relational database and writing the resulting rows to files in one of several formats: plain text, binary, Avro or SequenceFile. The files are stored in HDFS. Sqoop can also export those formatted files back into a relational database. Under the hood it uses the MapReduce paradigm to parallelize both imports and exports, as the commands below illustrate.
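As a rough sketch of both directions (table names, paths and delimiters are assumptions for illustration), you pick the file format on import and point sqoop export at an HDFS directory to push rows back:

```bash
# Import as Avro data files instead of the default delimited text.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username analyst \
  --password-file /user/hadoop/.db-password \
  --table orders \
  --target-dir /data/sales/orders_avro \
  --as-avrodatafile

# Export delimited files from HDFS back into an existing table.
sqoop export \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username analyst \
  --password-file /user/hadoop/.db-password \
  --table orders_summary \
  --export-dir /data/sales/summary \
  --input-fields-terminated-by ','
```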

Issues in Sqoop:

Despite all these benefits, there are some issues you may run into while using Sqoop.

Sqoop 1 and Sqoop 2:

Apache provides two different versions of the tool, Sqoop 1 and Sqoop 2, and the two are incompatible with each other. Among the differences: Sqoop 1 runs from the command line, whereas Sqoop 2 adds a web application and a REST API.

JDBC driver:

Sqoop works best with relational databases that support JDBC, which means you need to install the right JDBC driver on every machine in the Hadoop cluster that runs Sqoop tasks.
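One common way to make a driver available, assuming a MySQL Connector/J jar and a standard SQOOP_HOME layout (the jar version and paths are examples, not from the article):

```bash
# Copy the vendor's JDBC driver jar into Sqoop's lib directory so the
# generated MapReduce jobs can load it.
cp mysql-connector-java-8.0.33.jar $SQOOP_HOME/lib/

# If Sqoop does not pick the driver class automatically, name it with --driver.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --driver com.mysql.cj.jdbc.Driver \
  --username analyst \
  --password-file /user/hadoop/.db-password \
  --table orders \
  --target-dir /data/sales/orders
```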

Data Types:

Different databases support different data types. When Sqoop moves data, it maps those database types to Java types, and that conversion does not always go smoothly and can cause errors.
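When the default mapping causes trouble, Sqoop 1 lets you override it per column with --map-column-java; the column names and Java types below are illustrative only:

```bash
# Force specific Java types for columns whose default mapping is problematic,
# e.g. mapping a BIGINT id to Long and a DECIMAL price to String.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username analyst \
  --password-file /user/hadoop/.db-password \
  --table orders \
  --target-dir /data/sales/orders \
  --map-column-java id=Long,price=String
```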

References:

You can read more in the Apache AGE and Hadoop documentation by following these links:

  1. https://age.apache.org/age-manual/master/intro/overview.html
  2. https://www.integrate.io/blog/integrating-relational-databases-with-apache-hadoop/
  3. https://hadoop.apache.org/docs/stable/
