MongoDB Spark Connector Essentials

Looking for a tool to integrate Apache Spark with MongoDB? This article covers the benefits of the MongoDB Spark Connector, which simplifies data migration, analysis, and writing. You will learn how to read data from and write data to a sharded MongoDB cluster using the connector.

The MongoDB Spark Connector is a powerful tool that enables you to integrate Apache Spark with MongoDB. This connector allows you to read data from MongoDB collections, write data to MongoDB collections, and use the full power of Apache Spark to process and analyze data.

Below are some key scenarios where you might want to use the MongoDB Spark Connector:

  1. Data migration: The MongoDB Spark Connector can be a powerful tool for migrating data from other sources into MongoDB, letting you use the distributed computing power of Apache Spark to process and transform your data before loading it into MongoDB.
  2. Integrating MongoDB with other data sources: The connector makes it easy to combine MongoDB data with other data sources and build powerful data pipelines.
  3. Analyzing large datasets: The connector can help you process large amounts of data stored in MongoDB using the distributed computing power of Apache Spark.

In this article, we will look specifically at some basic capabilities for reading data from and writing data to a sharded cluster using the MongoDB Spark Connector.

Reading from a Sharded Cluster

Code snippet for reading data from a sharded cluster:

val df = spark.read
  .format("mongodb")
  .option("spark.mongodb.connection.uri", "hostname")
  .option("spark.mongodb.database", "databaseName")
  .option("spark.mongodb.collection", "collectionName")
  .option("partitioner", "com.mongodb.spark.sql.connector.read.partitioner.ShardedPartitioner")
  .load()
  1. spark.read: This creates a DataFrameReader object that we can use to read data into a DataFrame.
  2. .format("mongodb"): This specifies that we want to read data from a MongoDB database.
  3. .option("spark.mongodb.connection.uri", "hostname"): This specifies the connection string for the MongoDB instance we want to read from.
  4. .option("spark.mongodb.database", "databaseName"): This specifies the name of the database we want to read from.
  5. .option("spark.mongodb.collection", "collectionName"): This specifies the name of the collection we want to read from.
  6. .option("partitioner","com.mongodb.spark.sql.connector.read.partitioner.ShardedPartitioner"): This specifies the partitioner we want to use to partition the data from the MongoDB collection. In this case, we’re using the ShardedPartitioner, which is designed for use with sharded MongoDB clusters.

The MongoDB Spark Connector provides several read options that can be used to further customize the read behavior.

ShardedPartitioner

.option("partitioner","com.mongodb.spark.sql.connector.read.partitioner.ShardedPartitioner") is a configuration option that can be used when reading data from a MongoDB sharded cluster.

By default, when you read data from MongoDB using the Spark Connector, the connector uses its default sample-based partitioner to split the data across Spark partitions. However, when reading data from a sharded cluster, you may want to use a different partitioner that is aware of the sharding structure of the cluster.

The ShardedPartitioner works by partitioning the collection according to the chunk ranges of the sharded cluster, so the data can be read in parallel and merged into a single Spark DataFrame. This allows the connector to take advantage of the parallelism provided by the sharded cluster and can result in faster read times for large datasets.
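To see how the read was split, you can check the number of Spark partitions produced; with the ShardedPartitioner this typically reflects the chunk layout of the sharded collection. A minimal sketch, assuming df was loaded with the snippet above:

// Number of Spark partitions created for the read.
println(s"Partitions: ${df.rdd.getNumPartitions}")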

Filtered Read from MongoDB

The aggregation.pipeline option allows for a filtered read of data from a MongoDB collection. It lets you specify a MongoDB aggregation pipeline that is applied to the data before it is loaded into a DataFrame.

val df = spark.read
  .format("mongodb")
  .option("spark.mongodb.connection.uri", "hostname")
  .option("spark.mongodb.database", "databaseName")
  .option("spark.mongodb.collection", "collectionName")
  .option("partitioner", "com.mongodb.spark.sql.connector.read.partitioner.ShardedPartitioner")
  .option("aggregation.pipeline", """[
    {
      $match: {
        startDt: { $gte: ISODate("2022-04-02") },
        endDt: { $lte: ISODate("2022-04-03") }
      }
    }
  ]""")
  .load()

By including this aggregation pipeline in our read operation, we’re able to filter the data from the MongoDB collection before it’s loaded into a DataFrame, which can help improve performance and reduce the amount of data we need to process.
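After the filtered read, any further processing uses the normal Spark DataFrame API. A minimal sketch, assuming df was loaded by the snippet above; the status field is a hypothetical placeholder used only for illustration:

import org.apache.spark.sql.functions._

// Aggregate the already-filtered data entirely in Spark.
val summary = df
  .groupBy(col("status"))             // hypothetical field, replace with a real one
  .agg(count("*").alias("docCount"))  // count documents per group
summary.show()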

Writing to a Sharded Cluster

Code snippet for writing data to a sharded cluster:

toWrite.write
  .format("mongodb")
  .option("spark.mongodb.connection.uri", "hostname")
  .option("spark.mongodb.database", "databaseName")
  .option("spark.mongodb.collection", "collectionName")
  .option("writeConcern", "majority")
  .mode("append")
  .save()
  1. toWrite.write: This creates a DataFrameWriter object that we can use to write data from our DataFrame.
  2. .format("mongodb"): This specifies that we want to write the data to a MongoDB database.
  3. .option("spark.mongodb.connection.uri", "hostname"): This specifies the connection string for the MongoDB instance we want to write to.
  4. .option("spark.mongodb.database", "databaseName"): This specifies the name of the database we want to write to.
  5. .option("spark.mongodb.collection", "collectionName"): This specifies the name of the collection we want to write to.
  6. .option("writeConcern", "majority"): This option specifies the level of acknowledgement that should be received from the MongoDB server when writing data. It lets you customize the durability and consistency guarantees for your writes. For example, you can require that a write is only considered successful once it has been replicated to a certain number of nodes or journaled on the primary. The default write concern is majority.
  7. .mode("append"): This specifies the write mode. In this case, we're using append mode, which means the new data is appended to the existing data in the collection. Other write modes such as overwrite or ignore can be used depending on the behavior you want (a short overwrite example follows this list).
  • overwrite: This mode replaces any existing data in the collection with the new data from the DataFrame.
  • ignore: This mode does not write the data if the collection already contains data; the existing data is left unchanged.
  8. .save(): This saves the data from the DataFrame to the specified MongoDB collection.
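For completeness, here is a minimal sketch of the same write using overwrite mode; the connection string, database, and collection names are placeholders, and toWrite is the DataFrame from the snippet above:

// Overwrite mode: replaces the existing data in the target collection
// with the contents of the DataFrame.
toWrite.write
  .format("mongodb")
  .option("spark.mongodb.connection.uri", "hostname")
  .option("spark.mongodb.database", "databaseName")
  .option("spark.mongodb.collection", "collectionName")
  .mode("overwrite")
  .save()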

The MongoDB Spark Connector provides several write options that can be used to further customize the behavior when writing data from a DataFrame to a MongoDB collection.

Sharded Writes using idFieldList

The idFieldList option in the MongoDB Spark Connector specifies which field or fields identify a document when writing data to MongoDB. When the target collection is sharded, this list should include the shard key fields so that the connector can match and route writes correctly.

In MongoDB, sharding is a technique used to partition data across multiple servers, called shards, in order to distribute the workload and handle large amounts of data. In order for the data to be properly distributed across the shards, a shard key must be defined. The shard key is a field or set of fields in the data that determines the distribution of the data across the shards.

When using the MongoDB Spark Connector to write data to a sharded cluster, set the idFieldList option to the fields that make up the shard key (together with _id where appropriate).

This option takes a comma-separated list of field names and defaults to the _id field, which is a unique identifier for each document in a MongoDB collection. Other fields can be included as well, depending on the specific data being stored and how the collection is sharded.

By specifying the idFieldList option when writing data to a sharded cluster, the connector can match and route documents based on the shard key, which can improve performance and scalability.
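A minimal sketch of a write that sets idFieldList, assuming the target collection is sharded on a compound key of region and customerId (both field names are hypothetical placeholders):

// idFieldList tells the connector which fields identify a document,
// so writes to the sharded collection can be matched and routed correctly.
toWrite.write
  .format("mongodb")
  .option("spark.mongodb.connection.uri", "hostname")
  .option("spark.mongodb.database", "databaseName")
  .option("spark.mongodb.collection", "collectionName")
  .option("idFieldList", "region,customerId")
  .mode("append")
  .save()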

Conclusion

Overall, the MongoDB Spark Connector provides several key benefits, including:

  • Flexibility and scalability: The connector is highly flexible and scalable, allowing you to easily process and analyze data across multiple nodes or clusters.
  • Integration with MongoDB features: The connector provides seamless integration with MongoDB features, such as sharding and aggregation, allowing you to take advantage of the full power of the MongoDB platform.
  • Simplified development: The connector provides a simple and intuitive API that allows you to easily read and write data from Spark to MongoDB, without having to worry about complex data transformations or data loading operations.

Happy Coding!!


Rajesh Vinayagam
Distinguished Architect
