by Mike O’Brien, MongoDB Kernel Tools Lead and maintainer of Mongo-Hadoop, the Hadoop Adapter for MongoDB
Hadoop is a powerful, JVM-based platform for running Map/Reduce jobs on clusters of many machines, and it excels at doing analytics and processing tasks on very large data sets.
Since MongoDB excels at storing large operational data sets for applications, it makes sense to use the two together: MongoDB for storage and querying, and Hadoop for batch processing.
We recently released version 1.1 of the MongoDB Connector for Hadoop. The connector makes it easy to use MongoDB databases, or MongoDB backup files in .bson format, as the input source or output destination for Hadoop Map/Reduce jobs. By inspecting the data and computing input splits, the connector lets Hadoop process the data in parallel, so very large datasets can be processed quickly.
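To make this concrete, here is a minimal sketch of a job wired to MongoDB on both ends through the connector's MongoInputFormat, MongoOutputFormat, and MongoConfigUtil classes. The URIs, the demo.events and demo.status_counts collections, and the status field are hypothetical placeholders, not anything from the release itself:

```java
// Minimal sketch: count documents per "status" value in one MongoDB
// collection and write the tallies to another. The URIs, collection
// names and the "status" field are hypothetical placeholders.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.bson.BSONObject;

import com.mongodb.hadoop.MongoInputFormat;
import com.mongodb.hadoop.MongoOutputFormat;
import com.mongodb.hadoop.util.MongoConfigUtil;

public class StatusCount {

    // Each input record arrives as a deserialized BSON document.
    public static class StatusMapper
            extends Mapper<Object, BSONObject, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(Object key, BSONObject value, Context ctx)
                throws IOException, InterruptedException {
            Object status = value.get("status");  // hypothetical field
            if (status != null) {
                ctx.write(new Text(status.toString()), ONE);
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            ctx.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Read from one collection and write the results to another.
        MongoConfigUtil.setInputURI(conf, "mongodb://localhost:27017/demo.events");
        MongoConfigUtil.setOutputURI(conf, "mongodb://localhost:27017/demo.status_counts");

        Job job = new Job(conf, "status count");
        job.setJarByClass(StatusCount.class);
        job.setMapperClass(StatusMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setInputFormatClass(MongoInputFormat.class);
        job.setOutputFormatClass(MongoOutputFormat.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

When a job like this runs, the connector computes the input splits from the source collection, so mappers work on disjoint ranges of documents in parallel.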
The MongoDB Connector for Hadoop also includes support for Pig and Hive, which allow sophisticated MapReduce workflows to be executed just by writing simple scripts.
Hadoop streaming is also supported, so map/reduce functions can be written in languages other than Java. Right now the connector supports streaming in Ruby, Node.js and Python.
I’ll be giving an hour-long webinar on What’s New with the Mongo-Hadoop integration. The webinar will cover:
- How the Hadoop connector works
- Using MongoDB and Hadoop with Elastic MapReduce to easily kick off your Hadoop jobs
- An overview of MongoUpdateWritable: using the result output from Hadoop to modify an existing output collection (see the sketch after this list)
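To illustrate the MongoUpdateWritable item above: rather than emitting whole documents, a reducer can emit update operations that the connector applies to documents already in the output collection. This is a hedged sketch only; the four-argument constructor (query, modifier, upsert, multiUpdate) and the _id/count fields are assumptions about the 1.x connector for illustration, not confirmed API:

```java
// Hedged sketch: emit MongoDB update operations from a reducer so the
// job's results modify an existing output collection in place.
// The constructor arguments (query, modifier, upsert, multiUpdate) and
// the "_id"/"count" fields are assumptions for illustration.
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.bson.BasicBSONObject;

import com.mongodb.hadoop.io.MongoUpdateWritable;

public class IncrementReducer
        extends Reducer<Text, IntWritable, NullWritable, MongoUpdateWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        // Match on _id and $inc a counter, upserting if the document
        // does not exist yet, instead of overwriting it.
        BasicBSONObject query = new BasicBSONObject("_id", key.toString());
        BasicBSONObject modifier = new BasicBSONObject("$inc",
                new BasicBSONObject("count", sum));
        ctx.write(NullWritable.get(),
                new MongoUpdateWritable(query, modifier, true, false));
    }
}
```

Configured with MongoOutputFormat as in the earlier sketch, each write then becomes an update against the output collection rather than an insert.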
The webinar will be offered twice on August 8.
Register for the Webinar on August 8
Update: Watch the webinar recording