A tool for provisioning and managing Apache Hadoop clusters in the cloud.
We wanted to test different ways to deploy and use Hadoop clusters in a private/hybrid cloud.
The main idea is to create blueprints for various services and produce consistent, measurable, and repeatable deployments.
- Cisco UCS Mini – 2 blades
- NetApp NFS 1TB share
- Red Hat OpenStack Platform 8 [1x controller][1x compute] running RHEL 7.2
- Modified for Keystone v3 authentication and multi-tenancy
- OSP Director used for undercloud/overcloud automation
- Cloudbreak deployer VM inside the cloudbreak project, running in the HIGHVAIL domain context.
- Deploy a simple Hadoop service on 2–3 nodes.
- Scale, recreate …
Cloudbreak [upstream project: Apache Ambari], as part of the Hortonworks Data Platform, makes it easy to provision, configure and elastically grow HDP clusters on cloud infrastructure. Cloudbreak can provision Hadoop across cloud infrastructure providers including Amazon Web Services, Microsoft Azure, Google Cloud Platform and OpenStack.
Create HDP in 3 easy steps:
- Pick a blueprint
- Choose a cloud
- Launch HDP
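To make step one concrete, here is a minimal sketch of what a blueprint looks like. It follows the Ambari blueprint JSON structure (a `Blueprints` stack section plus `host_groups` with component lists); the blueprint name, stack version and the exact component mix are illustrative assumptions for a small 2–3 node lab cluster like ours, not a tested production layout:

```json
{
  "Blueprints": {
    "blueprint_name": "hdp-small",
    "stack_name": "HDP",
    "stack_version": "2.4"
  },
  "host_groups": [
    {
      "name": "master",
      "cardinality": "1",
      "components": [
        { "name": "NAMENODE" },
        { "name": "RESOURCEMANAGER" },
        { "name": "ZOOKEEPER_SERVER" }
      ]
    },
    {
      "name": "worker",
      "cardinality": "1-2",
      "components": [
        { "name": "DATANODE" },
        { "name": "NODEMANAGER" }
      ]
    }
  ]
}
```

Because the host groups carry a cardinality rather than fixed hostnames, the same blueprint can be launched at different sizes on any of the supported clouds.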
Cloudbreak is built on the foundation of cloud providers' APIs (Amazon Web Services, Microsoft Azure, Google Cloud Platform, OpenStack), Apache Ambari, Docker containers, Swarm and Consul. Cloudbreak uses Docker container technology to deploy clusters in a cloud-agnostic way.
More on Blueprints
A blueprint captures the architectural view of the services and their ecosystem.
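Before handing a blueprint to Cloudbreak, it is cheap to sanity-check the document locally. The short script below is a hypothetical helper (not part of Cloudbreak or Ambari); it only verifies the top-level fields an Ambari blueprint is expected to carry, the stack identification and at least one host group with components:

```python
"""Sanity-check an Ambari blueprint document before registering it.

Hypothetical helper, not part of Cloudbreak or Ambari: it only checks
the top-level structure of a blueprint JSON document.
"""
import json

REQUIRED_STACK_KEYS = {"stack_name", "stack_version"}


def validate_blueprint(doc: dict) -> list:
    """Return the host-group names if the blueprint looks well formed."""
    stack = doc.get("Blueprints", {})
    missing = REQUIRED_STACK_KEYS - stack.keys()
    if missing:
        raise ValueError("Blueprints section is missing: %s" % sorted(missing))
    groups = doc.get("host_groups", [])
    if not groups:
        raise ValueError("blueprint defines no host_groups")
    names = []
    for group in groups:
        if "name" not in group or not group.get("components"):
            raise ValueError("each host_group needs a name and components")
        names.append(group["name"])
    return names


if __name__ == "__main__":
    doc = json.loads("""
    {"Blueprints": {"stack_name": "HDP", "stack_version": "2.4"},
     "host_groups": [
       {"name": "master", "cardinality": "1",
        "components": [{"name": "NAMENODE"}]},
       {"name": "worker", "cardinality": "1-2",
        "components": [{"name": "DATANODE"}]}]}
    """)
    print(validate_blueprint(doc))  # ['master', 'worker']
```

Catching a missing `stack_version` or an empty host group locally is much faster than waiting for a failed deployment to surface the same mistake.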
The next post will focus on Cloudbreak setup and installation.
Our lab is ready. If you want us to try your blueprint or service, contact me or come and visit us!
Apache, Hadoop, Falcon, Atlas, Tez, Sqoop, Flume, Kafka, Pig, Hive, HBase, Accumulo, Storm, Solr, Spark, Ranger, Knox, Ambari, ZooKeeper, Oozie, Metron and the Hadoop elephant and Apache project logos are either registered trademarks or trademarks of the Apache Software Foundation in the United States or other countries.