Cmpute.io is now a part of Cisco.

Hadoop Group Resource

Create a new Terraform template

Configure the batchly provider

                provider "batchly" {
                    tenant_url = "${var.tenant_url}"
                    api_key    = "${var.api_key}"
                    secret_key = "${var.secret_key}"
                }

For example, tenant_url could be customer.batchly.net or customer.cmpute.io.
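The provider block above reads its credentials from three input variables. A minimal variables file could declare them as follows (the variable names come from the block above; the descriptions are illustrative):

                # variables.tf -- declarations for the provider inputs above
                variable "tenant_url" {
                    description = "Your batchly tenant URL, e.g. customer.batchly.net"
                }

                variable "api_key" {
                    description = "API key for the batchly account"
                }

                variable "secret_key" {
                    description = "Secret key paired with the API key"
                }

Declaring the variables rather than hard-coding the keys keeps credentials out of version control; values can then be supplied at plan/apply time.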

Provides a batchly Hadoop group resource.

Example Usage

                # Create a batchly Hadoop group

                resource "batchly_aws_hadoop_group" "workers" {
                    name                = "app_name"
                    account_resource_id = "A-XXXXXXX"
                    region              = "us-east-1"
                    cluster_id          = "cluster id"
                    jar_location        = "s3 location of jar"
                    action_on_failure   = "Continue"
                    instanceTypes       = ["c1.medium", "c4.large"]
                    arguments           = "arg1"
                    vpc_id              = "vpc-xxx"
                    subnets             = ["s-xxxxxxx", "s-yyyyyyy"]
                }

Argument Reference

  • name - (Required) Name of the Hadoop job to be created in batchly.

  • region - (Required) AWS region.

  • account_resource_id - (Required) The resource ID created while adding an account in batchly.

  • cluster_id - (Required) ID of the cluster on which the job runs.

  • jar_location - (Required) S3 location of the JAR to be processed.

  • action_on_failure - (Required) Action to take if the processing fails. Allowed values are Continue, Wait and cancel, and Terminate cluster.

  • instanceTypes - (Required) List of instance types to be launched for processing.

  • arguments - (Optional) Arguments to be passed to the Hadoop job.

  • vpc_id - (Optional) ID of the VPC in which to launch the instances.

  • subnets - (Optional) List of subnets in which to launch the instances.
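Putting the pieces together, the provider credentials can be supplied through a terraform.tfvars file alongside the template (the filename follows Terraform convention; all values below are placeholders, not real credentials):

                # terraform.tfvars -- placeholder values, replace with your own
                tenant_url = "customer.batchly.net"
                api_key    = "your-api-key"
                secret_key = "your-secret-key"

Terraform loads terraform.tfvars automatically, so a plain terraform plan followed by terraform apply will create the Hadoop group with the values above.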