Software components in vRealize Automation allow Bash/PowerShell scripts to be executed at various stages of the software lifecycle. The lifecycle states are:

  • Install – used to initially deploy and install RPMs
  • Configure – used to configure the software within the guest
  • Start – Start any services
  • Update – Update services when a scale in/out operation executes
  • Uninstall – Uninstall services – great for licence removal.
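To make the lifecycle concrete, here is a minimal sketch of what an Install-phase script might look like. In a real vRA software component the property values (such as repoFile and content, described further down) are injected into the script as shell variables before it runs; the default values below are hypothetical stand-ins so the sketch can run on its own.

```shell
#!/bin/bash
# Hypothetical Install-phase sketch. In vRA, property values arrive as
# shell variables; the defaults below are stand-ins for a dry run.
set -e
repoFile="${repoFile:-/tmp/mongodb-org-3.4.repo}"
content="${content:-[mongodb-org-3.4]\nname=MongoDB Repository}"

# Write the yum repository definition that the download step relies on.
printf '%b\n' "$content" > "$repoFile"
echo "wrote repo definition to $repoFile"
# yum install -y mongodb-org   # the actual install; commented for a dry run
```

The Configure, Start, Update, and Uninstall states each get their own script in the same style.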

This post will show you how to deploy a Mongo DB cluster running on vSphere using vRealize Automation.


The sample is available on my git page and can be imported using CloudClient (I love how this tool has taken off, as my team built it too):

vra login userpass --server vra_server --user vra_user --password vra_password --tenant vra_tenant
vra content import --path /path/to/MongoCluster-composite-blueprint.zip

The Software Component

Mongo-3.4 Software


  • The container tells the software which component it can be placed on in the design service, in this case a machine

Mongo-3.4 Properties


  • mongoAdminPass – Mongo password for administrative usage
  • replicaSetName – Mongo replica set name
  • mongoHostConf – File used to store all node names in the guest OS
  • domainName – domain of the machines being deployed
  • repoFile – yum configuration variable for mongo download
  • sshPassRpmUrl – RPM URL for SSHPASS command
  • mongoAdminUser – Mongo Admin username
  • mongoVmNames – Array of machine names being deployed
  • content – content used to write into the repoFile for mongo download information
  • rootPass – root password required to copy keyfiles to all nodes
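The properties above are exposed to the component's scripts as shell variables. As an illustrative sketch (not the post's actual script), this is how mongoVmNames and domainName might be combined to populate the mongoHostConf file with every node's FQDN; the node names, domain, and file path below are hypothetical stand-ins.

```shell
#!/bin/bash
# Sketch: combining the component's properties inside the guest.
# These values are hypothetical test stand-ins; in vRA they arrive
# pre-populated from the component's properties.
mongoVmNames=("mongo-01" "mongo-02" "mongo-03")
domainName="corp.local"
mongoHostConf="/tmp/mongo_hosts.conf"

# Record every node's FQDN so later steps can iterate over the cluster.
: > "$mongoHostConf"
for vm in "${mongoVmNames[@]}"; do
    echo "${vm}.${domainName}" >> "$mongoHostConf"
done
cat "$mongoHostConf"
```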

Mongo-3.4 Actions


This is the bash script used to download and configure Mongo.

The flow:

  1. Identify the initial primary node, since some commands should run on only one node. The primary is the first entry in the $mongoVmNames array, which is populated by the composite blueprint with all of the dynamically generated machine names
  2. Update mongod.conf to listen on all IP addresses rather than the default localhost
  3. Start mongod on all nodes
  4. Download SSHPASS command for key exchange (on all nodes)
  5. On the primary node create a keyfile and set correct ownership and permissions, copy this file to all nodes in the /etc folder
  6. On the primary node run rs.initiate(), which creates an initial replication configuration
  7. On the primary node wait for all nodes to be resolved in DNS (in a while loop)
  8. On the primary node add each replica/slave node to the replication configuration
  9. On the primary node create an admin user that can manage the database (test and admin)
  10. Stop mongod
  11. On all nodes update /etc/mongod.conf (under the security YAML key) to enable internal authentication (so that the cluster nodes can communicate with each other) and to enforce user authentication.
  12. All output is written to /tmp/mongo_install.out as well as the /tmp/[software_guid]/task.stdlog files
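The primary-only steps of the flow can be sketched as follows. This is a condensed illustration of steps 1 and 6–8, not the post's actual script: the node names and domain are hypothetical stand-ins, and the mongo shell calls are left as comments so the sketch runs without a live cluster.

```shell
#!/bin/bash
# Condensed sketch of steps 1 and 6-8 above. Node names and domain are
# hypothetical; mongo shell calls are shown as comments.
mongoVmNames=("mongo-01" "mongo-02" "mongo-03")
domainName="corp.local"

# Step 1: the first array entry is treated as the initial primary.
primary="${mongoVmNames[0]}"

if [ "$(hostname -s)" = "$primary" ]; then
    # Step 6: initiate the replica set on the primary only.
    # mongo --eval 'rs.initiate()'
    for vm in "${mongoVmNames[@]}"; do
        # Step 7: block until the node resolves in DNS.
        until getent hosts "${vm}.${domainName}" > /dev/null; do
            sleep 5
        done
        # Step 8: add each node to the replication configuration.
        # mongo --eval "rs.add('${vm}.${domainName}:27017')"
        :
    done
fi
echo "initial primary: $primary"
```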


Auto-configured /etc/mongod.conf file

  systemLog:
    destination: file
    logAppend: true
    path: /var/log/mongodb/mongod.log

  # Where and how to store data.
  storage:
    dbPath: /var/lib/mongo
    journal:
      enabled: true

  processManagement:
    fork: true  # fork and run in background
    pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile

  # network interfaces
  net:
    port: 27017
    bindIp: 0.0.0.0  # listen on all IP addresses, not the default localhost

  security:
    authorization: "enabled"
    keyFile: "/etc/mongod.key"

  replication:
    replSetName: rs0

The Composite Blueprint


The blueprint shows a cluster (which allows scale-out) with a minimum of three nodes for quorum.


The most interesting property here is mongoVmNames: it is an array bound to _resource~mongo_cluster~MachineName, which means the names of all deployed machines are added to the array.

Request the blueprint

Provide vSphere machine configuration



Provide mongo configuration


Hope this is useful and feel free to ask any questions.