Ansible Refactoring Part 1 - Setting up your Control Node

Part 1 - Getting Started

This is Part 1 of an Ansible Refactoring Series - Start Here

To follow along, I assume you have access to a lab environment suitable for the task. In July 2020 I will add a Vagrantfile to simplify running this locally on your laptop. Meanwhile, the environment can be deployed via the brand new Ansible Multitier Infra Config, assuming you have AWS or OpenStack credentials.

ansible-playbook main.yml -e @configs/ansible-multitier-infra/sample_vars_ec2.yml -e @~/secrets/aws-gpte.yml -e guid=refactor

For Red Hatters and Partners this is available via CloudForms at OpenTLC Catalogs as OPENTLC Automation → Ansible Advanced NG - Multi Tier Babylon. You will receive 3 emails during the provisioning process, the last of which will contain your ssh login details.

Your Environment

The lab environment consists of four servers and a control node.

image::/images/ -topology.png[role="thumb center",width=80%]

|===
|Server |Role |Ports (TCP) |Software |Ansible Group |Purpose

|control |Control Node |22 |Ansible |NA |Ansible Control Node
|frontend1 |Load Balancer |22, 80, 443 |HAProxy |load_balancers |Load balances across App Tier
|app1, app2 |Application Servers |22, 8080 |Python Flask |app_servers |Webserver and API (Python/Flask)
|appdb1 |Database Server |22, 5432 |PostgreSQL |database_servers |Back end database for Flask application
|===

Note
Only control and frontend1 are exposed to the Internet

All nodes, today, run Red Hat Enterprise Linux 7.7, though CentOS 7.7 could be used in a homelab situation. At this point the machines do not have their respective payloads installed or configured, but they are set up for ssh access.

You will work on, and run Ansible from, the Ansible Control Node control.

Optional (if using password based ssh)

The new labs we are deploying with AgnosticD have evolved to using randomly generated ssh passwords rather than keys. However, if you are like me and want to avoid the hassle of managing passwords, I recommend injecting your own ssh public key.

  1. Inject your own ssh key

    laptop $ ssh-copy-id -i <PATH-TO-PUBLIC-KEY> <YOUR-REMOTE-USER>@control.<GUID>.example.opentlc.com
    Example - Customize the above command as necessary
    laptop $ ssh-copy-id -i ~/.ssh/id_rsa.pub tok@control.3tier-01.example.opentlc.com
    Sample Output
    ssh-copy-id -i ~/.ssh/id_rsa.pub tok@control.ntier-infra-01.example.opentlc.com
    /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/Users/tok/.ssh/id_rsa.pub"
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    tok@control.ntier-infra-01.example.opentlc.com's password:
    
    Number of key(s) added:        1
    
    Now try logging into the machine, with:   "ssh 'tok@control.ntier-infra-01.example.opentlc.com'"
    and check to make sure that only the key(s) you wanted were added.


Connect to your control node

  1. ssh in, with either your username/password or your key

     laptop $ ssh <YOUR-REMOTE-USER>@control.<GUID>.example.opentlc.com # optional: -i <PATH-TO-PRIVATE-KEY>
    Note
    You can simplify your ssh setup by editing your local (laptop) ~/.ssh/config file. This would allow you to simply type ssh control, using whatever alias makes sense to you for the Host line below. For example:
    Host control
      HostName control.foo.example.opentlc.com
      User <MY-REMOTE-USER-NAME>
      IdentityFile ~/.ssh/<MY-REMOTE-USER-KEY>
      StrictHostKeyChecking no
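    With a stanza like that in place (substituting your real GUID in the HostName), the login collapses to:

    laptop $ ssh control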
  2. Switch, via the sudo command, to your ansible service account devops

    Note
    This is a common pattern on control nodes: you log in as yourself and then switch to the service account to perform ansible and other management tasks. It is a poor, and risky, practice to work as root. It would also be possible to add your key to the devops user's ~/.ssh/authorized_keys, but again this is poor practice.
    sudo su - devops
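    You can confirm the switch worked:

    whoami
    Sample Output
    devops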

Explore your environment

  1. Check that your basic toolchain is in place. At a minimum you will need ansible and git

    type git ansible
    Sample Output
    git is /bin/git
    ansible is /bin/ansible
  2. Install any preferred additional tools, utilities, and configuration you like to have

    If you expect to spend any significant amount of time working on a host, it is recommended that you spend a few moments customizing your working environment. Possible steps include:

    • Customize your ~/.bashrc or equivalent

    • Customize your ~/.vimrc or equivalent

    • Install useful and/or favorite tools (see the sketch after this list), e.g.:

      • vim/emacs/nano

      • curl

      • telnet # useful for debugging services

      • jq # if you expect to be working with JSON etc

      • tree

    • Other environment optimizations
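    For example, on RHEL/CentOS 7 a single yum command covers most of the list above. This is a sketch: it assumes your user has sudo rights, and jq typically requires the EPEL repository to be enabled.

    sudo yum -y install vim-enhanced curl telnet jq tree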

Tip

Whilst not widely used, here is a use case for the ansible-pull command, which allows you to pull down an external playbook and execute it locally. For example, create a playbook in a git repo that customizes a typical Linux environment, installs favorite/useful packages, and customizes key files such as ~/.bashrc and (for me) ~/.zshrc:

ansible-pull -U https://github.com/tonykay/frank.git main.yml -i localhost,
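For illustration, the main.yml in such a repo might look like the sketch below. This is hypothetical content, not the actual playbook from the repo above; the package list and the files/bashrc source are placeholders to adapt.

---
# Hypothetical personalization playbook for ansible-pull
- name: Personalize this host
  hosts: all
  connection: local

  tasks:
    - name: Install favorite packages (assumes sudo rights)
      become: true
      package:
        name:
          - vim-enhanced
          - tree
          - jq
        state: present

    - name: Drop in a custom ~/.bashrc shipped in the repo
      copy:
        src: files/bashrc
        dest: ~/.bashrc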

Now that your toolchain is in place and optimized, move on to exploring your ansible setup.

Check your Ansible configuration and setup

Typical tasks when working with a new, or unfamiliar, control node include:

  • Check ansible version

  • Identify, and examine, your ansible.cfg

  • Explore your inventory

  • Verify your ssh setup and configuration

    1. Check ansible version

      ansible --version
      Sample Output
      ansible 2.9.10
        config file = /etc/ansible/ansible.cfg
        configured module search path = [u'/home/devops/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
        ansible python module location = /usr/lib/python2.7/site-packages/ansible
        executable location = /bin/ansible
        python version = 2.7.5 (default, Sep 26 2019, 13:23:47) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
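      The config file line above tells you which ansible.cfg is in effect; Ansible uses the first one found of ANSIBLE_CONFIG, ./ansible.cfg, ~/.ansible.cfg, and /etc/ansible/ansible.cfg. To list only the settings that differ from the built-in defaults (output varies with your configuration):

      ansible-config dump --only-changed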
    2. Explore your inventory

      cat /etc/ansible/hosts
      Sample Output
      [load_balancers]
      frontend1.ntier-infra-01.internal
      
      [app_servers]
      app2.ntier-infra-01.internal
      app1.ntier-infra-01.internal
      
      [database_servers]
      appdb1.ntier-infra-01.internal
      
      [ntierapp:children]
      load_balancers
      app_servers
      database_servers
      
      [ntierapp:vars]
      timeout=60
      ansible_user=ec2-user
      ansible_ssh_private_key_file="~/.ssh/ntier-infra-01key.pem"
      ansible_ssh_common_args="-o StrictHostKeyChecking=no"
      Tip

      ansible-inventory is a useful utility for exploring, and visualizing, your inventory.

      Table 1. ansible-inventory options

      |===
      |Option |Function

      |--graph |Create an inventory graph; also accepts --graph <GROUP>
      |--vars |Adds vars to --graph output only
      |--host |Show variables for a specific host
      |-i |Specify an alternative inventory source
      |===

      ansible-inventory --graph --vars
      Sample Output
      @all:
        |--@ntierapp:
        |  |--@app_servers:
        |  |  |--app1.ntier-infra-01.internal
        |  |  |  |--{ansible_ssh_common_args = -o StrictHostKeyChecking=no}
        |  |  |  |--{ansible_ssh_private_key_file = ~/.ssh/ntier-infra-01key.pem}
        |  |  |  |--{ansible_user = ec2-user}
        |  |  |  |--{timeout = 60}
        |  |  |--app2.ntier-infra-01.internal
        |  |  |  |--{ansible_ssh_common_args = -o StrictHostKeyChecking=no}
        |  |  |  |--{ansible_ssh_private_key_file = ~/.ssh/ntier-infra-01key.pem}
        |  |  |  |--{ansible_user = ec2-user}
        |  |  |  |--{timeout = 60}
        |  |--@database_servers:
        |  |  |--appdb1.ntier-infra-01.internal
        |  |  |  |--{ansible_ssh_common_args = -o StrictHostKeyChecking=no}
        |  |  |  |--{ansible_ssh_private_key_file = ~/.ssh/ntier-infra-01key.pem}
        |  |  |  |--{ansible_user = ec2-user}
        |  |  |  |--{timeout = 60}
        |  |--@load_balancers:
        |  |  |--frontend1.ntier-infra-01.internal
        |  |  |  |--{ansible_ssh_common_args = -o StrictHostKeyChecking=no}
        |  |  |  |--{ansible_ssh_private_key_file = ~/.ssh/ntier-infra-01key.pem}
        |  |  |  |--{ansible_user = ec2-user}
        |  |  |  |--{timeout = 60}
        |  |--{ansible_ssh_common_args = -o StrictHostKeyChecking=no}
        |  |--{ansible_ssh_private_key_file = ~/.ssh/ntier-infra-01key.pem}
        |  |--{ansible_user = ec2-user}
        |  |--{timeout = 60}
        |--@ungrouped:
      Note
      You can also list an inventory group's hosts with the ansible command ansible <GROUP_NAME> --list-hosts, e.g. ansible all --list-hosts. The -i option allows you to specify an alternative inventory, including a directory, dynamic inventory script, or plugin.
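      You can also zoom in on a single host's effective variables. The output below is reconstructed from the group vars shown above; your values may differ.

      ansible-inventory --host app1.ntier-infra-01.internal
      Sample Output
      {
          "ansible_ssh_common_args": "-o StrictHostKeyChecking=no",
          "ansible_ssh_private_key_file": "~/.ssh/ntier-infra-01key.pem",
          "ansible_user": "ec2-user",
          "timeout": 60
      }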
    3. Finally, verify basic ssh connectivity to show that your ssh configuration is valid and all necessary users and keys are set up.

      ansible all -m ping
      Sample Output
      Sunday 19 July 2020  12:20:13 +0000 (0:00:00.054)       0:00:00.054 ***********
      frontend1.3tier-01.internal | SUCCESS => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/bin/python"
          },
          "changed": false,
          "ping": "pong"
      }
      app2.3tier-01.internal | SUCCESS => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/bin/python"
          },
          "changed": false,
          "ping": "pong"
      }
      appdb1.3tier-01.internal | SUCCESS => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/bin/python"
          },
          "changed": false,
          "ping": "pong"
      }
      app1.3tier-01.internal | SUCCESS => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/bin/python"
          },
          "changed": false,
          "ping": "pong"
      }
      Sunday 19 July 2020  12:20:14 +0000 (0:00:01.069)       0:00:01.124 ***********
      ===============================================================================
      ping -------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.07s
      Playbook run took 0 days, 0 hours, 0 minutes, 1 seconds
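      With connectivity verified, the same ad-hoc pattern works for any module against any group, for example (output omitted):

      ansible app_servers -m command -a 'uptime'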

Next Steps

Now your environment is fully configured and ready to run. Move on to Part 2: First Deploy