Ansible and Docker - the Perfect Duo for the Best Software Product Management

Docker, the well-known Linux-based platform for building and deploying applications in containers, works incredibly well with Ansible. Ansible is an automation tool that aims to remove the drudgery from everyday IT tasks such as configuration management, application deployment and intra-service orchestration. It acts primarily as a configuration management and application deployment system, though it can be used in many other ways, and developers usually group it with configuration management tools like Puppet, SaltStack and Chef. It increases productivity by simplifying complex tasks so that developers can concentrate on other value-added work.

Before going into the finer details of Ansible, let's look at how Docker can work with Ansible. You are probably already familiar with Docker, an open source containerization platform that automates application deployment within software containers. The main reason developers prefer Docker is that it runs several apps side by side in containers, thereby improving compute density.

Ansible modules are driven by playbooks that handle configuration and deployment on remote machines. Playbooks are easier to understand if you think of Ansible modules as tools and playbooks as the instruction manuals for using them. Ansible helps you automate Docker and operationalize the container build and deployment process. When you automate your Docker tooling with Ansible, you enjoy a host of advantages such as portability, auditability and management of entire environments. Before going into detail about how the combination of Ansible and Docker can deliver the best results in software product management, let's take a look at Ansible, its main features and use cases.

What is Ansible?

Ansible was created by Michael DeHaan and first released in 2012. It is written in Python and PowerShell and runs on Linux, Unix-like and Windows operating systems. As mentioned before, Ansible is a simple IT automation engine designed for multi-tier deployments, and it is a good fit for your IT infrastructure because it models how your systems inter-relate rather than managing just one system at a time. Ansible uses a simple language, YAML (in the form of Ansible playbooks), to let developers describe automation tasks. Ansible works by connecting to your nodes and pushing out small programs known as Ansible modules. These programs act as resource models of the desired system state; they run for the duration of the task and are removed when it is finished. The module library can reside on any machine, with no servers, daemons or databases required. You can track changes to your automation content with a version control system. By default, Ansible uses SSH keys rather than passwords. With its agentless architecture, Ansible uses OpenSSH and WinRM for secure network connections, encrypting network traffic and providing safe tunneling capabilities and various authentication methods. OpenSSH, also known as OpenBSD Secure Shell, is based on the Secure Shell (SSH) protocol and is a premier connectivity tool with which developers can log in to remote machines over SSH.
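To make the YAML-and-SSH workflow concrete, here is a minimal sketch of what a playbook looks like. The host group "webservers" and the nginx package are illustrative placeholders, not taken from any example later in this article; Ansible pushes the corresponding modules to those hosts over SSH, runs them and removes them when the tasks finish.

---
# Minimal illustrative playbook (placeholder names, not from the article)
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: yes

Running it is a single ansible-playbook command; no agent ever needs to be installed on the managed machines.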

Why Use Ansible?

Ansible is an indispensable tool: it can automate your network and it can automate your application deployment, and those are the two main reasons for using it. Working in an IT environment means doing the same tasks repeatedly, and that can turn work into drudgery. With an open source tool as powerful as Ansible, this problem is solved for good. It banishes complexity from your work environment and accelerates DevOps initiatives successfully. Ansible also has a clear advantage over other automation tools like Puppet, Chef and SaltStack. Ansible is agentless - meaning no daemons or agents are needed to run a particular action. It automates configuration by pushing commands over SSH. It is therefore a push model: no additional installs are required at the endpoints. Because the other tools are not agentless, managing remote servers with them means installing a separate agent on each remote machine. Have a look at some of the benefits of using Ansible:

1. Agentless Architecture

Having an agentless architecture like Ansible's is very important in network automation. Many of the devices in a network do not expose an API (Application Programming Interface), which is what normally makes devices easy to automate and manage; an agentless tool can still reach them. Apart from delivering large productivity gains, being agentless lets Ansible address a wide range of IT challenges, even the most complex ones. As opposed to agent-based management tools that require firewall configuration, Ansible works agentlessly over WinRM and SSH, because all connections are made over pre-configured remote-access services. Operations are executed very efficiently: connections can be reused and network traffic is kept to a minimum. The agentless approach leverages SSH and Windows Remote Management by transferring auto-generated modules to the remote machines, where they execute and then remove themselves. This eliminates the need to log in and run commands on each machine by hand. Compared with an agent-based architecture, the agentless architecture emerges the clear winner because it is more powerful, more secure and better suited to today's needs. One of its major advantages is that needless chatter is avoided: communication is initiated only when there is work to be performed. Agentless management is also very useful in the Internet of Things, especially where it is not possible to install proprietary agents (which is what agent-based tooling mostly relies on).
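As a small, hedged illustration of what "agentless" means in practice, here is a sketch of a YAML inventory file; the group names, host names and the deploy user are invented for this example. Ansible only needs SSH (or WinRM) reachability to these machines - no agent or daemon is installed on them.

---
# Illustrative inventory (placeholder hosts): plain machines reachable over SSH
all:
  children:
    webservers:
      hosts:
        web1.example.com:
        web2.example.com:
    dbservers:
      hosts:
        db1.example.com:
          ansible_user: deploy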

2. Usability across entire IT teams

Ansible empowers users across the entire team to enjoy extended automation, with the help of the YAML language. This is quite different from other tools that need pre- and post-steps around their modules. Whichever team of the IT organization you belong to - development or production - with Ansible you can automate your IT infrastructure, and even your DevOps tooling, in one common language.

3. Unifies Application and OS Configurations

Ansible can solve major IT challenges by unifying application software and OS configuration under a single banner. This reduces the errors that used to creep into IT configuration changes. With Ansible, you can define a process that creates the base OS on different machines and successfully drives them toward the desired state.
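As a hedged sketch of what "one play, OS plus application" can look like, here is an illustrative playbook; the host group, sysctl key, template and service names are placeholders, not part of the original article.

---
# Illustrative sketch only: placeholder names throughout
- hosts: app_servers
  become: yes
  tasks:
    - name: Set a kernel parameter (OS-level configuration)
      sysctl:
        name: vm.swappiness
        value: '10'
        state: present

    - name: Deploy application configuration from a template
      template:
        src: app.conf.j2
        dest: /etc/myapp/app.conf
      notify: Restart myapp

  handlers:
    - name: Restart myapp
      service:
        name: myapp
        state: restarted

Because both layers live in the same playbook, a single run brings the OS and the application into a consistent, known state.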

4. Highly Secure and Reliable

Ansible is extremely reliable and secure, making it a popular choice among developers, IT managers and administrators. Ansible keeps connections secure because it uses OpenSSH, particularly for remote connections. As OpenSSH is the most widely used protocol implementation, any vulnerabilities are fixed promptly.

5. Idempotency

This is an important concept for Ansible scripts, and idempotency has become a frequently used word in software development. When an Ansible playbook is idempotent, running it once or many times leaves the system in the same state: tasks make changes only when the system has drifted from the desired state, so repeated runs require no extra corrective actions.
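For illustration, here is a hedged sketch of two typical idempotent tasks; the package and file names are placeholders. The first run installs the package and adds the line; every later run detects that the desired state already holds and reports "ok" with no changes.

---
# Illustrative idempotent tasks (placeholder package/file names)
- hosts: all
  become: yes
  tasks:
    - name: Ensure the ntp package is present (installed only if missing)
      apt:
        name: ntp
        state: present

    - name: Ensure a line exists in a config file (added only if absent)
      lineinfile:
        path: /etc/environment
        line: 'TZ=UTC'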

6. Great Extensibility

Ansible offers excellent extensibility, whether modules run locally or remotely to configure OS parameters, applications and servers. It allows for new logging and action callbacks, integration with external data stores, and building inventory from data collected from cloud providers or CMDB systems.
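As one hedged example of that extensibility, newer Ansible releases ship an aws_ec2 dynamic inventory plugin that builds the inventory straight from the cloud instead of a static file. The sketch below is illustrative; the region and tag key are placeholders, and the config file is conventionally named with an aws_ec2.yml suffix.

# Hedged sketch of a dynamic inventory config for the aws_ec2 plugin
# (placeholder region and tag key)
plugin: aws_ec2
regions:
  - us-east-1
keyed_groups:
  - key: tags.Role
    prefix: role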

7. Low Learning Curve

Anybody can learn to use Ansible: the learning curve is low because its provisioning scripts are written in YAML. As no coding skills are required, it is not difficult even for people without a technical background. Ansible is noted for its fine-tuned simplicity; the automation code is easy to read and remember, even years after it was written.

8. FOSS with a Growing Community

As Ansible is free and open source software (FOSS), all of the code is publicly available on GitHub, and the community - active through channels like Meetup - is growing by the minute. Developers can start using the software successfully without lengthy instruction. Ansible's usability is further enhanced by Ansible Tower, a product that adds multi-tenancy, role-based access control (RBAC), a web UI, REST APIs and plenty more. The tool can be installed and activated within minutes!

Ansible Tower

Integrated with a very powerful user interface and RESTful API, Ansible Tower, the commercial enterprise product from Red Hat Inc., is a must for developers working in the Ansible environment. It is capable of managing complex multi-tier deployments, speeding up productivity and scaling IT automation with ease. Additionally, developers get a NOC-style display for monitoring the Ansible environment, including all recent job activity, with views filtered by date and time range. Playbook runs stream by as they happen, letting you see each success and each failure in detail and in real time. As the whole process is automated, developers can see what is queued up, and because all automation activity is safely logged, it is possible to see who ran which task and when. The Tower workflow feature is perfect for complex jobs, enabling multi-playbook workflows that can be run once or several times with different credentials.

Ansible Tower Dashboard

Ansible Tower has a dashboard that lets developers see all the inventory details and job status updates at a glance.

Another major feature of Tower is system tracking. With it, developers can audit and verify that all machines are in compliance with each other. It also helps them track which machines have changed over time, compare those changes, and configure and deploy machines according to requirements.

Tower is integrated with a very simple portal mode and self-service features that streamline automation tasks across many users, with accounts synchronized directly from directories like LDAP and Active Directory.

Inventory tracking is another task that Tower makes easy. Developers can pull inventory from a host of cloud providers such as Microsoft Azure and Amazon Web Services, connect it to Red Hat Satellite or Red Hat CloudForms environments, and sync it with data repositories like a custom CMDB (configuration management database).

Main Use Cases of Ansible

1. Provisioning

It would be of great help to have a good provisioning tool when you have to roll out datacenter deployments. If you and your team want development environments that save time and are easier to manage, then you ought to have Ansible as well. Provisioning has brought a paradigm shift in IT productivity everywhere in the world. It is, however, largely a backend operation and is usually not visible to front-end users. Let's look at how NEC, a major player in the telecom industry, solved its "time to market" problems with environment provisioning.

The problem:

Customers were moving to the cloud faster than the company had envisioned. New clients kept joining, which meant more challenges, more work and less time on hand. Relying on manual effort resulted in errors and delays.

The solution:

The company started using both Ansible and Ansible Tower to cater to existing and new customers, free up time and avoid manual errors. The network engineers began pushing CLI commands to their SDN fabric through Ansible modules, and REST API interfaces between NEC ProgrammableFlow and Tower were created for new customer virtual networks. After creating new Ansible modules, they built new customer networks with Ansible playbooks. Eventually, the company gained new audit functionality, increased revenue thanks to faster time to market, a predictable deployment process, fewer manual errors and better use of the team's resources.

2. Application Deployment

Ansible is a highly reliable way to deploy all of your applications, even multi-tier ones, consistently from one common framework. When you use Ansible Tower for deployment, your team can handle everything in software development, from development right through to production. You do not have to walk through a multitude of steps each time you want to deploy something; just list the steps in the playbook and they will be carried out automatically. Let's look at how FATMAP, England's leading company for detailed mobile 3D maps, uses Ansible for application deployment.

The problem:

As FATMAP's application development process was complicated, they needed to deploy their apps easily and quickly. They could not afford a seemingly endless meta-programming pipeline and processing stage; they had to move fast, and they needed something that would integrate seamlessly with Windows, OS X and Linux platforms.

The solution:

It was simple. Once they started using Ansible, they were able to deploy multi-tier applications without the wait. Ansible does not just manage one system at a time; it can describe how the machines interrelate. And because Ansible is agentless, it integrates easily with all platforms, even hybrid environments, neatly solving one of FATMAP's major challenges.

3. Configuration Management

Configuration management is the process of maintaining the consistency of a product's functionality and physical attributes. With Ansible, this entire process becomes simple because it is automated, which saves time. It also gives you a system that works the way you want without requiring a large team of developers to keep it running. Let's look at how the National Aeronautics and Space Administration (NASA) uses Ansible to increase efficiency and support cloud migration.

The problem:

NASA needed to migrate 65 applications from a traditional hardware-based data center to the cloud to save costs and improve agility. The applications had to be migrated as-is, which meant an environment spanning many virtual private clouds and AWS accounts, with all the tedium of managing them.

The solution:

NASA started using Ansible Tower to manage its cloud and AWS environment far more effectively. Processes that used to take hours could now be completed in less than five minutes. The agency was also able to apply patch updates within 45 minutes, a process that had previously taken multiple days.

4. Continuous Delivery

The mantra of building software could well be "release early, release often". That is only possible when automation is quick and human intervention is minimal. With Ansible, continuous delivery and automation are possible through multi-step, multi-tier orchestration. It is interesting to see how Amelco, a UK-based financial services company, leveraged the power of Ansible to better its business.

The problem:

Amelco delivers high-end solutions for the financial markets and the betting industry. It ran a multi-tier architecture on VMware to enable customers to trade and place financial bets. Deploying applications across more than 400 VMware nodes running Linux-based Ubuntu was slow and inefficient; time was of the essence, and the company needed a solution that would minimize downtime.

The solution:

Amelco used Ansible's agentless framework to deploy, operate and upgrade its application while saving time and money and minimizing downtime. Faster deployment brought in more customers, reduced complexity and accelerated delivery. The company used Ansible Tower to automate its deployments across various platforms.

5. Multi-tier Orchestration

Multi-tier orchestration is one of the best features of Ansible. Product deployment involves time-consuming, repetitive tasks that need to be addressed, and even after you have migrated to the cloud, centralized control of the cloud is important for hassle-free scheduling. Examine how Splunk (NASDAQ: SPLK), the operational intelligence company, used Ansible and playbooks to address this problem.

The problem:

Splunk has customers in over 110 countries analyzing big data, assessing security threats and monitoring IT services. The company needed access to machine learning and data science in an automated manner.

The solution:

Splunk started using Ansible and playbooks to schedule requests, run a competent queuing system and monitor the activity of its cloud repositories. It also used Ansible's provisioning callback feature to support automation and auto-scaling within cloud-based environments.

6. Security and Compliance

It goes without saying that working in an IT environment demands paramount security - security of data, customer privacy, security of IT systems; the list goes on. Let's examine how NASA started using Ansible across its cloud and AWS environments for automation and security.

The problem:

Security was a concern because NASA had an environment that spanned many virtual private clouds and AWS accounts, and system administrators needed access to every server for even the simplest tasks. Delivering better operations and security was the first priority.

The solution:

NASA worked with Ansible, applying the DISA STIG (a government security standard) to secure its systems. And since Ansible is agentless, no separate security infrastructure is needed; everything runs over the existing WinRM and SSH infrastructure.

Credits: Ansible Resources

Combining Ansible and Docker

The combination of Ansible and Docker enables you to automate software development successfully. Apart from operationalizing your Docker containers, you can automate the process of building and deploying apps. Ansible offers a reliable framework for managing infrastructure, handling SSH connections and the complexities of DevOps remotely. When Docker and Ansible work together, you gain a number of benefits:

Continuous Deployment using Ansible and Docker

Docker and Ansible work together to provide continuous delivery and continuous deployment. Because Ansible handles multi-node software deployment, configuration management and ad hoc task execution, it is widely used in software development. Ansible modules communicate over JSON and can be written in any programming language that returns JSON on standard output. Here are the five main roles to think about when working with Ansible, described in bdd.yml:

- hosts: all
  remote_user: vagrant
  sudo: yes
  roles:
    - etcd
    - confd
    - docker
    - nginx
    - bdd

The first four roles - etcd, confd, docker and nginx - make sure the blue-green deployment tools are present. Here is how the docker role looks:

- name: Docker is present
  apt: name=docker.io state=present
  tags: [docker]

- name: Python-pip is present
  apt: name=python-pip state=present
  tags: [docker]

- name: Docker-py is present
  pip: name=docker-py version=0.4.0 state=present
  tags: [docker]

Docker is what manages the containers: the role installs Docker (docker.io) with apt, then python-pip, and finally the docker-py library through pip. As the snippet above shows, Ansible is very easy to read, which is one of its advantages over Puppet and Chef. Once the tools are installed, you can examine the last Ansible role, bdd. This is the step that performs the deployment, and it follows the blue-green technique. Testing is very important to ensure that the deployment is done correctly. Do the following to ensure zero downtime and a successfully running application:

• Get the newest version of the application container
• Run the new release in parallel with the old one
• Run post-deployment tests
• Notify etcd about the new release (port, name, etc.)
• Change the nginx configuration to point to the new release
• Finally, halt the old release

Because the new version runs in parallel with the old one, it is important that the new containers have already passed the unit and functional tests. Here is how the bdd role looks:

- name: TOML is present
  copy: src=bdd.toml dest=/etc/confd/conf.d/bdd.toml
  tags: [bdd]

- name: Config template is present
  copy: src=bdd.conf.tmpl dest=/etc/confd/templates/bdd.conf.tmpl
  tags: [bdd]

- name: Deployment script is present
  copy: src=deploy-bdd.sh dest=/usr/local/bin/deploy-bdd.sh mode="0755"
  tags: [bdd]

- name: Deployment is run
  shell: deploy-bdd.sh
  tags: [bdd]

There are four tasks here. The first ensures that the template resource bdd.toml is present; confd uses it to describe the template, its destination and the commands to be executed. The second ensures that the confd template bdd.conf.tmpl (https://github.com/vfarcic/provisioning/blob/master/ansible/roles/bdd/files/bdd.conf.tmpl) is present; confd uses it together with bdd.toml to point nginx to the new release during deployment. The third copies the deployment script deploy-bdd.sh, and the last one runs it. The color to use for the deployment - blue or green - is determined inside that script:

BLUE_PORT=9001
GREEN_PORT=9002
CURRENT_COLOR=$(etcdctl get /bdd/color)

if [ "$CURRENT_COLOR" = "" ]; then
  CURRENT_COLOR="green"
fi

if [ "$CURRENT_COLOR" = "blue" ]; then
  PORT=$GREEN_PORT
  COLOR="green"
else
  PORT=$BLUE_PORT
  COLOR="blue"
fi

After this, stop and remove any existing containers of that color; the current release keeps operating:

docker stop bdd-$COLOR
docker rm bdd-$COLOR

Next, the container with the new release is started and run side by side with the existing one. Here, the BDD Assistant container vfarcic/bdd is deployed.
docker pull vfarcic/bdd
docker run -d --name bdd-$COLOR -p $PORT:9000 vfarcic/bdd

Now it is time to run the final set of tests. For BDD Assistant, unit tests and functional tests run as part of the container build process, but to make sure the deployed application works as expected, it is important to run integration and stress tests as well. A set of BDD scenarios is run using PhantomJS:

docker pull vfarcic/bdd-runner-phantomjs
docker run -t --rm --name bdd-runner-phantomjs vfarcic/bdd-runner-phantomjs \
  --story_path data/stories/tcbdd/stories/storyEditorForm.story \
  --composites_path /opt/bdd/composites/TcBddComposites.groovy \
  -P url=http://172.17.42.1:$PORT \
  -P widthHeight=1024,768

Once all the tests have passed, store the information about the new release in etcd and run confd to update the nginx configuration. Until then, nginx keeps handling all requests with the old release; from this point on, users are redirected to the newly deployed version.

etcdctl set /bdd/color $COLOR
etcdctl set /bdd/port $PORT
etcdctl set /bdd/$COLOR/port $PORT
etcdctl set /bdd/$COLOR/status running
confd -onetime -backend etcd -node 127.0.0.1:4001

Now, getting Docker and Ansible to work together

It is time for some action. First, create a Vagrantfile that defines an Ubuntu virtual machine, then run the Ansible playbook; this installs and configures everything that is needed. Make sure Git, Vagrant and VirtualBox are installed, then deploy the application:

git clone https://github.com/vfarcic/provisioning.git
cd provisioning/ansible
vagrant up

If you are doing this for the first time, it can take a few minutes because all the components have to be downloaded first; the faster your bandwidth, the better. Run the following to start a deployment:

vagrant provision

Each vagrant provision run switches the deployed version from blue (port 9001) to green (port 9002) or back again. You can SSH into the VM and confirm that the application container is running:

vagrant ssh
sudo docker ps

Confirm that the application itself is up by opening http://localhost:8000/ in your browser.

Credits: Technology Conversations

Docker Container Management Using Ansible Modules

Docker is the most popular platform for Linux-based container development, and you can easily automate it in your environment with Ansible. It is possible to operationalize, build and deploy your Docker container process with Ansible modules. Let's see how we can manage Docker using Ansible modules like docker, docker_image, docker_network and docker_service.

1. docker - For managing Docker Containers

docker is the original Ansible module for managing the Docker container life cycle; newer modules are available as additional options. To learn more about Docker container orchestration with Ansible, refer to https://github.com/ansible/ansible/blob/devel/docsite/rst/guide_docker.rst.

The requirements are:
- python >= 2.6
- docker-py >= 0.3.0
- The Docker server >= 0.10.0

Examples:

# Ensure that a data container with the name "mycontainer" exists. If no container
# by this name exists, it will be created, but not started.
- name: Application container
  docker:
    name: mycontainer
    image: myapp:v1
    state: present
    volumes:
      - /vol/app

# Ensure that a container of your application server is running. This will:
# - pull the latest version of your application image from Docker Hub
# - ensure that a container is running with the specified name and exact image;
#   if any configuration options have changed, the existing container will be
#   stopped and removed, and a new one launched in its place
# - link this container to the existing database container "mydb" launched above
#   with an alias
# - grant the container read-write permissions for the host's /dev/sda device
#   through a node named /dev/xvdb
# - bind TCP port 80 within the container to port 8080 on all interfaces on the host
# - set the environment variable SOME_VAR to "variablename"
- name: Application container
  docker:
    name: applicationname
    image: cabotdocker/dockerimage
    state: present
    pull: always
    links:
      - "mydb"
    devices:
      - "/dev/sda:/dev/xvdb:rwm"
    ports:
      - "8080:80"
    extra_hosts:
      host1: "192.168.1.1"
      host2: "192.168.1.2"
    env:
      SOME_VAR: variablename

Credits: docker

2. docker_image - For managing Docker Images

This module is all about images - building, loading or pulling them. Images are made available for creating containers, and you can also tag an image into a repository or archive it into a .tar file.

The requirements are:
- python >= 2.6
- docker-py >= 1.7.0
- Docker API >= 1.20

Examples:

- name: Build an image and push it to a private repo
  docker_image:
    path: ./cabotdocker/
    name: registry.ansible.com/cabotdocker/appname
    tag: v1
    push: yes

- name: Pull an image
  docker_image:
    name: cabotdocker/elk

- name: Archive an image
  docker_image:
    name: registry.ansible.com/cabotdocker/elk
    tag: v1
    archive_path: elk_dockerimage.tar

Credits: docker_image

3. docker_network - For managing Docker Networks

Using the docker_network module, you can create and remove Docker networks, connect containers to networks, or create a network with custom options. It is also possible to delete a network while disconnecting all of its containers. This module performs much the same function as the CLI subcommand "docker network".

The requirements are:
- python >= 2.6
- docker-py >= 1.7.0
- The Docker server >= 1.9.0

Examples:

- name: Create a network
  docker_network:
    name: cabot_network

- name: Remove all but selected list of containers
  docker_network:
    name: cabot_network
    connected:
      - container1
      - container2
      - container3

- name: Create a network with options
  docker_network:
    name: cabot_network
    driver_options:
      com.docker.network.bridge.name: net2
    ipam_options:
      subnet: '10.10.0.0/16'
      gateway: 10.10.10.1
      iprange: '10.10.1.0/24'

- name: Delete a network, disconnecting all containers
  docker_network:
    name: cabot_network
    state: absent
    force: yes

Credits: docker_network

4. docker_service - For managing Docker services and containers

This module works with Docker Compose to start, shut down and scale services. It supports Compose versions 1 and 2, and the Compose definition can be read from a docker-compose.yml (or .yaml) file or supplied inline using the definition option.

The requirements are:
- python >= 2.6
- docker-compose >= 1.7.0
- Docker API >= 1.20
- PyYAML >= 3.11

Examples:

- name: Run using a project directory
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - docker_service:
        project_src: mydockerapp
        state: absent

    - docker_service:
        project_src: mydockerapp
      register: output

    - debug:
        var: output

    - docker_service:
        project_src: mydockerapp
        build: no
      register: output

    - debug:
        var: output

    - assert:
        that: "not output.changed"

- name: Run with inline v2 compose
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - docker_service:
        project_src: appname
        state: absent

    - docker_service:
        project_name: appname
        definition:
          version: '2'
          services:
            db:
              image: mysql
              volumes:
                - "{{ app_db }}:/var/lib/mysql"
            web:
              build: "{{ app_dir }}/appname"
              volumes:
                - "{{ appname_dir }}/appname:/code"
              ports:
                - "80:80"
              depends_on:
                - db
      register: output

    - debug:
        var: output

    - assert:
        that:
          - "web.appname_web_1.state.running"
          - "db.appname_db_1.state.running"

Credits: docker_service

5. docker_login

The docker_login module logs in to a Docker registry (Docker Hub by default) so that subsequent tasks can pull and push images.

Example:

- name: Log into DockerHub
  docker_login:
    username: dockeruser
    password: passwrd
    email: [email protected]

Wrap Up

When Ansible works with Docker, you can be assured that deployment becomes easy and foolproof. In a traditional setting you would install the JDK, a web server and so on, make sure the configuration files are in place, the OS is configured and Docker is ready. Containers simplify Ansible's work, and once everything is properly set up, you can move towards a containerized world. Plenty of enterprises are presently using Ansible and Docker together, and they have been hugely successful in doing so. The fact that Ansible playbooks and Docker Compose files share almost the same YAML syntax is an added advantage. Want to implement the power of Ansible and Docker in your next software product development? Come to us; we will guide you. Contact Us Today!
