Create AWS EC2 instance with webserver installed using Ansible
Hello all, in this blog we are going to learn how to provision/create/spawn an AWS EC2 instance using the Ansible tool.
We will make all the steps automatic, so that our EC2 instance is created and a web server starts running without any manual intervention.
We will write two ansible roles:
- One for creating AWS EC2 instances with the desired ingress ports opened
- Another for installing and configuring the Apache web server
Go to the default Ansible roles path:
cd /etc/ansible/roles
and create two Ansible roles using the below commands:
ansible-galaxy init ec2_instance
ansible-galaxy init custom_webserver
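Each command scaffolds the standard role skeleton, so (give or take a tests directory, depending on your ansible-galaxy version) you should end up with a layout roughly like this for each role:
/etc/ansible/roles/ec2_instance
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── tasks
│   └── main.yml
├── templates
├── tests
└── vars
    └── main.yml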
Paste the below content in ec2_instance/tasks/main.yml
---
# tasks file for ec2_instance
########################################################
##### Below file is required to login aws account #######
- include_vars: secure.yml

#####################################################
###### Creating a security group for ec2 instance ###
- name: create a security group
  ec2_group:
    name: "{{ security_group }}"        # reading variable from vars section using jinja2 format
    description: "An ec2 group for ansible webserver"
    region: "{{ region }}"              # aws region in which the ec2 instance will be started
    vpc_id: "vpc-f735d19c"              # pre-defined VPC id from aws console
    state: present                      # to create a new security group
    aws_access_key: "{{ myuser }}"      # variable will be read from ansible vault defined later
    aws_secret_key: "{{ mypass }}"      # variable will be read from ansible vault defined later
    rules:                              # List of firewall inbound rules to enforce in this group
      - proto: tcp
        from_port: 22
        to_port: 22
        cidr_ip: 0.0.0.0/0
        rule_desc: "Allow ssh on port 22"
      - proto: tcp
        ports:
          - 80
          - 8080
          - 8123
        cidr_ip: 0.0.0.0/0
        rule_desc: "ansible-grp-80"
    rules_egress:                       # List of firewall outbound rules to enforce in this group
      - proto: all
        cidr_ip: 0.0.0.0/0
  register: grp_result
  tags: create_group

################################################
#### Starting our ec2 instance on aws #########
- name: provision an ec2 instance
  ec2:
    key_name: "{{ keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    wait: yes
    # wait_timeout: 600
    instance_tags:
      Name: "{{ instance_tag }}"
    count: "{{ count_instance }}"
    vpc_subnet_id: "subnet-8b000de3"
    assign_public_ip: yes
    region: "{{ region }}"
    state: present
    # group_id: "sg-0416e7537feb95257"
    group: "{{ security_group }}"
    aws_access_key: "{{ myuser }}"
    aws_secret_key: "{{ mypass }}"
  register: ec22
  when: grp_result.failed == false
  tags: create_ec2

#############################################
##### Waiting for the ec2 instance to come up ###
- name: wait for SSH to come up
  wait_for:
    host: "{{ item.public_ip }}"
    port: 22
    state: started
  with_items: "{{ ec22.instances }}"
  when: ec22.failed == false
The identifiers inside "{{ }}" are variable references in Jinja2 syntax. We will define their values in the ec2_instance/vars/main.yml file as below:
---
# vars file for ec2_instance
security_group: ansible_web_sg # This name can be any group name
keypair: "EC2 Tutorial" # This name should match to your Key pair generated from aws console
image: "ami-0ebc1ac48dfd14136" # This is the aws in-built image id. Can provide any valid image id
instance_type: "t2.micro" # This instance type comes with aws free tier
region: "ap-south-1" # Need to provide aws valid region code where we want to create out ec2 instance
instance_tag: "webserver" # Can be any unique name but it is required later
count_instance: 1 # number of ec2 instances to be created
To pass AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to our tasks, we can create an Ansible vault protected by a password:
ansible-vault create --vault-id prod@prompt secure.yml
and save below variables in secure.yml
myuser: <AWS_ACCESS_KEY_ID>
mypass: <AWS_SECRET_ACCESS_KEY>
We cannot read the actual content of secure.yml with a normal cat command, since the file is stored encrypted. To print the decrypted content, use:
ansible-vault view --vault-id prod@prompt secure.yml
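If the credentials ever change, we don't need to recreate the file; the vault can be edited in place with the same vault id (it will prompt for the prod password again):
ansible-vault edit --vault-id prod@prompt secure.yml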
Now for the second role, add the below content to custom_webserver/tasks/main.yml
---
# tasks file for webserver
########################################
### Installing apache server software ###
- name: Install httpd server
  package:
    name: httpd
    state: present
  # when: ansible_distribution == "RedHat"
  register: install_res
  tags: webconf
###########################################
#### Custom Document Root of web server ###
- name: Create document root folder
  file:
    path: "{{ dr_dir }}"    # /var/www/paul/
    state: directory
  register: dr_create_res
#################################
#### use custom document root ###
- name: Configure httpd web server
  template:
    src: localserver.conf.j2
    dest: /etc/httpd/conf.d/paul.conf
  when: install_res.rc == 0
  notify: restart httpd server
  tags: webconf
##################################
#### Load webpages from github ###
- name: Download webpages from url
  get_url:
    dest: "{{ dr_dir }}"    # /var/www/paul/
    url: "https://raw.githubusercontent.com/PaulRepo/DevOpsAL_task1/master/index.html"
  when: dr_create_res.failed == false
####################################
#### Start apache (httpd) server ###
- name: Start apache server
  service:
    name: httpd
    state: started
Add the below content to custom_webserver/vars/main.yml for the task variables.
---
# vars file for webserver
dr_dir: "/var/www/paul/"
myport: 8123
region: "ap-south-1"
Since we have used one template in our tasks, we have to define it at custom_webserver/templates/localserver.conf.j2
Listen {{ myport }}
<VirtualHost {{ ipv4_address }}:{{ myport }}>
DocumentRoot {{ dr_dir }}
</VirtualHost>
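Just to visualise what Ansible will render: with the variable values above, and assuming the instance's private IPv4 address comes out as, say, 172.31.10.5 (that is what ipv4_address will hold when we pass it from the playbook later), the generated /etc/httpd/conf.d/paul.conf would look roughly like:
Listen 8123
<VirtualHost 172.31.10.5:8123>
DocumentRoot /var/www/paul/
</VirtualHost>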
Our tasks also notify one handler, so put the below code in custom_webserver/handlers/main.yml
---
# handlers file for webserver
- name: restart httpd server
  service:
    name: httpd
    state: restarted
So we have defined both our ansible roles.
Since we are spawning EC2 instances dynamically, we need a dynamic inventory for our playbook. Download the ec2.py and ec2.ini files from the Ansible GitHub repository and give execute permission to ec2.py using:
chmod u+x ec2.py
Also change the interpreter line at the top of ec2.py as per your installed Python. Since I have Python 3 installed, I'll change the shebang to
#!/usr/bin/python3
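Ansible also has to be told to use this script as its inventory source. A minimal sketch of the relevant settings (assuming you keep ec2.py and ec2.ini in the playbook directory created in the next step, and adjusting the key path to wherever you saved the private key of your "EC2 Tutorial" keypair) would be, in /etc/ansible/ansible.cfg or a local ansible.cfg:
[defaults]
inventory = /root/ansibleWorks/mycode/customWebserver/ec2.py
host_key_checking = False
private_key_file = /root/.ssh/your-keypair.pem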
Now it's time to create our Ansible playbook. Create the below directory and go inside it:
mkdir -p /root/ansibleWorks/mycode/customWebserver && cd /root/ansibleWorks/mycode/customWebserver
and paste below content in ec2_deploy.yml
- name: provisioning EC2 instance in aws
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - include_role:
        name: "ec2_instance"
      vars:
        region: "ap-south-1"
        instance_tag: "web"
        count_instance: 2
    - name: Refresh ec2 instances
      command: python3 /root/ansibleWorks/mycode/customWebserver/ec2.py --refresh-cache
    - name: refresh ec2 inventory
      meta: refresh_inventory

- name: configure a webserver
  hosts: tag_Name_web
  gather_facts: true
  become: yes
  remote_user: ec2-user
  roles:
    - role: custom_webserver
      ipv4_address: "{{ ansible_default_ipv4.address }}"
Here we are running both roles one by one. The first play creates two EC2 instances and then refreshes the dynamic inventory cache, while the second play installs and starts the Apache web server with our custom configuration.
For ec2.py to query AWS and update the dynamic inventory cache, we have to export three shell variables:
export AWS_ACCESS_KEY_ID="your-aws-key"
export AWS_SECRET_ACCESS_KEY="your-aws-secret"
export AWS_REGION="ap-south-1"
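With these exported, we can sanity-check that the dynamic inventory script can reach AWS before running the playbook (once the instances exist, the JSON output should contain a tag_Name_web group matching our instance tag):
./ec2.py --list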
Now we will run the playbook to create our EC2 instances and configure the web server on them dynamically.
ansible-playbook --vault-id prod@prompt ec2_deploy.yml
We can also verify the newly created EC2 instances in the AWS console.
Finally, we can check the served webpage in a web browser.
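Equivalently, we can fetch the page from the command line; assuming one of the new instances has the public IP shown in the AWS console (substitute your own) and using the custom port 8123 we configured:
curl http://<instance-public-ip>:8123/index.html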
Happy Learning