In a previous post, I wrote about installing Ansible Automation Platform across 3 nodes. There are also circumstances where installing Ansible Automation Platform on a standalone node is much preferred.
In this post, I will write about installing AAP on a standalone node with an internal database (without Automation Hub). The prerequisites remain the same as for the installation across 3 nodes in my previous post. However, I will list them here as well for reference.
This installation was completed on a workstation running Fedora Linux 37 (Workstation Edition) with 32 GB of RAM.
Pre-requisites
- Virtual Machine Manager — VirtualBox, VMware, etc. I will be using Red Hat's Virtual Machine Manager, or Virt Manager for short.
- Base ISO file for the VM — in this case RHEL 9.1, which can be downloaded here. (A RHEL subscription is needed to register the system with subscription-manager.)
Note: The Boot ISO file is for a bare-minimal installation without a package source, unlike the DVD ISO. The process would be similar on other Linux images such as Fedora/CentOS as well.
- A workstation with a sufficient amount of RAM. Automation Controller and Automation Hub each require at least 8 GB of RAM; the database node will work with 4 GB. Together these consume about 20 GB of RAM.
Note: Both Automation Hub and the database node can be installed on a single VM to conserve RAM. The RAM requirements can also be changed in the installer (bundle/roles/preflight/defaults/main.yml — not recommended).
- Red Hat Developers account — for the AAP Developer Subscription and the AAP bundle download. The open-source alternative, AWX, is configured through pip; you can find its GitHub repository here.
Provisioning Nodes
Download a Linux VM image ISO file of your choice. The commands may have to be adjusted for your Linux operating system.
Create a new virtual machine in Virt Manager and follow the steps in the interface.
Choose the ISO or CDROM install media from a local folder. Specify the RAM and CPUs needed. The field uses MiB (1 GiB is 1024 MiB); specify about 9000 to 10000 MiB, since the workstation's available RAM will be reduced once the VM has been deployed. Specify about 4 CPU cores. More information regarding specifications can be found here.
Create the disk. Allocate more space for the database based on the space needed by the application; otherwise 80–100 GB would be more than sufficient.
Finish the installation.
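For repeatable setups, the GUI steps above can also be scripted with virt-install. This is only a sketch — the ISO path, VM name, and sizes are assumptions for illustration, so the command is echoed for review rather than executed:

```shell
# Hypothetical values — adjust the ISO path, name, and sizes to your setup.
VM_NAME="ansibleautomation"
ISO="/var/lib/libvirt/images/rhel-9.1-x86_64-boot.iso"

# Build the command and echo it for review; run it on the hypervisor host once checked.
CMD="virt-install --name $VM_NAME --memory 10240 --vcpus 4 \
  --disk size=100 --cdrom $ISO --os-variant rhel9.1"
echo "$CMD"
```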
Configure the VM when it boots up:
- Register the System OS (if required)
- Change the hostname from localhost to a unique name. (This can also be changed later through the command line if required.)
- Set up the OS as a Server with GUI.
- Create a password for the root user and ensure the account is not locked.
- Create a user with administration rights if needed.
Best Practice:
Unlike the installation across 3 nodes, the node's hostname does not strictly need to be renamed in a particular format for DNS resolution to succeed. However, it is best practice to do so if your node does not have a resolvable DNS hostname from the start.
Include a dot in the hostname (e.g. a .local suffix) so it resolves like a fully qualified domain name. This is mainly needed for pulling images from Docker to push Execution Environments into Automation Hub when installing across multiple nodes.
Setting the Hostname:
Run the command in the terminal and restart the node:
[root@ansibleautomation ~]# hostnamectl hostname ansibleautomation.local
IP address and Hosts Configuration
Figure out the IP address:
[root@ansibleautomation ~]# ip addr
Write the host IP address with hostname in /etc/hosts:
[root@ansibleautomation ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.123.456 ansibleautomation.local
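The hosts entry can also be added from a script. A minimal sketch, using a temporary file as a stand-in for /etc/hosts and a placeholder address (substitute your node's real IP and hostname):

```shell
# Placeholder values — substitute your node's real IP address and hostname.
HOST_IP="192.168.123.45"
HOST_NAME="ansibleautomation.local"
HOSTS_FILE=$(mktemp)   # use /etc/hosts on the real node

# Append the entry only if the hostname is not already present (idempotent).
grep -q "$HOST_NAME" "$HOSTS_FILE" || printf '%s %s\n' "$HOST_IP" "$HOST_NAME" >> "$HOSTS_FILE"
```

Guarding the append with grep keeps the file clean if the script is run more than once.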
Ensure the nodes can be pinged:
[root@ansibleautomation ~]# ping ansibleautomation.local
PING ansibleautomation.local (192.168.123.456) 56(84) bytes of data.
64 bytes from ansibleautomation.local (192.168.123.456): icmp_seq=1 ttl=64 time=0.035 ms
64 bytes from ansibleautomation.local (192.168.123.456): icmp_seq=2 ttl=64 time=0.051 ms
64 bytes from ansibleautomation.local (192.168.123.456): icmp_seq=3 ttl=64 time=0.091 ms
64 bytes from ansibleautomation.local (192.168.123.456): icmp_seq=4 ttl=64 time=0.104 ms
^C
--- ansibleautomation.local ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3057ms
rtt min/avg/max/mdev = 0.035/0.070/0.104/0.028 ms
Extra Steps:
Install Ansible Engine on the controller node to test whether the hosts can be pinged from Ansible.
Set the hosts in the /etc/ansible/hosts:
[root@ansibleautomation ~]# cat /etc/ansible/hosts
ansibleautomation.local ansible_user=root ansible_ssh_pass=password
To avoid interactive host key prompts when the hosts are accessed for the first time, set host_key_checking to false in /etc/ansible/ansible.cfg:
[root@ansibleautomation ~]# cat /etc/ansible/ansible.cfg
# Since Ansible 2.12 (core):
# To generate an example config file (a "disabled" one with all default settings, commented out):
# $ ansible-config init --disabled > ansible.cfg
#
# Also you can now have a more complete file by including existing plugins:
# ansible-config init --disabled -t all > ansible.cfg
# For previous versions of Ansible you can check for examples in the 'stable' branches of each version
# Note that this file was always incomplete and lagging changes to configuration settings
# for example, for 2.9: https://github.com/ansible/ansible/blob/stable-2.9/examples/ansible.cfg
[defaults]
host_key_checking = false
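Alternatively, the same setting can be applied for a single shell session with an environment variable, equivalent to the ansible.cfg entry:

```shell
# One-off alternative to editing ansible.cfg, valid for the current shell session:
export ANSIBLE_HOST_KEY_CHECKING=False
```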
Ping the hosts from Ansible:
[root@ansibleautomation ~]# ansible -m ping all
ansibleautomation.local | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
Configure SSH
Generate SSH Keygen:
[root@ansibleautomation ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
...
The key's randomart image is:
+---[RSA 3072]----+
|=+ . |
|ooE. . |
| oo = o |
| B = . |
| = X = S |
| O # . . |
| .. @ = |
| +O.+. |
| o**B+ |
+----[SHA256]-----+
Transfer SSH key:
[root@ansibleautomation ~]# ssh-copy-id ansibleautomation.local
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'ansibleautomation.local (192.168.123.456)' can't be established.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ansibleautomation.local's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'ansibleautomation.local'"
and check to make sure that only the key(s) you wanted were added.
Testing out SSH:
[root@ansibleautomation ~]# ssh ansibleautomation.local
root@ansibleautomation.local's password:
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Mon Apr 10 00:25:22 2023 from 192.168.123.456
[root@ansibleautomation ~]#
Alternative for copying the key:
There will be instances where the ssh-copy-id command cannot be run. Since the standalone node connects to itself, copying the public key into authorized_keys directly can be a solution:
[root@ansibleautomation ~]# cd ~/.ssh
[root@ansibleautomation .ssh]# cp -p id_rsa.pub authorized_keys
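Whichever way the key is installed, sshd is strict about file permissions. A sketch of the full sequence, demonstrated here in a temporary directory — on the real node use ~/.ssh instead:

```shell
# Demonstrated in a temporary directory — on the real node use ~/.ssh instead.
KEY_DIR=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$KEY_DIR/id_rsa"           # non-interactive key pair
cat "$KEY_DIR/id_rsa.pub" >> "$KEY_DIR/authorized_keys"   # append rather than overwrite
chmod 700 "$KEY_DIR"                                      # sshd rejects looser permissions
chmod 600 "$KEY_DIR/authorized_keys"
```

Appending with `>>` rather than copying preserves any keys already authorized on the node.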
Download and Configure AAP Inventory
Obtain the AAP bundle from here. Extract the archive and configure the inventory for installation. There are fewer variables to configure for a standalone setup.
For security, the passwords can be vaulted in a passwords.yml file in the bundle folder and referenced from the inventory file. SSH key-based authentication can also be used. (I will talk more about Ansible Vault in a different post.)
(In this setup, plain-text passwords will be used for simplicity. Please change the user, password, and database names to your preference.)
Note: A privileged user with administration rights is required for SSH.
Configure the node:
Specify the hostname and install the controller on the localhost:
[automationcontroller]
ansibleautomation.local ansible_connection=local
Configure Admin Password and Registry Account:
- Set the admin password to log into the Ansible Automation Controller
- Set the variable pg_password for internal postgresql database password
- Set the registry variables for the Execution Environment configuration only if your environment is connected. If your environment is disconnected, these variables are not needed.
[all:vars]
admin_password='password' # Login for AAP, admin and password
pg_host=''
pg_port=5432
pg_database='awx'
pg_username='awx'
pg_password='password'
pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL
# Execution Environment Configuration
registry_url='registry.redhat.io'
registry_username='abc@redhat.com'
registry_password='password'
# ee_from_hub_only =
Install the Ansible Automation Platform
Run the installation as root until completion. (There may be other errors depending on your server/VM configuration, e.g. a CIS-hardened server.)
[root@ansibleautomation aap-setup-bundle-2.3-1]# ./setup.sh
Extra Steps:
As mentioned before, there may be other errors depending on your server's configuration. Some simple resolutions may be:
1. Writing the SSH password in the inventory when configuring the node:
[automationcontroller]
ansibleautomation.local ansible_ssh_pass=password ansible_connection=local
2. Privilege escalation in /etc/ansible/ansible.cfg:
[root@ansibleautomation ~]# cat /etc/ansible/ansible.cfg
# Since Ansible 2.12 (core):
# To generate an example config file (a "disabled" one with all default settings, commented out):
# $ ansible-config init --disabled > ansible.cfg
#
# Also you can now have a more complete file by including existing plugins:
# ansible-config init --disabled -t all > ansible.cfg
# For previous versions of Ansible you can check for examples in the 'stable' branches of each version
# Note that this file was always incomplete and lagging changes to configuration settings
# for example, for 2.9: https://github.com/ansible/ansible/blob/stable-2.9/examples/ansible.cfg
[defaults]
host_key_checking = false
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
Note: The list is not exhaustive.
You will be able to access the automation controller from your browser.
Configuring Ansible Automation Platform on the Controller:
Once logged in, there will be a configuration page for the subscription:
Under the Username/Password Tab, enter your Red Hat Developer Account details.
You are able to obtain a trial subscription or utilize a Red Hat Developer Subscription for Individuals:
Set up the user and Automation Analytics.
Press Next then Submit. It will redirect to the AAP dashboard.
Now that the installation is complete, you will be able to run your playbooks and set up projects through the Controller.