Ansible, Satellite and Katello Automation

Katello 3.16

Katello and Red Hat Satellite are great tools for managing the lifecycle of your servers. From provisioning to content to patch management, the tools have made great strides over the last few years.

In preparation for the Red Hat Satellite 7 release, I thought I would take a look at how to spin up a Katello instance and configure it in a consistent way. The Foreman community is very welcoming, and if you can highlight issues now, it will make for better products in the future, be that in the open source or commercial derivatives.

After doing lots of work with Ansible recently, I was very keen to hear about the latest Ansible Collections for Red Hat Satellite. Much of the documentation and workflows on Katello and Satellite are applicable to both sets of products. Ansible Collections are a new technology, and I thought this would be a great opportunity to try the new Foreman Ansible Modules.

The goal is:

  • Take a running CentOS server, configured with enough resources (RAM, CPU, Disk Space) to run Katello
  • Install a configurable version of Katello on this server
  • Configure Katello – Organizations, Locations, Products, Views, Activation Keys, Sync Plans
  • Repeat for ‘stage’ and ‘prod’ environments, but potentially with different variables
  • Repeat for Red Hat Satellite.

Getting started

I have a RHEL 7.8 server with Ansible 2.9.10 installed, which is my Ansible control host. SSH key equivalence is set up against my Katello server(s), and they are configured with enough CPU, memory and disk.

As we like to have everything in Git, let’s create a directory for our playbooks.

mkdir katello && cd katello
mkdir collections
ansible-galaxy collection install theforeman.foreman -p collections

This will download and install the theforeman.foreman Ansible collection into a local directory. Notice that I’ve chosen to install the collection locally on the control host in this specific directory. This means my Git repo will have a ‘snapshot’ of the foreman collection at this point in time, and in theory I can deploy this same repo onto another server without having to re-run the “ansible-galaxy collection install” command. At the time of writing, version 1.0.1 of the collection was available.
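If you want to make that snapshot explicit, you can pin the version in a requirements file and install from it. A minimal sketch, assuming the standard ansible-galaxy requirements format (the 1.0.1 pin matches the version available at the time of writing):

```yaml
# collections/requirements.yml – pin the collection for repeatable installs
collections:
  - name: theforeman.foreman
    version: 1.0.1
```

Installation then becomes `ansible-galaxy collection install -r collections/requirements.yml -p collections`.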

We’ll add a custom Ansible configuration file (ansible.cfg) with a path to the collection directory:

[defaults]
collections_paths = ./collections
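If you’d rather not pass -i on every run, the same file can also point at the inventory. A sketch, assuming the inventory file is named hosts as in the examples below:

```ini
[defaults]
collections_paths = ./collections
inventory = ./hosts
```

With this in place, `ansible-playbook --limit test …` picks up the inventory automatically.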

We’ll need an inventory file to define our Katello server(s). The idea is that we’ll have multiple Katello servers possibly running different versions and possibly with different configurations. There are two format files for the inventory file – YAML and INI. Here are two examples, the first of which is in YAML format:

all:
  children:
    katello_3_16_RC5:
      hosts:
        katello01.test.example.com:
          ansible_user: developer
          katello_cli_admin_user: katello
    katello_3_15:
      hosts:
        katello.production.example.com:
        katello.stage.example.com:
    production:
      hosts:
        katello.production.example.com:
          server_domain: production.example.com
    stage:
      hosts:
        katello.stage.example.com:
          server_domain: stage.example.com
    test:
      hosts:
        katello01.test.example.com:
          server_domain: test.example.com

And the same content, but in INI format:

[katello_3_16_RC5]
katello01.test.example.com ansible_user=developer katello_cli_admin_user=katello

[katello_3_15]
katello.production.example.com
katello.stage.example.com

[test]
katello01.test.example.com server_domain=test.example.com

[stage]
katello.stage.example.com server_domain=stage.example.com

[production]
katello.production.example.com server_domain=production.example.com

Both syntaxes should result in identical behaviour. Use the version you are most familiar with and which best suits your environment.

We begin by creating a group_vars directory and populating the configuration variables we’ll use for version 3.16 RC5 of Katello:

mkdir group_vars

And we create group_vars/katello_3_16_RC5.yml, which defines the repositories we want to use for this release. In this case, I’ve taken the details from the Katello 3.16 Installation Guide.

---
foreman_release: https://yum.theforeman.org/releases/2.1/el7/x86_64/foreman-release.rpm
katello_repos: https://fedorapeople.org/groups/katello/releases/yum/3.16/katello/el7/x86_64/katello-repos-latest.rpm
puppet_repos: https://yum.puppet.com/puppet6-release-el-7.noarch.rpm
epel_repos: https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
install_repos: base,updates,extras
apypie_url: https://yum.theforeman.org/client/2.0/el7/x86_64/python2-apypie-0.2.1-2.el7.noarch.rpm
katello_admin_user: admin
katello_installer_extra_options: --foreman-proxy-tftp false --tuning default

Hopefully the descriptions above make sense. install_repos lists the CentOS repositories which should be enabled at install time (the playbook will disable all other repositories). katello_admin_user is used with the Katello installer to define the initial login (the default is admin, which matches the installer default). You can provide additional installer options via the katello_installer_extra_options stanza. Here I’ve opted to disable the TFTP service at install time.
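Because host variables take precedence over group variables, any of these values can be overridden for a single server. For example, to keep TFTP enabled on one particular host, a hypothetical host_vars file could contain:

```yaml
---
# host_vars/katello01.test.example.com.yml – illustrative host-level override
katello_installer_extra_options: --foreman-proxy-tftp true --tuning default
```

The group-level value still applies to every other host in the group.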

Finally we create foreman.j2 in our top level katello directory, which is used to populate foreman.yml (used by hammer) on the Katello server.

:foreman:
 :username: '{{ katello_admin_user }}'
 :password: '{{ katello_admin_password }}'

The structure will start to look as follows as you work through the steps.

├── 01_katello_rpms.yml
├── 02_install_katello.yml
├── 03_apypie.yml
├── ansible.cfg
├── foreman.j2
├── group_vars
│   ├── katello_3_16_RC5.yml
│   └── test.yml
├── hosts
└── collections
    └── ansible_collections
        └── theforeman
            └── foreman

Installing the Katello packages

Ensure that your Ansible control host can connect to the Katello server you wish to deploy to. In the sample inventory above, we specify that we’ll connect as the user developer to the katello01.test.example.com server. The following playbook, 01_katello_rpms.yml, will remove any existing Puppet configuration on the server, configure the Katello repositories, update the server and install the Katello installation RPMs. If you wish, you can uncomment the section to reboot the server automatically. (Note: I’ve not used the rhsm_repository module as the subscription-manager package isn’t installed on my CentOS image, but you can uncomment that section if the option is available to you.)

---
- name: Install the Katello software
  hosts: all

  tasks:
  - name: Check installed packages
    package_facts:
      manager: auto

  - fail:
      msg: Katello is already installed on this host
    when:
    - "'katello' in ansible_facts.packages"

  - name: Stop existing puppet instance
    service:
      name: puppet
      state: stopped
    become: true
    ignore_errors: true

  - name: Remove existing puppet installation
    yum:
      name: "{{ packages }}"
      state: absent
    become: true
    vars:
      packages:
      - facter
      - puppet
      - ruby-shadow

  - name: Remove old puppet directories
    file:
      name: "{{ item }}"
      state: absent
    become: true
    loop:
      - /etc/puppet
      - /var/puppet
      - /var/lib/puppet

#  - name: Disable all repositories except those required for installation
#    rhsm_repository:
#      name: "{{ install_repos }}"
#      purge: True

  - name: Disable all repositories
    shell: yum-config-manager --disable "*"
    become: true

  - name: Enable repositories required for installation
    shell: "yum-config-manager --enable {{ install_repos }}"
    become: true

  - name: Install REPO packages
    yum:
      name: "{{ packages }}"
      state: installed
    become: true
    vars:
      packages:
      - "{{ foreman_release }}"
      - "{{ katello_repos }}"
      - "{{ puppet_repos }}"
      - "{{ epel_repos }}"

  - name: Install Foreman Release SCL
    yum:
      name: foreman-release-scl
      state: installed
    become: true
    register: yum_output

  - debug:
      var: yum_output

  - name: Update system
    yum:
      name: '*'
      state: latest
    become: true
    register: system_output

  - debug:
      var: system_output

#  - name: Reboot host and wait for it to restart
#    reboot:
#      msg: "Reboot initiated by Ansible"
#      connect_timeout: 5
#      reboot_timeout: 600
#      pre_reboot_delay: 0
#      post_reboot_delay: 30
#      test_command: whoami
#    become: true

  - name: Install Katello package
    yum:
      name: katello
      state: latest
    become: true
    register: katello_output

  - debug:
      var: katello_output

Run the playbook on the ‘test’ platform according to our inventory:

ansible-playbook -i hosts --limit test 01_katello_rpms.yml -D

(You can add the -C option to test the run first. The -D option shows the differences as the playbook runs.)

The installation of the RPMs takes a while; it’s worthwhile tailing /var/log/messages on the Katello host. When complete, remember to pencil in a reboot of the server if the O/S has brought in new kernel or glibc updates and you did not uncomment the reboot section of the playbook.
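If you’re unsure whether a reboot is actually needed, the needs-restarting utility from yum-utils can tell you. A sketch of a check task you could add (needs-restarting -r exits non-zero when a reboot is required):

```yaml
- name: Check whether a reboot is required
  command: needs-restarting -r
  register: reboot_check
  failed_when: false
  changed_when: reboot_check.rc != 0
  become: true
```

A changed result here means the host is due a reboot.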

Run the Katello installer

The following playbook can be used to run the installer. The nice thing about Ansible and variables is that we can override certain parameters either at an environment level (eg production, test, stage) or at a Katello version level (eg Katello 3.15, Katello 3.16) via group_vars, or at a host level in host_vars or in the Ansible inventory file. The playbook 02_install_katello.yml looks as follows:

---
- name: Install the Katello software
  hosts: all

  tasks:
  - name: Check installed packages
    package_facts:
      manager: auto

  - fail:
      msg: Katello is not installed on this host, run 01_katello_rpms.yml
    when:
    - "'katello' not in ansible_facts.packages"

  - name: Check that ~/.hammer/cli.modules.d/foreman.yml exists
    stat:
      path: "~{{ katello_cli_admin_user }}/.hammer/cli.modules.d/foreman.yml"
    register: stat_result

  - fail:
      msg: Katello may already be configured on this host, please check manually
    when:
    - stat_result.stat.exists

  - name: Check katello_admin_password is provided
    run_once: "yes"
    fail:
      msg: "Run the command with password argument:
      -e 'katello_admin_password=XXXXXX' or set in host_vars/group_vars"
    when: katello_admin_password is not defined

  - name: Run Katello installer
    shell: foreman-installer --scenario katello
       --foreman-initial-admin-password {{ katello_admin_password }}
       {{ katello_installer_extra_options | default() }}
    become: true
    register: installer

  - debug:
      var: installer.stdout

  - name: Setup hammer directory
    file:
      name: "{{ item }}"
      state: directory
      owner: "{{ katello_cli_admin_user }}"
    loop:
      - ~{{ katello_cli_admin_user }}/.hammer
      - ~{{ katello_cli_admin_user }}/.hammer/cli.modules.d
    become: true

  - name: Setup hammer config file
    template:
      src: foreman.j2
      dest: "~{{ katello_cli_admin_user }}/.hammer/cli.modules.d/foreman.yml"
      mode: '0600'
      owner: "{{ katello_cli_admin_user }}"
    become: true

This is a default installation, but with TFTP disabled via the katello_installer_extra_options stanza in group_vars/katello_3_16_RC5.yml. Run the playbook on the ‘test’ platform according to our inventory:

ansible-playbook -i hosts --limit test 02_install_katello.yml -D -e 'katello_admin_password=XXX'

Note that at the end of the installer, we populate the file ~/.hammer/cli.modules.d/foreman.yml on the Katello server with the Katello admin username and password, where ~ is defined by the katello_cli_admin_user variable. Hammer is another great way to manipulate Katello and script up recipes. If this is a security issue for you, go ahead and remove that file now.
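With the file in place, hammer picks up the credentials automatically. A small smoke test you could append to the playbook (an illustrative sketch; `hammer ping` reports the status of the Katello services):

```yaml
- name: Verify hammer can talk to Katello
  command: hammer ping
  become: true
  become_user: "{{ katello_cli_admin_user }}"
  changed_when: false
```

If this task fails, check the foreman.yml credentials and the Katello service status before proceeding.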

Apypie

Now that Katello is installed, we can start to leverage those powerful Ansible Collection utilities. First of all, though, we need to install apypie on the Ansible control host. The following playbook, 03_apypie.yml, will do this:

---
- name: Install the apypie utility on Ansible Control Host
  hosts: all
  connection: local

  tasks:
  - name: Check installed packages
    package_facts:
      manager: auto

  - fail:
      msg: apypie is already installed on this host
    when:
    - "'python2-apypie' in ansible_facts.packages"

  - name: Install apypie
    yum:
      name: "{{ packages }}"
      state: installed
    become: true
    vars:
      packages:
      - "{{ apypie_url }}"

This can be run as:

ansible-playbook -i hosts --limit test 03_apypie.yml
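The RPM route suits el7, but apypie is also published on PyPI, so an alternative (assuming pip is available on the control host) is a task using the pip module:

```yaml
- name: Install apypie via pip (alternative to the RPM)
  pip:
    name: apypie
  become: true
```

Either approach satisfies the collection’s dependency; pick whichever fits your host’s packaging policy.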

Preparing the configuration

The way we’ve setup the structure, we can have different Katello configurations in each environment. Let’s prepare a configuration file with some variables for our test environment. We create group_vars/test.yml as follows

---
katello_orgs:
   - katello_org_name: Default Organization
   - katello_org_name: My Org
katello_locations:
   - katello_location_name: Default Location
     katello_location_org:
      - Default Organization
      - My Org
   - katello_location_name: My Location
     katello_location_org:
      - Default Organization
      - My Org

Creating the organisation and locations

The following playbook, 04_create_org.yml, will ensure that the organizations and locations defined above are created. On the Organization side, we retain the existing ‘Default Organization’ and create a new one called ‘My Org’. We retain the ‘Default Location’ but add ‘My Org’ to it, and we create a new Location called ‘My Location’. I’ve deliberately included the defaults in this example, but you don’t need to – you can create just the artefacts that apply to your environment. You will need to provide katello_admin_password to the playbook, either on the command line as a variable, or by including it in group_vars or host_vars. (Later, we’ll look at how to retrieve this value from the Katello server.)

---
- hosts: all
  connection: local
  gather_facts: no
  collections:
    - theforeman.foreman

  tasks:
    - name: Check katello_admin_password is provided
      fail:
        msg: katello_admin_password needs to be supplied
      when:
        - "katello_admin_password is not defined"

    - name: Set Katello URL
      set_fact:
        katello_url: "https://{{ inventory_hostname }}"

    - name: Create Organizations
      organization:
        server_url: "{{ katello_url }}"
        validate_certs: no
        username: "{{ katello_admin_user }}"
        password: "{{ katello_admin_password }}"
        name: "{{ item.katello_org_name }}"
        # description: "{{ item.katello_org_desc }}"
        state: present
      loop: "{{ katello_orgs }}"

    - name: Create Locations
      location:
        server_url: "{{ katello_url }}"
        validate_certs: no
        username: "{{ katello_admin_user }}"
        password: "{{ katello_admin_password }}"
        name: "{{ item.katello_location_name }}"
        organizations: "{{ item.katello_location_org }}"
        state: present
      loop: "{{ katello_locations }}"

To run the playbook:

ansible-playbook -i hosts --limit test 04_create_org.yml -e "katello_admin_password=XXX" -D

If all is well, you should have a new organization and a new location in your Katello server.
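As more tasks are added, repeating server_url, username and password in every task gets noisy. One version-independent way to cut the repetition is a YAML anchor plus merge key – a sketch, not a required style for the collection:

```yaml
---
- hosts: all
  connection: local
  gather_facts: no
  collections:
    - theforeman.foreman

  vars:
    # Shared connection arguments, defined once and merged into each task
    katello_connection: &katello_connection
      server_url: "https://{{ inventory_hostname }}"
      validate_certs: no
      username: "{{ katello_admin_user }}"
      password: "{{ katello_admin_password }}"

  tasks:
    - name: Create Organizations
      organization:
        <<: *katello_connection
        name: "{{ item.katello_org_name }}"
        state: present
      loop: "{{ katello_orgs }}"
```

The `<<: *katello_connection` merge key expands to the four connection parameters, so each task only carries its own arguments.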

Git

At this point, it’s worth mentioning that saving your configuration in Git is a good idea. As well as helping you keep track of changes, it makes collaboration with others easier. One important thing to remember: don’t store passwords in Git. Take a look at Ansible Resources And Best Practices with AWX and Tower for further details on storing credentials.
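For the katello_admin_password specifically, ansible-vault is one option: running `ansible-vault encrypt_string --name katello_admin_password` produces an encrypted value that is safe to commit. The resulting group_vars entry is shaped like this (the ciphertext below is illustrative, not a real vault blob):

```yaml
---
# group_vars/test.yml – encrypted with ansible-vault, safe to commit
katello_admin_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  62313365396662343036...
```

Supply `--ask-vault-pass` (or a vault password file) when running the playbooks, and the -e 'katello_admin_password=XXX' argument is no longer needed.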

Summary

We’ve just scratched the surface of the power of Ansible with Katello / Satellite. The best part of this deployment is that your Katello configuration is available in an easy to read format which can be shared as code between team members. Deploying a new Katello version or spinning up something for testing becomes a lot more straightforward.

I’ll be updating this page as time allows to demonstrate the following:

  • Creating GPG Keys
  • Creating products and repositories
  • Creating a sync plan
  • Creating activation keys
  • Updating the Ansible collection
  • Retrieving and using the passwords from the foreman.yml file we configured at install time
  • Installing Katello on RHEL/CentOS 8 as and when it becomes available

Feel free to leave feedback below if this page has been helpful or if you have questions.
