
Juniper SRX: Understanding Firewall Filters

juniper-srx-320

This post focuses on Juniper SRX firewall filters. They are important to understand and configure because they protect your firewall from malicious traffic passing through the device or destined to it.

What is a Firewall Filter?

A firewall filter in Juniper SRX is a stateless packet filter that evaluates packet header fields (source IP address, destination IP address, protocol, port numbers, and TCP flags) and permits or denies traffic based on those values.

Firewall filters are known as access control lists (ACLs) by other vendors such as Cisco. Other names include authorization profile and packet filter.

How does Juniper SRX Firewall Filter Work?

The firewall filter inspects every packet entering or leaving the SRX device's interfaces. It is stateless: it does not keep track of the state of connections. It can be configured to accept or discard a packet before the packet enters or exits a port or interface. That is how we control the type and quantity of traffic entering or exiting the device.

If a packet is inspected and deemed acceptable, class-of-service marking and traffic policing can be applied to it. If a packet arrives on an interface that has no firewall filter applied to incoming traffic, the packet is accepted by default. However, once a filter is applied, a packet that matches none of its terms is discarded by the implicit final term.

Firewall filters can be applied to all interfaces, including the loopback interface to filter traffic entering or exiting the device.

An IPv6 filter cannot be applied to an IPv4 interface, because the protocol family of the firewall filter and the interface must match.
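
For example, an IPv6 filter is defined under family inet6 and must also be applied under family inet6 on the interface. The filter and interface names below are illustrative:

```
set firewall family inet6 filter V6-DEMO term 1 then accept
set interfaces ge-0/0/2 unit 0 family inet6 filter input V6-DEMO
```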

Firewall Filter Components

Terms

A term is a named structure in which match conditions and actions are defined. Each term has a unique name. A firewall filter contains one or more terms, and each term consists of match conditions and actions. Note that a firewall filter with a large number of terms can adversely affect both the configuration commit time and the performance of the Routing Engine. The order of terms matters and impacts the results. A firewall filter includes an implicit default term that discards all traffic the other terms did not explicitly permit.

The implicit term looks like:

term implicit-discard-all {
  then discard;
}
Match Conditions

A match condition, or packet filtering criterion, defines the values or fields a packet must contain to be considered a match. If no match condition is specified, all packets match; so if we want an action taken for all packets, we can simply omit the match condition. We use the from keyword to introduce the match conditions. If a term contains multiple match conditions, a packet must match all of them to match the term.

If a single match condition is configured with multiple values, such as a range of values, a packet must match only one of the values to be considered a match for the firewall filter term.
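
For instance, a term can match a range of destination ports; a packet matching any single port in the range matches the term. The filter and term names here are illustrative:

```
set firewall filter RANGE-DEMO term WEB from protocol tcp
set firewall filter RANGE-DEMO term WEB from destination-port 8000-8080
set firewall filter RANGE-DEMO term WEB then accept
```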

The match conditions available for a term depend on the protocol family that we select for the firewall filter.

Example of match conditions:

  • Source IP
  • Destination IP
  • TCP and UDP ports
  • TCP flags
  • IP options
  • Incoming interface
  • Outgoing interface
  • ICMP packet type
  • etc...
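
A term combining several of these conditions might look like this (all conditions must match; the addresses and names are illustrative):

```
term ALLOW-MGMT-SSH {
    from {
        source-address {
            10.0.0.0/24;
        }
        protocol tcp;
        destination-port ssh;
    }
    then accept;
}
```
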
Actions

If all match conditions specified in the term are true, the action is taken. If the match condition of a term is omitted, the action is taken for every packet.

It is good practice to explicitly configure one or more actions per firewall filter term. Any packet that matches all conditions of a term is automatically accepted unless the term specifies other or additional actions.

There are three (3) types of actions:

Terminating actions
  • Stops the evaluation of the filter for a given packet

  • The specified action is performed, and no additional term is evaluated

  • Terminating actions include accept, discard, and reject.

    The accept action causes the system to accept the packet. The discard action causes the system to silently drop the packet without sending an ICMP message back to the source address. The reject action causes the system to discard the packet and send an ICMP message back to the source address.

Nonterminating actions

Nonterminating actions are used to perform other actions on a packet that do not halt the evaluation of the filter. Those actions include incrementing a counter (count), logging information about the packet header (log), sampling the packet data, sending information to a remote host using the system log functionality (syslog), or rate limiting traffic (policer).

If a term contains a nonterminating action without an explicit terminating action, such as accept, discard, or reject, the system accepts the matching packet by default. If we do not want the evaluation to stop there, we can add the next term action after the nonterminating action.

example 1: term 2 never gets evaluated for packets matching term 1, because term 1's action is nonterminating, so the default accept action is taken right after log.

[edit firewall filter demo]
term 1 {
    from {
        source-address {
           192.168.10.0/24;
        }
    }
    then {
        log;
    }
}
term 2 {
    then {
        discard;
    }
}

example 2: to have term 2 evaluated, we explicitly say so in term 1 using next term.

[edit firewall filter test]
term 1 {
    from {
        source-address {
            192.168.11.0/24;
        }
    }
    then {
        log;
        next term;
    }
}
term 2 {
    then {
        reject;
    }
}
Flow control actions

A flow control action enables a device to perform configured actions on the packet and then evaluate the following term in the filter, rather than terminating the filter.

A standard firewall filter can have a maximum of 1024 next term actions. A commit error will occur if we exceed this number.

Firewall filter configuration

interfaces ge-0/0/1 {
  unit 0 {
    family inet {
      filter {
        input inbound-filter-demo;
        output outbound-filter-demo;
      }
    }
  }
}

input is used to filter traffic entering the interface and output is used to filter traffic exiting the interface.

Example of firewall filter configurations

# to enter firewall filter configuration
edit firewall filter
Block all bad ICMP messages
# create the filter
edit firewall filter BLOCK-BAD-ICMP

# create the term with the matching condition
set term ALLOW-TRUSTED-ICMP from protocol icmp 

# allow icmp from selected ips
set term ALLOW-TRUSTED-ICMP from source-address 192.168.10.10/32 
set term ALLOW-TRUSTED-ICMP from source-address 172.16.24.24/32 

# accept ICMP from trusted sources
set term ALLOW-TRUSTED-ICMP then accept

# block untrusted ICMP
set term BLOCK-UNTRUSTED-ICMP from protocol icmp 
set term BLOCK-UNTRUSTED-ICMP then discard

# allow all other traffic
set term ALLOW-OTHER-TRAFFIC then accept
# apply filter to interface
edit interfaces ge-0/0/1 unit 0 family inet

set filter input BLOCK-BAD-ICMP

commit

See the configured filter

show configuration firewall filter BLOCK-BAD-ICMP
Block all telnet
edit firewall filter BLOCK-ALL-TELNET

set term BLOCK-TELNET from protocol tcp
set term BLOCK-TELNET from destination-port telnet
set term BLOCK-TELNET then discard

set term ALLOW-OTHER-TRAFFIC then accept

then apply it to the loopback interface and commit the configuration
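
Applying the filter to the loopback interface and committing could look like this (assuming unit 0):

```
set interfaces lo0 unit 0 family inet filter input BLOCK-ALL-TELNET
commit
```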

Learn more about firewall filters here

Juniper SRX: Initial Lab Setup

juniper-srx-320

The Juniper SRX device is Juniper's security appliance with security, routing, and networking features. The security features include NGFW, IPS, UTM, and more. SRX stands for security, routing, and networking.

I started the setup of my two Juniper SRX 320 devices today, and it did not start the way I thought it would. Let me tell you what happened.

What I got in the boxes

Here is what I got in the box:

  • the SRX320 firewall device
  • two console cables (DB9 to RJ-45 and USB to Mini-USB)
  • and a quite big PSU

The box contains basically everything you would need to get up and running.

Configuring the Juniper SRX320 Device

I am going to configure the device for my homelab, and this is the initial configuration, so there will not be much in it. Just the basics to start with; I will then change the configuration based on the lab I am working on. I will be posting a series of the labs I am doing on my blog here.

Junos version

See what version of Junos came with the device:

show version
show system information

junos-version

Factory configuration

To see the factory configuration, run:

show configuration

We can specify a topic after this command to see the configuration of just that topic. For example, show configuration security shows the security configuration of the SRX device.

We can even select a subtopic for an even more filtered view. For example, show configuration security policies shows the security-policy configuration.

This will help us later to filter the configuration down to only the parts we want to see.

Initial cleanup

disable-auto-img-upgrade

After powering on the device, I started receiving the logs you can see on the screen. Clearing them with the command delete chassis auto-image-upgrade did not work; it required the root password to be set up first. After setting up the root password, the problem disappeared.

Root user password

Juniper devices come with the root user created without a password. So, the first order of business is to set up the root user password. Here is how we do it in the CLI.

junos-login

set system root-authentication plain-text-password

Now the root user password is set. See the configuration with:

show configuration system root-authentication

Hostname, date, and timezone

For the initial setup, the device time is not going to be synchronized with an NTP server; that may be part of a future lab. The set date command takes the YYYYMMDDHHMM time format. The date and time are set in operational mode, not in configuration mode.

set date 202512241105

To view the time and date, run:

show system uptime

sys-uptime

Since we have two SRX devices, distinct hostnames would be helpful.

[edit]
set system host-name SRX1

set system time-zone America/Chicago

To view the configured timezone:

show configuration system time-zone

User accounts and permissions

Junos devices come with the root user account. I am going to need a non-root user for my labs.

To create a new user, run:

[edit]

set system login user sam full-name "Mamadou Sandwidi"

Let's add the new user to a login class. For now I am going to use a predefined login class. We will make our own later during lab time.

set system login user sam class super-user

then add the password for the new user with:

set system login user sam authentication plain-text-password

View the newly configured user with:

[edit]

show system login user sam 

user-account

Interfaces and VLANs

Let's see the available interfaces.

show interfaces terse | no-more

interfaces


That is a lot. Let's only see the Gigabit Ethernet interfaces, since they are the ones I will be working with the most.

show interfaces ge-* terse

ge-interfaces

Clear SRX device data

request system zeroize

Conclusion

From here I think we are all good for the first basic Juniper SRX labs. See you in a moment.

Networking: The OSI and TCP/IP Models

The OSI Model

OSI stands for Open Systems Interconnection. It is a standard, fundamental model for describing how network communication is processed in a network device. The model has 7 layers:

7. Application Layer

6. Presentation Layer

5. Session Layer

4. Transport Layer

3. Network Layer

2. Data Link Layer

1. Physical Layer

The layers are stacked on top of each other, with layer 1, the physical layer, at the bottom.

Layer 1: The Physical Layer

This layer refers to the cabling and connectors that allow the communication signals to reach the devices in the network.

Layer 2: The Data Link Layer

This layer enables communication within the same local area network. It is also called the switching layer. Here, network devices use MAC addresses to forward frames.

Layer 3: The Network Layer

This layer is also called the routing layer. In this layer, network devices use IP addresses to determine where to send network traffic.

Layer 4: The Transport Layer

This layer is responsible for providing the appropriate protocol for transporting data across the network. This is where we find the TCP and UDP protocols.

Layer 5: The Session Layer

The session layer helps manage the communication between network devices using protocols like NetBIOS, SOCKS, and NFS.

Layer 6: The Presentation Layer

The presentation layer formats the received data into a representation applications can work with, for example png, mp4, and more.

Layer 7: The Application Layer

The application layer is the top layer of the OSI model. It provides an interface between computer applications and the underlying network. We find HTTP, DNS, and FTP in this layer.

The TCP/IP Model

The TCP/IP model covers the same functionality as the OSI model but with four layers instead of seven:

Layer 1: The Network Access Layer

This layer combines the physical layer and the data link layer from the OSI model into a single layer.

Layer 2: The Internet Layer

The network layer from the OSI model becomes the internet layer.

Layer 3: The Transport Layer

The transport layer stayed the same.

Layer 4: The Application Layer

The session, presentation, and application layers from the OSI model are combined to become the application layer in the TCP/IP model.
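
The layer correspondence described above can be sketched as a small lookup table; a toy Python illustration (layer names only):

```python
# Mapping of each OSI layer to the TCP/IP layer that absorbs it.
OSI_TO_TCPIP = {
    "Physical": "Network Access",
    "Data Link": "Network Access",
    "Network": "Internet",
    "Transport": "Transport",
    "Session": "Application",
    "Presentation": "Application",
    "Application": "Application",
}

def tcpip_layer(osi_layer: str) -> str:
    """Return the TCP/IP layer corresponding to the given OSI layer."""
    return OSI_TO_TCPIP[osi_layer]

print(tcpip_layer("Session"))  # Application
```

Seven OSI layers collapse into exactly four TCP/IP layers, as the dictionary's distinct values show.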

Ansible: More on Playbooks

In my previous post in this series, I talked about what an Ansible playbook is and how to get started. In this post, I am going to talk about a few things that make working with playbooks more fun. At first, an Ansible playbook sounds basic, unable to do much besides pinging hosts, installing packages, copying files, and checking services (at least the way I presented it previously). But Ansible can do way more than that, using the debug module, variables, and more.

Ansible Debug Module

Here is a simple example of a playbook that displays debug messages. It does not perform any particular task, but it shows how debug messages can be used in a playbook.

---
- name: A playbook with example debug messages
  hosts: servers
  become: 'yes'

  tasks:
  - name: Simple message
    ansible.builtin.debug:
      msg: This is a simple message

  - name: Showing a multi-line message
    ansible.builtin.debug:
      msg: 
      - This is the first message
      - This is the second message

  - name: Showing host facts
    ansible.builtin.debug:
      msg: 
      - The node's hostname is {{ inventory_hostname }}

ansible.builtin.debug has 3 parameters.

  • msg: The debug message we want to show

  • var: The variable we want to debug and show in the logs when the playbook is run. It cannot be used simultaneously with msg.

  • verbosity: An integer representing the debug level at which the message is shown when the playbook is run. It can have a value between 1 and 5 (-v to -vvvvv). The default value is 0, meaning no verbosity.

---
- name: A playbook with example debug messages
  hosts: servers
  become: 'yes'

  tasks:
  - name: Debug a variable
    ansible.builtin.debug:
      var: inventory_hostname

  - name: Debug a variable with verbosity of 3
    ansible.builtin.debug:
      msg: This is a message with a verbosity of 3
      verbosity: 3

When we run the playbook without a verbosity flag, the messages with a verbosity level set will not be logged. So, if we want to see the verbosity-3 message, we should run:

ansible-playbook my-playbook.yml -vvv

-vvv designates verbosity level 3.

Defining variables in a playbook

We can define variables to store data we want to use in multiple places in a playbook. We define variables in the following way:


  more code...

  vars:
    var1: Hello world
    var2: 15
    var3: true
    var4:
    - Apples
    - Green
    - 1.5

  more code...


  more code...

  vars:
    grouped:
      var5: Hi there
      var6: 30
      var7: false

  more code...

Debugging multiple variables

 more code...

  tasks:
  - name: Display multiple variables
    ansible.builtin.debug:
      msg: |
        var1: {{ var1 }}
        var2: {{ var2 }}
        var3: {{ var3 }}
        var4: {{ var4 }}

 more code...    

 more code...

  tasks:
  - name: Display multiple variables
    ansible.builtin.debug:
      var: grouped

 more code...    

Storing Outputs with Registers

Most ansible modules run and return success or failure output. But sometimes we want to keep the resulting output of a task for later use. We can use a register to store that output. Here is an example:

---

- name: This is a playbook showcasing the use of registers
  become: 'yes'
  hosts: servers

  tasks:
  - name: Using a register to store output
    ansible.builtin.shell: ssh -V
    register: ssh_version

  - name: Showing the ssh version
    ansible.builtin.debug:
      var: ssh_version

We store the output in the variable named by the register key, and then we can use var or msg from the debug module to show it.

Storing Data with Set_Fact Module

set_fact is used to store data associated with a node. It takes key: value pairs to store the variables: the key is the name of the variable and the value is its value. For example:

---

- name: This is a playbook showcasing the use of set_fact
  become: 'yes'
  hosts: servers

  tasks:
  - name: Using a register to store output
    ansible.builtin.shell: ssh -V
    register: ssh_version

  - ansible.builtin.set_fact:
      ssh_version_number: "{{ ssh_version.stderr }}"

  - ansible.builtin.debug:
      var: ssh_version_number

Are you wondering why I used stderr instead of stdout or stdout_lines? That is ssh -V's normal behavior: it writes its version information to stderr.

Reading Variables at Runtime

For data we cannot hard-code in the playbook, we can supply values at runtime using the vars_prompt section.

---

- name: This is a playbook showcasing the use of vars_prompt
  become: 'yes'
  hosts: localhost

  vars_prompt:
  - name: description
    prompt: Please provide the description
    private: no

  tasks:
  - ansible.builtin.debug:
      var: description

Date, Time, and Timestamp

ansible_date_time

ansible_date_time comes from the gathered facts. The playbook needs to gather the facts of the nodes; otherwise, the variable will be undefined.

---

- name: This is a playbook showcasing ansible_date_time
  become: 'yes'
  hosts: localhost
  gather_facts: true

  tasks:
  - ansible.builtin.debug:
      msg: "Datetime data {{ ansible_date_time }}"

  - ansible.builtin.debug:
      msg: "Date {{ ansible_date_time.date }}"

  - ansible.builtin.debug:
      msg: "Time {{ ansible_date_time.time }}"

  - ansible.builtin.debug:
      msg: "Timestamp {{ ansible_date_time.iso8601 }}"

Conditional Statements

when

A task with when conditional statement will only execute if the statement is true. For example:

---

- name: This is a playbook showcasing the use of `when` conditional statement
  become: 'yes'
  hosts: localhost
  gather_facts: true

  tasks:

  - ansible.builtin.debug:
      msg: "Date {{ ansible_date_time.date }}"
    when: ansible_date_time is defined

The debug task will only run if ansible_date_time is defined.

failed_when

---

- name: This is a playbook showcasing the use of `failed_when` conditional statement
  become: 'yes'
  hosts: localhost
  gather_facts: false

  tasks:

  - name: Check connection
    command: ping -c 4 mywebapp.local
    register: ping_result
    failed_when: false # never fail

In the above example, the task never fails. When failed_when is given an expression instead, the task is marked as failed if the expression evaluates to true, and as successful if it evaluates to false.
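
For instance, the same task could be made to fail only when every ping is lost; the match string below is an assumption about ping's output format:

```yaml
- name: Check connection
  command: ping -c 4 mywebapp.local
  register: ping_result
  failed_when: "'100% packet loss' in ping_result.stdout"
```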

changed_when

When ansible runs on a host, it may change something on that host. Sometimes we want to define ourselves when to consider the system changed. That is what changed_when is for.

---

- name: This is a playbook showcasing the use of `changed_when` conditional statement
  become: 'yes'
  hosts: localhost
  gather_facts: false

  tasks:

  - name: Check connection
    command: ping -c 4 mywebapp.local
    register: ping_result
    failed_when: false # never fail
    changed_when: false # never change anything

Handlers

Handlers are used to manage task dependencies. When we want to run a task only after another one has completed with changed=true, we use a handler. In the example below, we only enable the nginx service after nginx is installed successfully.

---

- name: This is a playbook showcasing the use of handlers
  become: 'yes'
  hosts: servers
  gather_facts: true

  tasks:

    - name: Install nginx
      ansible.builtin.dnf:
        name: nginx
        state: present
      notify:
        - Enable nginx service

  handlers:

    - name: Enable nginx service
      ansible.builtin.service:
        name: nginx
        enabled: true
        state: restarted

Ansible Vault

The vault is where we keep our secrets secret. When we have confidential information we want to keep secure, we use Ansible Vault. It allows seamless encryption and decryption of sensitive data, with smooth integration with other Ansible features such as playbooks.

Encrypt a variable

ansible-vault encrypt_string "secret token string" --name "api_key"

Encrypt a file

ansible-vault encrypt myfile.txt

Decrypt a file

ansible-vault decrypt myfile.txt

View content of encrypted file

ansible-vault view myfile.txt

Edit content of an encrypted file

ansible-vault edit myfile.txt

Change encrypted file encryption key

ansible-vault rekey myfile.txt

Using an encrypted variable in a playbook:

---

- name: This is a playbook showcasing the use of ansible vault
  become: 'yes'
  hosts: servers
  gather_facts: true

  vars:
    my_secret: !vault |
                  $ANSIBLE_VAULT;1.1;AES256
                  15396363646563646365353331396364333839346632333964353531386132323034353163346432
                  6365313938653033613538366132353631626430373032620a653030326634376663613964366164
                  33373965656433346466326266363438376330386561386563353764646237643061613337323733
                  3633383934636236620a353132306539343363326437316539633432363436653437333866353534
                  3738

  tasks:

    - ansible.builtin.debug:
        var: my_secret # never print secrets
      no_log: true

The playbook will not execute until we provide the password to decrypt the encrypted variable.
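
To supply the vault password at runtime, we can use either of these options (playbook and file names are illustrative):

```
ansible-playbook my-playbook.yml --ask-vault-pass
ansible-playbook my-playbook.yml --vault-password-file ~/.vault_pass
```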

Conclusion

I am going to stop here for now but will come back in other posts to talk more about Ansible playbooks. Stay warm, everyone.

Virtualization Technologies

Traditional System Configuration

In traditional server systems, applications run directly on top of the host operating system. If we have applications that need isolation, we need a dedicated physical server for each of them, even though a single server would have enough resources to run multiple applications.

This scenario is inefficient and very expensive. To solve the problem, we use virtualization, which allows running multiple operating systems on the same physical server.

Virtualization

With virtualization, instead of installing applications directly on top of the physical server's host operating system, we install a hypervisor on top of the physical server, then install guest operating systems in virtual machines. Those virtual machines are managed by the hypervisor, which is also called a virtual machine manager (VMM).

A hypervisor is software that sits on top of a bare-metal server to allow the sharing of resources between multiple operating systems (also called guest operating systems). There are two types of hypervisors: a Type 1 hypervisor sits directly on top of the bare-metal server, and a Type 2 hypervisor is a software package installed on a host operating system.

Running multiple operating systems on a single host saves money, time, and space, since VM management is easier and we have less physical equipment to purchase, install, and manage. Popular hypervisor platforms include:

  • VMWare ESXi

  • Oracle OVM/OLVM

  • Microsoft Hyper-V

  • Citrix Xen Server

  • RedHat KVM

  • Proxmox VE

  • XCP-ng

  • Incus

VMWare ESXi

I have some lab experience with VirtualBox, Proxmox, and Incus, but not VMware yet. That is about to change. Since I am also looking into opportunities in data centers, I think it is a good time to start learning VMware technologies alongside the system and network automation journey I recently started.

Ansible: Introduction to Playbooks

What is a Playbook

A playbook in Ansible is a file containing a set of instructions for automating system configuration. It is like a bash script, but in Ansible's "language". An ad-hoc command is suitable for a basic single-line task, but if we want to perform a complex and repeatable deployment, we should use a playbook.

Playbooks are written in YAML following a structured syntax. A playbook contains an ordered list of plays that run from top to bottom by default.

---
- name: Update servers
  hosts: servers
  remote_user: ans-user

  tasks:
  - name: Update nginx
    ansible.builtin.dnf:
      name: nginx
      state: latest

To ping all hosts:

---
- name: Ping all hosts
  hosts: servers

  tasks:
  - name: Ping servers
    ansible.builtin.ping:

This is how you run a playbook:

ansible-playbook my_playbook.yml

or in dry run mode:

ansible-playbook --check my_playbook.yml

or to check the syntax of our playbook:

ansible-playbook --syntax-check my_playbook.yml

or to list all hosts:

ansible-playbook --list-hosts my_playbook.yml

or to list all tasks:

ansible-playbook --list-tasks my_playbook.yml

We can add a lot more configuration to a playbook to perform advanced automation tasks. We are going to leave that for a future post.

By default, ansible gathers facts about the nodes before executing the playbook. To disable this behavior, we can add gather_facts: false to our playbook:

---
- name: Ping all hosts
  hosts: servers
  gather_facts: false

  tasks:
  - name: Ping servers
    ansible.builtin.ping:

Example of Simple Playbooks

Ping hosts

We've already seen how to do that. This is a simple way to ping nodes using ansible playbook:

---
- name: Ping Linux hosts
  hosts: servers
  gather_facts: false

  tasks:
  - name: Ping servers
    ansible.builtin.ping:

Install/Uninstall packages

---
- name: Install/uninstall packages
  hosts: servers
  become: 'yes'

  tasks:
  - name: Install OpenSSH on Linux servers
    ansible.builtin.dnf:
      name: openssh
      state: present

  - name: Uninstall Apache
    ansible.builtin.dnf:
      name: httpd
      state: absent

Update packages

---
- name: Update packages
  hosts: servers
  become: 'yes'

  tasks:
  - name: Update OpenSSH on Linux servers
    ansible.builtin.dnf:
      name: openssh
      state: latest

  - name: Update nginx
    ansible.builtin.dnf:
      name: nginx
      state: latest

Enable/Disable services

---
- name: Enable nginx service
  hosts: servers
  become: 'yes'

  tasks:
  - name: Install nginx
    ansible.builtin.dnf:
      name: nginx
      state: present

  - name: Enable nginx service
    ansible.builtin.service:
      name: nginx
      state: started
      enabled: yes

  - name: Disable cups service
    ansible.builtin.service:
      name: cups
      state: stopped
      enabled: no

Ansible: Ad-Hoc Commands

An ansible ad-hoc command is a single command sent to an ansible client. For example:

ansible servers -m setup

setup is an ansible module, which is a set of tools that handles a specific operation. setup gathers information about the selected ansible clients.

We can pass a filter to setup argument to gather information we are interested in from the managed node. For example:

ansible servers -m setup -a "filter=ansible_all_ip_addresses"

ansible servers -m ping

This ad-hoc command pings all clients from the servers group in the configured inventory.

Run Shell Commands on Ansible Clients

ansible servers -m shell -a "ip addr show"
ansible servers -m shell -a "uptime"

These commands send a shell command to all nodes in the group. -a is used to specify the argument of the ad-hoc command, which is the command we want to run on each ansible client.

Copy Files from the Ansible Control Node to Clients

ansible servers -m copy -a "src=/home/me/my-file.txt dest=/etc/ansible/data/my-file.txt"
ansible servers -m copy -a "content='My Text file content' dest=/etc/ansible/data/my-text-file.txt"

This command copies a file from the ansible control node to all clients. We can choose whether we want to copy the content of a file or the file itself by using the content or src parameters. Note that the destination folder the file is going into must already exist on the clients. When uploading file content, we must specify the complete destination file name (dest=/data/upload/my-file.conf).

If the file exists at the destination, ansible uses a checksum of the file to determine whether that task was previously done. If the file was not modified, ansible will not re-upload it. Otherwise, whether the file was updated on the control node or on the clients, ansible will upload the file again to the selected clients.
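
Conceptually, the decision works like this sketch (not Ansible's actual implementation; Ansible computes SHA-based checksums internally):

```python
import hashlib
from typing import Optional

def file_checksum(data: bytes) -> str:
    # Hash of the file content, used to compare the two versions
    return hashlib.sha1(data).hexdigest()

def needs_upload(source: bytes, destination: Optional[bytes]) -> bool:
    # Re-upload when the destination file is missing or its content differs
    if destination is None:
        return True
    return file_checksum(source) != file_checksum(destination)

print(needs_upload(b"config v2", b"config v1"))  # True
print(needs_upload(b"config v1", b"config v1"))  # False
```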

Create and Delete Files and Folders on Ansible Clients

To create a new file in ansible clients:

ansible servers -m file -a "dest=FILE OR DIRECTORY DESTINATION state=touch"

To delete a file in ansible clients:

ansible servers -m file -a "dest=FILE OR DIRECTORY DESTINATION state=absent"

To create a directory, change the state to directory:

ansible servers -m file -a "dest=/my/directory/data state=directory"

Deleting a directory works like deleting a file: just specify the directory name in dest with state=absent, and ansible will delete the directory.
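
For example, removing the directory created above:

```
ansible servers -m file -a "dest=/my/directory/data state=absent"
```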

Install and Uninstall Packages on Ansible Clients

We can use the shell module or the dnf/apt ansible modules.

ansible servers -m shell -a "sudo dnf install -y nginx"
ansible servers -m dnf -a "name=nginx state=present" -b

If the operation requires root privileges, we can pass sudo to the shell command. But if we are using the dnf/apt module, the ansible user must have root privileges, and we also need to add the -b option to the command.

Use the latest state to update an already installed package.

ansible servers -m dnf -a "name=nginx state=latest" -b

The state can be one of absent, installed, present, removed, and latest.

Understanding ansible ad-hoc commands is important for understanding ansible playbooks. From here, we are going to move slowly towards efficient ways to automate tasks using ansible.

Ansible: Control Node Reasonable Setup

This post focuses on coming up with a reasonable Ansible control node setup for a homelab. By reasonable setup, I mean a setup that will allow me to properly send tasks to managed nodes with a lower likelihood of failure. From this point, I would like to focus on learning the important parts of Ansible instead of juggling left and right to fix basic setup errors.

Create or select a working folder

To keep things simple, I am going to have my inventory in /etc/ansible-admin/, owned by the ansible-admin group.

Where to keep ansible.cfg

The default ansible.cfg can be left where it is. For managing our nodes, I am going to keep my own Ansible configuration in /etc/ansible-admin/ansible.cfg.

Where to keep the inventories

The lab inventories can be kept in /etc/ansible-admin/inventory/

Disable the host key verification

From ansible.cfg:

[defaults]
host_key_checking = False

or from an environment variable:

export ANSIBLE_HOST_KEY_CHECKING=False

or from the command line:

ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook my_playbook.yml

Ansible: More about the Inventory File

Ansible's default inventory is located at /etc/ansible/hosts, but we can keep it elsewhere, for example at /home/me/ansible/hosts.ini, and point ansible to it using the -i flag.

ansible web -m ping -i ./hosts.ini

Or I can just configure ansible.cfg to point to the path of my inventory file.

[defaults]
inventory = /etc/ansible/inventory/hosts.ini
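Ansible also reads the inventory path from an environment variable, so the same setting can be made without touching ansible.cfg. The path below is the example one from above:

```shell
# Point Ansible at the inventory via the documented ANSIBLE_INVENTORY
# environment variable instead of editing ansible.cfg.
export ANSIBLE_INVENTORY=/etc/ansible/inventory/hosts.ini
echo "$ANSIBLE_INVENTORY"
```

Settings made this way only last for the current shell session, which makes them handy for quick experiments.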

Hosts can be organized into groups inside the inventory file. A group name must be unique and must follow the same rules as a valid variable name.

Here is an example with two groups, web and db:

[web]
192.168.10.15
192.168.10.16

[db]
192.168.12.15
192.168.12.16
192.168.12.17

Here is the same inventory in YAML format:

web:
  hosts:
    192.168.10.15:
    192.168.10.16:
db:
  hosts:
    192.168.12.15:
    192.168.12.16:
    192.168.12.17:

Ansible automatically creates the all and ungrouped groups behind the scenes. The all group contains every host, and the ungrouped group contains every host that is not in any group.

So, ansible -m ping all will ping all hosts listed in the inventory file, and ansible -m ping ungrouped will ping all hosts not listed in any group.
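A quick way to see these implicit groups is ansible-inventory --graph. A sketch: the 203.0.113.9 host is a made-up ungrouped example, and the graph step is guarded in case ansible is not installed on this machine:

```shell
# Write an example inventory to a temp file; a bare host before the first
# [group] header lands in Ansible's implicit "ungrouped" group.
inv=$(mktemp)
cat > "$inv" <<'EOF'
203.0.113.9

[web]
192.168.10.15
192.168.10.16
EOF
# --graph prints the @all / @ungrouped / @web tree (skipped when the
# ansible-inventory command is not available).
if command -v ansible-inventory >/dev/null 2>&1; then
  ansible-inventory -i "$inv" --graph
fi
```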

Do more in your inventory

  • A host can be part of multiple groups

  • Groups can also be grouped. In YAML:

prod:
  children:
    web:
    db:
test:
  children:
    web_test:

And in INI:

[prod:children]
web
db

[test:children]
web_test

  • Add a range of hosts. In INI:

[servers]
192.168.11.[15:35]

And in YAML:

servers:
  hosts:
    192.168.11.[15:35]:

  • Add variables to hosts or groups, for example a custom SSH port. In INI:

[prod]
192.168.10.15:4422
prod1 ansible_port=4422 ansible_host=192.168.10.22
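Ansible expands the [15:35] range notation into individual hosts. A quick bash emulation of that expansion, as a sketch (this is not Ansible itself, just a loop that prints the same 21 addresses):

```shell
# Emulate how Ansible expands 192.168.11.[15:35] into individual hosts.
for i in $(seq 15 35); do
  echo "192.168.11.$i"
done
```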

You can do way more than what I have listed above. I am not going to bore you with everything about Ansible inventories here because I don't need most of it at this stage of my learning. But if you feel like you want to learn more about this topic, go here

Goodbye for now

Ansible: Initial Setup

In my previous post, I went quickly through ansible installation and initial setup. I did not really set up anything; I just showed you where to find the things that ansible brings by default.

In this post I will go deeper into the setup process. But I am still not going to try to impress you here. Let's keep that for future posts.

Ansible Control Node

The Ansible config file is located at /etc/ansible/ansible.cfg by default. We are going to use this file later to customize our installation of Ansible.

If you have just a few nodes, you can SSH into each one of them to make sure you can connect correctly. That also means that if you have just a few nodes, Ansible might not be the right tool.

Use ssh-copy-id -i key.pub node-user@192.168.10.10 to add the controller's public key to the authorized keys on each node, so the controller can connect without a password.
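Before copying a key, the control node needs a key pair. A sketch of generating one non-interactively; a temporary file path is used here so the example is safe to run, but on a real control node you would use ~/.ssh/id_ed25519 and your actual node address:

```shell
# Generate a control-node key pair non-interactively (-N '' means no
# passphrase; fine for a homelab, use a passphrase or an agent otherwise).
keyfile="$(mktemp -u)"
ssh-keygen -t ed25519 -N '' -C "ansible-control" -f "$keyfile" -q
ls -l "$keyfile" "$keyfile.pub"
# Then authorize it on each managed node (commented out: needs a live node):
# ssh-copy-id -i "$keyfile.pub" node-user@192.168.10.10
```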

Ansible Inventory

The inventory contains the nodes you want ansible to manage. The default inventory file is located at /etc/ansible/hosts. The nodes are put into groups for ease of management. The group names must be unique and they are case sensitive. The inventory file contains the IP addresses or FQDN of the managed hosts.

If we want to use the default inventory file we can just run:

# to ping all nodes in the web group
ansible -m ping web

But if we are working with a dedicated inventory file, like my_nodes.ini, we should tell ansible that we are providing an inventory file by adding -i [INVENTORY FILE]. For example: ansible web -i my_nodes.ini -m ping

The inventory in the ini format looks like:

[web]
192.168.12.13
192.168.12.14

[db]
192.168.13.13
192.168.13.15

But the inventory file can also be written in the YAML format:

my_nodes:
  hosts:
    node_01:
      ansible_host: 192.168.10.12
    node_02:
      ansible_host: 192.168.10.13

[web] is a group name. It is unique across the inventory file. We can have multiple groups in an inventory file.

To run an ansible command on multiple groups, we separate the group names with colons. For example:

ansible web:db -m ping -i my_nodes.ini --ask-pass

This command will ping the nodes in the web and db groups. --ask-pass prompts for a password if the SSH daemon on the managed nodes asks for the user's password.

If our command requires input to function, maybe we are doing it the wrong way. Ansible is supposed to facilitate automation: a command should be able to run to completion without additional user input. In my initial ansible setup, I provided input twice when running the ping command: the first time for host key verification, the second to provide the node password because the SSH keys were not set up properly. We are going to fix this in the next posts.
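Both prompts can be removed ahead of time in ansible.cfg. A sketch, where the user name and key path are assumptions for a homelab (substitute your own):

```ini
[defaults]
; skip the interactive host key prompt (homelab only)
host_key_checking = False
; connect as this user on the managed nodes (assumed name)
remote_user = node-user
; use this key instead of prompting for a password (assumed path)
private_key_file = ~/.ssh/id_ed25519
```

host_key_checking, remote_user, and private_key_file are all standard [defaults] settings, so with the key already copied to the nodes, ad-hoc commands run without any prompts.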

How to Manage Nodes with Ansible

Until now we have only learned how to ping our nodes using the ansible ping module. ansible web -m ping tells ansible to use the ping module against the web group.

Key Points to Remember

  • Ansible is used to automate repetitive tasks we perform on network devices

  • Ansible inventory contains grouped list of nodes we want to manage

  • The inventory can be written in the ini or YAML format

  • Ansible comes with prebuilt modules like ping to facilitate node management.

In my next posts, I will go deeper into each important part of Ansible, such as the inventory and playbooks.

So, read me soon.