
Juniper SRX: Network Address Translation


Junos NAT Types

  • Source NAT: Many to one translation of source IP addresses

  • Destination NAT: One to many translation of destination IP addresses

  • Static NAT: One to one translation of IP addresses

Source NAT

Source NAT is a very common NAT configuration. It is typically used to translate multiple private addresses to a single public address, and it only allows outgoing connections.

Common uses include:

  • Translate one IP address to another IP address

  • Translate one contiguous block of addresses to another block of addresses of the same size or smaller.

  • Translate one contiguous block of addresses to one IP address

  • Translate one contiguous block of addresses to the address of the egress interface

There are two types of Source NAT Translations:

  • Interface-based: the source address is translated to the address configured on the egress interface. This is also called interface NAT. Interface-based translation uses port address translation (PAT) and does not require the configuration of an address pool.

  • Pool-based: it uses a set of IP addresses for translation.

We configure source NAT using rules. A rule requires:

  • a traffic direction: here we need to specify from interface, from zone, or from routing-instance and to interface, to zone, or to routing-instance

  • the packet information: here we need the source and destination IP addresses or subnets, source port numbers or port ranges, destination port numbers or port ranges, and protocols or applications.

If multiple source NAT rules overlap, the most specific rule takes precedence.

Three actions can be configured in a source NAT rule:

  • interface: the source address will be translated to the address configured on the egress interface

  • pool: the source addresses will be translated to a pool of addresses

  • off: the source NAT will not be applied

Source NAT configuration
Interface-based NAT configuration
edit security nat

# create rule-set
edit source rule-set ZONE-A-TO-ZONE-B

# add traffic direction
set from zone ZONE-A
set to zone ZONE-B

# create rule
edit rule R1

# add rule match criteria
set match source-address 0.0.0.0/0
set match destination-address 0.0.0.0/0

# add action
set then source-nat interface 

See allocated port with:

show security nat interface-nat-ports
Pool-based NAT configuration

To create a source pool,

edit security nat

edit source pool SOURCE-POOL-1

set address 172.16.1.1/32 to 172.16.1.50/32

Change the rule set rule action to use the pool.

edit security nat source

set rule-set ZONE-A-TO-ZONE-B rule R1 then source-nat pool SOURCE-POOL-1
Proxy ARP

A proxy ARP configuration is required with pool-based source NAT. Here is how to configure proxy ARP on the SRX device.

edit security nat

edit proxy-arp interface ge-0/0/1
set address 172.16.1.1/32 to 172.16.1.50/32

With source NAT, port address translation (PAT) is enabled by default. If PAT is disabled, the number of translations is limited by the number of IP addresses available in the pool. To disable PAT, run:

edit security nat source pool SOURCE-POOL-1
set port no-translation

To see NAT usage, run:

show security nat resource-usage source-pool SOURCE-POOL-1

The overflow pool is a pool to be used if the original pool is exhausted. It can be a user-defined source NAT pool or an egress interface.

To configure an overflow pool:

edit security nat source pool SOURCE-POOL-1

set overflow-pool interface

Destination NAT

Destination NAT is used to translate the destination address of a packet. It commonly translates the public IP address of a packet to a private internal IP address. Destination NAT only allows incoming connections.

Common uses include:

  • Translate a destination IP address to another address

  • Translate a destination IP address and port to another address and port

  • Translate a contiguous block of addresses to another contiguous block of addresses

Destination NAT supports only pool-based NAT.

Destination NAT Rules:

  • Traffic direction: from interface, from zone, or from routing-instance

  • Packet information: source and destination IP addresses or subnets, source port or port ranges, destination port or port ranges, and protocols or applications

There are only two actions we can configure for destination NAT:

  • Pool

  • Off

If we have overlapping rules in the destination rule set, the most specific rule will take precedence.

Destination NAT configuration

To create a destination pool:

edit security nat

edit destination pool DESTINATION-POOL-1

set address 192.168.1.1/32
edit security nat destination

edit rule-set RS1

set from zone ZONE-A

edit rule R1

set match destination-address 12.1.1.5/32

set then destination-nat pool DESTINATION-POOL-1

We also need to add proxy ARP because the destination address does not belong to any interface.
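
A minimal proxy ARP sketch for this example, assuming the public address 12.1.1.5 is reachable on ge-0/0/0.0 (the interface is an assumption for illustration):

edit security nat

edit proxy-arp interface ge-0/0/0.0
set address 12.1.1.5/32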

We can then define a security policy that is configured to look for the translated address since the security policy lookup happens after the translation.

Static NAT

Static NAT is a combination of source NAT and destination NAT. Static NAT translation is always one to one. For each private IP address, a public IP address must be allocated, and we don't need to configure an address pool.

To configure a static NAT, we need:

  • the traffic direction. Only the from portion is required.

  • the packet information. The protocols or applications are not needed here.

A proxy ARP is also required here.

Since static NAT allows the communication in both directions, we need to configure two security policies.
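
Here is a minimal static NAT sketch, assuming the public address 12.1.1.10 maps to the internal host 192.168.1.10 and that outside traffic arrives from ZONE-B (the addresses, zone, and interface are illustrative):

edit security nat static

edit rule-set STATIC-RS
set from zone ZONE-B
set rule R1 match destination-address 12.1.1.10/32
set rule R1 then static-nat prefix 192.168.1.10/32

# proxy ARP for the public address
set security nat proxy-arp interface ge-0/0/0.0 address 12.1.1.10/32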

Learn more about firewall security policies

Juniper SRX: Firewall Security Policies


Security policies are used to enforce rules on transit traffic. Transit traffic is traffic that is not destined to the SRX device. Host inbound traffic is not controlled using security policies.

Security policies affect traffic entering from one zone and exiting another zone. The combination of a from-zone and a to-zone is called a context. Every context has an ordered list of policies, and the list is processed from top to bottom.

Security policies are stateful in nature. That means that return traffic is allowed by default. The SRX device will drop all traffic that is not explicitly permitted by a security policy.

Packet Processing in an SRX device

Initial Policy Lookup

  • Source zone (based on ingress interface)
  • Destination zone (based on route lookup)
  • Source IP address
  • Destination IP address (after static and destination NAT translation)
  • Source port
  • Destination port (after destination NAT translation)
  • Logical system
  • User identity
  • Protocol

Session Lookup

  • Source IP address
  • Destination IP address
  • Source port
  • Destination port
  • Protocol

The SRX device uses these 5 elements to determine whether a packet belongs to an existing session or not.
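
To see what the session table currently contains, the flow sessions can be listed in operational mode:

show security flow session

# narrow the output, for example by destination port
show security flow session destination-port 443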

Security Policy Configuration

To see configured security policies, run:

show security policies


To create a new security policy, run:

edit security policies from-zone TRUST to-zone ZONE-A

set policy ALLOW-INTERNET match source-address any
set policy ALLOW-INTERNET match destination-address any
set policy ALLOW-INTERNET match application any

set policy ALLOW-INTERNET then permit

Since policies are evaluated from top to bottom, if there is a need to move a policy, we can do that with:

edit security policies from-zone ZONE-A to-zone UNTRUST

insert policy ALLOW-INTERNET before policy DENY-ALL

Source and destination addresses are two of the five match criteria that should be configured in a security policy. You can now configure wildcard addresses for the source and destination address match criteria in a security policy. A wildcard address is represented as A.B.C.D/wildcard-mask. For example 10.10.10.10/255.255.0.255.

The wildcard address usage is not restricted to full octets only; you can configure any wildcard address, for example 172.16.0.1/255.255.18.255. But the first octet of the wildcard mask must be greater than 128. For example, a wildcard mask represented as 0.255.0.255 or 1.255.0.255 is invalid.
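
As a hedged sketch, a wildcard address could be defined in the global address book and then referenced in a policy (the names and zones below are made up for illustration):

set security address-book global address LAB-HOSTS wildcard-address 172.16.20.50/255.255.0.255

set security policies from-zone ZONE-A to-zone ZONE-B policy ALLOW-LAB match source-address LAB-HOSTS
set security policies from-zone ZONE-A to-zone ZONE-B policy ALLOW-LAB match destination-address any
set security policies from-zone ZONE-A to-zone ZONE-B policy ALLOW-LAB match application any
set security policies from-zone ZONE-A to-zone ZONE-B policy ALLOW-LAB then permit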

Configuring wildcard security policies on a device affects performance and memory usage based on the number of wildcard policies configured per from-zone and to-zone context. Therefore, you can only configure a maximum of 480 wildcard policies for a specific from-zone and to-zone context.

Security Policy Actions

  • permit: the packet is permitted based on the initial packet policy lookup

  • reject: for TCP packets, a TCP reset (RST) is sent back to the source; for UDP and other IP protocols, an ICMP unreachable message is sent.

  • deny: the packet is silently dropped

  • count: counts bytes or kilobytes of all traffic the policy allows to pass through the device in both directions.

  • log: logs traffic information for the policy

Policy Precedence

Multiple security policies may have similar match criteria. Policy precedence rules will determine which policy will be applied first. Here is the matching order:

  1. Intrazone policies: The ingress and egress interfaces are in the same zone. For example from-zone ZONE-A to-zone ZONE-A

  2. Interzone policies: The ingress and egress interfaces are in different zones. For example from-zone ZONE-A to-zone ZONE-B.

  3. Global policies: They are evaluated if the packet does not match intrazone or interzone context. Global security policies are ordered and also evaluated from top to bottom.

  4. Default action: The default policy denies all traffic. It can be configured with set security policies default-policy deny-all. This policy is evaluated if the packet does not match any intrazone, interzone, or global policy.

Schedulers

A scheduler is a configuration that allows a security policy to be active only during certain times, for example if we want to allow certain vendors access only on weekends.

A scheduler can be associated with multiple security policies but a policy can be associated with only one scheduler. When a scheduler is inactive, a policy is unavailable for lookup.

Scheduler configuration

edit schedulers scheduler VENDER-WEEKEND-SCHEDULE

set saturday all-day
set sunday all-day

To see the status of schedulers, run:

show schedulers

To attach a scheduler to a policy,

edit security policies from-zone ZONE-A to-zone ZONE-B policy VENDER-POLICY

set scheduler-name VENDER-WEEKEND-SCHEDULE

Application firewall

Traditional security policies permit or reject traffic based on Layer 3 or Layer 4 information. We use IP addresses and port numbers to determine what traffic is allowed to go through the SRX device. For example, we can control applications such as HTTP, SMTP, and DNS because these applications use well-known standard ports. This approach is limited, especially when dealing with evasive applications.

Juniper Networks application firewall (AppFW) provides policy-based enforcement and control on traffic based on application signatures. By using AppFW, you can block any application traffic not sanctioned by the enterprise. AppFW enables us to enforce the policy control on Layer 7 traffic.

For AppFW to work, we need to have the Application identification license installed on the SRX device. We also need to download and install the application signatures package, a predefined signature database of applications.

AppFW support

  • Traditional AppFW is supported in Junos OS 18.2 and lower

  • AppFW with Unified Policies is supported from Junos OS 18.2

Unified Security Policies

Unified security policies allow the use of dynamic applications as match criteria along with layer 3 and layer 4 information. So traffic is classified using layer 4 to layer 7 information and policy actions are applied based on identified application.

Unified policies leverage the application identity information from the application identification (AppID) service to permit, deny, reject, or redirect the traffic. A unified policy configuration handles all application firewall functionality and simplifies the task of configuring a firewall policy.

Unified security policies are easier to configure and are more granular.

To block Facebook:

set security policies from-zone ZONE-A to-zone ZONE-B policy BLOCK-FACEBOOK match dynamic-application junos:FACEBOOK-ACCESS
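
A fuller sketch of such a unified policy might look like this (the zone names and the deny action are assumptions for illustration):

set security policies from-zone ZONE-A to-zone ZONE-B policy BLOCK-FACEBOOK match source-address any
set security policies from-zone ZONE-A to-zone ZONE-B policy BLOCK-FACEBOOK match destination-address any
set security policies from-zone ZONE-A to-zone ZONE-B policy BLOCK-FACEBOOK match application any
set security policies from-zone ZONE-A to-zone ZONE-B policy BLOCK-FACEBOOK match dynamic-application junos:FACEBOOK-ACCESS
set security policies from-zone ZONE-A to-zone ZONE-B policy BLOCK-FACEBOOK then deny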

Application identification license installation

Use show system license in operational mode to see if the required license is installed on the SRX device. Make sure appid-sig is installed and available.

Download and install the application signature package

Download the application signatures with:

  • request services application-identification download. Check the status of the download with request services application-identification download status.

Install the package with:

  • request services application-identification install. Check the status of the installation with request services application-identification install status.

To learn more about any predefined application, run the operational command show services application-identification application detail junos:FACEBOOK-ACCESS.

Intrusion Detection and Prevention - IDP

To enable IDP, we need to have the appropriate license installed, and we need to have the signature database downloaded and installed. Then we can configure an IDP policy and enable a security policy for IDP inspection.

An IDP policy configuration looks like:

set security policies from-zone ZONE-A to-zone ZONE-B policy ALL-WEB then permit application-services idp-policy IDP-POLICY-1

IDP is part of the security policy configuration. It is enabled on a per-policy basis.

When an attack has been identified, Junos executes an IDP action. Here are the IDP actions:

  • no-action
  • ignore-connection
  • diffserv-marking
  • class-of-service
  • drop-packet
  • drop-connection
  • close-client
  • close-server
  • close-client-and-server
  • recommended

IDP policy configuration

Install IDP license

See installed license with show system license. IDP license is indicated by idp-sig in the SRX device.

Download and install signature database

Run request security idp security-package download to download the security package. Append status to see the download status. check-server shows more details about the package to be downloaded.

Configure IDP policy

We can download the IDP policy templates with request security idp security-package download policy-templates. Install the package with request security idp security-package install policy-templates.

Then add the downloaded templates.xsl to the configuration database. For that we run set system scripts commit file templates.xsl

To learn more about an attack object, run:

show security idp attack attack-list predefined-group [GROUP-NAME]

To learn more about an IDP policy, run:

show security idp attack attack-list policy [POLICY-NAME]

To create a custom IDP policy, run:

set security idp idp-policy POLICY-1

then configure the policy to fit our needs.
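
A minimal sketch of a custom policy that drops connections matching a predefined attack group (the attack-group name is only an example):

edit security idp idp-policy POLICY-1

set rulebase-ips rule R1 match attacks predefined-attack-groups "HTTP - All"
set rulebase-ips rule R1 then action drop-connection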

Configure security policy for IDP inspection

To configure an IDP inspect in a security policy, run:

set security policies from-zone ZONE-A to-zone ZONE-B policy ALL-WEB then permit application-services idp-policy POLICY-1

To see the IDP status, run:

show security idp status

Integrated User Firewall

The integrated user firewall is a mechanism to use user information as match criteria for security policies. This feature retrieves the user-to-ip address mapping information from Windows Active Directory.

Note that tracking for non-Windows Active Directory users is not supported, and multiple users logged in to the same device are also not supported. In addition, the LDAP authentication performed is simple authentication, so the username and password are sent in clear text.
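
Once the user-to-IP mappings are available, a security policy can match on them with the source-identity criterion; a hedged sketch (the domain and group names are made up):

set security policies from-zone ZONE-A to-zone ZONE-B policy ALLOW-ENG match source-identity "example.local\engineering"
set security policies from-zone ZONE-A to-zone ZONE-B policy ALLOW-ENG match source-address any
set security policies from-zone ZONE-A to-zone ZONE-B policy ALLOW-ENG match destination-address any
set security policies from-zone ZONE-A to-zone ZONE-B policy ALLOW-ENG match application any
set security policies from-zone ZONE-A to-zone ZONE-B policy ALLOW-ENG then permit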

Learn more about firewall security policies

Juniper SRX: Firewall Security Zones


Security zones are important elements in Juniper SRX firewall devices.

What is a firewall security zone?

A security zone in a Juniper SRX device is a logical unit used to divide a network into segments that may have different security requirements. Interfaces are then associated with security zones. Each interface can be associated with only one security zone, and each security zone can have multiple interfaces with the same security requirements for inbound and outbound traffic.

Juniper SRX Series Firewall secures a network by inspecting, and then allowing or denying, all connection attempts that require passage from one security zone to another.

A trust zone is available in the factory configuration and is used for the initial connection to the device. After you commit a configuration, the trust zone can be overridden.

# to see the zones configured on the device
show security zones


Security zone components

Security policies

Security policies are rules that regulate traffic going from one zone to another. They are processed in the order they are defined.

Policies allow us to deny, permit, reject, encrypt and decrypt, authenticate, prioritize, schedule, filter, and monitor the traffic attempting to cross from one security zone to another. We decide which users and what data can enter and exit, and when and where they can go.

Screens

Screens are predefined configurations that are used to block common network-level attacks. Screen configurations are applied to ingress packets only. They are checked at the beginning of the packet flow so that packets can be dropped as early as possible.

Screens categories
Statistics-based screens

These are used to determine normal network behavior and form a baseline. Any activity outside the baseline is flagged as abnormal.

Signature-based screens

These use patterns or signatures to identify malicious behavior.

Screen configuration
# create screen
edit security screen ids-option ZONE-A-SCREEN

# 50 ICMP packets per second to a destination
set icmp flood threshold 50

# attach screen to zone
set security zones security-zone ZONE-A screen ZONE-A-SCREEN

To see the stats of the screen:

show security screen statistics zone ZONE-A
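
Other screen options can be enabled under the same ids-option profile; for example, a TCP SYN flood threshold (the threshold value here is just an illustration):

edit security screen ids-option ZONE-A-SCREEN

# flag a destination receiving more than 200 SYN packets per second
set tcp syn-flood attack-threshold 200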
Address books

Address books are collections of IP addresses and address sets that make it easier to reference them in policies. By default, the SRX device configuration has an address book called global. The global address book is not attached to any security zone, but any additional address book created must be attached to a security zone.

Address objects defined in one zone cannot be used in another zone, but address objects defined in the global address book can be used in any zone.

# create a new address book
edit security address-book BOOK-A

set address DNS-SERVER 172.16.20.10/32 
set address STAGING-SERVERS 192.168.10.0/26


Address objects
IP prefix
  • LAN1 192.168.50.1/24
  • DNS Server 192.168.40.1/32
edit security address-book ZONE-A

set address DNS-SERVER 9.9.9.9/32
IP range
  • Servers 172.16.1.1-172.16.1.50
edit security address-book ZONE-A

set address SERVERs range-address 192.168.40.20 to 192.168.40.80

set attach zone ZONE-A
DNS address
  • Syslog server log.mywebsite.com
edit security address-book ZONE-A

set address WEBAPP dns-name myapp.mysite.com

set attach zone ZONE-A
Wildcard address
  • 172.16.20.50/255.255.0.255 - matches 172.16.*.50
edit security address-book ZONE-A

set address LAB-SERVERS wildcard-address 10.10.10.10/255.255.0.255

set attach zone ZONE-A
Interfaces

That is the list of interfaces in the zone. An interface can belong to only one security zone. By default, interfaces are in the null zone. The interfaces will not pass traffic until they have been assigned to a zone.

TCP RST

This feature is used to instruct the device to send a TCP reset (RST) for any packet that does not belong to an existing session and does not have the SYN flag set.
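
Enabling it is a single statement on the zone, for example:

set security zones security-zone ZONE-A tcp-rst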

How to create security zones on a Juniper SRX device?

# create a new security zone and attach an interface
set security zones security-zone ZONE-A interfaces ge-0/0/1.0

Make sure the port is a routed port

# remove ethernet switching
delete interfaces ge-0/0/6 unit 0 family ethernet-switching


What is a functional zone in Juniper SRX?

A functional zone is used to host management interfaces. The management (MGT) zone is the only functional zone currently supported. Interfaces in the MGT zone allow out-of-band management.

# create a functional zone and attach an interface
set security zones functional-zone management interfaces ge-0/0/2.0

Host inbound traffic

Host inbound traffic is traffic that terminates at the SRX device; it is the traffic destined to the SRX device itself. That is different from transit traffic, which enters on one interface and exits on another.

# to view host inbound traffic configuration
show security zones security-zone ZONE-A
# enable ssh, ping, and http web management in host inbound traffic
set security zones security-zone ZONE-A host-inbound-traffic system-services ssh
set security zones security-zone ZONE-A host-inbound-traffic system-services ping
set security zones security-zone ZONE-A host-inbound-traffic system-services http


Host inbound traffic can be configured at the zone level or on an interface attached to the zone. When host inbound traffic is configured at the interface level, that configuration takes precedence over the configuration at the zone level.

# enable ssh, ping, and http web management in host inbound traffic via the interface
set security zones security-zone ZONE-A interfaces ge-0/0/1.0 host-inbound-traffic system-services ssh
set security zones security-zone ZONE-A interfaces ge-0/0/1.0 host-inbound-traffic system-services ping
set security zones security-zone ZONE-A interfaces ge-0/0/1.0 host-inbound-traffic system-services http

There are two types of host inbound traffic that we can configure (an example of the protocols form follows this list):

  • system-services

  • protocols
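
For example, to allow a routing protocol such as OSPF to reach the device, we would use the protocols form (a small sketch, assuming OSPF runs on the zone's interfaces):

set security zones security-zone ZONE-A host-inbound-traffic protocols ospf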

Juniper application objects

The Junos default configuration group is hidden.

# to see the default configuration group
show configuration groups junos-defaults 

# to see the default applications
show configuration groups junos-defaults applications

We can also create custom applications

# go to applications configuration
edit applications

# add a custom application
set application CUSTOM-APP application-protocol http
set application CUSTOM-APP protocol tcp
set application CUSTOM-APP destination-port 8080

# add another custom application
set application CUSTOM-APP-2 application-protocol http
set application CUSTOM-APP-2 protocol tcp
set application CUSTOM-APP-2 destination-port 8443

# add an application set and previous applications
set application-set WEB-APPS application CUSTOM-APP
set application-set WEB-APPS application CUSTOM-APP-2

To specify multiple criteria for an application, for example multiple destination ports, we use a term.

# adding terms to a custom application
set application CUSTOM-APP term 1 destination-port 8080
set application CUSTOM-APP term 2 destination-port 8081
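
The custom applications or the WEB-APPS application set can then be referenced in a security policy; a hedged sketch (the zone and policy names are illustrative):

set security policies from-zone ZONE-A to-zone ZONE-B policy WEB-POLICY match source-address any
set security policies from-zone ZONE-A to-zone ZONE-B policy WEB-POLICY match destination-address any
set security policies from-zone ZONE-A to-zone ZONE-B policy WEB-POLICY match application WEB-APPS
set security policies from-zone ZONE-A to-zone ZONE-B policy WEB-POLICY then permit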

Learn more about firewall security Zones

Juniper SRX: Firewall Filters


This post is going to be focused on Juniper SRX firewall filters. They are important to understand and configure because they protect your firewall from malicious traffic passing through or destined to the firewall.

What is a Firewall Filter?

A firewall filter in Juniper SRX is a security feature that evaluates each packet against a set of rules and permits or denies it based on packet header information such as the source IP address, destination IP address, protocol, port numbers, and TCP connection flags.

Firewall filters are also known as access control lists by other vendors like Cisco. Other names also include authorization profile and packet filter.

How does Juniper SRX Firewall Filter Work?

The firewall filter inspects each and every packet coming in and going out of the SRX device interfaces. It is stateless and does not keep track of the state of the connections. It can be configured to accept or discard a packet before it enters or exits a port or interface. That is how we control the type and quantity of traffic that enters the device or exits the device.

If a packet is inspected and deemed acceptable, class-of-service handling and traffic policing can be applied. If a packet arrives on an interface with no firewall filter applied to incoming traffic, the packet is accepted by default. When a filter is applied, a packet that does not match any of its terms is discarded by default.

Firewall filters can be applied to all interfaces, including the loopback interface to filter traffic entering or exiting the device.

An IPv6 filter cannot be applied to an IPv4 interface because the protocol family of the firewall filter and the interface must match.

Firewall Filter Components

Terms

A term is a named structure in which match conditions and actions are defined. Each term has a unique name. A firewall filter contains one or more terms, and each term consists of match conditions and actions. Note that a firewall filter with a large number of terms can adversely affect both the configuration commit time and the performance of the Routing Engine. The order of terms is important and impacts the results. A firewall filter includes an implicit default term that discards all traffic that other terms did not explicitly accept.

The implicit term looks like:

term implicit-discard-all {
  then discard;
}
Match Conditions

A match condition, or packet filtering criterion, defines the values or fields that a packet must contain to be considered a match. If no match condition is specified, all packets match. So if we want an action to be taken for all packets, we can simply omit the match condition. We use the from keyword to specify the match statement. If a term contains multiple match conditions, a packet must match all of them to be considered a match for the term.

If a single match condition is configured with multiple values, such as a range of values, a packet must match only one of the values to be considered a match for the firewall filter term.

The match conditions available for a term depend on the protocol family that we select for the firewall filter.

Example of match conditions:

  • Source IP
  • Destination IP
  • TCP and UDP ports
  • TCP flags
  • IP options
  • Incoming interface
  • Outgoing interface
  • ICMP packet type
  • etc...
Actions

If all match conditions specified in the term are true, the action is taken. If the match condition of a term is omitted, the specified action is taken on all packets.

It is a good practice to explicitly configure one or more actions per firewall filter term. Any packet that matches all the conditions of the term is automatically accepted unless the term specifies other or additional actions.

There are three (3) types of actions:

Terminating actions
  • Stops the evaluation of the filter for a specific packet

  • The specified action is performed, no additional term is evaluated

  • Terminating actions include accept, discard, and reject.

    The accept action causes the system to accept the packet. The discard action causes the system to silently drop the packet without sending an ICMP message back to the source address. The reject action causes the system to discard the packet and send an ICMP message back to the source address.

Nonterminating actions

Nonterminating actions are used to perform other actions on a packet that do not halt the evaluation of the filter. Those actions include incrementing a counter (count), logging information about the packet header (log), sampling the packet data, sending information to a remote host using the system log functionality (syslog), or rate limiting traffic (policer).

If a term contains a nonterminating action without an explicit terminating action, such as accept, discard, or reject, the system will accept the matching packet by default. If we don't want the filter evaluation to stop there, we can add the next term action after the nonterminating action.

Example 1: term 2 never gets evaluated because term 1's action is nonterminating, so the default accept action is taken right after log.

[edit firewall filter demo]
term 1 {
    from {
        source-address {
           192.168.10.0/24;
        }
    }
    then {
        log;
    }
}
term 2 {
    then {
        discard;
    }
}

Example 2: To have term 2 evaluated, we explicitly request it in term 1 using next term.

[edit firewall filter test]
term 1 {
    from {
        source-address {
            192.168.11.0/24;
        }
    }
    then {
        log;
        next term;
    }
}
term 2 {
    then {
        reject;
    }
}
Flow control actions

A flow control action enables a device to perform configured actions on the packet and then evaluate the following term in the filter, rather than terminating the filter.

A standard firewall filter can have a maximum of 1024 next term actions. A commit error will occur if we exceed this number.

Firewall filter configuration

interfaces ge-0/0/1 {
  unit 0 {
    family inet {
      filter {
        input inbound-filter-demo;
        output outbound-filter-demo;
      }
    }
  }
}

input is used to filter traffic entering the interface and output is used to filter traffic exiting the interface.

Example of firewall filter configurations

# to enter firewall filter configuration
edit firewall filter
Block all bad ICMP messages
# create the filter
edit firewall filter BLOCK-BAD-ICMP

# create the term with the matching condition
set term ALLOW-TRUSTED-ICMP from protocol icmp 

# allow icmp from selected ips
set term ALLOW-TRUSTED-ICMP from source-address 192.168.10.10/32 
set term ALLOW-TRUSTED-ICMP from source-address 172.16.24.24/32 

# accept ICMP from trusted sources
set term ALLOW-TRUSTED-ICMP then accept

# block untrusted ICMP
set term BLOCK-UNTRUSTED-ICMP from protocol icmp 
set term BLOCK-UNTRUSTED-ICMP then discard

# allow all other traffic
set term ALLOW-OTHER-TRAFFIC then accept
# apply filter to interface
edit interfaces ge-0/0/1 unit 0 family inet

set filter input BLOCK-BAD-ICMP

commit

See the configured filter

show firewall filter BLOCK-BAD-ICMP
Block all telnet
edit firewall filter BLOCK-ALL-TELNET

set term BLOCK-TELNET from protocol tcp
set term BLOCK-TELNET from destination-port telnet
set term BLOCK-TELNET then discard

set term ALLOW-OTHER-TRAFFIC then accept

Then apply it to the loopback interface and commit the configuration.
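
A sketch of that last step, assuming the filter should inspect traffic destined to the Routing Engine via lo0 unit 0:

set interfaces lo0 unit 0 family inet filter input BLOCK-ALL-TELNET

commit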

Learn more about firewall filters

Juniper SRX: Initial Lab Setup


The Juniper SRX device is a Juniper security appliance with security, routing, and networking features. The security features include NGFW, IPS, UTM, and more. SRX stands for security, routing, and networking.

I started the setup of my two Juniper SRX 320 devices today, and it did not start the way I thought it would. Let me tell you what happened.

What I got in the boxes

Here is what I got in the box:

  • the SRX320 firewall device
  • two console cables (DB9 to RJ-45 and usb to mini-usb)
  • and a quite big PSU

The box contains basically everything you need to get up and running.

Configuring the Juniper SRX320 Device

I am going to configure the device for my homelab, and this is the initial configuration, so there will not be much in it: just the basics to start with. I will then change the configuration based on the lab I am working on. I will be posting a series of the labs I am doing on my blog here.

Junos version

See what version of Junos came with the device:

show version
show system information


Factory configuration

To see the factory configuration, run:

show configuration

We can specify the topic after this command to see the configuration of the selected topic. For example show configuration security to see the security configuration of the SRX device.

We can even select a subtopic to see a more filtered configuration. For example show configuration security policies to see security-policy-related configuration.

This is going to help us later to filter the configuration to see only the configuration we want to see.

Initial cleanup


After powering on the device, I started receiving auto-image-upgrade log messages. Clearing them with the command delete chassis auto-image-upgrade did not work: it required the root password to be set first. After setting up the root password, the problem disappeared.

Root user password

Juniper devices come with the root user created without a password, so the first order of business is to set up the root user password. Here is how we do it in the CLI.


set system root-authentication plain-text-password

Now the root user is setup. See the configuration with:

show configuration system root-authentication

Hostname, date, and timezone

For the initial setup, the device time is not going to be synchronized with an NTP server. That may be part of a future lab. The set date command takes the YYYYMMDDHHMM time format. The date and time are set in operational mode, not in configuration mode.

set date 202512241105

To view the time and date, run:

show system uptime


Since we have two SRX devices, distinct hostnames would be helpful.

[edit]
set system host-name SRX1

set system time-zone America/Chicago

To view the configured timezone:

show configuration system time-zone

User accounts and permissions

Junos devices come with the root user account. I am going to need a non-root user for my labs.

To create a new user, run:

[edit]

set system login user sam full-name "Mamadou Sandwidi"

Let's add the new user to a login class. For now I am going to use a predefined login class. We will make our own later during lab time.

set system login user sam class super-user

then add the password for the new user with:

set system login user sam authentication plain-text-password

View the newly configured user with:

[edit]

show system login user sam 


Interfaces and VLANs

Let's see the available interfaces.

show interfaces terse | no-more


That is a lot. Let's only look at the gigabit interfaces since they are the ones I will be working with the most.

show interfaces ge-* terse


Clear SRX device data

request system zeroize

Conclusion

From here I think we are all good for the first basic Juniper SRX labs. See you in a moment.

Networking: The OSI and TCP/IP Models

The OSI Model

OSI stands for Open Systems Interconnection. It is a standard and fundamental model for describing how network communication is processed in a network device. The model has 7 layers:

7. Application Layer

6. Presentation Layer

5. Session Layer

4. Transport Layer

3. Network Layer

2. Data Link Layer

1. Physical Layer

The layers are stacked on top of each other, with layer 1, the physical layer, at the bottom.

Layer 1: The Physical Layer

This layer refers to the cabling and connectors that allow the communication signals to reach the devices in the network.

Layer 2: The Data Link Layer

This layer enables communication within the same local area network. It is also called the switching layer. Here, network devices use MAC addresses to forward frames.

Layer 3: The Network Layer

This layer is also called the routing layer. In this layer, network devices use IP addresses to determine where to send network traffic.

Layer 4: The Transport Layer

This layer is responsible for providing the appropriate protocol for transporting data across the network. This is where we find the TCP and UDP protocols.

Layer 5: The Session Layer

The session layer helps manage the communication between network devices using protocols like NetBIOS, SOCKS, and NFS.

Layer 6: The Presentation Layer

The presentation layer formats the data received into a format humans can understand, for example png, mp4, and more.

Layer 7: The Application Layer

The application layer is the top layer in the OSI model. It provides an interface between computer applications and the underlying network. We find HTTP, DNS, FTP, and more in this layer.

The TCP/IP Model

The TCP/IP model is derived from the OSI model, but it has four layers instead of 7:

Layer 1: The Network Access Layer

This layer combines the physical layer and the data link layer from the OSI model into a single layer.

Layer 2: The Internet Layer

The network layer from OSI model becomes the internet layer.

Layer 3: The Transport Layer

The transport layer stayed the same.

Layer 4: The Application Layer

The session, presentation, and application layers from the OSI model are combined to become the application layer in the TCP/IP model.

Ansible: More on Playbooks

In my previous post in this series, I talked about what an Ansible playbook is and how to get started. In this post, I am going to talk about a few things that make working with playbooks more fun. At first, an Ansible playbook sounds basic and cannot do much besides pinging hosts, installing packages, copying files, and checking services (at least the way I presented it previously). But Ansible can do way more than that using a debugger, variables, and more.

Ansible Debug Module

Here is a simple example of a playbook that displays debug messages. It does not perform any particular task but shows how debug messages can be used in a playbook.

---
- name: A playbook with example debug messages
  hosts: servers
  become: 'yes'

  tasks:
  - name: Simple message
    ansible.builtin.debug:
      msg: This is a simple message

  - name: Showing a multi-line message
    ansible.builtin.debug:
      msg: 
      - This is the first message
      - This is the second message

  - name: Showing host facts
    ansible.builtin.debug:
      msg: 
      - The node's hostname is {{ inventory_hostname }}

ansible.builtin.debug has 3 parameters.

  • msg: The debug message we want to show

  • var: The variable we want to debug and show in the logs when the playbook is run. It cannot be used simultaneously with msg.

  • verbosity: An integer that represents the debug level required for the message to be shown when the playbook is run. It can have a value between 1 and 5 (-v to -vvvvv). The default value is 0, meaning no verbosity.

---
- name: A playbook with example debug messages
  hosts: servers
  become: 'yes'

  tasks:
  - name: Debug a variable
    ansible.builtin.debug:
      var: inventory_hostname

  - name: Debug a variable with verbosity of 3
    ansible.builtin.debug:
      msg: This is a message with a verbosity of 3
      verbosity: 3

When we run the playbook without the verbosity flag, messages that require verbosity will not be shown. So, if we want to see all messages, we should run:

ansible-playbook my-playbook.yml -vvv

-vvv designates verbosity level 3.

Defining variables in a playbook

We can define variables to store data we want to use in multiple places in a playbook. We define variables in the following way:


  more code...

  vars:
    var1: Hello world
    var2: 15
    var3: true
    var4:
    - Apples
    - Green
    - 1.5

  more code...


  more code...

  vars:
    grouped:
      var5: Hi there
      var6: 30
      var7: false

  more code...

Debugging multiple variables

 more code...

  tasks:
  - name: Display multiple variables
    ansible.builtin.debug:
      msg: |
        var1: {{ var1 }}
        var2: {{ var2 }}
        var3: {{ var3 }}
        var4: {{ var4 }}

 more code...    

 more code...

  tasks:
  - name: Display multiple variables
    ansible.builtin.debug:
      var: grouped

 more code...    

Storing Outputs with Registers

Most Ansible modules run and return a success or failure result. But sometimes we want to keep the output of a task for later use. We can use a register to store that output. Here is an example:

---

- name: This is a playbook showcasing the use of registers
  become: 'yes'
  hosts: servers

  tasks:
  - name: Using a register to store output
    ansible.builtin.shell: ssh -V
    register: ssh_version

  - name: Showing the ssh version
    ansible.builtin.debug:
      var: ssh_version

We store the output in the variable named by the register key, and we can then use var or msg from the debug module to show it.

Storing Data with Set_Fact Module

set_fact is used to store data associated with a host. It takes key: value pairs, where the key is the name of the variable and the value is its value. For example:

---

- name: This is a playbook showcasing the use of set_fact
  become: 'yes'
  hosts: servers

  tasks:
  - name: Using a register to store output
    ansible.builtin.shell: ssh -V
    register: ssh_version

  - ansible.builtin.set_fact:
      ssh_version_number: "{{ ssh_version.stderr }}"

  - ansible.builtin.debug:
      var: ssh_version_number

Are you wondering why I used stderr instead of stdout or stdout_lines? That is the normal behavior of ssh -V: it prints its version to stderr.

Reading Variables at Runtime

For data we cannot hard-code in the playbook, we can pass values in at runtime using the vars_prompt keyword.

---

- name: This is a playbook showcasing the use of vars_prompt
  become: 'yes'
  hosts: localhost

  vars_prompt:
  - name: description
    prompt: Please provide the description
    private: no

  tasks:
  - ansible.builtin.debug:
      var: description

Date, Time, and Timestamp

ansible_date_time

ansible_date_time comes from the gathered facts. The playbook needs to gather facts from the nodes; otherwise it will be undefined.

---

- name: This is a playbook showcasing ansible_date_time
  become: 'yes'
  hosts: localhost
  gather_facts: true

  tasks:
  - ansible.builtin.debug:
      msg: "Datetime data {{ ansible_date_time }}"

  - ansible.builtin.debug:
      msg: "Date {{ ansible_date_time.date }}"

  - ansible.builtin.debug:
      msg: "Time {{ ansible_date_time.time }}"

  - ansible.builtin.debug:
      msg: "Timestamp {{ ansible_date_time.iso8601 }}"

Conditional Statements

when

A task with when conditional statement will only execute if the statement is true. For example:

---

- name: This is a playbook showcasing the use of `when` conditional statement
  become: 'yes'
  hosts: localhost
  gather_facts: true

  tasks:

  - ansible.builtin.debug:
      msg: "Date {{ ansible_date_time.date }}"
    when: ansible_date_time is defined

The debug task will only run if ansible_date_time is defined.

failed_when

---

- name: This is a playbook showcasing the use of `failed_when` conditional statement
  become: 'yes'
  hosts: localhost
  gather_facts: false

  tasks:

  - name: Check connection
    command: ping -c 4 mywebapp.local
    register: ping_result
    failed_when: false # never fail

In the above example, the task never fails. When failed_when is given an expression that evaluates to true or false, the task is marked as failed if the expression evaluates to true; otherwise it is reported as successful, even if the command itself failed.

changed_when

When Ansible runs on a host, it may change something on that host. Sometimes we want to define ourselves when to consider the system as changed. That's what changed_when is for.

---

- name: This is a playbook showcasing the use of `changed_when` conditional statement
  become: 'yes'
  hosts: localhost
  gather_facts: false

  tasks:

  - name: Check connection
    command: ping -c 4 mywebapp.local
    register: ping_result
    failed_when: false # never fail
    changed_when: false # never change anything

Handlers

Handlers are used to manage task dependencies. When we want to run a task only after another one has completed with changed=true, we use a handler. In the example below, we only enable the nginx service after nginx is installed successfully.

---

- name: This is a playbook showcasing the use of handlers
  become: 'yes'
  hosts: servers
  gather_facts: true

  tasks:

    - name: Install nginx
      ansible.builtin.dnf:
        name: nginx
        state: present
      notify:
        - Enable nginx service

  handlers:

    - name: Enable nginx service
      ansible.builtin.service:
        name: nginx
        enabled: true
        state: restarted

Ansible Vault

The vault is where we keep our secrets secret. When we have confidential information that we want to keep secure, we use Ansible Vault. It allows seamless encryption and decryption of sensitive data with smooth integration with other Ansible features such as playbooks.

Encrypt a variable

ansible-vault encrypt_string "secret token string" --name "api_key"

Encrypt a file

ansible-vault encrypt myfile.txt

Decrypt a file

ansible-vault decrypt myfile.txt

View content of encrypted file

ansible-vault view myfile.txt

Edit content of an encrypted file

ansible-vault edit myfile.txt

Change the vault password of an encrypted file

ansible-vault rekey myfile.txt

Here is an example of using a vault-encrypted variable in a playbook:

---

- name: This is a playbook showcasing the use of ansible-vault
  become: 'yes'
  hosts: servers
  gather_facts: true

  vars:
    my_secret: !vault |
                  $ANSIBLE_VAULT;1.1;AES256
                  15396363646563646365353331396364333839346632333964353531386132323034353163346432
                  6365313938653033613538366132353631626430373032620a653030326634376663613964366164
                  33373965656433346466326266363438376330386561386563353764646237643061613337323733
                  3633383934636236620a353132306539343363326437316539633432363436653437333866353534
                  3738

  tasks:

    - ansible.builtin.debug:
        var: my_secret # never print secrets
      no_log: true

The playbook will not be executed until we provide the key to decrypt the encrypted variable.
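
For example, the vault password can be supplied interactively or from a file when running the playbook (the file path below is just an illustration):

ansible-playbook my-playbook.yml --ask-vault-pass

# or read the vault password from a file
ansible-playbook my-playbook.yml --vault-password-file ~/.vault_pass.txt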

Conclusion

I am going to stop here for now but will come back later in other posts to talk more about Ansible playbooks. Stay warm, everyone.

Virtualization Technologies

Traditional System Configuration

In traditional server systems, applications run directly on top of the host operating system. If we have applications that need isolation, we need a dedicated physical server for each of them, even though a single server would have enough resources to run multiple applications.

This scenario is inefficient and very expensive. To solve this problem, we use virtualization to allow running multiple operating systems on the same physical server.

Virtualization

With virtualization, instead of installing applications directly on top of the physical server's host operating system, we install a hypervisor on top of the physical server and then install guest operating systems in virtual machines. Those virtual machines are managed by the hypervisor, which is also called a virtual machine manager (VMM).

A hypervisor is software that sits on top of a bare-metal server to allow the sharing of resources between multiple operating systems (also called guest operating systems). There are two types of hypervisors: a Type 1 hypervisor sits directly on top of the bare-metal server, and a Type 2 hypervisor is a software package installed on a host operating system.

Running multiple operating systems on a single host saves money, time, and space since VM management is easier and there is less physical equipment to purchase, install, and manage. Popular hypervisor platforms include:

  • VMWare ESXi

  • Oracle OVM/OLVM

  • Microsoft Hyper-V

  • Citrix Xen Server

  • RedHat KVM

  • Proxmox VE

  • XCP-ng

  • Incus

VMWare ESXi

I have some lab experience with VirtualBox, Proxmox, and Incus but not VMWare yet. That is about to change. Since I am also looking into opportunities in data centers, I think it is a good time to start learning about VMWare technologies along with the system and network automation journey I recently started.

Ansible: Introduction to Playbooks

What is a Playbook

A playbook in Ansible is a file containing a set of instructions for automating system configuration. It is like a bash script, but in the Ansible "language". An ad-hoc command is suitable for a basic single-line task, but if we want to perform a complex and repeatable deployment, we should use a playbook.

Playbooks are written in YAML following a structured syntax. A playbook contains an ordered list of plays that run in order from top to bottom by default.

---
- name: Update servers
  hosts: servers
  remote_user: ans-user

  tasks:
  - name: Update nginx
    ansible.builtin.dnf:
      name: nginx
      state: latest

To ping all hosts:

---
- name: Ping all hosts
  hosts: servers

  tasks:
  - name: Ping servers
    ansible.builtin.ping:

This is how you run a playbook:

ansible-playbook my_playbook.yml

or in dry run mode:

ansible-playbook --check my_playbook.yml

or to check the syntax of our playbook:

ansible-playbook --syntax-check my_playbook.yml

or to list all hosts:

ansible-playbook --list-hosts my_playbook.yml

or to list all tasks:

ansible-playbook --list-tasks my_playbook.yml

We can add a lot more configuration to the playbook to perform advanced automation tasks. We are going to leave that for a future post.

By default, Ansible gathers facts about the nodes before executing the playbook. To disable this feature, we can add gather_facts: false to our playbook:

---
- name: Ping all hosts
  hosts: servers
  gather_facts: false

  tasks:
  - name: Ping servers
    ansible.builtin.ping:

Example of Simple Playbooks

Ping hosts

We've already seen how to do that. This is a simple way to ping nodes using ansible playbook:

---
- name: Ping Linux hosts
  hosts: servers
  gather_facts: false

  tasks:
  - name: Ping servers
    ansible.builtin.ping:

Install/Uninstall packages

---
- name: Install/uninstall packages
  hosts: servers
  become: 'yes'

  tasks:
  - name: Install OpenSSH on Linux servers
    ansible.builtin.dnf:
      name: openssh
      state: present

  - name: Uninstall Apache
    ansible.builtin.dnf:
      name: httpd
      state: absent

Update packages

---
- name: Update packages
  hosts: servers
  become: 'yes'

  tasks:
  - name: Update OpenSSH on Linux servers
    ansible.builtin.dnf:
      name: openssh
      state: latest

  - name: Update nginx
    ansible.builtin.dnf:
      name: nginx
      state: latest

Enable/Disable services

---
- name: Enable nginx service
  hosts: servers
  become: 'yes'

  tasks:
  - name: Install nginx
    ansible.builtin.dnf:
      name: nginx
      state: present

  - name: Enable nginx service
    ansible.builtin.service:
      name: nginx
      state: started
      enabled: yes

  - name: Disable Apache service
    ansible.builtin.service:
      name: httpd
      state: stopped
      enabled: no

Ansible: Ad-Hoc Commands

An ansible ad-hoc command is a single command sent to an ansible client. For example:

ansible servers -m setup

setup is an Ansible module; a module is a set of tools that handles a specific operation. setup gathers information (facts) about the selected Ansible clients.

We can pass a filter argument to setup to gather only the information we are interested in from the managed node. For example:

ansible servers -m setup -a "filter=ansible_all_ip_addresses"

To ping all clients from the servers group in the configured inventory:

ansible servers -m ping

Run Shell Commands on Ansible Clients

ansible servers -m shell -a "ip addr show"
ansible servers -m shell -a "uptime"

These commands send a shell command to all nodes in the group. -a is used to specify the argument for the ad-hoc command, which is the command we want to run on each Ansible client.

Copy Files from Ansible Control Node to Clients

ansible servers -m copy -a "src=/home/me/my-file.txt dest=/etc/ansible/data/my-file.txt"
ansible servers -m copy -a "content='My Text file content' dest=/etc/ansible/data/my-text-file.txt"

This command copies a file from the Ansible control node to all clients. We can choose whether to copy inline content or a file by using the content or src parameters. Note that the destination folder must already exist on the clients. When uploading content, we must specify the complete destination file name (dest=/data/upload/my-file.conf).

If the file already exists at the destination, Ansible uses the file's checksum to determine whether the task was previously done. If the file was not modified, Ansible will not re-upload it. Otherwise, whether the file was updated on the control node or on the clients, Ansible will upload the file again to the selected clients.

Create and Delete File and Folders in Ansible Clients

To create a new file in ansible clients:

ansible servers -m file -a "dest=FILE OR DIRECTORY DESTINATION state=touch"

To delete a file in ansible clients:

ansible servers -m file -a "dest=FILE OR DIRECTORY DESTINATION state=absent"

To create a directory, change the state to directory:

ansible servers -m file -a "dest=/my/directory/data state=directory"

A directory deletion is performed like a file deletion: just specify the directory name in dest with state=absent and Ansible will delete the directory.

Install and Uninstall Packages on Ansible Clients

We can use the shell module or the dnf/apt Ansible modules.

ansible servers -m shell -a "sudo dnf install -y nginx"
ansible servers -m dnf -a "name=nginx state=present" -b

If the operation requires root privileges, we can prepend sudo to the shell command. But if we are using the dnf/apt module, the Ansible user must have root privileges and we also need to add the -b option to the command.

Use the latest state to update an already installed package.

ansible servers -m dnf -a "name=nginx state=latest" -b

The state can be one of absent, installed, present, removed, and latest.

Understanding Ansible ad-hoc commands is important for understanding Ansible playbooks. From here, we are going to move slowly toward more efficient ways to automate tasks using Ansible.