Linux: Hardening Techniques

Password Composition Controls

Password Complexity

Password complexity requires different character types in a password, ensuring that user passwords include a mix of uppercase letters, lowercase letters, digits, and special characters. Password complexity is managed through PAM (the pam_pwquality module). To tune password complexity, edit /etc/security/pwquality.conf

# passwords must include characters from at least 4 different categories
minclass = 4

Password Length

Password length sets the minimum number of characters required in a password. It is also configured through the pam_pwquality module.

# set minimum password length
minlen = 12

Password Lifecycle Controls

Password lifecycle controls require users to change their passwords regularly.

Password Expiration

Password expiration forces users to change their passwords after a certain number of days. chage controls this setting on a per-user-account basis.

# to set the maximum password age for a user to 90 days
chage -M 90 samuel

Password History

Password history keeps track of old passwords to support password reuse prevention.

Password Reuse

Password reuse prevents users from reusing old passwords. pam_pwhistory tracks old passwords in order to block password reuse. To change the settings, edit /etc/pam.d/common-password.

# to prevent any user from reusing their last 5 passwords
password requisite pam_pwhistory.so remember=5

Checking existing breach lists

Have I Been Pwned - HIBP

Checks email addresses against known public breaches.

Have I Been Pwned haveibeenpwned.com

Via API:

# check for email in breach
https://haveibeenpwned.com/api/v3/breachedaccount/jdoe@email.com

DeHashed

DeHashed provides deeper insight with email, phone number, username, IP address, and document searches in breach data.

Via API:

# to search breach data for a selected email address
https://api.dehashed.com/search?query=jdoe@email.com&size=20

Intelx.io

Intelx.io provides an enterprise-grade OSINT solution, aggregating data from dark-web forums, paste sites, and public breach dumps with powerful query syntax and API access.

Restricted shell use

/sbin/nologin

/sbin/nologin prevents interactive login.

ex:

# to create a user with no shell access, suited for automated services
useradd -s /sbin/nologin backupbot

/bin/rbash

/bin/rbash provides limited shell access to users. It restricts actions like changing directories, modifying environment variables, or executing programs from unexpected locations.

ex:

# create a user with restricted bash shell
useradd -s /bin/rbash -m reports

pam_tally2

pam_tally2 helps monitor and respond to failed login attempts. It is configured in:

/etc/pam.d/common-auth

/etc/pam.d/login

# to lock account after 5 failed attempts
# and automatically unlock it after 10 minutes
auth required pam_tally2.so onerr=fail deny=5 unlock_time=600

pam_tally2 # to view a summary of all failed attempts

pam_tally2 --user john # to view a summary of all failed attempts for a selected user

Avoid Running as root user

sudoers

/etc/sudoers is edited using visudo to prevent errors.

# give the user access to restart Nginx and nothing more
john ALL=(ALL) /sbin/systemctl restart nginx

PolKit (PolicyKit)

pkexec runs a command as another user

pkaction lists available privileged operations on the system

pkcheck checks whether a user is authorized to perform a specific action

pkttyagent provides a prompt for authentication in a terminal session

Linux: Firewall

Firewall Configuration and Management

Zones

A zone is a named profile that carries its own rule set defining which services and ports are allowed through. To create a zone, run firewall-cmd --permanent --new-zone=<ZONE-NAME> && firewall-cmd --reload. Run firewall-cmd --get-zones to see all zones.

Runtime Settings

Runtime settings take effect immediately and stay active until the next reboot or manual reload.

Permanent Settings

Permanent settings persist across reboots but do not touch the running firewall until a reload.

firewall-cmd

firewall-cmd is the command line tool used to manage firewalld configurations. The general syntax is firewall-cmd <OPTION> <OPTION VALUES>.

Useful options include:

  • --get-zones: display all zones
  • --get-active-zones: shows only zones that currently have bound interfaces
  • --list-all --zone=<ZONE>: displays every rule in a given zone
  • --add-port=<PORT>/<PROTOCOL>: opens individual ports
  • --remove-port=<PORT>/<PROTOCOL>: closes individual ports
  • --runtime-to-permanent: to copy current rule set to disk
  • --set-default-zone=<ZONE>: to change the default zone assigned to new interfaces
  • --zone=<ZONE> --change-interface=<INTERFACE>: to assign an interface to a zone

Rules and Access Control

Ports

# to add a port
firewall-cmd --zone=internal --add-port=8080/tcp --permanent

# to remove a port
firewall-cmd --zone=internal --remove-port=8080/tcp --permanent

Services

# to add https service
firewall-cmd --zone=internal --add-service=https --permanent

# to remove https service
firewall-cmd --zone=internal --remove-service=https --permanent

Rich Rules

Rich rules extend firewalld with "if-this-then-that" logic.

# to add rich rule to a zone
firewall-cmd --zone=public --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.10" service name="ssh" accept'

Uncomplicated Firewall - (UFW)

By default UFW blocks all incoming traffic and allows all outgoing traffic. It writes every change directly to its configuration files and loads them at boot.

ufw enable # to enable UFW service

ufw disable # to disable UFW service

ufw allow 8080/tcp # to add an allow rule

ufw allow ssh # to add an allow rule

ufw deny 23 # to add a deny rule

ufw delete allow http # to delete an allow rule

ufw allow from 192.168.1.10 # to allow traffic from specific IP address

ufw deny from 192.168.1.10 # to deny traffic from specific IP address

ufw allow from 192.168.1.0/24 to any port 22 # to allow subnet to access specific port

ufw status numbered # to see numbered rule set

ufw delete 2 # to delete rule number 2

ufw default deny incoming # to set default incoming (deny)

ufw default allow outgoing # to set default outgoing (allow)

iptables

iptables is a command line utility used for traffic filtering and alteration. It is built around tables. The main tables are:

  • filter
  • nat
  • mangle
  • raw
  • security

Each table contains Chains:

  • INPUT: inspects packets destined for the local system
  • OUTPUT: filters packets originating from the local system
  • FORWARD: filters packets moving through the system

The general syntax is iptables [-t <TABLES>] -A <CHAIN> -p <PROTOCOL> [MATCH OPTION] -j <TARGET>.

ex:

# to accept SSH traffic into a host
iptables -t filter -A INPUT -p tcp --dport 22 -j ACCEPT

ipset

ipset groups many IP addresses or subnets into sets so that servers can match and process packets more efficiently than checking each address individually. The generic syntax is ipset [OPTIONS] <COMMAND> <SETNAME> [PARAMS]. Common commands include: create, add, del, list.

# to keep a dynamic deny list of known-bad IP addresses and tie it back into iptables

# create new set
ipset create bad_hosts_list hash:ip

# add offending ip address to set
ipset add bad_hosts_list 172.0.0.25

# view ipset list
ipset list bad_hosts_list

nftables

nftables is a single framework that merges tables and rules. It is the modern successor to iptables. The general syntax is nft [OPTIONS] add rule <FAMILY> <TABLE> <CHAIN> <EXPRESSION>.

# to allow ssh on port 22
nft add rule inet filter input tcp dport 22 ct state new accept

Netfilter Module

The Netfilter module is a Linux kernel component that acts as a digital gatekeeper, examining every data packet entering or leaving the system and deciding whether a packet should be blocked or allowed according to predefined rules. It is the backend for iptables, ip6tables, and nftables.

Stateful and Stateless Firewall

Stateless Firewall

A stateless firewall treats each incoming packet independently using pre-defined rules.

# accept http traffic
iptables -A INPUT -p tcp --dport 80 -j ACCEPT

# drop all other traffic
iptables -A INPUT -j DROP

Every packet will be checked to see if it is destined for port 80. If not, the packet will be dropped.

Stateful Firewall

A stateful firewall remembers ongoing communication sessions between computers.

# Allow established and related packets
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow new ssh connections
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT

# Drop everything else
iptables -A INPUT -j DROP

IP Forwarding

IP forwarding allows a system to pass network traffic from one interface to another, acting like a router. It is disabled by default. Set IP forwarding permanently with net.ipv4.ip_forward = 1 in /etc/sysctl.conf. To enable IP forwarding temporarily, run sysctl -w net.ipv4.ip_forward=1.
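Since this sysctl is backed by the proc filesystem, the current state can be read without root. A quick check, assuming a standard Linux /proc layout:

```shell
# read the current IPv4 forwarding state (0 = disabled, 1 = enabled)
cat /proc/sys/net/ipv4/ip_forward
```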

Linux: Authorization, Authentication, and Accounting

Local Authentication

PAM (Pluggable Authentication Modules)

PAM handles the core authentication process: validating usernames and passwords, and enforcing policies. PAM relies on modules to handle specific parts of the authentication process. These modules are configured in files located in the /etc/pam.d/ directory.

PAM module types

  • auth: verifies user identity
  • account: enforces access policies
  • password: handles password updates
  • session: manages tasks that happen at the start or end of a session

PAM uses control flags to determine how each module's result should affect the overall outcome.

Module flags:

  • required: the module must pass; on failure the stack still runs to completion but ultimately fails
  • requisite: the module must pass; failure causes immediate termination
  • sufficient: success ends the stack early with success, provided no earlier required module failed
  • optional: its result only matters when it is the only module of its type in the stack
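As an illustration of how these flags interact, here is a hypothetical auth stack (not taken from any particular distribution). With this ordering, a pam_unix.so success ends the stack successfully, while anything else falls through to pam_deny.so:

```
# hypothetical /etc/pam.d auth stack illustrating control flags
auth sufficient pam_unix.so    # success here can end the stack early
auth required   pam_deny.so    # reached only if pam_unix.so did not succeed
```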

Polkit (PolicyKit)

Polkit manages authorization: deciding if regular users can perform administrative or system-level actions without switching to root. The rules are configured in files in /etc/polkit-1/rules.d/ or /etc/polkit-1/localauthority/ directories.

Directory-based Identity Management

Kerberos

Kerberos is a secure network authentication protocol that uses a ticket-based system so users and services can prove their identity without repeatedly sending passwords over the network.

LDAP (Lightweight Directory Access Protocol)

LDAP provides a structured directory for storing user accounts, group memberships, and organizational information. It is a standardized protocol used to access and manage directory information: it is where usernames, group definitions, and user attributes are stored.

SSSD (System Security Service Daemon) and Winbind

SSSD and Winbind act as intermediaries on Linux for connecting to and using these centralized services seamlessly.

Network / Domain Integration

realm

realm is a tool that simplifies the process of joining systems to domains and sets up authentication with minimal manual configuration. realm enables identity and login integration with Windows domains, but it does not handle file or printer sharing.

ex:

realm discover my.domain.com # to discover domains

realm join --user=admin my.corporation.com # to join my.corporation.com domain using the admin credentials

realm list # verify the configurations

realm permit --all # to permit all users to log in

realm permit admin@my.domain.com # to allow a specific user to log in

realm permit -g "Administrators" # to allow a group to log in

realm leave my.domain.com # to leave a domain

Samba

Samba provides deeper integration with Windows environments. It is focused on file sharing, printer access, and Windows-compatible network services. The main configuration is located in /etc/samba/smb.conf

example file share:

[global]
  workgroup = WORKGROUP
  server string = Samba Server
  security = user

[Public]
  path = /srv/samba/public
  browsable = yes
  writable = yes
  guest ok = yes

sudo systemctl start smb nmb # to start the Samba services

sudo systemctl enable smb nmb # to enable Samba at system start

Logging

/var/log

/var/log is the central directory on most Linux systems where log files are stored.

  • /var/log/messages General system messages
  • /var/log/syslog System-wide log
  • /var/log/kern.log Kernel-specific messages
  • /var/log/auth.log / /var/log/secure Authentication and authorization events
  • /var/log/boot.log Boot process messages
  • /var/log/dmesg Kernel ring buffer messages
  • /var/log/cron Cron job execution logs
  • /var/log/maillog / /var/log/mail.log Mail server logs
  • /var/log/Xorg.0.log X server graphical session logs
  • /var/log/apt/ / /var/log/yum/ Package manager logs
  • /var/log/journal/ Systemd journal storage

rsyslog

rsyslog is a high-performance logging service that receives and stores log messages from the kernel, services, and applications. The configurations are stored in /etc/rsyslog.conf and /etc/rsyslog.d/*.conf

ex:

auth.* /var/log/auth.log # to store all authentication messages to a file

kern.warning /var/log/kern.log # to log only kernel warning messages and above

*.* @@log.server.com:514 # to send log messages to a remote server

Message severity levels

emerg # system unusable

alert # immediate action required

crit # critical conditions

err # errors

warning # warnings

notice # normal but significant

info # informational messages

debug # debug messages

journalctl

journalctl is a systemd tool used to view messages stored by the systemd journal.

journalctl -b # to view all logs for the current boot

journalctl -b -1 # to view all logs for the previous boot

journalctl -f # to tail log

journalctl -k # to view logs from the kernel

journalctl -u nginx.service # to view logs for nginx

logrotate

logrotate is a tool for managing the size and rotation of log files, ensuring that logs do not fill up the disk over time. The main configuration is located in /etc/logrotate.conf and /etc/logrotate.d/.

ex:

logrotate -d /etc/logrotate.conf # to check configuration

logrotate -f /etc/logrotate.conf # to force log rotation
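As an illustration, a minimal drop-in file for a hypothetical /var/log/myapp.log might look like this (all directives shown are standard logrotate keywords):

```
/var/log/myapp.log {
    weekly          # rotate once a week
    rotate 4        # keep four old copies
    compress        # gzip rotated logs
    missingok       # skip silently if the log is absent
    notifempty      # do not rotate empty logs
}
```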

System Audit

auditd

auditd is a service that records audit events to disk, and administrators control it with the systemctl utility.

audit.rules

audit.rules is the configuration file that tells the audit subsystem precisely which activity to record. The configuration is located in /etc/audit/rules.d/audit.rules.

ex:

-w /etc/passwd -p wa -k passwd_changes # to tell the audit system to watch for password changes

-w /var/log/lastlog -p wa -k login_logs # to watch for user logins

-w /var/run/faillock -p wa -k failed_logins # to watch for failed logins

ausearch -k passwd_changes # to search logs for keys

Linux: Backup and Recovery

Basics

. refers to the current directory

.. refers to the parent directory of the current directory

~ refers to the home directory
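A quick shell sketch of these shortcuts (the /tmp paths are illustrative):

```shell
mkdir -p /tmp/pathdemo/inner   # build a small directory tree
cd /tmp/pathdemo/inner
cd ..                          # .. moves up one level, to /tmp/pathdemo
cd ./inner                     # . is the current directory, so this re-enters inner
cd ../..                       # two levels up lands in /tmp
pwd                            # prints /tmp
```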

Archiving

Archiving combines multiple files into one package, making them easier to back up, transfer, or organize. tar and cpio are popular tools used for archiving.

tar

tar packages multiple files or directories into a single archive file. The syntax is tar [OPTIONS] <ARCHIVE NAME> [FILE1 FILE2 DIR1...]. Common options include:

  • -c to create an archive
  • -x to extract files
  • -t to list the contents of an archive
  • -v for verbose output
  • -r to append files to an existing archive
  • -f to specify the archive file name
  • -z for gzip
  • -j for bzip2
  • -J for xz

ex:

tar -czvf backup.tar.gz data/ # to create an archive of the data/ directory using gzip
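The create/list/extract cycle can be sketched end to end; the /tmp paths are illustrative:

```shell
mkdir -p /tmp/tardemo/data /tmp/tardemo/restore
echo "hello" > /tmp/tardemo/data/a.txt
cd /tmp/tardemo
tar -czvf backup.tar.gz data/         # -c create, -z gzip, -v verbose, -f file name
tar -tzvf backup.tar.gz               # -t lists the contents without extracting
tar -xzvf backup.tar.gz -C restore/   # -x extracts; -C picks the target directory
cat restore/data/a.txt                # prints hello
```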

cpio (copy in/out)

cpio gets the list of files to archive from another command such as find or ls. The general syntax using find is find [FILES] | cpio -ov > [ARCHIVE NAME].cpio. The 3 main modes are:

  • -o to create an archive (copy-out)
  • -i to extract an archive (copy-in)
  • -p to copy files (copy-pass)

additional options:

  • -d to create directories as needed
  • -v for verbose output
  • -u to overwrite existing files
  • -t to list archive content

ex:

find /configs -type f | cpio -o > config_bk.cpio # to create an archive

cpio -id < backup.cpio # to extract an archive

cpio -it < backup.cpio # to list the content of the archive

find data/ -name "*.conf" | cpio -pvd /backups/configs # to copy files

Compression Tools

Compression tools help shrink file sizes.

gzip

gzip is widely used for its speed and simplicity. It uses the .gz format. For backups it is recommended to use tar + gzip (-czvf). Common options include

  • -d to decompress files
  • -f to overwrite files without asking
  • -n to skip storing the original file name and timestamp
  • -N to save the original file name and timestamp
  • -q for quiet mode
  • -r to compress directories recursively
  • -l to show statistics
  • -t to test the integrity of the compressed file
  • -v for verbose mode
  • -1...-9 to specify compression level

ex:

gzip myfile.txt # to compress a file and delete the original

gzip -k myfile.txt # to compress a file and keep the original

gzip -k myfile1.txt myfile2.txt myfile3.txt # to compress multiple files and keep the originals

gzip -vr /var/log/ # to compress the content of the folder with verbose output

gzip -9 image.iso # to compress with maximum level (levels range 1-9 default is 6)

zcat myfile.txt.gz # to view compressed file content

gunzip myfile.txt.gz # to uncompress an archive
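A compress/inspect/restore cycle, with illustrative /tmp paths:

```shell
mkdir -p /tmp/gzdemo && cd /tmp/gzdemo
echo "log line" > app.log
gzip -k app.log      # produces app.log.gz and keeps app.log
zcat app.log.gz      # prints log line (reads the compressed copy in place)
rm app.log
gunzip app.log.gz    # restores app.log and removes app.log.gz
cat app.log          # prints log line
```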

bzip2

bzip2 offers better compression but is slower than gzip. The syntax is bzip2 [OPTIONS] <FILE NAME>

  • bzip2 is used for compressing files
  • bunzip2 to uncompress files
  • bzcat to view content of a compressed file without extracting it
  • bzip2recover to attempt to recover data from a damaged archive
  • bzless and bzmore to scroll through compressed text files one page at a time

ex:

bzip2 myfile.txt # to compress a file and delete the original file

bzip2 -k myfile.txt # to compress a file and keep the original file

bunzip2 myfile.txt.bz2 # to decompress a file

bzip2 -t myfile.txt.bz2 # to test the integrity of a compressed file

bzcat myfile.txt.bz2 # to view the content of a compressed file

xz

xz is a newer compression tool that offers higher compression but is even slower than gzip and bzip2. It is great for archiving files that do not change often. The syntax is xz [OPTIONS] <FILE NAME>. Common options include:

  • -d to decompress a compressed archive
  • -f to overwrite files
  • -q for quiet mode
  • -v for verbose mode
  • -t to test a compressed file

ex:

xz myfile.txt # to compress a file and delete the original file

xz -k myfile.txt # to compress a file and keep the original file

xz -d myfile.txt.xz # to decompress a file

unxz myfile.txt.xz # to decompress a file

xz -t myfile.txt.xz # to test the integrity of a compressed file

xz -l myfile.txt.xz # to show information about the compressed file
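The same round trip with xz, again with illustrative /tmp paths:

```shell
mkdir -p /tmp/xzdemo && cd /tmp/xzdemo
echo "archive me" > notes.txt
xz -k notes.txt      # produces notes.txt.xz and keeps notes.txt
xz -t notes.txt.xz   # exits 0 when the archive is intact
rm notes.txt
unxz notes.txt.xz    # restores notes.txt and removes notes.txt.xz
cat notes.txt        # prints archive me
```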

7-Zip

7-Zip is used where compatibility with Windows systems is needed. It is more flexible because it handles multiple archive formats like .7z, .zip, and .tar. It is usually available through the p7zip package. Common commands include:

  • a to add files to an archive
  • x to extract files from an archive
  • l to list archive content
  • t to test an archive
  • d to delete files from an archive

ex:

7z a backup.7z file1 file2 data/ # to create a compressed archive

7z x backup.7z # to extract a compressed file

7z l backup.7z # to list the content of a compressed file

7z t backup.7z # to test a compressed file

7z a -mx=9 backup.7z image.iso # to create an archive with maximum compression

Data Recovery

dd (data duplicator)

dd copies data at the block level and is useful for creating exact images of disks or partitions. It is commonly used for disk cloning, creating bootable USB drives, doing backups and restores, and wiping disks. The basic syntax is dd if=<INPUT FILE> of=<OUTPUT FILE> [OPTIONS]. Common options include:

  • if= input file/device
  • of= output file/device
  • bs= block size. The default is 512 bytes
  • count= number of blocks to copy
  • skip= number of input blocks to skip
  • seek= number of output blocks to skip before writing
  • status=progress to show progress
  • conv=noerror,sync to continue past read errors and pad bad blocks

ex:

dd if=image.iso of=/dev/sdb1 bs=4M status=progress # to create a bootable USB drive

dd if=/dev/sda of=diskA.img bs=1M status=progress # to create a disk image

dd if=diskA.img of=/dev/sda bs=1M status=progress # to restore data from an image

dd if=/dev/zero of=/dev/sdb bs=1M status=progress # to completely erase a disk

dd if=/dev/zero of=test_file bs=1G count=1 oflag=dsync # to test the write speed of a disk
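The bs/count mechanics can be exercised safely on regular files instead of devices (paths illustrative):

```shell
dd if=/dev/zero of=/tmp/dd_src bs=1M count=1     # a 1 MiB file of zeros
dd if=/tmp/dd_src of=/tmp/dd_part bs=4K count=1  # copy only the first 4 KiB block
wc -c < /tmp/dd_part                             # prints 4096
```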

ddrescue

ddrescue is used to recover data from damaged drives. The basic syntax is ddrescue [OPTIONS] <INPUT FILE> <OUTPUT FILE> <LOG FILE>

ex:

ddrescue /dev/sdb damaged.img rescue.log # to attempt rescuing /dev/sdb

rsync

rsync is used to synchronize files and directories locally or over the network. After the first copy, it transfers only the differences in subsequent runs. The basic syntax is rsync [OPTIONS] <SOURCE> <DESTINATION>. Important options are:

  • -r to copy recursively
  • -a to copy in archive mode, preserving permissions, symlinks, and timestamps
  • -n to see what would be copied (dry run)
  • -z to enable compression during transfer
  • -h for human-readable output
  • -v for verbose mode
  • --progress to show progress
  • --delete to remove files in the destination that are not present in the source

ex:

rsync -avh /home/user/ /mnt/backup/user # to copy user directory with all attributes preserved

rsync -avh user@server:/data/ /home/user/data/ # to sync from remote server to local

rsync -avh --bwlimit=4000 /home/user/ user@server:/backup/ # with a bandwidth limit = 4000KB/s
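A local sketch of the sync-then-delete behavior (paths illustrative; rsync must be installed):

```shell
mkdir -p /tmp/rsdemo/src
echo "v1" > /tmp/rsdemo/src/a.txt
rsync -avh /tmp/rsdemo/src/ /tmp/rsdemo/dst/            # first run copies everything
rm /tmp/rsdemo/src/a.txt
rsync -avh --delete /tmp/rsdemo/src/ /tmp/rsdemo/dst/   # --delete removes a.txt in dst too
ls /tmp/rsdemo/dst/                                     # prints nothing: dst mirrors src
```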

Compressed File Operations

zcat

zcat displays the full content of a compressed file.

zcat myfile.txt.gz # to show the content of the compressed file

zless

zless allows scrolling through the content of a compressed file interactively

zless myfile.txt.gz # to show the content of the compressed file in a scrollable mode

zgrep

zgrep allows searching through compressed data. The syntax is zgrep [OPTIONS] <SEARCH PATTERN> <FILE NAME>. Common options include:

  • -i to make the search case-insensitive
  • -n to show line numbers
  • -v to show lines that do not match the query

ex:

zgrep "ERROR" logs.gz # to search for lines containing text 'ERROR'

zgrep -i "failed password" /var/log/auth.log.1.gz # to find failed login attempts in a rotated log
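A self-contained sketch: build a small gzipped log and search it in place (paths illustrative):

```shell
printf 'INFO start\nERROR disk full\nINFO done\n' > /tmp/demo.log
gzip -f /tmp/demo.log               # produces /tmp/demo.log.gz
zgrep "ERROR" /tmp/demo.log.gz      # prints ERROR disk full
zgrep -c "INFO" /tmp/demo.log.gz    # -c counts matches: prints 2
```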

Linux: Network Services and Configurations

Basics

Linux uses a layered approach (local files and external resources) to figure out how to resolve internal and external system names.

  • /etc/hosts
  • /etc/resolv.conf
  • /etc/nsswitch.conf

/etc/hosts

/etc/hosts is a plain text file where we manually map hostnames to IP addresses so the system can resolve names without relying on DNS. It is useful for environments where DNS is not available.

ex:

192.168.1.101 server1.local

192.168.1.102 server2.local

/etc/resolv.conf

/etc/resolv.conf tells Linux which DNS servers to use when resolving names that are not listed in /etc/hosts. It is useful for troubleshooting internet-related issues or cases where the system cannot resolve external domain names.

/etc/nsswitch.conf

/etc/nsswitch.conf controls the order in which the system tries different methods to resolve names and other data.

hosts: files dns # check local /etc/hosts first before querying external DNS

hosts: dns files # check external DNS first before checking local /etc/hosts

NetworkManager

nmcli

nmcli is a command line interface for interacting with NetworkManager, allowing admins to monitor and manage network connections on Linux.

nmcli device status # to show the status of all devices

nmcli general status # to check overall networking health

nmcli connection show # to list configured connections

nmcli connection up <CONNECTION-NAME> # to activate a specific network connection

nmcli connection down <CONNECTION-NAME> # to deactivate a specific network connection

nmcli connection edit <CONNECTION-NAME> # to open an interactive editor for detailed changes

nmcli connection reload # to reload the settings after editing

nm-connection-editor is a GUI tool for editing NetworkManager connection profiles without needing to use the command line. Just type nm-connection-editor in the terminal to start the GUI.

.nmconnection is a configuration file used by NetworkManager to store settings for a specific network profile. These files are located in /etc/NetworkManager/system-connections/

Netplan

Netplan is the default tool used to configure and manage network settings on Ubuntu and other Debian-based systems. It uses YAML files to centralize network configurations.

Configuration Files

Network configuration files are written in YAML. They are stored in /etc/netplan/ directory. Configurations are not applied automatically. They must be activated before they take effect.

ex:

network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 1.1.1.1]

This tells netplan to configure a static IP address on eth0.

netplan try

netplan try is used to try a new network configuration temporarily, with a built-in safety mechanism. This command applies changes for 120 seconds by default before reverting the configuration if not confirmed.

netplan apply

netplan apply is used to permanently apply changes made in the configuration files. This command reads the configurations from /etc/netplan/, applies the changes to the system, and activates the associated network interfaces.

netplan status

netplan status verifies which configuration is active, which interfaces are managed by which renderer, and their current settings.

IP Network Management Tool

ifconfig

ifconfig is a legacy tool used to configure IP networking on Linux systems. It is being deprecated in favor of the ip command suite.

ex:

ifconfig # to view currently active network interfaces

ip address

ip address is used to view and manage IP address configuration of system network interfaces.

ex:

ip address show # to view IP address configuration (or simply "ip a")

ip address show dev eth0 # to view ip address config of interface eth0

ip address add 192.168.1.100/24 dev eth0 # to add an IP address to eth0

ip address del 192.168.1.100/24 dev eth0 # to delete an IP address from eth0

ip address flush dev eth0 # to flush all ip addresses from eth0

ip link focuses on the network interface link layer (Layer 2 of the OSI model)

ip link show # to view interfaces

ip link set eth0 up # to bring interface up

ip link set eth0 down # to bring interface down

ip link set eth0 mtu 9000 # to change the MTU

ip link set dev eth0 address 12:23:34:45:56:67 # to change the MAC address of the NIC

ip route

ip route is used to display and manage the kernel's routing table, which determines how network traffic is forwarded.

ip route add 10.0.0.0/24 via 192.168.1.99 # to add a network route

ip route add 172.16.0.0/16 dev eth1 # to add a network route via an interface

ip route add default via 192.168.1.1 # to add a default route

ip route del 10.0.0.0/24 # to delete a route

Network Configuration Tools

hostname

hostname is used to view or set the system's network name. Use hostnamectl set-hostname server1 to permanently set the hostname of the system to server1.

arp

arp is used to show or manage the system's Address Resolution Protocol (ARP) table.

arp -n # to show the ARP cache without resolving hostnames

arp -a # to display all arp entries

arp -s 192.168.1.10 12:23:34:45:56:67 # to set a static entry

arp -d 192.168.1.10 # to delete an entry

ethtool

ethtool is used for querying and configuring Ethernet network interfaces. It can show driver information, test link status, and change interface speed and duplex settings.

ex:

ethtool eth0 # to view NIC details such as speed, duplex mode, link status, and firmware info

ethtool eth0 | grep "Link detected" # to check the link status

ethtool -s eth0 autoneg on # to enable speed auto negotiation

ethtool -s eth0 autoneg off # to disable speed auto negotiation

ethtool -s eth0 speed 100 duplex full # to set the speed and duplex mode

ethtool -t eth0 # to test the link

ethtool -S eth0 # to display statistics

Network Connectivity Tools

ping/ping6

ping is used to test basic reachability and round-trip response time to remote systems over IPv4. The syntax is ping <DESTINATION>

ex:

ping 192.168.1.1 # to send ICMP echo request to IP address

ping mysite.com # to send ICMP echo request to hostname

ping -c 3 mysite.com # to send 3 pings

ping -c 4 -i 2 mysite.com # to send 4 pings with a 2-second interval between each

ping -s 1400 mysite.com # to send pings with 1400-byte payloads

ping6 is used to test basic reachability and round-trip response time to remote systems over IPv6.

traceroute

traceroute shows the full path packets take to a destination. The syntax is traceroute <DESTINATION>

ex:

traceroute mysite.com # to see all hops along the path including response time

traceroute -I mysite.com # to use ICMP instead of UDP

traceroute -T -p 80 mysite.com # to use TCP port 80

traceroute -m 10 mysite.com # to limit number of hops to 10

traceroute -q 2 mysite.com # to change the number of probe packets per hop. Default is 3

tracepath

Similar to traceroute, but it does not require root privileges. The syntax is tracepath <DESTINATION>.

ex:

tracepath -m 15 google.com # to set the maximum number of hops

mtr

mtr is short for My Traceroute. It combines the functions of both ping and traceroute into a live, interactive view of each network hop. The syntax is mtr <DESTINATION>.

ex:

mtr mysite.com 

iperf3

iperf3 is an advanced tool used to test actual network throughput between systems and assess bandwidth performance under real conditions. The general syntax is iperf3 -c <DESTINATION>.

ex:

iperf3 -s # to start the server

iperf3 -c 192.168.1.50 # to run a client test against the server

iperf3 -c 192.168.1.50 -t 60 # to test client for 60 seconds

iperf3 -c 192.168.1.50 -P 8 # to set 8 parallel streams

iperf3 -c 192.168.1.50 -J # to output JSON

Network Scanning and Traffic Analysis Tools

ss

ss is the quickest way to look at the sockets the system is using.

ss -t # to show all TCP connections

ss -u # to show all UDP connections

ss -l # to show listening sockets only

ss -a # to show all connections

ss -p # to show processes using sockets

ss -lnt src :22 # to show listening TCP sockets on port 22 (e.g. SSH)

nc (netcat)

nc or netcat is a versatile tool for talking to network services directly.

nc mysite.com 80 # to connect to the host on port 80

nc -l -p 2345 # to listen for connections in server mode

tcpdump

tcpdump is used for network traffic capture and analysis.

tcpdump -i eth0 # to capture packets from eth0

tcpdump -i eth0 -c 100 -w netlog.pcap # to capture first 100 packets from eth0 and write to a file

tcpdump -i eth0 tcp # to capture only TCP

tcpdump -i eth0 udp # to capture only UDP

nmap (Network Mapper)

nmap is a reconnaissance tool used to scan networks.

ex:

nmap 192.168.0.50 # to scan a single host

nmap 192.168.0.50 192.168.0.51 # to scan multiple hosts

nmap -p 80,443 192.168.0.52 # to scan host or network for specific ports

nmap -p 1-3000 192.168.1.55 # to scan a host for a range of ports

nmap -sV 192.168.1.50 # to detect services running on the host

nmap -O 192.168.1.50 # to guess OS running on the host

nmap -sS -p 22,80,443 -T4 -Pn 192.168.10.0/24 # to scan the network for hosts with open common ports (22, 80, and 443)

nmap -sS 192.168.0.55 # to perform a half open SYN scan which is faster and stealthier

nmap -F 192.168.0.60 # scan top 100 ports

nmap -Pn 192.168.0.55 # scan without ping

nmap --script=vuln 192.168.0.55 # to run vulnerability scan

DNS Tools

nslookup

nslookup is used to look up domain names. The general syntax is nslookup <DOMAIN> [DNS-SERVER].

ex:

nslookup # enter the interactive mode

nslookup mysite.com # to query a domain's records

nslookup -type=A mysite.com # to query a specific record type such as A, AAAA, MX, or TXT

nslookup mysite.com 9.9.9.9 # to query using a specific DNS server

dig (Domain Information Groper)

dig is used to see the full DNS exchange when looking up domain names. It is more powerful than nslookup. The general syntax is dig [@SERVER] <DOMAIN> [TYPE] [OPTIONS]

ex:

dig mysite.com # to look up a domain name

dig mysite.com AAAA # to query a specific record type

dig mysite.com +short # to obtain a short output

dig -x 88.89.90.91 # to perform a reverse DNS lookup

dig @8.8.8.8 mysite.com # to query using a specific DNS server

resolvectl

resolvectl is the command-line client for systemd-resolved, the systemd DNS resolver service. The general syntax is resolvectl <VERB> [ARGUMENTS]

ex:

resolvectl query mysite.com # to query DNS

Linux: Containers

A container is a lightweight, portable environment that packages an application along with everything it needs to run, including code, libraries, and configuration files.

The container runtime is the software responsible for running and managing those containers.

There are many runtimes to choose from:

  • runC
  • containerd
  • Docker
  • Podman

Basics

runC

runC is a lightweight command line tool that creates and runs containers directly from the command line.

containerd

containerd is a runtime that handles the entire lifecycle of containers. It uses runC under the hood but provides higher-level APIs.

Docker

A popular runtime that includes everything needed to build, run, and manage containers. It uses runC and containerd under the hood.

Podman

Podman is like Docker but designed to run without a central daemon. It works the same way as Docker and most Docker commands work with Podman. Podman supports running containers as a regular user without needing root privileges.

Building an Image

FROM

FROM specifies the base image to start with: the operating system and environment the container will be built on top of.

ex:

FROM python:3.11-slim 

to start from a slim Debian image that comes with Python 3.11 preinstalled.

USER

USER defines who inside the container will run the remaining commands and processes.

ex:

RUN useradd -m appuser # to create a user

USER appuser # run subsequent commands and processes as this user

ENTRYPOINT

ENTRYPOINT defines the main command that will always run when the container starts.

ex:

ENTRYPOINT ["python", "app.py"] # to run this command every time the container starts

CMD

CMD provides default arguments to the ENTRYPOINT, or acts as the command to run if no ENTRYPOINT is set.

ex:

CMD ["--debug"] # to include a default option. So the container will run "python app.py --debug" by default

Example of Dockerfile

Dockerfile

FROM python:3.11-slim

RUN  useradd -m appuser

USER appuser

COPY app.py /home/appuser/app.py

WORKDIR /home/appuser

ENTRYPOINT ["python", "app.py"]

CMD ["--debug"]

then:

  • docker build -t myapp . to build the image
  • docker run myapp to start the container and run the app
  • docker run myapp --test to start the container with CMD overridden (runs python app.py --test)

Image Retrieval and Maintenance

A container image contains the application code and everything needed to run it.

Image Pulling

Image pulling is the process of downloading a container image from a remote registry to the local machine so it can be used to run containers. The general syntax is docker pull <IMAGE-NAME>[:TAG]. The latest tag is pulled if TAG is omitted.

ex:

docker pull ubuntu:20.04 # to pull Ubuntu image with tag 20.04

Image Tags

Tags are labels attached to container images that help identify versions or variants.

ex:

docker pull nginx:latest # to pull the latest version of nginx

Image Layers

Layers are the building blocks of container images. When pulling an image, Docker downloads only the layers it does not already have locally, which makes pulls faster and saves disk space.

Image Pruning

Pruning is the process of cleaning up unused containers, images, networks, and volumes to free up space.

Run:

docker system prune # to prune the system

Container Lifecycle Management

Run

docker run is used to create a new container from an image and start it immediately.

ex:

docker run -it ubuntu:20.04 bash # to create and start an Ubuntu 20.04 container with an interactive bash shell

Start and Stop

docker start <CONTAINER-NAME> to start a container if it is stopped.

docker stop <CONTAINER-NAME> to stop a container if it is running.

ex:

docker start web-app # to start the container named "web-app"

docker stop web-api # to stop the container named "web-api"

Delete

docker rm is used to delete a stopped container that is no longer needed. The general syntax is docker rm <CONTAINER-NAME or ID>

ex:

docker rm webapp-test # to permanently delete a stopped container named "webapp-test"

Prune

docker system prune is used to remove unused resources and free up space. Use the -f flag to skip the confirmation prompt.

Container Inspection and Interaction

Environment Variables

They are used to pass configuration values into a container at startup. The general syntax is docker run -e <KEY>=<VALUE> <IMAGE-NAME>

ex:

docker run -e NODE_ENV=production node:18

Read Container Logs

Use docker logs <CONTAINER-NAME or ID> to see container logs.

ex:

docker logs web-api # to see the container output and error stream

Inspect Containers

Inspecting a container gives a detailed view of a container's configuration, network settings, mounted volumes, environment variables, and more. Use docker inspect <CONTAINER-NAME or ID> to inspect a container.

docker inspect web-api

Exec

exec is a command that lets users run a command directly inside a running container. The general syntax is docker exec -it <CONTAINER-NAME or ID> <COMMAND>

ex:

docker exec -it db-app bash # to open a bash shell in interactive mode inside the db-app container

Container Storage

Mapping Container Volumes allows users to link a folder from their host machine to a folder inside the container. The general syntax is docker run -v <HOST-PATH>:<CONTAINER-PATH> <IMAGE-NAME>.

ex:

docker run -v /home/user/data:/app/data webapp

Volume Management and Operations

Create Volume

docker volume create <VOLUME-NAME>

ex:

docker volume create apidata # to create a volume named "apidata"

Map Volume

Mapping a volume connects the created volume to a specific location inside a container. The general syntax is docker run -v <VOLUME-NAME>:<CONTAINER-PATH> <IMAGE-NAME>

ex:

docker run -v apidata:/app/data webapi

Prune Volume

Use docker volume prune to remove unused volumes.

Network Management Operations

Create Network

A virtual network gives containers a way to interact with each other or with the outside world securely and efficiently. The general syntax is docker network create [OPTIONS] <NETWORK-NAME>.

ex:

docker network create --driver bridge apps-net # to create a bridge type network named apps-net

Port Mapping

Port Mapping allows containers to communicate with the outside world. The general syntax to map a port is docker run -p <HOST-PORT>:<CONTAINER-PORT> <IMAGE-NAME>.

ex:

docker run -p 8080:80 webapp

Local Networks

Bridge Network

The bridge network is the default local network mode for Docker containers on a single host. The general syntax to create a container with a bridge network is docker run --network bridge -p <HOST-PORT>:<CONTAINER-PORT> <IMAGE-NAME>.

ex:

docker run --network bridge -p 8080:80 webapi

Host Network

The host network mode lets the container share the host system's network stack directly. The container uses the same IP address and ports as the host machine. The general syntax is docker run --network host <IMAGE-NAME>

ex:

docker run --network host webapp

none Network

The none network mode disables networking entirely for the container. Containers with this network type cannot communicate with other containers or with the outside world. They are completely isolated. The general syntax is docker run --network none <IMAGE-NAME>

ex:

docker run --network none webapi

Advanced and Overlay Networks

IPvlan

IPvlan network driver allows containers to receive IP addresses from the same subnet as the host, while still maintaining logical isolation between containers.

Macvlan

Macvlan network driver gives each container its own MAC address and full presence on the physical network, making containers behave like independent network nodes.

Overlay

Overlay network driver is used to link containers across multiple Docker hosts, allowing them to communicate securely and seamlessly.

Linux: Virtualization

Hypervisors

  • KVM = Kernel-based Virtual Machine
  • QEMU = Quick EMUlator

KVM

KVM is a built-in feature of the Linux kernel that allows the operating system to act as a Type 1 hypervisor. It runs virtual machines with their own kernels. QEMU and virsh are tools used to interact with KVM: KVM provides only the virtualization capability, and we use QEMU or virsh to create and manage the VMs.

QEMU

QEMU is a user space application that can emulate full hardware systems and run virtual machines entirely on its own, even without KVM (but slower). It emulates CPUs, hard drives, USB controllers, network cards, display adapters, ...

VM Architecture

VirtIO

VirtIO provides a faster, more efficient way for virtual machines to communicate with the hypervisor. To fully use VirtIO, paravirtualized drivers must be installed within the guest OS. VirtIO provides:

  • virtio-net for virtual networking
  • virtio-blk for virtual block storage
  • virtio-scsi for virtual SCSI storage
  • virtio-fs for virtual shared storage
  • virtio-gpu for virtual GPU
  • virtio-serial for high-speed guest to host communication

These drivers allow the guest to interact directly with the virtualized hardware with high performance.
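
In a libvirt domain definition, choosing VirtIO is mostly a matter of the device's bus or model attribute. A minimal sketch (the image path and network name are illustrative):

```xml
<!-- virtio-blk disk: note bus='virtio' on the target -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/vm1.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>

<!-- virtio-net NIC: note model type='virtio' -->
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
</interface>
```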

Nested Virtualization

A feature that allows a VM to act as a hypervisor itself, running other virtual machines inside it.

Operations

VM States

Common VM states are:

  • Running: actively consuming resources
  • Paused: temporarily halted
  • Shut off: completely powered down
  • Suspended: memory contents are saved to disk and the VM can be resumed later
  • Crashed: failed VM

We can use virsh to monitor the states of the VMs.

Disk Image Operations

A disk image is a file that acts as a virtual hard drive for a VM, storing all of the VM's data. Disk images can be resized, cloned, snapshotted, or transferred easily between systems.

VM Resources

  • CPU: VMs are assigned virtual CPUs (vCPUs)
  • RAM: the amount of RAM reserved for the VM
  • Storage: the virtual hardware space where the OS, apps, and data are stored. Common storage formats are qcow2, raw, and vmdk
  • Network: VMs are assigned one or more virtual NICs (vNICs)

Network Types

NAT

NAT = Network Address Translation

VMs share the host's IP address when talking to the outside world. Inbound traffic from the network cannot reach the VM.

Bridged

The VM is in the same network as the host. The VM gets its own IP address in the same network as the host.

Host-only

VMs can only talk to the host machine or other VMs in the same Host-only configuration

Routed

VMs have access to other networks through a virtual router

Open

VMs see all traffic on the network and can freely interact with anything they find

VM Tools

libvirt provides a consistent API for managing common hypervisors. Users can interact with libvirt via the command line using virsh or via a GUI with virt-manager.

Linux: Systemd

Basics

systemd controls how services start, stop, and interact with the system during boot and runtime.

Units

Systemd units define how each part of the system behaves:

  • Services: control programs and background processes like web servers or network services
  • Targets: define system states such as multi-user or graphical environments
  • Mounts: handle file systems and ensure disks and network shares are properly attached and available
  • Timers: trigger services to run at specific times or intervals

Systemd unit files are stored in:

/etc/systemd/system/ 

/usr/lib/systemd/system/

Services

Services manage daemons and applications that run in the background. The service file controls how programs start, stop, and behave under different conditions. A service unit file usually contains 3 sections:

  • [Unit]: Requires= and Wants= define dependencies. Before= and After= determine the order in which services start
  • [Service]: Type= defines process behavior. ExecStart= specifies the start command. ExecStop= defines how to stop the service. User= defines a non-root execution account; if omitted, the service runs as root.
  • [Install]: WantedBy= and RequiredBy= define the startup target

example:

[Unit]
Description= Start web app
After=network.target

[Service]
Type=simple
User=appuser
ExecStart=/usr/local/bin/webapp.sh
ExecStop=/bin/kill $MAINPID

[Install]
WantedBy=multi-user.target

Use the following commands to manage the service

systemctl start webapp # to start the service

systemctl enable webapp # to start the service automatically at boot time

systemctl restart webapp # to restart the service

systemctl disable webapp # to disable the service at boot time

systemctl stop webapp # to stop the service

systemctl status webapp # to check the service status

journalctl -u webapp # to view the service logs

Targets

Targets define system states by grouping units together. Common targets include:

  • poweroff.target
  • rescue.target
  • multi-user.target: non-graphical multi-user system
  • graphical.target: a full GUI session that includes everything in multi-user.target
  • reboot.target
  • network-online.target: used when services must wait for full network connectivity
  • emergency.target

here are some useful commands:

systemctl get-default # to see the current target

systemctl set-default graphical.target # to set the current target

systemctl isolate graphical.target # to immediately switch a target

systemctl list-units --type=target # to list all targets

When using WantedBy=multi-user.target, you are telling systemd to start that service when the system reaches that target.

Mounts

Mount units define and automate how file systems are mounted using systemd. The unit file name must match the mount point path (e.g., mnt-drive.mount for /mnt/drive).

example:

[Unit]
Description=Mount external drive
After=local-fs.target

[Mount]
# device to mount
What=/dev/sdb1
# mount point
Where=/mnt/drive
# file system type
Type=ext4
# mount options
Options=defaults

[Install]
WantedBy=multi-user.target

Timers

Timers are used to schedule tasks, replacing or enhancing what traditional cron jobs do. They are usually paired with a service file and tell systemd when that service should be started. The OnBootSec= directive defines a delay after system boot before the timer activates. OnCalendar= allows calendar-style scheduling.

example:

[Unit]
Description=Run backup every day at 2AM

[Timer]
# run every day at 02:00
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
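
A timer unit activates a service unit of the same name by default (e.g., backup.timer starts backup.service). A minimal companion service for the timer above might look like this (the script path is hypothetical):

```
[Unit]
Description=Daily backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh
```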

Useful commands are:

systemctl list-timers # to list active timers

systemctl start work.timer # to start a timer

systemctl enable work.timer # to enable a timer at boot

systemctl stop work.timer # to stop a timer

systemctl status work.timer # to check the status of a timer

Management Utilities

  • systemctl: It is used to manage systemd units.
  • hostnamectl: manages the system's hostname
  • sysctl: views and modifies kernel parameters at runtime

example:

hostnamectl set-hostname host1 # to set a permanent hostname of the system

sysctl -a # to view all parameters

sysctl <NAME> # to view a selected parameter

sysctl -w <NAME>=<VALUE> # set a parameter

systemctl edit <UNIT> # to edit a systemd unit without changing the original unit file. ex: systemctl edit nginx.service

systemctl daemon-reload # to reload a systemd unit and apply any new or changed configurations
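
systemctl edit writes a drop-in file such as /etc/systemd/system/<UNIT>.d/override.conf, which overlays the original unit. A minimal sketch of what such an override might contain (the nginx.service example and the limit value are illustrative):

```
[Service]
# Raise the open-file limit without touching the vendor unit file
LimitNOFILE=65535
```

After saving the override, run systemctl daemon-reload so the change takes effect.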

Configuration Utilities

  • systemd-resolved is a background service that manages DNS resolution and caching for the system.
  • resolvectl is a command line utility for interacting with systemd-resolved.
  • timedatectl allows managing the system clock, timezone, and NTP synchronization.

Useful commands include:

resolvectl status # to see the status of systemd-resolved service

resolvectl query <HOSTNAME> # to see which IP address a hostname resolves to

timedatectl status # to check the system's clock configuration

timedatectl set-timezone <REGION/CITY> # to set the timezone of the system. ex: timedatectl set-timezone America/Chicago

timedatectl set-time "YYYY-MM-DD HH:MM:SS" # to set the system time

timedatectl set-ntp <true or false> # to set whether NTP should be used in the system or not

Diagnosis and Analysis Tools

systemd-analyze reports total boot time and breaks it down into key stages such as firmware, kernel, and user space. It is useful for troubleshooting slow boot times.

useful command:

systemd-analyze # to see the system's boot time

systemd-analyze blame # to list the services that take the longest to start

systemd-analyze security # to see security analysis of services

Linux: Software Configuration and Management

Basics

  • Debian-based systems use apt
  • RHEL-based systems use yum and dnf
  • openSUSE systems use Zypper

Package Managers

We use package managers to search, install, configure, update, and remove software in Linux environments.

apt - Debian-based systems

  • apt update to refresh the package lists
  • apt upgrade to update all packages
  • apt install <PACKAGE> to install a package
  • apt remove <PACKAGE> to remove a package
  • apt show <PACKAGE> to show package details
  • apt search <PACKAGE> to search for a package
  • apt purge <PACKAGE> to delete a package and its associated files
  • apt list --installed to show all installed packages
  • apt clean to clear cached downloaded packages
  • apt full-upgrade to upgrade the whole system, removing installed packages if needed
  • apt depends <PACKAGE> to show package dependencies
  • apt rdepends <PACKAGE> to show packages that depend on the selected package
  • apt-mark hold <PACKAGE> to lock a package at its current version
  • apt-mark unhold <PACKAGE> to unhold a currently held package
  • apt-mark showhold to show packages currently on hold

dnf - RHEL-based systems

  • dnf check-update to list available updates
  • dnf upgrade to update all packages
  • dnf install <PACKAGE> to install a package
  • dnf remove <PACKAGE> to remove a package
  • dnf search <PACKAGE> to search for a package
  • dnf list installed to view all installed packages
  • dnf clean all to clear cached packages
  • dnf history to show transaction history
  • dnf repolist to list enabled repositories
  • dnf versionlock list to list all locked packages
  • dnf versionlock clear to clear all locked packages
  • dnf versionlock add <PACKAGE> to lock a package at its current version
  • dnf versionlock delete <PACKAGE> to delete a "version locked" package
  • dnf config-manager --set-enabled <REPO NAME> to enable a repository
  • dnf config-manager --set-disabled <REPO NAME> to disable a repository

pacman - Arch-based systems

  • pacman -Sy to refresh the package lists
  • pacman -Su to update all packages
  • pacman -S <PACKAGE> to install a package
  • pacman -R <PACKAGE> to remove a package
  • pacman -Ss <PACKAGE> to search for a package
  • pacman -Qi <PACKAGE> to view a package details
  • pacman -Q to list all installed packages
  • pacman -Sc to clear cached packages

zypper - openSUSE-based systems

  • zypper refresh or zypper ref to refresh the package lists
  • zypper update or zypper up to update all packages
  • zypper update <PACKAGE> to update a single package
  • zypper info <PACKAGE> to view package details
  • zypper install <PACKAGE> or zypper in <PACKAGE> to install a package
  • zypper remove <PACKAGE> or zypper rm <PACKAGE> to remove a package
  • zypper search <PACKAGE> or zypper se <PACKAGE> to search for a package
  • zypper patch-check to check for important patches
  • zypper al <PACKAGE> (add lock) locks a package to prevent it from being updated or removed during system updates
  • zypper rl <PACKAGE> (remove lock) removes a lock
  • zypper mr -d <REPO> (modify repository) to disable a repository

Source Installation

It is a method used to install software when it is not available in repositories or when it requires a custom build.

Installing a software from source usually includes the following steps:

  • ./configure to check dependencies and configure the build for the system
  • make to build the software
  • make install to install the newly built software
  • make clean to remove temporary build files

GNU GPG Signatures

GNU = GNU's Not Unix. GPG signatures are used to verify the authenticity of software packages and files.

GPG = GNU Privacy Guard - a tool used to encrypt and sign data.

GPG usage

gpg --import <KEY FILE> to import a public key. ex: gpg --import developer_public_key.asc

gpg --verify <SIGNATURE FILE> <PACKAGE> to verify a signed package. ex: gpg --verify my_program.tar.gz.sig my_program.tar.gz

gpg --list-keys to list all trusted keys

Linux: Processes

Basics

The kernel tags each running command with an identifier called the PID (Process ID). The kernel also records the relationship between a process, called the parent process, and any process it creates, by assigning a second number, the PPID.

Process ID - PID

The PID is used by the kernel to allocate system resources to a running command/program.

cat /proc/<PID>/status to read kernel statistics for a process

kill -9 <PID> to forcibly terminate a process (SIGKILL)

renice -n 10 -p <PID> to lower a process's priority

strace -p <PID> to trace a process's system calls

PID 1 (usually systemd) is the first process to start and keeps running until shutdown

ps, ps aux, ps -ef to view PIDs

ps -C <process-name> or pgrep <process-name> to view a specific process

Parent PID - PPID

ps -e --forest -o pid,ppid,cmd and pstree -p to view PPIDs

Killing a parent process does not automatically signal its children; to send a signal to an entire process group, target the group with kill -- -<PGID>.
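
A quick way to observe the PID/PPID relationship from any POSIX shell (nothing assumed beyond /proc):

```shell
#!/bin/sh
# $$ expands to the current shell's PID, $PPID to its parent's PID
echo "PID=$$ PPID=$PPID"
# The same numbers appear in the kernel's per-process status file
grep -E '^(Pid|PPid):' /proc/$$/status
```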

Orphaned Processes

Orphaned processes are processes whose parents exited before they finished. They are then adopted by PID 1 (their PPID becomes 1) and continue running.

Zombie Processes

Zombie processes are processes that have finished executing but remain in the process table, waiting for their parent to read their exit status and remove them.

Process State

The process state identifies what a process is currently doing. A process can have one of these states:

  • R = Running: the process is running or ready to run.
  • S = Sleeping - interruptible: the process is waiting for input or for another event to complete
  • D = Blocked - uninterruptible: the process is doing something important (usually I/O) and cannot be interrupted
  • T = Stopped: the process has been manually paused
  • Z = Zombie: the process has completed and is waiting for its parent to remove it from the process table
  • X = Dead
  • I = Idle

Process Priority

nice

The nice command sets the "politeness" (priority) of a process when it competes for CPU time. The value ranges from -20 (the highest priority) to 19 (the lowest priority). The default is 0.

nice [-n VALUE] <command> [arguments] to set the priority of a process.

ps -eo pid,ni,cmd to view nice values.
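
A small demonstration that needs no root (negative adjustments would): GNU coreutils nice with no arguments prints the current niceness, so wrapping it shows the adjustment taking effect. If the shell starts at niceness 0, the second command prints 10.

```shell
# `nice` with no command prints the current nice value (usually 0)
nice
# Run `nice` itself at +10 relative to the current value
nice -n 10 nice
```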

renice

renice is used to change the nice value of a running process. Only a user with root privileges can set negative nice values.

The generic syntax is renice [-n VALUE] {-p PID | -g PGID | -u user}

renice -n -5 -p 10234 to change the nice value to -5 for the process with PID 10234.

Process Monitoring Tools

ps - process status

ps gives a quick look at what is running on your machine. ps -ef gives information about all processes, including all attributes. ps aux also lists processes with detailed information.

ps -e to list every process on the system

ps -a to list processes from other users' terminals

ps -eo [COLUMNS] to customize which columns to show

ps -C <command> to list processes with a specific name. ex: ps -C tar

ps -p <PID> to view a process's information

Column Description
PID Process ID
PPID Parent Process ID
USER Owner of the process
CMD Command used to start the process
%CPU CPU usage
%MEM Memory usage
VSZ Virtual memory size (in KB)
RSS Resident Set Size (physical memory in KB)
STAT Process state (R, S, Z, etc.) + modifiers
TTY Terminal associated with the process
TIME Cumulative CPU time used
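
Putting those columns together, a sketch of a custom ps view (the column choice is arbitrary):

```shell
# Show PID, parent PID, nice value, state, memory share, and command,
# sorted by memory usage with the biggest consumers first
ps -eo pid,ppid,ni,stat,%mem,cmd --sort=-%mem | head -n 5
```
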
top

top provides a live, continuously refreshing view of the process table.

SHIFT + P to sort the processes by CPU usage

SHIFT + M to sort the processes by memory usage

top -d [REFRESH INTERVAL] to set the refresh frequency in seconds. ex: top -d 1 to refresh the list every second.

htop

htop is an enhanced, interactive version of top.

atop

atop records and stores metrics for later review.

Performance Metrics

mpstat tells how busy each CPU core is. It is part of the sysstat package, a collection of utilities that collect, report, and log essential system performance metrics.

  • -P cpu targets a single core
  • -P ALL targets every core

ex: mpstat 3 10 to print CPU stats every 3 seconds, 10 times in a row, or mpstat -P ALL 2 10 to show stats for every CPU every 2 seconds, 10 times.

pidstat reports CPU, memory, and I/O usage. pidstat is also part of sysstat package.

ex: pidstat 2 shows per process CPU usage updated every 2 seconds.

  • -u shows CPU usage
  • -r shows memory usage
  • -d shows I/O usage
  • -p <PID> shows stats for a specific process

ex: pidstat -u -r -d -p 2345 2 10 shows the CPU, memory, and I/O activity of process 2345 every 2 seconds, 10 times

Job Control Commands

  • Ctrl + Z suspends the active process and returns to the shell prompt without terminating the job.
  • jobs lets you see what jobs you have running in the shell
  • bg resumes a suspended job in the background. bg %1 resumes job 1 in the background
  • fg brings a job to the foreground. fg %1 brings job 1 to the foreground
  • disown %1 removes job 1 from the shell's job table and keeps it running even after the shell exits
  • kill sends a signal to a job or process to terminate. ex: kill %1 to terminate job 1 or kill 2345 to terminate a job with PID 2345
  • & runs a command in the background from the start. ex: mpstat 2 &
  • nohup (no hang up): like disown, but prevents the job from receiving the hangup (SIGHUP) signal from the start. ex: nohup long_job.sh & to start a job, free the terminal for other commands, and keep the job running even if the terminal closes.
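
The pieces above combine naturally in a script; a minimal sketch using sleep as a stand-in for real work:

```shell
#!/bin/sh
# & starts the job in the background; $! captures its PID
sleep 1 &
BG_PID=$!
echo "started background job $BG_PID"
# wait blocks until the given background job finishes
wait "$BG_PID"
echo "background job finished"
```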

Job Scheduling

  • crontab helps run tasks repeatedly at regular intervals, such as every day, week, or month. Use crontab -e to edit the user crontab and add an entry such as 0 23 * * * my_script.sh to run my_script.sh every day at 11:00 PM. Use crontab -l to view scheduled jobs.
  • at helps schedule one-time tasks that will run at a specific date and time in the future. ex: at 11:30 AM tomorrow will prompt for the command to run tomorrow at 11:30 AM.
  • anacron ensures scheduled tasks still run even if the computer was turned off when they were originally scheduled to execute.
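
Crontab entries use five time fields (minute, hour, day of month, month, day of week) followed by the command; a sketch with hypothetical script paths:

```
# m  h  dom mon dow  command
0    23 *   *   *    /home/user/my_script.sh       # every day at 11:00 PM
*/15 *  *   *   *    /usr/local/bin/health_check   # every 15 minutes
0    2  *   *   1    /usr/local/bin/weekly_report  # every Monday at 2:00 AM
```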