SOCKS5 Protocol Analysis with PCAPs

Overview

This is a walkthrough of the SOCKS5 protocol through packet captures demonstrating different commands, authentication methods, and real-world use cases. The goal is to become familiar with the protocol and be able to understand and identify when it is being used and what that looks like in a pcap. RFC 1928 goes into the nitty gritty, but let's be honest, having some visuals ends up being a lot more helpful in making sense of things.

About These Captures

All PCAP files in this collection show complete packet flows with detailed header breakdowns, but they were fabricated rather than derived from actual traffic. They're designed for learning, testing, and protocol analysis.

Network Topology

Client: 192.168.1.100 (MAC: 00:11:22:33:44:55)
  │
  │ Port: Dynamic (54321-54325)
  ↓
Proxy:  192.168.1.200 (MAC: aa:bb:cc:dd:ee:ff)
  │
  │ SOCKS Port: 1080
  ↓
Destination Servers (various)

Available PCAP Files

| File | Scenario | Key Features |
|------|----------|--------------|
| socks5_no_auth_http.pcap | CONNECT + HTTP | No authentication, IPv4, HTTP plaintext |
| socks5_userpass_auth.pcap | CONNECT + HTTPS | Username/password auth, domain name, TLS |
| socks5_bind.pcap | BIND command | Incoming connections, FTP data channel |
| socks5_udp_associate.pcap | UDP relay | DNS queries, UDP encapsulation |
| socks5_ssh.pcap | CONNECT + SSH | SSH protocol tunneling, encryption |

SOCKS5 Protocol Basics

Handshake Flow

sequenceDiagram
    participant C as Client
    participant P as Proxy
    participant S as Server

    C->>P: 1. Greeting (methods)
    P->>C: 2. Method Selection

    opt Authentication Required
        C->>P: 3. Auth Credentials
        P->>C: 4. Auth Response
    end

    C->>P: 5. Connection Request
    P->>S: 6. Establish Connection
    S->>P: 7. Connection ACK
    P->>C: 8. Reply (success/failure)

    Note over C,S: Tunnel Established
    C->>P: Application Data
    P->>S: Application Data
    S->>P: Application Data
    P->>C: Application Data

Command Types

SOCKS5 Commands

  • CONNECT (0x01): Establish TCP stream to destination
  • BIND (0x02): Listen for incoming TCP connection
  • UDP ASSOCIATE (0x03): Relay UDP datagrams
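
To see what these commands look like on the wire before digging into the captures, here is a minimal Python sketch of the no-auth CONNECT handshake. The proxy address is a placeholder and error handling is reduced to the happy path; it is meant to mirror the byte layouts shown in the scenarios below, not to be a production client.

```python
import socket
import struct

def socks5_connect(proxy_host, proxy_port, dest_ip, dest_port):
    s = socket.create_connection((proxy_host, proxy_port))
    s.sendall(b"\x05\x01\x00")               # greeting: ver 5, 1 method, no-auth
    ver, method = s.recv(2)                  # method selection: expect 05 00
    if method != 0x00:
        raise RuntimeError("proxy refused the no-auth method")
    # CONNECT request: VER CMD RSV ATYP(IPv4) DST.ADDR DST.PORT
    s.sendall(b"\x05\x01\x00\x01" + socket.inet_aton(dest_ip)
              + struct.pack(">H", dest_port))
    reply = s.recv(10)                       # VER REP RSV ATYP BND.ADDR BND.PORT
    if reply[1] != 0x00:
        raise RuntimeError(f"CONNECT failed, reply code {reply[1]:#04x}")
    return s                                 # now a transparent tunnel to dest

# tunnel = socks5_connect("127.0.0.1", 1080, "93.184.216.34", 80)
```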

Scenario 1: No Authentication + HTTP

File: socks5_no_auth_http.pcap

Description

Demonstrates the simplest SOCKS5 flow: client connects to example.com (93.184.216.34:80) without authentication and sends an HTTP GET request.

Packet Flow

Phase 1: TCP Handshake (Packets 1-3)
Client → Proxy:1080 [SYN]
Proxy → Client       [SYN-ACK]
Client → Proxy       [ACK]
Phase 2: SOCKS5 Negotiation

Packet 4: Client Greeting

Direction: Client → Proxy
TCP Payload:
05 01 00

| Byte | Value | Meaning |
|------|-------|---------|
| 0 | 05 | SOCKS version 5 |
| 1 | 01 | Number of authentication methods |
| 2 | 00 | Method: No authentication required |

Packet 6: Method Selection

Direction: Proxy → Client
TCP Payload:
05 00

| Byte | Value | Meaning |
|------|-------|---------|
| 0 | 05 | SOCKS version 5 |
| 1 | 00 | Selected method (no auth) |

Packet 8: Connection Request

Direction: Client → Proxy
TCP Payload:
05 01 00 01 5D B8 D8 22 00 50

| Offset | Bytes | Value | Meaning |
|--------|-------|-------|---------|
| 0 | 1 | 05 | SOCKS version 5 |
| 1 | 1 | 01 | CONNECT command |
| 2 | 1 | 00 | Reserved |
| 3 | 1 | 01 | Address type (IPv4) |
| 4-7 | 4 | 5D B8 D8 22 | 93.184.216.34 (example.com) |
| 8-9 | 2 | 00 50 | Port 80 (HTTP) |
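
You can check the field boundaries yourself; a quick Python decode of this exact payload (a sketch that only handles the ATYP=1/IPv4 case):

```python
import socket, struct

payload = bytes.fromhex("05 01 00 01 5D B8 D8 22 00 50")
ver, cmd, rsv, atyp = payload[:4]
dst_ip = socket.inet_ntoa(payload[4:8])           # '93.184.216.34'
dst_port = struct.unpack(">H", payload[8:10])[0]  # 80
print(ver, cmd, atyp, dst_ip, dst_port)           # 5 1 1 93.184.216.34 80
```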

[Wireshark screenshot: packet 8 details] The real destination IP address is conveyed in this CONNECT packet as the Remote Host. This is what Wireshark uses to fill in SOCKS-layer details in packet 12, even though packet 12 does not actually contain the Remote Host IP address.


Packet 10: Reply

Direction: Proxy → Client
TCP Payload:
05 00 00 01 5D B8 D8 22 00 50

| Offset | Bytes | Value | Meaning |
|--------|-------|-------|---------|
| 0 | 1 | 05 | SOCKS version 5 |
| 1 | 1 | 00 | Success |
| 2 | 1 | 00 | Reserved |
| 3 | 1 | 01 | Address type (IPv4) |
| 4-7 | 4 | 5D B8 D8 22 | Bound address |
| 8-9 | 2 | 00 50 | Bound port |

Tunnel Established

After this packet, the SOCKS tunnel is established. All subsequent traffic is transparent relay.

Phase 3: Application Protocol (HTTP)

Packet 12: HTTP GET Request

GET / HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0
Accept: */*

No SOCKS Headers

Notice there are no SOCKS protocol headers in this packet. The proxy transparently relays the HTTP request. If you are viewing in Wireshark, it will add a SOCKS layer to the Packet Details pane to be helpful, but as mentioned earlier, the bytes are not actually present in the packet.

[Wireshark screenshot: SOCKS layer added to the GET request]

Packet 14: HTTP Response

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 13

Hello World!

Key Observations

  • SOCKS negotiation completes in 4 packets (greeting, method, request, reply)
  • Total SOCKS overhead: 25 bytes (3 + 2 + 10 + 10)
  • Application protocol is completely transparent
  • No encryption - HTTP is readable in plaintext

When you follow the TCP Stream for the session, you notice immediately that it does not look like a vanilla HTTP GET request. [Wireshark screenshot: TCP stream beginning with the SOCKS handshake bytes]

Scenario 2: Username/Password Authentication

File: socks5_userpass_auth.pcap

Description

Shows SOCKS5 with username/password authentication connecting to example.com:443 (HTTPS), followed by TLS handshake.

Authentication Credentials

Username: alice
Password: secret123

Extended Packet Flow

Packet 4: Client Greeting

05 01 02
| Byte | Value | Meaning |
|------|-------|---------|
| 0 | 05 | SOCKS version 5 |
| 1 | 01 | Number of methods |
| 2 | 02 | Method: Username/Password |

Packet 6: Method Selection

05 02
| Byte | Value | Meaning |
|------|-------|---------|
| 0 | 05 | SOCKS version 5 |
| 1 | 02 | Selected method (username/password) |

Packet 8: Authentication Request

01 05 61 6C 69 63 65 09 73 65 63 72 65 74 31 32 33
| Offset | Bytes | Value | Meaning |
|--------|-------|-------|---------|
| 0 | 1 | 01 | Auth protocol version |
| 1 | 1 | 05 | Username length (5 bytes) |
| 2-6 | 5 | 61 6C 69 63 65 | "alice" (ASCII) |
| 7 | 1 | 09 | Password length (9 bytes) |
| 8-16 | 9 | 73 65 63 72 65 74 31 32 33 | "secret123" (ASCII) |

Security Warning

Username and password are transmitted in cleartext. Always use SOCKS over an encrypted channel (SSH, TLS) when using password authentication.
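
The RFC 1929 subnegotiation is simple enough to rebuild by hand; this sketch reproduces the exact bytes of packet 8 above:

```python
def build_userpass_auth(username: str, password: str) -> bytes:
    u, p = username.encode(), password.encode()
    if len(u) > 255 or len(p) > 255:
        raise ValueError("RFC 1929 limits each field to 255 bytes")
    # VER(0x01) ULEN UNAME PLEN PASSWD
    return bytes([0x01, len(u)]) + u + bytes([len(p)]) + p

print(build_userpass_auth("alice", "secret123").hex(" "))
# 01 05 61 6c 69 63 65 09 73 65 63 72 65 74 31 32 33
```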

Packet 10: Authentication Response

01 00
| Byte | Value | Meaning |
|------|-------|---------|
| 0 | 01 | Auth version |
| 1 | 00 | Success (non-zero = failure) |

Packet 12: Connection Request with Domain Name

05 01 00 03 0B 65 78 61 6D 70 6C 65 2E 63 6F 6D 01 BB
| Offset | Bytes | Value | Meaning |
|--------|-------|-------|---------|
| 0 | 1 | 05 | SOCKS version 5 |
| 1 | 1 | 01 | CONNECT command |
| 2 | 1 | 00 | Reserved |
| 3 | 1 | 03 | Address type (Domain name) |
| 4 | 1 | 0B | Domain length (11 bytes) |
| 5-15 | 11 | 65...6D | "example.com" |
| 16-17 | 2 | 01 BB | Port 443 (HTTPS) |

DNS Privacy

When using domain names, DNS resolution happens at the proxy, preventing DNS leaks to local network observers.
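
If you want this behavior from your own client, libraries such as PySocks expose it as a remote-DNS flag; a sketch (the proxy address and credentials are the sample values from this capture, and set_proxy is PySocks' API, not part of SOCKS itself):

```python
import socks  # pip install pysocks

s = socks.socksocket()
# rdns=True sends the hostname to the proxy (ATYP=3) instead of resolving locally
s.set_proxy(socks.SOCKS5, "192.168.1.200", 1080,
            rdns=True, username="alice", password="secret123")
s.connect(("example.com", 443))  # DNS resolution happens at the proxy
```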

Packet 16: TLS Client Hello

ClientHello

16 03 01 00 50 01 00 00 4C 03 03 [32 random bytes]...
| Offset | Bytes | Meaning |
|--------|-------|---------|
| 0 | 1 | TLS Handshake record type (0x16) |
| 1-2 | 2 | TLS version (3.1 = TLS 1.0) |
| 3-4 | 2 | Record length |
| 5 | 1 | Handshake type (0x01 = Client Hello) |
| ... | ... | Client random, cipher suites, extensions |

Key Observations

  • Authentication adds 2 extra packets to handshake
  • Credentials sent in plaintext within SOCKS connection
  • Domain name resolution delegated to proxy
  • TLS encrypts application data after handshake

Scenario 3: BIND Command

File: socks5_bind.pcap

Description

Demonstrates the BIND command, used for protocols requiring the server to connect back to the client (e.g., FTP data connections).

Note: BIND is not common nowadays

- Most protocols switched to client-initiated models (FTP PASV, HTTP, etc.)
- NAT traversal techniques (STUN/TURN/ICE) handle P2P better
- WebRTC and modern real-time protocols have better solutions
- It's complex to implement and maintain

Modern equivalent: NAT traversal techniques or relay servers (like TURN) handle this better for contemporary protocols. BIND exists mainly for legacy protocol compatibility.

BIND Flow

sequenceDiagram
    participant C as Client
    participant P as Proxy
    participant S as Server

    C->>P: BIND Request
    P->>C: First Reply (listening address)
    Note over P: Proxy listens on<br/>192.168.1.200:5000

    Note over C: Client tells server to<br/>connect to 192.168.1.200:5000

    S->>P: Connection from 10.0.0.50:21
    P->>C: Second Reply (connection established)

    Note over C,S: Data flows through tunnel

Critical Packets

Packet 8: BIND Request

05 02 00 01 00 00 00 00 00 00
| Offset | Bytes | Value | Meaning |
|--------|-------|-------|---------|
| 0 | 1 | 05 | SOCKS version 5 |
| 1 | 1 | 02 | BIND command |
| 2 | 1 | 00 | Reserved |
| 3 | 1 | 01 | IPv4 |
| 4-7 | 4 | 00 00 00 00 | 0.0.0.0 (any address) |
| 8-9 | 2 | 00 00 | Port 0 (any port) |

Packet 10: First Reply (Bound Address)

05 00 00 01 C0 A8 01 C8 13 88
| Offset | Bytes | Value | Meaning |
|--------|-------|-------|---------|
| 0 | 1 | 05 | SOCKS version 5 |
| 1 | 1 | 00 | Success |
| 2 | 1 | 00 | Reserved |
| 3 | 1 | 01 | IPv4 |
| 4-7 | 4 | C0 A8 01 C8 | 192.168.1.200 (proxy IP) |
| 8-9 | 2 | 13 88 | Port 5000 |

What Happens Next

The proxy is now listening on 192.168.1.200:5000. The client can tell the remote server (e.g., FTP server) to connect to this address.

Packet 12: Second Reply (Connection Established)

05 00 00 01 0A 00 00 32 00 15
| Offset | Bytes | Value | Meaning |
|--------|-------|-------|---------|
| 0 | 1 | 05 | SOCKS version 5 |
| 1 | 1 | 00 | Success |
| 2 | 1 | 00 | Reserved |
| 3 | 1 | 01 | IPv4 |
| 4-7 | 4 | 0A 00 00 32 | 10.0.0.50 (server that connected) |
| 8-9 | 2 | 00 15 | Port 21 (FTP) |

Tunnel Ready

The server has connected to the proxy's listening socket. The tunnel is now established for bidirectional data transfer.
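
Putting the two replies together, the client side of a BIND exchange looks roughly like this sketch (it assumes the SOCKS handshake has already completed on sock, handles IPv4 replies only, and ignores short reads):

```python
import socket, struct

def read_reply(sock):
    hdr = sock.recv(4)                         # VER REP RSV ATYP
    addr = socket.inet_ntoa(sock.recv(4))      # BND.ADDR (IPv4)
    port = struct.unpack(">H", sock.recv(2))[0]
    return hdr[1], addr, port

def socks5_bind(sock):
    # BIND request: VER=5 CMD=2 RSV=0 ATYP=1, 0.0.0.0:0 (any address/port)
    sock.sendall(b"\x05\x02\x00\x01" + b"\x00" * 6)
    rep, addr, port = read_reply(sock)         # first reply: proxy's listener
    print(f"tell the remote server to connect to {addr}:{port}")
    rep, addr, port = read_reply(sock)         # second reply: who connected
    print(f"incoming connection from {addr}:{port}")
    return sock                                # sock now relays that connection
```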

Use Cases

  • FTP PORT mode data connections
  • IRC DCC file transfers
  • Peer-to-peer protocols
  • Any protocol requiring reverse connections

Scenario 4: UDP ASSOCIATE

File: socks5_udp_associate.pcap

Description

Shows UDP relay through SOCKS5, using DNS as an example. UDP ASSOCIATE requires a TCP control connection to remain open.

UDP Encapsulation

Every UDP packet sent through SOCKS5 includes a header:

+----+------+------+----------+----------+----------+
|RSV | FRAG | ATYP | DST.ADDR | DST.PORT |   DATA   |
+----+------+------+----------+----------+----------+
| 2  |  1   |  1   | Variable |    2     | Variable |
+----+------+------+----------+----------+----------+
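
Building that header is a one-liner per field; a sketch for the unfragmented IPv4 case:

```python
import socket, struct

def wrap_udp(dest_ip: str, dest_port: int, data: bytes) -> bytes:
    # RSV(2 bytes) + FRAG(1)=0 + ATYP(1)=IPv4, then DST.ADDR and DST.PORT
    hdr = b"\x00\x00\x00\x01" + socket.inet_aton(dest_ip) + struct.pack(">H", dest_port)
    return hdr + data

# wrap_udp("8.8.8.8", 53, dns_query) yields the packet 12 layout below:
# 00 00 00 01 08 08 08 08 00 35 [DNS query data]
```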

Critical Packets

Packet 8: UDP ASSOCIATE Request

05 03 00 01 C0 A8 01 64 23 28
| Offset | Bytes | Value | Meaning |
|--------|-------|-------|---------|
| 0 | 1 | 05 | SOCKS version 5 |
| 1 | 1 | 03 | UDP ASSOCIATE command |
| 2 | 1 | 00 | Reserved |
| 3 | 1 | 01 | IPv4 |
| 4-7 | 4 | C0 A8 01 64 | 192.168.1.100 (client's UDP addr) |
| 8-9 | 2 | 23 28 | Port 9000 (client's UDP port) |

Packet 10: UDP ASSOCIATE Reply

05 00 00 01 C0 A8 01 C8 1F 40
| Offset | Bytes | Value | Meaning |
|--------|-------|-------|---------|
| 0 | 1 | 05 | SOCKS version 5 |
| 1 | 1 | 00 | Success |
| 2 | 1 | 00 | Reserved |
| 3 | 1 | 01 | IPv4 |
| 4-7 | 4 | C0 A8 01 C8 | 192.168.1.200 (proxy's UDP relay) |
| 8-9 | 2 | 1F 40 | Port 8000 (proxy's UDP port) |

UDP Communication

Client should now send UDP packets to 192.168.1.200:8000 with SOCKS5 UDP headers.

Packet 12: Encapsulated UDP Packet (DNS Query)

00 00 00 01 08 08 08 08 00 35 [DNS query data]
| Offset | Bytes | Value | Meaning |
|--------|-------|-------|---------|
| 0-1 | 2 | 00 00 | Reserved |
| 2 | 1 | 00 | Fragment (0 = no fragmentation) |
| 3 | 1 | 01 | IPv4 |
| 4-7 | 4 | 08 08 08 08 | 8.8.8.8 (Google DNS) |
| 8-9 | 2 | 00 35 | Port 53 (DNS) |
| 10+ | var | ... | DNS query for google.com |

Key Observations

  • TCP connection must stay open for UDP association lifetime
  • Every UDP packet includes 10-byte SOCKS5 header (for IPv4)
  • Fragment field supports UDP fragmentation
  • Closing TCP terminates UDP association



Scenario 5: SSH Through SOCKS

File: socks5_ssh.pcap

Description

Demonstrates SSH protocol tunneled through SOCKS5 to server 10.20.30.40:22.

SSH Protocol Flow

Packet 8: CONNECT Request

05 01 00 01 0A 14 1E 28 00 16
| Offset | Bytes | Value | Meaning |
|--------|-------|-------|---------|
| 0 | 1 | 05 | SOCKS version 5 |
| 1 | 1 | 01 | CONNECT command |
| 2 | 1 | 00 | Reserved |
| 3 | 1 | 01 | IPv4 |
| 4-7 | 4 | 0A 14 1E 28 | 10.20.30.40 (SSH server) |
| 8-9 | 2 | 00 16 | Port 22 (SSH) |

Packet 12: SSH Server Banner

SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.5\r\n

Protocol Detection

The string "SSH-2.0" immediately identifies this as SSH traffic, regardless of the SOCKS tunnel.

Packet 14: SSH Client Banner

SSH-2.0-OpenSSH_9.0\r\n

Packet 16: SSH Key Exchange Init

00 00 01 14 0A 14 [key exchange data]
| Offset | Bytes | Meaning |
|--------|-------|---------|
| 0-3 | 4 | Packet length (276 bytes) |
| 4 | 1 | Padding length |
| 5 | 1 | SSH_MSG_KEXINIT (0x14) |
| 6+ | var | Cookie, algorithms, etc. |

Packet 17: Encrypted SSH Traffic

00 00 00 40 [encrypted data]

Encryption

After key exchange, all SSH session data is encrypted. Commands, authentication, and terminal I/O are not visible.

Key Observations

  • SOCKS completely transparent to SSH
  • SSH banner exchange in cleartext
  • Port 22 is a strong indicator of SSH
  • Post-handshake encryption hides all content
  • SSH operates identically through SOCKS vs direct connection

Protocol Reference

SOCKS5 Version Field

| Value | Version |
|-------|---------|
| 04 | SOCKS4 |
| 05 | SOCKS5 |

Command Field

| Value | Command | Description |
|-------|---------|-------------|
| 01 | CONNECT | TCP stream connection |
| 02 | BIND | TCP listening socket |
| 03 | UDP ASSOCIATE | UDP datagram relay |

Address Type (ATYP)

| Value | Type | Length |
|-------|------|--------|
| 01 | IPv4 | 4 bytes |
| 03 | Domain name | 1 byte (length) + N bytes |
| 04 | IPv6 | 16 bytes |

Reply Status Codes

| Code | Status | Meaning |
|------|--------|---------|
| 00 | Succeeded | Request successful |
| 01 | General failure | SOCKS server failure |
| 02 | Not allowed | Connection not allowed by ruleset |
| 03 | Network unreachable | Network unreachable |
| 04 | Host unreachable | Host unreachable |
| 05 | Connection refused | Connection refused |
| 06 | TTL expired | TTL expired |
| 07 | Command not supported | Command not supported |
| 08 | Address type not supported | Address type not supported |
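
For scripting against captures, the same codes as a Python lookup table:

```python
SOCKS5_REPLY = {
    0x00: "succeeded",
    0x01: "general SOCKS server failure",
    0x02: "connection not allowed by ruleset",
    0x03: "network unreachable",
    0x04: "host unreachable",
    0x05: "connection refused",
    0x06: "TTL expired",
    0x07: "command not supported",
    0x08: "address type not supported",
}
```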

Authentication Methods

| Value | Method |
|-------|--------|
| 00 | No authentication required |
| 01 | GSSAPI |
| 02 | Username/Password |
| 03-7F | IANA assigned |
| 80-FE | Reserved for private methods |
| FF | No acceptable methods |

Analysis Techniques

Wireshark Display Filters

# Show SOCKS handshake packets only
tcp.port == 1080 && tcp.len > 0 && tcp.len < 100
# Show post-tunnel application data
tcp.port == 1080 && tcp.len > 100
# Show specific SOCKS commands
tcp.payload[0:1] == 05 && tcp.payload[1:1] == 01  # CONNECT
tcp.payload[0:1] == 05 && tcp.payload[1:1] == 02  # BIND
tcp.payload[0:1] == 05 && tcp.payload[1:1] == 03  # UDP ASSOCIATE
# Show UDP SOCKS packets
udp.port == 8000

Follow TCP Stream

  1. Right-click any packet in the SOCKS conversation
  2. Select Follow → TCP Stream
  3. Observe SOCKS negotiation at the beginning
  4. Application protocol becomes visible after success reply

Identifying SOCKS Traffic

Look for this sequence:

  1. Client greeting: 05 01 XX (version 5, method count, methods)
  2. Server response: 05 XX (version 5, chosen method)
  3. Request packet: 05 [01|02|03] 00... (version, command, reserved)
  4. Server reply: 05 00... (version, status)
  5. Application data: Pure protocol, no SOCKS headers
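
That sequence translates directly into a first-payload heuristic; a sketch (the field checks follow the sequence above, and like any heuristic it can misfire on look-alike bytes):

```python
def classify_socks5(payload: bytes) -> str:
    # Client greeting: 05, NMETHODS, then exactly NMETHODS method bytes
    if len(payload) >= 3 and payload[0] == 0x05 and len(payload) == 2 + payload[1]:
        return "client greeting"
    # Request: 05, CMD in {1,2,3}, RSV 00, ATYP in {1,3,4}
    if (len(payload) >= 4 and payload[0] == 0x05 and payload[1] in (1, 2, 3)
            and payload[2] == 0x00 and payload[3] in (1, 3, 4)):
        return "request (CONNECT/BIND/UDP ASSOCIATE)"
    return "not SOCKS5 (or post-handshake data)"
```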

Security Considerations

What SOCKS Does NOT Provide

Security Limitations

  • No Encryption: SOCKS itself does not encrypt traffic
  • Authentication in Cleartext: Username/password sent unencrypted
  • Proxy Trust: Proxy can see and modify all unencrypted traffic
  • No Integrity: No protection against tampering

Recommendations

Best Practices

  1. Combine with TLS/SSL: Use HTTPS, not HTTP, over SOCKS
  2. SSH Tunneling: Use ssh -D 1080 for encrypted SOCKS tunnel
  3. Don't Rely on SOCKS for Confidentiality: Use application-layer encryption
  4. Strong Authentication: If proxy requires auth, use strong credentials
  5. Trust Your Proxy: It can see everything you send

Privacy Benefits

  • DNS Privacy: Domain resolution at proxy prevents DNS leaks
  • IP Hiding: Destination sees proxy IP, not client IP
  • Port/Protocol Diversity: Works for any TCP/UDP application
  • NAT Traversal: Can help bypass restrictive NATs

Command-Line Tools

tcpdump

# Read PCAP file with ASCII output
tcpdump -r socks5_no_auth_http.pcap -A | less

# Read with hex and ASCII
tcpdump -r socks5_no_auth_http.pcap -X | less

# Filter for SOCKS port
tcpdump -r socks5_no_auth_http.pcap -n port 1080

tshark

# Show all packets with full details
tshark -r socks5_no_auth_http.pcap -V

# Show only SOCKS negotiation
tshark -r socks5_no_auth_http.pcap -Y "tcp.port==1080" -V

# Extract HTTP data
tshark -r socks5_no_auth_http.pcap -Y "http" -V

# Export objects
tshark -r socks5_no_auth_http.pcap --export-objects http,output/

Wireshark

# Open in Wireshark GUI
wireshark socks5_no_auth_http.pcap

# Open multiple files
wireshark socks5_*.pcap

Comparison: HTTP Proxy vs SOCKS

| Feature | HTTP Proxy | SOCKS4 | SOCKS5 |
|---------|-----------|--------|--------|
| Protocol Support | HTTP(S) only | TCP only | TCP + UDP |
| Authentication | Basic/Digest | None | Multiple methods |
| DNS Resolution | Client | Proxy (4a only) | Proxy |
| IPv6 Support | Yes | No | Yes |
| UDP Support | No | No | Yes |
| BIND Support | No | Yes | Yes |
| OSI Layer | Application (L7) | Session (L5) | Session (L5) |
| Header Modification | Yes | No | No |
| Caching | Possible | No | No |

Additional Resources

RFCs

  • RFC 1928 - SOCKS Protocol Version 5
  • RFC 1929 - Username/Password Authentication for SOCKS V5
  • RFC 1961 - GSS-API Authentication Method for SOCKS Version 5

Stop Re-inventing the Wheel

Background

I am tired of re-establishing a fresh version of Kali each time I want to just start fresh. As a student, I'm constantly trying out different tools and futzing around with the file structure on this particular box as I work through HTB or other CTF challenges. While I am more intentional on other boxes, I want my Kali box to be easily refreshed with the latest version of Kali and re-established with all of my preferred tools/repos, configurations, etc., and also be established on my hypervisor with the same network configurations. This seemed like a good opportunity to practice some IaC tools previously mentioned in courses like SANS SEC488, SEC530, etc.

If you don't care about the guide/walkthrough, you can just download the repo here

Overview

This guide shows you how to build a fully automated, repeatable Kali VM pipeline on Proxmox using:

  1. Packer to build a golden template
  2. Terraform to clone that template and trigger Ansible
  3. Ansible to finalize configuration (tools, dotfiles, Tailscale)

All commands run from a control host (your laptop or CI), not on the Proxmox hypervisor.


Project Layout

project-root/
├── packer/
│   ├── kali-proxmox-template.pkr.hcl
│   └── files/                  ← static scripts or assets
├── terraform/
│   ├── variables.tf
│   ├── main.tf
│   └── backend.tf              ← (optional Terraform Cloud / remote backend)
└── ansible/
    └── provision.yml
To create this file structure from your pwd:
mkdir -p .fresh_kali/{packer/files,terraform,ansible} && cd .fresh_kali && touch packer/kali-proxmox-template.pkr.hcl terraform/variables.tf terraform/main.tf terraform/backend.tf ansible/provision.yml


1. Packer: Golden Template

Packer is a tool for automating the creation of machine images; in this walkthrough we’re using it to build a golden Kali Linux VM template on Proxmox, complete with SSH keys and initial user setup.

packer/kali-proxmox-template.pkr.hcl

Below is your Packer template, with a helper script (packer/files/get_latest_kali_iso.sh) that fetches the most recent live ISO URL automatically at build time.

packer {
  required_plugins {
    proxmox = {
      source  = "github.com/hashicorp/proxmox"
      version = "~> 1.2"
    }
  }
}

variable "proxmox_password" {
  type      = string
  sensitive = true
}

variable "ssh_private_key" {
  type        = string
  description = "Path to private SSH key for root or deploy user"
  default     = "~/.ssh/id_rsa"
}

variable "kali_iso_url" {
  type        = string
  description = "Kali ISO URL; override manually if needed"
  default     = ""
}

source "proxmox-iso" "kali" {
  # API connection
  proxmox_url              = "https://proxmox.local:8006/api2/json" # you should put in whatever your proxmox url is - so if you are just using your IP, you'd use that.
  username                 = "root@pam"
  password                 = var.proxmox_password
  insecure_skip_tls_verify = true

  # Which node to build on
  node                     = "pve"  # you may have renamed this to something else so ensure it is the name of your node

  # VM identity
  vm_id                    = 9000
  vm_name                  = "packer-kali-{{timestamp}}"

  # Hardware
  cores                    = 4
  memory                   = 8192

  # Boot ISO: use iso_url/iso_checksum to have Packer download and upload the
  # image, or iso_file for an image already on Proxmox storage (a raw URL is
  # not a valid iso_file value)
  boot_iso {
    type             = "scsi"
    iso_url          = var.kali_iso_url != "" ? var.kali_iso_url : null
    iso_checksum     = var.kali_iso_checksum != "" ? var.kali_iso_checksum : null
    iso_storage_pool = "local"
    iso_file         = var.kali_iso_url == "" ? "local:iso/kali-default.iso" : null
    unmount          = true
  }

  # Disk
  disks {
    type         = "scsi"
    disk_size    = "64G"
    storage_pool = "local-lvm"
  }

  # Network
  network_adapters {
    model  = "virtio"
    bridge = "vmbr0"
  }

  # SSH
  ssh_username         = "root"
  ssh_private_key_file = var.ssh_private_key
  ssh_timeout          = "20m"
}

build {
  name    = "kali-proxmox-template"
  sources = ["source.proxmox-iso.kali"]

  provisioner "shell" {
    inline = [
      "apt update && apt install -y sudo zsh git curl",
      "useradd -m -s /usr/bin/zsh deploy",
      "echo 'deploy ALL=(ALL) NOPASSWD: /usr/bin/apt-get,/usr/bin/git,/usr/bin/systemctl' > /etc/sudoers.d/deploy",
      "chmod 440 /etc/sudoers.d/deploy",
      "runuser -l deploy -c 'git clone https://github.com/youruser/dotfiles.git ~/dotfiles'",
      "runuser -l deploy -c 'ln -sf ~/dotfiles/.zshrc ~/.zshrc'",
      "curl -fsSL https://tailscale.com/install.sh | sh",
      "systemctl enable --now tailscaled"
    ]
  }

  provisioner "shell" {
    inline = [
      "sed -ri 's/^#?PermitRootLogin\\s+.*/PermitRootLogin no/' /etc/ssh/sshd_config",
      "systemctl reload sshd"
    ]
  }

  # No post-processor needed; proxmox-iso leaves you with a template
}

packer/files/get_latest_kali_iso.sh

The goal of this script is to generate the URL of the latest Kali Linux ISO file for the live-amd64 version. If your goal is to use a different version, you would want to modify this script accordingly. The script is sourced in order to export the ISO_URL and ISO_CHECKSUM values passed as variables into packer/kali-proxmox-template.pkr.hcl.

#!/usr/bin/env bash
set -euo pipefail

BASE_URL="https://cdimage.kali.org/current/"
SHAFILE_URL="${BASE_URL}SHA256SUMS"

# 1) Find the ISO filename
ISO_FILENAME=$(curl -fsSL "${BASE_URL}" \
  | grep -oE 'href="kali-linux-[0-9]+\.[0-9]+(\.[0-9]+)?-live-amd64\.iso"' \
  | sed -E 's/href="([^"]+)"/\1/' \
  | head -n1)

# 2) Build the full URL
ISO_URL="${BASE_URL}${ISO_FILENAME}"

# 3) Extract the matching checksum
ISO_CHECKSUM=$(curl -fsSL "${SHAFILE_URL}" \
  | grep " ${ISO_FILENAME}\$" \
  | awk '{print $1}')

# 4) Export them for the caller
export ISO_URL
export ISO_CHECKSUM

Once both files are populated, you would run the following from project-root/packer:

# This will set ISO_URL and ISO_CHECKSUM in your shell environment
# (the script only exports variables, so it must be sourced, not executed)
source files/get_latest_kali_iso.sh

# Debug — ensure they’re set
echo "ISO_URL      = <${ISO_URL}>"
echo "ISO_CHECKSUM = <${ISO_CHECKSUM}>"

# Now run Packer
packer init . # this command only needs to be run the first time
packer build \
  -var="proxmox_password=${PM_PASS}" \
  -var="kali_iso_url=${ISO_URL}" \
  -var="kali_iso_checksum=${ISO_CHECKSUM}" \
  kali-proxmox-template.pkr.hcl
This will download the ISO and then apply the build instructions to create the template. The next step is taking that template and using it to create the actual VM with the git repo tools we want on it.

Note: If you don't care about updating to a newer ISO later, you can just rebuild the VM from the existing template after you've created it.

Control host has the ISO file, but the file did not upload to Proxmox

scp packer/downloaded_iso_path/*.iso root@proxmox.local:/var/lib/vz/template/iso/

You can manually move the file over with scp, but then you'll need to run the build again with a modification: a second .hcl file that removes the URL-related variables, references the local file instead, and adjusts the boot_iso section to reflect the changes.

packer init .
packer build \
  -var="proxmox_password=$PM_PASS" \
  kali-proxmox-template-on-proxmox-already.pkr.hcl

2. Terraform: Clone & Trigger Ansible

Terraform is a declarative “infrastructure as code” engine; following the image build you’d use it to define and spin up Proxmox VMs (and any networking or storage) based on that template in a repeatable, version‑controlled way.

terraform/backend.tf (optional)

Use a remote backend for state & secrets:

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "your-org"
    workspaces {
      name = "kali-pipeline"
    }
  }
}

terraform/variables.tf

variable "proxmox_password" { type = string }

Variable is pulled from TF_VAR_proxmox_password env var or your backend.

Dynamic Template Lookup with External Data

data "external" "latest_template" {
  program = ["bash", "-c", <<-EOT
    pvesh get /nodes/pve-node1/qemu --output-format=json | \
      jq -r '.[] | select(.template==1 and .name|startswith("kali-template-")) | .name' | \
      sort | tail -n1 | jq -R '{name: .}'
  EOT]
}
    pvesh get /nodes/pve-node1/qemu --output-format=json | \
      jq -r '.[] | select(.template==1 and .name|startswith("kali-template-")) | .name' | \
      sort | tail -n1 | jq -R '{name: .}'
  EOT]
}

This requires the Proxmox CLI (pvesh) and jq on your control host.

terraform/main.tf

provider "proxmox" {
  pm_api_url = "https://proxmox.local:8006/api2/json"
  pm_user    = "root@pam"
  pm_password= var.proxmox_password
}

data "external" "latest_template" {}

resource "proxmox_vm_qemu" "kali_ephemeral" {
  name        = "kali-${timestamp()}"
  target_node = "pve-node1"
  clone       = data.external.latest_template.result.name

  cores  = 4
  memory = 8192

  disk {
    size    = "64G"
    type    = "scsi"
    storage = "local-lvm"
  }

  network {
    model  = "virtio"
    bridge = "vmbr0"
    tag    = 42
  }

  # SSH key: copies your public key (~/.ssh/id_rsa.pub) into the VM's authorized_keys for the deploy user
  sshkeys = file("~/.ssh/id_rsa.pub")
}

resource "null_resource" "ansible_provision" {
  depends_on = [proxmox_vm_qemu.kali_ephemeral]

  provisioner "local-exec" {
    command = <<-EOT
      ansible-playbook \
        -i '${proxmox_vm_qemu.kali_ephemeral.default_ipv4_address},' \
        --user=deploy \
        --private-key=~/.ssh/id_rsa \
        ../ansible/provision.yml
    EOT
  }
}

Run (from project-root/terraform):

terraform init    # once or when providers change
terraform apply -var="proxmox_password=$PM_PASS" -auto-approve

3. Ansible: Final Configuration

Ansible is an agentless configuration‑management system; after Terraform provisions the VM, Ansible applies playbooks to configure system settings, install packages like Tailscale, lock down SSH, and enforce your desired state.

ansible/provision.yml

- name: Finalize Kali ephemeral VM
  hosts: all
  become: true

  tasks:
    - name: Update APT cache
      apt:
        update_cache: yes

    - name: Install extra tools
      apt:
        name:
          - htop
          - nmap
          - python3-pip
          - neo4j
        state: latest

    - name: Clone BloodHound
      git:
        repo: https://github.com/BloodHoundAD/BloodHound.git
        dest: /opt/BloodHound
        version: master
        force: yes

    - name: Clone BloodyAD
      git:
        repo: https://github.com/0xmolox/BloodyAD.git
        dest: /opt/BloodyAD
        version: master
        force: yes

    - name: Clone CrackMapExec (CME)
      git:
        repo: https://github.com/byt3bl33d3r/CrackMapExec.git
        dest: /opt/CrackMapExec
        version: master
        force: yes

    - name: Install CrackMapExec requirements
      pip:
        requirements: /opt/CrackMapExec/requirements.txt
        executable: pip3

    - name: Symlink .zshrc
      file:
        src: /home/deploy/dotfiles/.zshrc
        dest: /home/deploy/.zshrc
        state: link

    - name: Install Python pip packages
      pip:
        name:
          - pwntools
          - yara
        executable: pip3

    - name: Ensure Tailscale is running
      service:
        name: tailscaled
        state: started
        enabled: true

    - name: Authenticate Tailscale (if provided)
      shell: |
        tailscale up --authkey "{{ lookup('env', 'TS_AUTHKEY') }}" --hostname "{{ inventory_hostname }}"
      when: lookup('env', 'TS_AUTHKEY') | length > 0

Secrets: Export

export TF_VAR_proxmox_password="<your-secret>"
export SSH_PASSWORD="<your-ssh-password>"
export TS_AUTHKEY="<tailscale-auth-key>"
  • TF_VAR_proxmox_password from your Proxmox API token or root password
  • SSH_PASSWORD only if you’re using password auth (otherwise omit)
  • TS_AUTHKEY from the Tailscale admin console under Auth Keys

Day‑to‑Day Usage

Refresh Template (Optional)

cd packer
source files/get_latest_kali_iso.sh
packer build \
  -var="proxmox_password=$PM_PASS" \
  -var="kali_iso_url=$ISO_URL" \
  -var="kali_iso_checksum=$ISO_CHECKSUM" \
  kali-proxmox-template.pkr.hcl

Spin Up & Provision

cd ../terraform
env TF_VAR_proxmox_password=$PM_PASS terraform apply -auto-approve

Teardown

When done:

terraform destroy -auto-approve

All steps are run on your control host. No need to log into Proxmox shell or operate directly on the hypervisor. Environment variables are limited to the terminal's session. They will disappear when the terminal is closed - which is what we want given they include credentials.

📘 Cisco IOS Command Cheatsheet


🧱 1. INITIAL SETUP (Switches & Routers)

| Mode | Command | Description | Notes |
|------|---------|-------------|-------|
| Enable Mode | configure terminal | Enter global config mode | Must be in enable (#) first |
| Config Mode | hostname SW1 | Set device hostname | Shows in prompt |
| Config Mode | no ip domain-lookup | Disable DNS on typos | Speeds up error handling |
| Config Mode | service password-encryption | Encrypts all plaintext passwords | Basic security |
| Config Mode | banner motd #No Access# | Set login banner | Required for compliance |
| Privileged (Enable) Mode | clock set HH:MM:SS DD MONTH YYYY | Set the system clock | Useful for log timestamps |
| Privileged (Enable) Mode | copy running-config startup-config | Save config to NVRAM | Save after every change |

🔐 2. BASIC SECURITY CONFIGURATION

| Mode | Command | Description | Notes |
|------|---------|-------------|-------|
| Config Mode | enable secret <password> | Set encrypted enable password | Stronger than enable password |
| Config Mode → Line Console | line console 0 | Enter console line config | Use login + password inside |
| Line Config Mode | password cisco + login | Set console password and enable login | Prevents unauthorized CLI access |
| Config Mode → Line VTY | line vty 0 4 | VTY lines for SSH/Telnet | 0–4 = five concurrent sessions |
| Line Config Mode | password cisco + login | Set remote access password | Used if no local auth |
| Config Mode | username admin password cisco | Create local user account | Needed for SSH login |
| Config Mode | ip domain-name lab.local | Required for SSH key gen | Any domain works |
| Config Mode | crypto key generate rsa | Create SSH keys | Enables SSH |
| Line Config Mode | transport input ssh | Allow SSH only | Don’t allow Telnet in prod |

🌐 3. INTERFACE & IP CONFIGURATION

| Mode | Command | Description | Notes |
|------|---------|-------------|-------|
| Config Mode → Interface | interface g0/0 | Select interface | Replace with correct ID |
| Interface Mode | ip address 192.168.1.1 255.255.255.0 | Assign IP | Needed on routers |
| Interface Mode | no shutdown | Bring interface up | Always required! |
| Interface Mode | description Link to ISP | Add comment | Best practice |
| Enable Mode | show ip interface brief | Verify interface IPs and status | Useful summary |

🎛️ 4. SWITCHING & VLAN CONFIGURATION

| Mode | Command | Description | Notes |
|------|---------|-------------|-------|
| Config Mode | vlan 10 | Create VLAN | VLAN ID must be unique |
| VLAN Config Mode | name Sales | Name the VLAN | Optional but helpful |
| Config Mode → Interface | interface fa0/1 | Select access port | One host per access port |
| Interface Mode | switchport mode access | Set as access port | Required before assigning VLAN |
| Interface Mode | switchport access vlan 10 | Assign to VLAN | VLAN must exist first |
| Interface Mode | switchport mode trunk | Make interface a trunk | Use between switches |
| Interface Mode | switchport trunk allowed vlan 10,20 | Limit trunk VLANs | Reduce unnecessary traffic |
| Enable Mode | show vlan brief | Show VLANs and ports | Confirm access port assignments |
| Enable Mode | show mac address-table | MAC learning table | Useful for troubleshooting |

🛣️ 5. ROUTING CONFIGURATION

Static Routing

| Mode | Command | Description | Notes |
|------|---------|-------------|-------|
| Config Mode | ip route 10.0.0.0 255.255.255.0 192.168.1.2 | Static route | For simple environments |

OSPF

| Mode | Command | Description | Notes |
|------|---------|-------------|-------|
| Config Mode | router ospf 1 | Enable OSPF process | Pick a unique process ID |
| OSPF Config Mode | network 192.168.1.0 0.0.0.255 area 0 | Advertise a network | Wildcard mask required |

🔧 6. VERIFICATION & TROUBLESHOOTING

| Mode | Command | Description | Notes |
|------|---------|-------------|-------|
| Enable Mode | show running-config | Current active config | Always check before saving |
| Enable Mode | show startup-config | Saved config in NVRAM | After reboot, this loads |
| Enable Mode | show interfaces | Detailed interface stats | CRCs, drops, duplex info |
| Enable Mode | show ip interface brief | IPs and statuses | Excellent quick check |
| Enable Mode | show cdp neighbors | Discover adjacent Cisco devices | Helpful in topologies |
| Enable Mode | show lldp neighbors | Discover non-Cisco devices | Enable with lldp run first |
| User or Enable Mode | ping <IP> | Check reachability | Basic Layer 3 test |
| User or Enable Mode | traceroute <IP> | Trace path to host | Shows hops and delays |
| Enable Mode | show ip route | Routing table | Look for S, O, or C routes |

💽 7. FILES, SAVING, AND RESETTING

| Mode | Command | Description | Notes |
|------|---------|-------------|-------|
| Enable Mode | copy running-config startup-config | Save to NVRAM | Don’t forget this! |
| Enable Mode | erase startup-config | Wipe saved config | Use with caution |
| Enable Mode | reload | Reboot the device | May prompt to save running config |

📌 FINAL NOTES ON MODES

| Mode | Prompt | Description |
|------|--------|-------------|
| User Exec | > | Limited view-only commands |
| Privileged Exec (Enable) | # | Can view and copy configs |
| Global Config | (config)# | Where most setup is done |
| Interface Config | (config-if)# | For individual ports/interfaces |
| Line Config | (config-line)# | Console, VTY lines, etc. |
| Routing Protocol Config | (config-router)# | For OSPF, EIGRP, etc. |

IPv6



1. Global Unicast Addresses (GUAs)

  • Purpose: Publicly routable (equivalent to IPv4 public addresses).
  • Prefix: 2000::/3 (first three bits 001).
  • Structure:
    • Global Routing Prefix: typically 48 bits, assigned by your ISP.
    • Subnet ID: 16 bits for internal subnetting.
    • Interface ID: 64 bits (often derived via EUI‑64).
  • Example: 2001:db8:85a3:42::7334

2. Link‑Local Addresses

  • Purpose: Used for NDP (Neighbor Discovery), router advertisements, and on‑link communications only.
  • Standard Prefix: FE80::/10 (per RFC 4291), but in practice every link‑local is configured as FE80::/64.
  • Assignment: Auto‑generated by the host—no DHCPv6 needed.
  • Interface ID: Usually formed via EUI‑64 (from the MAC) or randomly.
  • Zone Index: When testing on hosts you append the interface (e.g. fe80::1%GigabitEthernet0/1).
  • Example:
fe80::c800:ff:feB4:3a9f

3. Unique Local Addresses (ULAs)

  • Purpose: Private‑use (similar to IPv4 RFC 1918).
  • Prefix: FC00::/7; in practice FD00::/8 (the “L” bit set to 1).
  • Layout:
    • Global ID: 40 random bits
    • Subnet ID: 16 bits
    • Interface ID: 64 bits
  • Scope: Routable within an organization but not on the public Internet.
  • Example: fd12:3456:789a:1::1

4. Multicast Addresses

  • Purpose: One‑to‑many traffic.
  • Prefix: FF00::/8.
  • Format:
|8 bits|4 flags|4 scope|112‑bit group ID|
|11111111| Flgs | Scope | Group ID       |
  • Flags: e.g. P‑bit (permanent vs. transient).
  • Scope values:
    • 1 – node‑local
    • 2 – link‑local
    • 5 – site‑local
    • 8 – organization‑wide
    • E – global
  • Well‑Known Examples:
    • ff02::1 – all‑nodes (link‑local)
    • ff02::2 – all‑routers (link‑local)
    • ff05::2 – all‑routers (site‑local)

5. Anycast Addresses

  • Purpose: Packets delivered to the “nearest” member among a group.
  • How to Create: Assign the same unicast address (GUA or ULA) on multiple devices in the same subnet.
  • Behavior: Routers automatically forward to the topologically closest instance.

6. IPv6 Notation & Abbreviation

  1. Leading zeros in each 16‑bit block can be omitted:
     2001:0db8:0000:0000:0000:0000:0000:0001 → 2001:db8:0:0:0:0:0:1
  2. One consecutive run of all‑zero blocks can be collapsed with :::
     2001:db8:0:0:0:0:0:1 → 2001:db8::1
  3. You cannot use :: more than once in a single address.


Address Configuration

IPv6 hosts can obtain addresses in several ways. On the CCNA you’ll need to understand manual (static) assignment, SLAAC, EUI‑64 interface‑ID formation, and DHCPv6 (both stateful and stateless).


A. Manual (Static) Configuration

  1. Enable IPv6 routing on the router (global config):
     Router(config)# ipv6 unicast-routing
  2. Assign an address on an interface:
     Router(config)# interface GigabitEthernet0/0
     Router(config-if)# ipv6 address 2001:db8:1:1::1/64
  3. Optional link‑local override (if you need a specific FE80:: address):
     Router(config-if)# ipv6 address FE80::1 link-local

Tip: A missing /prefix-length or the ipv6 unicast-routing command are the most common “it doesn’t work” culprits.


B. SLAAC (Stateless Address Auto Configuration)

  • How it works:
    • Host generates a link‑local address (FE80::/64) via EUI‑64 or a random IID.
    • Host sends a Router Solicitation (RS) multicast to FF02::2.
    • Router replies with a Router Advertisement (RA) (to FF02::1) containing one or more Prefix Information Options (PIOs).
    • If the RA’s M‑bit is 0 (do not use DHCPv6) and the O‑bit is 0, the host uses the advertised prefix + its interface ID to form its global address.
  • RA flags in the PIO:
    • M (Managed) bit = 1 → use DHCPv6 for address (stateful).
    • O (Other) bit = 1 → use DHCPv6 for additional info (DNS, etc.), but SLAAC for the address.

  • Verification commands on Cisco:

show ipv6 interface GigabitEthernet0/0
show ipv6 neighbors
show ipv6 route

C. EUI‑64 Interface‑ID Formation

When SLAAC uses EUI‑64, a 48‑bit MAC (e.g. 00‑0C‑29‑3E‑5B‑7C) is transformed:

  1. Split the MAC into two 24‑bit halves:
     00:0C:29 | 3E:5B:7C
  2. Insert FF:FE in the middle:
     00:0C:29:FF:FE:3E:5B:7C
  3. Invert the Universal/Local (U/L) bit (bit 7 of the first byte):
     • Original first byte 0x00 → binary 00000000
     • Invert bit 7 → binary 00000010 → 0x02
     • Result → 02:0C:29:FF:FE:3E:5B:7C

  The interface ID is that 64‑bit value, giving e.g.:
  2001:db8:1:1:20c:29ff:fe3e:5b7c/64

Note: Many modern OSes use “privacy extensions” to randomize the IID instead of EUI‑64.
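
The transformation is easy to verify in code; a small Python sketch that reproduces the example above:

```python
def eui64_iid(mac: str) -> str:
    octets = bytearray(int(b, 16) for b in mac.replace("-", ":").split(":"))
    octets[0] ^= 0x02                            # invert the U/L bit
    iid = octets[:3] + b"\xff\xfe" + octets[3:]  # insert FF:FE in the middle
    return ":".join(f"{iid[i] << 8 | iid[i+1]:x}" for i in range(0, 8, 2))

print(eui64_iid("00:0C:29:3E:5B:7C"))  # 20c:29ff:fe3e:5b7c
```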


D. DHCPv6

1. Stateful DHCPv6
  • Clients request addresses from a DHCPv6 server (M‑bit = 1).
  • DHCPv6 message flow: Solicit → Advertise → Request → Reply
  • Provides addresses and other options (DNS, domain, etc.).

2. Stateless DHCPv6
  • SLAAC builds the address (M‑bit = 0), but O‑bit = 1 in the RA signals the host to get DNS and other options from DHCPv6.
  • Message flow for option retrieval: Information‑request → Reply
3. Cisco DHCPv6 Server Example
ipv6 dhcp pool MYPOOL
  address prefix 2001:db8:1:1::/64
  dns-server 2001:db8:ffff::1
!
interface GigabitEthernet0/0
  ipv6 address FE80::1 link-local
  ipv6 nd prefix 2001:db8:1:1::/64 3600 1800
  ipv6 dhcp server MYPOOL

These cover all the CCNA‑level address configuration methods for IPv6. Next up: Neighbor Discovery Protocol (NDP) in depth (NS/NA, DAD, RS/RA).

Neighbor Discovery Protocol (NDP)

NDP replaces ARP, ICMP router redirects, and more from IPv4. On the CCNA exam, you’ll need to know the core packet types, their purposes, and key behaviors.


A. NDP Packet Types

| Message | ICMPv6 Type | Purpose |
|---------|-------------|---------|
| RS | 133 | Router Solicitation – host asks for RAs |
| RA | 134 | Router Advertisement – router advertises prefix & flags |
| NS | 135 | Neighbor Solicitation – like an ARP request |
| NA | 136 | Neighbor Advertisement – like an ARP reply |
| Redirect | 137 | Redirect a host to a better next hop |

B. Router Solicitation (RS) & Advertisement (RA)

  • RS (Type 133)
    • Sent by hosts to FF02::2 (all‑routers multicast) at boot or when an interface comes up.
    • Hop Limit = 255 (ensures an on‑link source).
    • No payload other than the NDP header.
  • RA (Type 134)
    • Sent by routers periodically (~200 sec default) or in response to an RS.
    • Destination: unicast to the soliciting host, or FF02::1 (all‑nodes) if periodic.
    • Key fields in the Prefix Information Option (PIO):
      • Prefix (64 bits) and prefix length.
      • M‑bit (Managed) → DHCPv6 for address.
      • O‑bit (Other) → DHCPv6 for other info.
      • Valid Lifetime, Preferred Lifetime for SLAAC.

Verification Commands

show ipv6 interface GigabitEthernet0/0
  # shows current RAs received, flags, lifetimes
show ipv6 route
  # prefixes learned via RAs (marked 'R')

C. Neighbor Solicitation (NS) & Advertisement (NA)

  • Solicited‑Node Multicast
    • Each IPv6 address has a solicited‑node group:
      FF02:0:0:0:0:1:FFXX:XXXX
      where XX:XXXX = the last 24 bits of the IPv6 address.
  • NS (Type 135)
    • Used for:
      • Address resolution (like ARP): host asks “Who has X? Tell me.”
      • Duplicate Address Detection (DAD): host probes its own tentative address.
    • Sent to the solicited‑node multicast of the target.
    • Fields:
      • Target Address = the IPv6 address being resolved or probed.
      • Source Link‑Layer Address option (when not DAD) carries the sender’s MAC.
  • NA (Type 136)
    • Response to an NS for address resolution, or sent unsolicited with the Override flag to update caches.
    • Fields:
      • Target Address = the address being announced.
      • Target Link‑Layer Address option with the responder’s MAC.
    • Flags:
      • Solicited (S) = 1 when replying to an NS.
      • Override (O) = 1 to overwrite stale cache entries.

Verification Commands

show ipv6 neighbors
  # neighbor table with Link‑Layer addresses and state

D. Duplicate Address Detection (DAD)

  • Purpose: Ensure the uniqueness of an address before binding it.
  • Mechanism:
    • Host assigns the tentative address (IID = EUI‑64 or random).
    • Sends an NS with Source Address = :: and Target Address = the tentative address.
    • Waits for NA replies.
      • No reply within the DAD timeout → the address is unique; assign it.
      • If an NA is received → collision detected → the address is not assigned (the interface reports a duplicate).
  • Exam Tip: DAD uses an NS packet; look for src=:: dst=solicited-node-multicast(target).

E. Key Takeaways

  1. Multicast addresses for NDP:
     • RS → FF02::2
     • RA → FF02::1 or unicast
     • NS → solicited‑node multicast FF02::1:FFxx:xxxx
     • NA → unicast or multicast to FF02::1
  2. Hop Limit = 255 for all NDP messages, to verify the source is on‑link.
  3. RA flags: M‑bit, O‑bit, Valid/Preferred lifetimes.
  4. NS/NA flags: S (Solicited), O (Override).
  5. DAD = NS with src=::, target = tentative address.
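
The solicited‑node mapping in takeaway 1 is also easy to compute; a sketch using only the standard library:

```python
import ipaddress

def solicited_node(addr: str) -> str:
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF  # last 24 bits
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(base | low24))

print(solicited_node("2001:db8:1:1:20c:29ff:fe3e:5b7c"))  # ff02::1:ff3e:5b7c
```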


NDP vs ARP
flowchart TD
  subgraph ARP["IPv4 ARP"]
    A1["Host A: knows IPv4 of Host B<br>wants MAC"] --> A2["Broadcast ARP Request<br>Who has IP B? Tell A"]
    A2 --> A3["All hosts on LAN receive request"]
    A3 -- If IP matches B --> A4["Host B unicasts ARP Reply to A<br>MAC = B’s MAC"]
    A4 --> A5["Host A updates ARP cache<br>sends frame to MAC"]
  end

  subgraph NDP["IPv6 NDP"]
    B1["Host A: knows IPv6 of Host B<br>wants L2 address"] --> B2["Multicast NS to solicited-node<br>Who has IPv6 B? Tell A"]
    B2 --> B3["All hosts listen on solicited-node group"]
    B3 -- If IPv6 matches B --> B4["Host B unicasts NA to A<br>L2 = B’s MAC<br>flags S=1, O=1"]
    B4 --> B5["Host A updates neighbor cache<br>sends frame to MAC"]
  end

  style ARP fill:#000000,stroke:#ffffff,stroke-width:2px
  style NDP fill:#000000,stroke:#dddddd,stroke-width:2px

IPv6 Routing

1. Enabling IPv6 Routing

On Cisco routers, IPv6 routing is off by default. Before any IPv6 routes will work, you must enable it globally:

Router(config)# ipv6 unicast-routing

Without this, static routes and dynamic protocols will be ignored.


2. Static Routing with ipv6 route

A. Point‑to‑Point Static Route
Router(config)# ipv6 route 2001:DB8:1:0::/64 2001:DB8:2:0::2
  • Destination prefix: 2001:DB8:1:0::/64
  • Next‑hop (must be reachable link‑local or global): here 2001:DB8:2:0::2
B. Using a Link‑Local Next‑Hop
Router(config)# ipv6 route 2001:DB8:3:0::/64 FE80::2 GigabitEthernet0/1
  • If you specify a link‑local (FE80::2), you must include the outgoing interface.
C. Default Route
Router(config)# ipv6 route ::/0 2001:DB8:2:0::2
  • ::/0 matches all destinations not in the routing table.
D. Administrative Distance
  • Static: 1 by default (set a higher value with the optional distance argument, e.g., for a floating static)
  • Learned via OSPFv3: 110

3. Understanding the IPv6 Routing Table (show ipv6 route)

Example output snippet:

IPv6 Route Table - 5 entries
Codes: C - Connected, L - Local, S - Static, R - RIP, B - BGP
       O - OSPFv3, IA - OSPFv3 Inter-area, E1/E2 - OSPFv3 External

O   2001:DB8:10:0::/64 [110/20]
     via FE80::1, GigabitEthernet0/0
C   2001:DB8:20:0::/64 [0/0]
     via GigabitEthernet0/1
S   ::/0 [1/0]
     via FE80::2, GigabitEthernet0/2
L   FE80::1/128 [0/0] via GigabitEthernet0/0
  • Codes tell you how the route was learned.
  • Metric is in brackets [AD/Metric].
  • Next‑hop may be link‑local (FE80::) or global.

Key codes to know:

  • C = directly Connected
  • L = Local address of router interface
  • S = Static route
  • O = OSPFv3 intra‑area
  • IA = OSPFv3 inter‑area
  • E1/E2 = OSPFv3 external types

4. OSPFv3 Fundamentals

IPv6’s version of OSPF has a few differences from OSPFv2:

A. Enabling OSPFv3
Router(config)# ipv6 router ospf 1
Router(config‑rtr)# router-id 1.1.1.1
  • Process ID (1) is locally significant.
  • Router ID must be set manually (32‑bit IPv4 format).
B. Enabling on Interfaces (no network statements)
Router(config)# interface GigabitEthernet0/0
Router(config‑if)# ipv6 ospf 1 area 0
  • OSPFv3 is enabled per interface, not via broad network statements.
C. Link‑Local Next‑Hop & Neighbors
  • OSPFv3 uses IPv6 link‑local addresses for adjacency and next‑hop resolution.
  • Verify adjacencies:
show ipv6 ospf neighbor
show ipv6 ospf interface
D. Area Types & LSAs
  • LSA Types are similar: Router LSAs, Network LSAs, Summary LSAs, External LSAs.
  • Be aware of stub areas, totally stubby, and NSSA (exam typically only mentions “stub”).

5. CCNA‑Level Verification Commands

  • Global status: show ipv6 protocols
  • Routing table: show ipv6 route
  • OSPFv3 neighbors: show ipv6 ospf neighbor
  • OSPFv3 interface details: show ipv6 ospf interface GigabitEthernet0/0
  • Static‑route troubleshooting:
    ping ipv6 2001:DB8:1:0::1
    traceroute ipv6 2001:DB8:1:0::1

Know the Administrative Distance for Common IPv6 Routes

| Route Source | Administrative Distance |
|--------------|------------------------|
| Connected interface | 0 |
| Static route | 1 |
| eBGP | 20 |
| Internal EIGRP | 90 |
| IGRP | 100 |
| OSPFv3 | 110 |
| IS‑IS | 115 |
| RIPng | 120 |
| External EIGRP | 170 |
| Unknown/unusable routes | 255 |

Tip:

  • When two routes to the same prefix exist, the router picks the one with the lowest AD.
  • You do not calculate these values—they’re just memorized defaults.
  • Occasionally you may see or configure a “floating” static route by setting a higher AD, but you still pick from known values.

ICMPv6: Informational & Error Messages

ICMPv6 serves two primary roles: Neighbor Discovery (covered in NDP) and error/reporting for IPv6 packet delivery. On the CCNA you’ll be expected to recognize common message types and their purposes.


A. NDP Message Types (ICMPv6 Types)

| Type | Code | Name | Purpose |
|------|------|------|---------|
| 133 | 0 | Router Solicitation (RS) | Host → all‑routers multicast to solicit RAs |
| 134 | 0 | Router Advertisement (RA) | Router → hosts (or unicast) to advertise prefixes & flags |
| 135 | 0 | Neighbor Solicitation (NS) | Address resolution & Duplicate Address Detection |
| 136 | 0 | Neighbor Advertisement (NA) | Reply to NS; unsolicited updates |
| 137 | 0 | Redirect | Informs host of a better first‑hop next hop |

Note: The Code field for all NDP messages is always zero.


B. ICMPv6 Error Messages

| Type | Code | Name | Description |
|------|------|------|-------------|
| 1 | 0 | Destination Unreachable – No Route | No route to destination |
| 1 | 1 | Destination Unreachable – Admin Prohibit | Administratively prohibited (e.g., ACL) |
| 1 | 3 | Destination Unreachable – Addr Unreachable | Address unreachable at next hop |
| 1 | 4 | Destination Unreachable – Port Unreachable | Port unreachable at destination |
| 2 | 0 | Packet Too Big | Packet larger than MTU; carries the next‑hop MTU in the “MTU” field |
| 3 | 0 | Time Exceeded – Hop Limit Exceeded | Hop‑limit reached zero |
| 3 | 1 | Time Exceeded – Fragment Reassembly Time Exceeded | Fragment reassembly timer expired |
| 4 | 0 | Parameter Problem – Erroneous Header Field | Problem with the IPv6 header |
| 4 | 1 | Parameter Problem – Unrecognized Next Header | Next Header type unknown |
| 4 | 2 | Parameter Problem – Unrecognized IPv6 Option | Option in the header not understood |

Behavior:

  • Error messages are sent to the IPv6 source address, never a multicast address.
  • As much of the invoking packet as fits without exceeding the minimum IPv6 MTU (1280 bytes) is embedded in the ICMPv6 message so the sender can correlate the error.

C. ICMPv6 Informational Messages

| Type | Code | Name | Purpose |
|------|------|------|---------|
| 128 | 0 | Echo Request | “Ping” to test reachability |
| 129 | 0 | Echo Reply | Response to Echo Request |

Exam Tip: IPv6 uses ICMPv6 exclusively for ping and traceroute (no separate “ping6” command on Cisco routers; it’s simply ping ipv6 …).


D. CCNA‑Level Takeaways

  1. NDP is implemented via ICMPv6 types 133–137 (all Code = 0).
  2. Error messages use Types 1–4 with multiple Codes; remember “1 = Dest Unreachable,” “2 = Too Big,” “3 = Time Exceeded,” “4 = Parameter Problem.”
  3. Echo Request/Reply are Types 128/129.
  4. Error ICMPv6 messages always return to the unicast source of the offending packet.
  5. Router & link MTU discovery: Packet Too Big (Type 2) drives Path MTU Discovery.



IPv6 ACLs: Basics & Syntax

IPv6 ACLs function similarly to IPv4 ACLs but use the ipv6 access-list command and support IPv6‑specific features (e.g., prefix lists).

1. Defining a Named IPv6 ACL

Router(config)# ipv6 access-list MY_IPV6_ACL
  • Named (versus numbered) is the CCNA norm.
  • Once created, you add entries beneath this mode:
Router(config‑ipv6-acl)# permit tcp 2001:DB8:1:0::/64 any eq 80
Router(config‑ipv6-acl)# deny icmp any any nd-na
Router(config‑ipv6-acl)# permit ipv6 any any

2. Entry Format

[action] [protocol] [source-prefix/length] [destination-prefix/length] [operator [port]]
  • action: permit or deny
  • protocol: ipv6 (all), tcp, udp, or icmp (which means ICMPv6 in this context), optionally followed by a specific ICMPv6 type keyword (e.g., icmp any any nd-ns, icmp any any packet-too-big)
  • source/destination: IPv6 prefix and prefix length (no wildcard masks)
  • ports/operators: eq, gt, lt, range for TCP/UDP; not supported for plain ipv6
Example Entries
! Permit SSH from anywhere to the subnet
permit tcp any 2001:DB8:2:0::/64 eq 22

! Deny all ICMPv6 Neighbor Solicitations (ND‑NS)
deny icmp any any nd-ns

! Permit only HTTPS to a host
permit tcp any host 2001:DB8:3:0::5 eq 443

! Deny everything else (every IPv6 ACL also ends with an implicit
! "deny ipv6 any any", so this line just makes it explicit)
deny ipv6 any any

3. Applying an IPv6 ACL

ACLs are applied per interface and per direction:

Router(config)# interface GigabitEthernet0/1
Router(config‑if)# ipv6 traffic-filter MY_IPV6_ACL in
Router(config‑if)# ipv6 traffic-filter MY_IPV6_ACL out
  • in filters packets entering the interface.
  • out filters packets leaving the interface.

Permit/Deny Semantics

  1. Top‑down processing: the first matching entry is used; no further entries are checked.
  2. Implicit deny: at the end of every ACL there is an invisible deny ipv6 any any (IOS also implicitly permits ND NS/NA just before it, so neighbor discovery keeps working unless you explicitly deny it).
  3. Only permit statements? The implicit deny still applies: unmatched traffic is dropped.
  4. To allow all other IPv6 traffic, you must explicitly add permit ipv6 any any before the implicit deny.

Prefix Lists

IPv6 prefix lists let you match on prefixes without worrying about individual entries for every subnet size.

1. Defining a Prefix List
Router(config)# ipv6 prefix-list PL_FILTER seq 5 permit 2001:DB8:0:0::/64 le 128
Router(config)# ipv6 prefix-list PL_FILTER seq 10 deny 2001:DB8:0:1::/64
  • seq: sequence number for ordering
  • permit/deny: action
  • prefix: network prefix (with no ge/le, only that exact prefix length matches)
  • ge/le: minimum/maximum prefix length to match
    • le 128 matches the /64 itself plus any longer (more specific) prefix up to /128
    • ge 48 matches any prefix of length 48 or greater (i.e., /48 and more specific)
2. Using Prefix Lists in ACLs or Routing
  • In ACLs (with the ipv6 access-list syntax):
    Router(config‑ipv6-acl)# permit ipv6 any any prefix-list PL_FILTER
  • In routing protocols (e.g., BGP):
    Router(config‑bgp)# neighbor X.X.X.X prefix-list PL_FILTER in

Tips

  • Remember no wildcard masks—you always specify prefix/length.
  • Learn the common ICMPv6 types (e.g., nd-ns, nd-na, echo-request, packet-too-big).
  • Don’t forget to apply ACLs on the correct interface and direction.
  • Always include an explicit permit if you need to allow “all other” IPv6 traffic.
  • Understand how prefix lists simplify filtering variable‑length subnets.

Multicast in IPv6

IPv6 uses multicast far more extensively than IPv4. Rather than broadcasts, IPv6 relies on multicast for discovery and many control-plane functions.

1. Well‑Known Multicast Addresses

IPv6 multicast addresses all begin with FF00::/8. The next 4 bits are flags, followed by a 4‑bit scope, then a 112‑bit group ID.

| Address | Scope | Description |
|---------|-------|-------------|
| FF02::1 | link‑local | All nodes on the local link (equivalent to “all hosts”) |
| FF02::2 | link‑local | All routers on the local link |
| FF05::2 | site‑local | All routers within the site |
| FF02::D | link‑local | All PIM routers |
| FF02::16 | link‑local | All MLDv2‑capable routers (MLDv2 reports are sent here) |
| FF02::1:FFXX:XXXX | link‑local | Solicited‑node multicast (for NDP); last 24 bits = address’s last 24 bits |

Key points:

  • Scope values (hex):
    • 1 = node‑local
    • 2 = link‑local
    • 5 = site‑local
    • 8 = organization‑wide
    • E = global
  • Flags (4 bits): e.g., the P‑bit indicates a permanent vs. transient group.

Multicast Listener Discovery (MLD)

MLD is the IPv6 equivalent of IGMP. It lets routers know which multicast groups are active on which links so they can forward multicast traffic appropriately.

A. MLD Versions

  • MLDv1 (RFC 2710)
    • Uses Query and Report messages.
    • Hosts send a Report when they want to join a group.
  • MLDv2 (RFC 3810)
    • Adds source‑specific joins (like IGMPv3).
    • Supports Include and Exclude source lists for finer control.

B. MLD Message Types (ICMPv6)

| Type | Code | Name | Purpose |
|------|------|------|---------|
| 130 | 0 | Multicast Listener Query | Router → all‑nodes to solicit reports |
| 131 | 0 | Multicast Listener Report (v1) | Host → router: “I want to receive group X” |
| 132 | 0 | Multicast Listener Done (v1) | Host → router: “I’m leaving group X” |
| 143 | 0 | Multicast Listener Report (v2) | Host → router: includes source filters |

C. MLD Operation

  1. Query Phase
     • Routers periodically send a General Query to FF02::1 (all‑nodes).
     • They may send Multicast‑Address‑Specific Queries to a group address.
  2. Report Phase
     • Hosts respond with Report messages to the group’s multicast address.
     • In MLDv2, a Report can include source‑specific filters.
  3. Timer Management
     • Routers maintain a timer per link to know when all listeners have left.
     • If no more Reports arrive for a group before the timer expires, the router stops forwarding that group to the link.

D. CCNA‑Level Takeaways

  • Well‑known addresses: memorize FF02::1, FF02::2, solicited‑node FF02::1:FFxx:xxxx.
  • MLD vs. IGMP: MLD is ICMPv6‑based (types 130–143) instead of IGMP.
  • MLDv2 adds source filtering—know the concept but not deep syntax.
  • Verification on Cisco routers:
show ipv6 mld groups
show ipv6 mld interface GigabitEthernet0/0

Transition Mechanisms (High‑Level)

On the CCNA you won’t be configuring these in depth, but you should understand their purposes, basic operation, and trade‑offs.


1. Dual‑Stack

  • Definition: Devices and networks run IPv4 and IPv6 simultaneously.
  • How it works:
    • Hosts have both an IPv4 A‑record and an IPv6 AAAA‑record in DNS.
    • Applications choose which to use based on the DNS response (“Happy Eyeballs” algorithm).
  • Pros:
    • Simplest increment‑by‑increment migration.
    • No encapsulation overhead.
  • Cons:
    • You must maintain two parallel protocol stacks (ACLs, routing, security).
    • Potential for inconsistent policy between IPv4 and IPv6.

2. 6to4 Tunneling

  • Purpose: Automatic, “configured‑on‑the‑fly” IPv6 connectivity over the IPv4 Internet.
  • Addressing:
    • 6to4 prefix = 2002::/16.
    • An end‑host or router with global IPv4 address W.X.Y.Z derives its 6to4 prefix by writing the IPv4 address in hex after 2002:, i.e., 2002:WWXX:YYZZ::/48.
    • e.g., IPv4 192.0.2.4 → prefix 2002:c000:0204::/48.
  • Encapsulation: IPv6 packets are wrapped in IPv4 protocol 41 and sent to a 6to4 relay.
  • Configuration Example (Cisco IOS):

interface Tunnel0
  ipv6 address 2002:c000:0204::1/64
  tunnel source 192.0.2.4
  tunnel mode ipv6ip 6to4

  • Pros/Cons:
    • + Auto‑configured, minimal manual config.
    • – Relies on public relays; can be unreliable and has MTU issues.
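
The prefix derivation is mechanical; a sketch with the standard library:

```python
import ipaddress

def sixto4_prefix(ipv4: str) -> str:
    v4 = int(ipaddress.IPv4Address(ipv4))
    # 2002 in the top 16 bits, the IPv4 address in the next 32 bits
    return str(ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48)))

print(sixto4_prefix("192.0.2.4"))  # 2002:c000:204::/48
```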

3. ISATAP (Intra‑Site Automatic Tunnel Addressing Protocol)

  • Purpose: Connect IPv6 islands across an IPv4 intranet.
  • Addressing:

  • ISATAP hosts derive an interface ID of the form:

    0000:5EFE:W.X.Y.Z
    

    where W.X.Y.Z is the IPv4 address. * Combined with a site’s IPv6 prefix (e.g., 2001:db8:acad::/64), the host’s IPv6 address becomes:

    2001:db8:acad::5EFE:c000:0204
    
    * Encapsulation:

  • Uses IPv4 unicast (protocol 41) between ISATAP routers/hosts.

  • Configuration Example (Cisco IOS):

interface Tunnel1
  ipv6 address 2001:db8:acad::1/64
  tunnel source GigabitEthernet0/0
  tunnel mode isatap
  • Pros/Cons:

  • + Works over existing IPv4 infrastructure internally.
  • − Not suitable over the public Internet; limited to site‑to‑site.
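
The matching ISATAP derivation, as a minimal Python sketch (again, the helper name is mine):

import ipaddress

def isatap_address(prefix: str, ipv4: str) -> ipaddress.IPv6Address:
    # Interface ID = 0000:5EFE followed by the IPv4 address in hex,
    # OR'd into the site's /64 prefix.
    iid = (0x00005EFE << 32) | int(ipaddress.IPv4Address(ipv4))
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | iid)

print(isatap_address("2001:db8:acad::/64", "192.0.2.4"))
# -> 2001:db8:acad::5efe:c000:204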

Key Takeaways

  1. Dual‑stack is preferred for long‑term; you run both stacks side by side.
  2. 6to4 uses a built‑in 2002::/16 prefix mapped from your IPv4 address and public relays.
  3. ISATAP embeds an IPv4 address in the low 32 bits of the IPv6 interface‑ID for site tunnels.
  4. Both tunneling methods encapsulate IPv6 inside IPv4 (protocol 41) and can suffer from MTU/traceroute issues.

SEC530 Notes

flowchart TD
    A[Client Request] --> B[Web Proxy Intercepts]
    B --> C[ICAP System: Request Filtering & Policy Enforcement]
    C --> D[ICAP System Decision: Block or Forward]
    D -->|Forward| E[Web Proxy Requests from Internet]
    E --> F[Internet Responds]
    F --> G[Web Proxy Sends Response to Client]
    G --> H[Client Receives Final Response]
    D -->|Block| Z[Request Blocked by ICAP]

graph LR
    A[Client Request] -->|Intercepted by Web Proxy| B[ICAP System]
    B -->|Request Filtering| C[Policy Enforcement]
    C -->|Decision Made| D[Web Proxy Response]
    D -->|Forward to Internet or Block| E[Internet Request]
    E -->|Content Retrieval| F[Internet Response]
    F -->|Received by Web Proxy| G[Web Proxy Response]
    G -->|Sent Back to Client| H[Client Response]
This diagram shows the flow of an ICAP system:

  1. The client makes a request (A).
  2. The web proxy intercepts the request and sends it to the ICAP system (B).
  3. The ICAP system applies request filtering and policy enforcement (C) based on the request metadata.
  4. A decision is made by the ICAP system, which may block or forward the request (D).
  5. If the request is forwarded, the web proxy makes a new request to the internet (E).
  6. The internet responds with content (F).
  7. The web proxy receives the response and sends it back to the client (G).
  8. The client receives the final response from the web proxy (H).

Defensible Architecture: Starting with Navigator and DeTT&CT

Intro

Documentation space for docker containers and other tools utilized in building defensible architecture, with the goal of moving toward a least- to zero-trust environment. The level of detail captured will be bare initially and likely built out over time.

Starting Point

Identify Crown Jewels

The first step is to have (or gain) an understanding of both your company's crown jewels and the attackers who would covet said jewels. Once you have identified what makes your company excel and what type of attack would destroy the company, you'll have identified what it is you need to protect. So if you're in the retail biz, perhaps your crown jewels are in a secret sauce (intellectual property) that makes it bomb-diggity. How well-protected is that recipe? What types of tactics, techniques, or procedures do attackers of I.P. tend to use?

Identify Adversaries

Using MITRE ATT&CK's search function and typing in 'intellectual property' gives an idea of what we are up against.
In this example, Cinnamon Tempest and FIN13 look to be threats to our business.

MITRE ATT&CK Navigator

The following command spins up a docker container for the MITRE ATT&CK Navigator tool. It will allow us to run our research locally on our host machine from port 4200.

docker run -p 4200:4200 --rm --name navigator aboutsecurity/attack_navigator:latest
Once the container is running, browsing to localhost:4200 will bring up the Navigator tool.

Create new layer

What is the significance of each version? If you switch versions, what are the considerations? Do you have to use the same version for Navigator as DeTT&CT? What would be the advantages/disadvantages?
domain: Enterprise → Create layer from version

Now we can add the TTP information about Cinnamon Tempest to this layer. Since we'll be housing APT-specific details in this layer, renaming it can be useful. Do this by clicking the "layer information" icon, then double-clicking the default name of "layer." If you click the magnifying glass, a search bar opens on the right where you can search for the threat group by name and then click "select." After that, click the magnifying glass again to close the search. To harness the power of the tool and incorporate multiple layers, we will want to add scoring to this layer. Generally, a score of 1 might be appropriate.

Additional layers can be added for other APTs or threat groups (or really any other TTP-related data that you want integrated at times, but isolated at others). After adding additional layers, you can combine them into one layer using the option "Create layer from other layers."

Summary

  1. Create a layer.
  2. Name it.
  3. Select the threat group's info you want.
  4. Add scoring.
  5. Repeat the steps above for any additional APTs.
  6. Add a layer that combines the previous layers and adjusts the coloring of techniques based on the scores for each technique (if more than one APT employs the same technique, it will have a higher score).

DeTT&CT

Get the latest docker image for DeTT&CT

docker pull rabobankcdc/dettect:latest
Run the container, open a bash shell, and set up specific path connections between the host machine and the container.
docker run --rm -p 8080:8080 -v host/machine/path/to/output:/opt/DeTTECT/output -v host/machine/path/to/input:/opt/DeTTECT/input -v host/machine/path/to/threat-actor-data:/opt/DeTTECT/threat-actor-data --name dettect -it rabobankcdc/dettect:latest /bin/bash 
Without transferring any files:
docker run --rm -p 8080:8080 --name dettect -it rabobankcdc/dettect:latest /bin/bash
Once in the container, you can spin up the web server to run the DeTT&CT tool:
python dettect.py editor &
Now you can use localhost:8080 to access the tool in your browser. This is where you'll use the "Data Sources" section to upload the yaml files you transferred from your host machine.

This yaml file can also be used (as-is or modified for enhanced quality) to create a json file.

python dettect.py ds -fd input/data-sources-traditional.yaml -l --local-stix-path input/cti-att-ck-v.latest -of data-sources-traditional.json
Running this command results in the json file being written to the host's output folder.

Using DeTT&CT output for Navigator

Now you can pull up a new tab of the localhost:4200 Navigator tool and upload the json file as a layer using the "Open Existing Layer" option. This output gives insight into which techniques the company currently has visibility into.

If we add the threat groups to this layer, it will show where we have gaps in visibility, as it only colorizes the TTPs where visibility is present.

Heatmaps can also be created.

iptables Cheatsheet

Intro

Basic Defense

Default Outbound DENY

In this scenario, the goal is to ALLOW outbound traffic over ports 443 and 53, but DROP traffic on a specified port and implement relevant logging.

sudo iptables -N LOGGING-OUTBOUND # Create the chain - this one is for outbound traffic and its logging
sudo iptables -A OUTPUT -j LOGGING-OUTBOUND # Append a jump to it from the OUTPUT chain
sudo iptables -A OUTPUT -d 10.10.10.0/24 -p udp --dport 53 -j ACCEPT  # Allow 53/udp traffic to the 10.10.10.0/24 network
sudo iptables -A OUTPUT -d 10.10.10.0/24 -p tcp --dport 443 -j ACCEPT  # Allow 443/tcp traffic to the 10.10.10.0/24 network
sudo iptables -A OUTPUT -d 10.10.10.0/24 -p tcp --dport 12345 -j DROP  # Drop 12345/tcp traffic to the 10.10.10.0/24 network
sudo iptables -A LOGGING-OUTBOUND -d 10.10.10.0/24 -p tcp --dport 12345 -m limit --limit 2/min --limit-burst 5 -j LOG --log-prefix "EGRESS-HIGH: " --log-level 4  # Log matching traffic with the prefix "EGRESS-HIGH: " and make it a WARNING by assigning --log-level 4
Putting this on the OUTPUT chain rather than INPUT means it is source-agnostic: it shouldn't matter from which box the traffic originates.

Basic Offense

Firewall getting in the way?

I recently had a CTF challenge where we were supposed to determine what port and service were running on a hostname that was not giving up much info. Nmap scans of the associated IP did not reveal much, and the host would not respond to pings, so we checked iptables -L -v, which indicated that we basically could not communicate with much of anything. The solution was to add some lines that would give us full access (run as root):

iptables -L -v # lists the current entries - results indicate that we can't go anywhere
iptables -P INPUT ACCEPT # give me access to incoming
iptables -P OUTPUT ACCEPT # give me access to outgoing
iptables -P FORWARD ACCEPT # give me forwarding access
iptables -F # flush
This provided full communication access to the host we were trying to reach.

CRC32

CRC32 Primer in Digital Forensics

Cyclic Redundancy Check (CRC32) is a checksum algorithm used to detect errors in data and verify data integrity. It's commonly applied to files to check whether their contents have been altered or corrupted, especially in digital forensics and data recovery.

What is CRC32?

CRC32 is a type of hash function that takes an input (data) and produces a 32-bit output. It is widely used to detect accidental changes in raw data during storage or transmission. Although CRC32 isn’t cryptographically secure, its efficiency in quickly verifying data integrity has made it popular in various contexts, including file formats like PNG.
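
For instance, Python's standard-library zlib exposes CRC32 as a one-liner; a quick sketch:

import zlib

data = b"hello world"
# Mask to 32 bits for a consistent unsigned value across Python versions.
print(f"{zlib.crc32(data) & 0xFFFFFFFF:08x}")  # 0d4a1185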

CRC32 in Digital Forensics

In digital forensics, CRC32 can be a useful tool for verifying the authenticity of files, checking for tampering, and tracking changes. When analyzing a file like a PNG image, the CRC32 checksum helps forensic investigators confirm that the file has not been modified since its creation or since its checksum was last calculated.

Key Use Cases of CRC32 in Forensics:
  1. File Integrity Verification: CRC32 is used to ensure that the file has not been altered. This is particularly important when investigators need to confirm that a file is the original or hasn't been tampered with.

  2. Detecting File Corruption: If files have been corrupted due to storage failure, transmission issues, or other unforeseen problems, CRC32 can help detect discrepancies. If the checksum doesn’t match the expected value, it can signal data corruption, a key point in forensic investigations when recovering or verifying evidence.

  3. Tracking Modifications: When files are modified, especially by malware or external agents, CRC32 can be used to track when and if these changes occurred. By storing original checksums, investigators can compare current CRC32 values with the original ones to see if alterations have been made.

  4. File Deduplication & Versioning: CRC32 checksums can help identify different versions of the same file. Forensic investigators might use CRC32 checksums to verify file duplicates and establish when specific changes occurred in different versions of a file.

  5. Cross-Verification in File Systems: Forensic investigators may use CRC32 to verify file integrity in a file system, particularly when there are concerns that the file has been altered or when a file needs to be cross-checked across different storage mediums.

  6. Validation of File Metadata: In some cases, CRC32 values are stored as part of the file metadata (e.g., PNG headers). By extracting and comparing this value, forensic experts can confirm whether the file’s metadata has been altered or tampered with.


How CRC32 Works in PNG Files

PNG files (Portable Network Graphics) use CRC32 as part of their structure to validate data integrity. The structure of a PNG file includes:

  • Header: Contains the signature identifying it as a PNG file.
  • Chunks: PNG files are divided into chunks, each containing specific data (e.g., image data, metadata, or control information).
  • CRC32 Checksum: Each chunk in a PNG file has a CRC32 checksum that is used to verify the integrity of that chunk’s data.

Each chunk is laid out as follows:

  • Length: Specifies the length of the data field.
  • Type: Specifies the type of the chunk (e.g., IHDR for the header, IDAT for image data).
  • Data: The actual data of the chunk.
  • CRC32: The checksum that validates the chunk, computed over the type and data fields (the length field is not included).

When a PNG file is being processed, the CRC32 checksum of each chunk is calculated and stored. If any part of the file is altered, the CRC32 checksum will no longer match the expected value, indicating a potential issue such as corruption or tampering.
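
To make that concrete, here's a minimal Python sketch that walks a PNG's chunks and recomputes each CRC32 (the filename is a placeholder and error handling is omitted):

import struct
import zlib

def verify_png_chunks(path: str) -> None:
    # Walk the PNG chunk by chunk, recomputing each chunk's CRC32.
    with open(path, "rb") as f:
        if f.read(8) != b"\x89PNG\r\n\x1a\n":
            raise ValueError("not a PNG file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            length, ctype = struct.unpack(">I4s", header)
            data = f.read(length)
            (stored,) = struct.unpack(">I", f.read(4))
            # Per the PNG spec, the CRC covers the type and data fields.
            calc = zlib.crc32(ctype + data) & 0xFFFFFFFF
            status = "OK" if calc == stored else "MISMATCH"
            print(f"{ctype.decode('ascii', 'replace'):<4} {status}")
            if ctype == b"IEND":
                break

verify_png_chunks("evidence.png")  # hypothetical file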


Practical Example in Digital Forensics

Imagine a forensic investigator analyzing a PNG file that might be part of an evidence collection. The investigator may:

  1. Extract the CRC32 Checksums: The investigator would extract the checksum from the PNG’s chunks and the overall file, comparing it against known good values (either from the original file or from hash databases).

  2. Compare Checksum Integrity: If the checksums match, the file is considered intact. If there’s a discrepancy, it could indicate the file has been tampered with, either due to intentional modification (e.g., editing the image or embedding hidden data) or unintentional corruption.

  3. Use CRC32 for Data Recovery: In case of file corruption (e.g., a file that was improperly transferred or had parts of its data overwritten), CRC32 checksums help in identifying and potentially recovering intact chunks or segments of data.


Limitations of CRC32
  • Non-Cryptographic: CRC32 is not designed for cryptographic security and can be easily manipulated by attackers. In digital forensics, while it’s useful for detecting accidental changes, it's not sufficient for detecting deliberate tampering or ensuring file authenticity in a security-sensitive context.

  • Collision Vulnerability: CRC32 has a higher chance of collision (i.e., two different inputs producing the same checksum) than more robust hashing algorithms like SHA256, making it less suitable for security-critical applications.

For more robust file verification, digital forensics often combines CRC32 with other cryptographic hash functions (such as MD5, SHA1, or SHA256), which are more resistant to intentional modification.


Memory Dump Files

The first time I was given a memory dump file to look at, it was a bit of a slog figuring out how to use Volatility to look at the data. But I did! And I documented it. And when I was faced with another memory dump file, you'd better believe I went back to my notes! It was quite the rude awakening to realize that none of my Volatility commands worked! After some troubleshooting, I realized the first one I'd looked at was a .dmp file and the second was a .mem file. With this newfound understanding of dumps from different OSs, I added to my documentation and felt confident that I'd be prepared the next time. I was wrong. There are many nuances to the different types of RAM dump files and the tools that are best for each. In this post, I wanted to capture some of those nuances of file types and tools, along with some considerations regarding the OS types/versions that impact the functionality of Volatility.

File Extensions and Context Clues

Common File Extensions

  • .core: A user-space core dump of a process. This is commonly generated when a program crashes. It can be analyzed using debuggers like gdb or lldb, or for more comprehensive memory analysis, tools like Volatility can be used.

  • .dmp: A general memory dump extension used by various operating systems (commonly Windows). It can represent full or partial memory dumps. Windows memory dumps are typically .dmp files, and they can be opened using tools like WinDbg, Volatility, or other Windows debugging tools.

  • .vmcore: A Linux kernel crash dump. This type of file is generated during a kernel panic or crash. It contains a snapshot of the kernel’s memory at the time of the crash. You would typically use crash or Volatility for analysis.

  • .lime: A memory dump generated by the LiME (Linux Memory Extractor) tool. This is a raw memory dump and can be used for forensic analysis using tools like Volatility or LiME’s own tools.

  • .mem: This is a general memory dump file extension used by various forensic and debugging tools. It is commonly used in both Linux and macOS for full or partial memory dumps. For instance, Volatility can analyze .mem files by specifying the right profile, and for macOS, you might encounter .mem files that are the result of live memory capture or a memory image.

  • .vmss: A VMware suspended-state file, generated when a virtual machine is suspended. It contains the in-memory state of the virtual machine at that point in time and can be analyzed using tools that support VMware memory images, like Volatility (with VMware profiles) or Volatility’s VM-specific plugins.

Contextual Clues in the Filename

In addition to the file extension, the filename itself can sometimes offer clues about the type of dump or its source. Here are some examples of filenames and what they might indicate:

  • vmcore or vmcore-[timestamp]: This is commonly associated with a kernel crash dump on Linux. If the file is named vmcore or contains the timestamp vmcore-<timestamp>, you are likely dealing with a kernel panic dump.

  • memory.dmp: This could be a Windows memory dump file (a full memory dump of the system). This is a standard Windows memory dump file and can be analyzed using tools like WinDbg or Volatility.

  • core.[pid] or core.[pid].dump: Indicates a core dump file for a specific process on a UNIX-based system (macOS, Linux). This is the memory image of a single process that crashed. The [pid] is the process ID of the crashed process.

  • [vmname].vmss: This is a VMware snapshot file, as mentioned earlier. It contains the suspended memory state of a virtual machine. If you see this, you'll need VMware-specific tools or Volatility with VMware profiles to analyze it.

  • [hostname].mem: This could indicate a memory dump of a Linux or macOS system. It's a more generic file name but could provide enough context when coupled with the appropriate file extension.

  • [hostname]-[timestamp].dmp: Often seen on Windows systems, this file could be a full or minidump of the system memory. The timestamp helps determine when the dump was generated (e.g., after a crash or system event).

Directory Context

If the memory dump is in a system's crash or logs directory, it can provide valuable context:

  • Linux: A vmcore file might be found in /var/crash/ or exposed at /proc/vmcore.
  • Windows: A .dmp file might be located in C:\Windows\Minidump\ or C:\Windows\MEMORY.DMP.
  • macOS: Memory dumps and crash reports might be located in ~/Library/Logs/DiagnosticReports/.


File Type Detection

Once you have a memory dump file, the next step is to determine its type. This can be done by inspecting the file with tools to gather more information about the file's structure and format.

Check File Type with file Command

file /path/to/memory_dump
  • For a vmcore dump, it may indicate something like "Linux kernel core dump."
  • For a core file, it will describe it as "ELF 64-bit LSB core file."
  • For a Windows .dmp file, it might say "MS Windows crash dump."

Examine the First Few Bytes (Hex Dump)

For a more detailed analysis of the file's structure, you can use a hex editor or command-line tools like xxd to view the first few bytes of the file. The first few bytes of some memory dump files can indicate the file format (for example, ELF headers for Linux core dumps).

xxd /path/to/memory_dump | head
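
If you'd rather script the triage, a minimal Python sketch along these lines works (the signature table is illustrative, not exhaustive):

import sys

# A few common memory-dump signatures (illustrative, not exhaustive).
MAGICS = {
    b"\x7fELF": "ELF core file (Linux core dumps, ELF-format vmcores)",
    b"PAGEDUMP": "Windows crash dump, 32-bit (.dmp)",
    b"PAGEDU64": "Windows crash dump, 64-bit (.dmp)",
    b"EMiL": "LiME memory dump (.lime)",
}

def sniff(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(8)
    for magic, desc in MAGICS.items():
        if head.startswith(magic):
            return desc
    return f"unknown (first bytes: {head.hex()})"

print(sniff(sys.argv[1]))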

Determining If Conversion Is Needed

Once you've determined the file type, you need to assess whether any conversion is required.

Common Scenarios Where Conversion is Needed

  1. Linux vmcore: If you receive a vmcore file (a Linux kernel dump), it doesn’t need to be converted, but it does need to be analyzed using tools like crash or Volatility. You'll also need the kernel symbols (vmlinux) to properly analyze the dump.

  2. LiME (.lime) Dumps: LiME dumps are raw memory dumps and need to be analyzed with either LiME’s tools or Volatility. There is no need for conversion, but you may need to specify the right profile for the OS in Volatility.

  3. VMware Snapshots (.vmss): VMware memory snapshots don’t need conversion, but you will need VMware-compatible tools for analysis. Volatility has specific profiles and plugins for VMware memory images, or you can use VMware’s own debugging tools.

  4. Windows .dmp Files: Windows memory dumps typically don’t need conversion. Tools like WinDbg, Volatility, or even DumpChk can be used to analyze these files directly.

  5. macOS Crash Files (.crash): These files don’t need conversion, but you’ll need to use lldb or similar debugging tools to analyze them.


Using the Tools to View and Analyze the Memory Dump

After determining whether the file needs conversion, and choosing the right tool, here’s how you can analyze the dump file.

Linux Kernel Dump (vmcore)

You can analyze a Linux kernel crash dump (vmcore) using the crash tool.

crash /var/crash/vmcore /usr/lib/debug/boot/vmlinux-<version>

This will load the crash dump and allow you to interact with kernel memory, view processes, and trace the state of the system during the crash. More details about using crash are available in Red Hat's documentation.

LiME Dump (.lime)

For a LiME memory dump, you can use Volatility to analyze it.

volatility -f /path/to/lime_dump.lime --profile=LinuxUbuntu_3_13_0_32_64 pslist

This command uses the correct profile to extract information about processes from the raw memory dump.

Windows .dmp Files

To analyze a Windows memory dump, use WinDbg or Volatility.

  • Using Volatility:

    volatility -f memory.dmp --profile=Win7SP1x64 pslist
    

  • Using WinDbg:

    windbg -z memory.dmp
    
    More detailed info about using WinDbg can be found in Microsoft's documentation.

VMware Snapshot (.vmss)

If you have a VMware snapshot file, you can analyze it with Volatility using VMware-specific profiles.

volatility -f /path/to/vmss-file --profile=Win7SP1x64 pslist

This allows you to extract process and system data from the snapshot.

macOS Crash Report (.crash)

For a .crash file, you can load it into lldb for in-depth analysis of the process at the time of the crash.

lldb -c /path/to/crashfile.crash

Then you can debug the crash, view stack traces, and investigate potential causes.


Crash Dumps

Using GDB to examine Crash Dump Files


1. What is GDB?

GDB (GNU Debugger) is one of the most widely used debuggers for C, C++, and Fortran programs, and it can also be used to analyze core dumps and perform memory analysis. It provides a command-line interface for inspecting program execution, understanding crashes, debugging code step by step, and analyzing memory.

It is often used on Linux, macOS, and other Unix-like systems for both debugging running programs and analyzing crash dumps.

2. Installation of GDB

Before you can use GDB, you need to ensure that it's installed on your system.

On Linux (Kali Linux, Ubuntu, etc.):
  • You can install GDB using your system's package manager:
    sudo apt-get update
    sudo apt-get install gdb
    
On macOS:
  • You can install GDB using Homebrew (a package manager for macOS):

    brew install gdb
    

  • Note: On macOS, GDB may require additional configuration for code-signing to work properly, due to Apple’s security requirements.


3. GDB Overview and Commands

GDB provides a rich set of commands to control program execution, inspect the state of the program, and analyze crashes. Below are the most commonly used commands when working with core dumps or debugging crashes.

4. Core Dump Basics

A core dump is a file that captures the memory contents of a running process at a specific point in time (usually when it crashes). The core dump can help you understand what was happening in the program at the time of the crash.

  • To generate a core dump in Linux, you can run:

    ulimit -c unlimited  # Allows the system to generate core dumps
    
    The core dump file is usually created in the current working directory with the filename core or core.[pid].

  • On macOS, core dumps are also generated when a program crashes, but you may need to enable them through system preferences.

5. Using GDB with Core Dumps

5.1 Start GDB with Core Dump

To analyze a core dump with GDB, you need two things:

  1. The executable file (the program that crashed).
  2. The core dump file (the memory dump generated by the crash).

To start GDB with these files:

gdb /path/to/executable /path/to/core

Example:

gdb ./my_program core

If you don’t specify the executable, GDB will attempt to load symbols from the core dump alone, but you’ll get limited information without the executable.

5.2 Inspecting Core Dump Information

Once you open the core dump in GDB, you'll be able to examine various aspects of the crash:

  • Backtrace: The backtrace (bt) command shows you the stack frames at the time of the crash, which can help you trace the function calls leading up to the crash.

    (gdb) bt
    

  • Stack Trace with Arguments: For a more detailed backtrace, you can use:

    (gdb) bt full
    
    This will show all function calls and arguments passed.

5.3 Inspecting Variables and Memory

Once inside GDB, you can inspect the values of local variables, global variables, and memory at the time of the crash.

  • Inspect Local Variables: To print the value of a local variable in the current function:

    (gdb) print variable_name
    
    Example:
    (gdb) print my_var
    

  • Inspect Memory at a Specific Address: You can also view memory at a particular address:

    (gdb) x/16xw 0xADDRESS
    
    This will examine 16 words (4 bytes each) at the memory address 0xADDRESS.

5.4 Navigating the Call Stack

You can navigate through the call stack to get a better sense of what was going on at the time of the crash.

  • Move Up or Down the Stack: You can use up and down to navigate through the stack frames.

    (gdb) up   # Move up to the previous stack frame
    (gdb) down # Move down to the next stack frame
    

  • Inspect Arguments of a Function: When you are inside a function in the stack trace, you can inspect the arguments:

    (gdb) info args
    

5.5 View Source Code

If you have the source code available, GDB can display the exact line of code where the crash occurred.

  • Display Source Code:

    (gdb) list
    
    This will display the source code surrounding the current execution point in the program.

  • Jump to a Specific Line in the Code: If you know the line number where the crash happened, you can use:

    (gdb) list 50
    
    This will show the source lines centered around line 50 (GDB lists ten lines at a time by default).

5.6 Inspecting Registers

You can inspect the CPU registers at the time of the crash, which can be useful for low-level debugging.

  • View Registers:
    (gdb) info registers
    
5.7 Checking Loaded Libraries

If the crash might be related to a specific library, you can list the libraries that are loaded into memory at the time of the crash:

  • List Loaded Libraries:
    (gdb) info sharedlibrary
    

6. Debugging a Running Program with GDB

While core dumps are useful, sometimes you want to debug a program that is actively running. Here’s how you can use GDB to debug live processes.

6.1 Attach GDB to a Running Process

You can attach GDB to a running process by specifying the PID (Process ID).

gdb -p [PID]

For example:

gdb -p 12345

Once attached, you can start issuing commands like bt, info locals, and others to inspect the running process.

6.2 Set Breakpoints

Breakpoints allow you to pause execution at specific lines or functions.

  • Set a Breakpoint at a Line of Code:

    (gdb) break filename.c:line_number
    

  • Set a Breakpoint at a Function:

    (gdb) break function_name
    

6.3 Run the Program Inside GDB

You can run a program under GDB to trace through it step by step.

  • Start Running the Program:

    (gdb) run [program_arguments]
    

  • Step Through the Program: Use step to execute one line of code at a time, stepping into functions:

    (gdb) step
    

Use next to step over functions:

(gdb) next

  • Continue Execution: If you’ve hit a breakpoint, use continue to let the program run until the next breakpoint or crash:
    (gdb) continue
    

7. Advanced GDB Techniques

  • Examine Heap or Stack Memory: If you suspect issues with memory allocation, such as a heap corruption, you can inspect heap or stack memory using the x command:

    (gdb) x/32xw [address]   # Examine 32 words at the address
    

  • Dumping Data: You can dump the contents of a register or memory to a file for later inspection:

    (gdb) dump memory /path/to/output [start_address] [end_address]
    


8. Common Troubleshooting Commands

  • Listing All Breakpoints:

    (gdb) info breakpoints
    

  • Removing a Breakpoint:

    (gdb) delete [breakpoint_number]
    

  • Examine Program State:

    (gdb) info locals       # View local variables
    (gdb) info threads      # View threads
    


9. Conclusion

GDB is a powerful tool for debugging and crash analysis on Linux and macOS. Whether you're analyzing a core dump, debugging a live program, or investigating memory issues, GDB provides a rich set of commands to inspect program state, investigate crashes, and perform advanced debugging.

To summarize:

  • Start GDB with the executable and core dump using:

    gdb /path/to/executable /path/to/core

  • Important Commands: bt (backtrace), print (variables), x (memory), info registers, step, continue.

  • For debugging live processes, you can attach to a running program with gdb -p [PID].

Java File Analysis


Java Decompilation and Reverse Engineering

JD-GUI

JD-GUI is a graphical tool that decompiles Java .class files into readable Java source code.

Usage

Open the .class file or JAR with JD-GUI.

Why it's useful

JD-GUI helps you understand the source code of compiled Java files, especially useful when analyzing suspicious or unknown binaries to identify malicious behavior.

CFR (Another Java Decompiler)

CFR is a command-line Java decompiler that can be used to decompile .class files.

java -jar cfr.jar <input.class>
Why it's useful

CFR handles difficult-to-decompile Java classes (such as those with obfuscation) better than some other decompilers, making it a great tool for reverse engineering Java malware.

Fernflower

Fernflower is another decompiler, typically used for decompiling .class files into Java source code.

Usage

Fernflower is often integrated with IDEs like IntelliJ IDEA, but it can also be run from the command line via its standalone jar (or you can use a comparable standalone decompiler such as Procyon).

Why it's useful

Similar to JD-GUI, Fernflower helps reverse-engineer Java bytecode into readable code, and is particularly useful for examining obfuscated code.

JADX (For APKs)

JADX decompiles APK files (Android applications), which are often written in Java. This tool is vital for analyzing Java-based Android malware.

jadx -d output_folder <input.apk>
Why it's useful

When analyzing Android malware, JADX allows you to view the source code and structure of APKs, helping identify malicious behavior or backdoors.


Disassembling Java Bytecode

javap (Java Disassembler)

The javap tool disassembles Java bytecode to provide detailed information about the class, methods, and fields, without needing the source code.

Disassemble the bytecode
javap -c <file.class>
Display class details (including private members) with verbose output
javap -p -v <file.class>
Why it's useful

javap is invaluable for inspecting the bytecode of a Java class when the source code isn't available. You can analyze the class structure, method signatures, and bytecode instructions, which is useful for reverse engineering and understanding the behavior of Java applications or malware.


Java Runtime Analysis for Forensics

jmap (Heap Memory Dump)

jmap is used to generate heap dumps of a running Java process, which can be valuable for analyzing the memory state of an application during an incident.

Dump heap
jmap -dump:live,format=b,file=heapdump.hprof <PID>
Display heap summary
jmap -heap <PID>
Why it's useful

Memory dumps can provide insight into the state of the Java application, including objects in memory, potential memory leaks, or malicious payloads. Analyzing the heap dump is useful for malware analysis and root cause investigation in runtime incidents.

jstack (Thread Stack Traces)

jstack generates stack traces for all threads in a Java process. This helps identify what a Java process is doing at a given point in time.

Get stack traces:
jstack <PID>
Why it's useful

This command is essential for diagnosing performance problems or debugging issues, as it shows which method each thread is currently executing. In DFIR, this can help track suspicious activity or identify what malicious code is being executed in a Java process.


Java Security and Integrity Analysis

Jarsigner (Verify/Sign JAR Files)

jarsigner is used to sign JAR files and verify their signatures. In a DFIR context, this tool can verify if a JAR file has been tampered with or is legitimate.

Verify a JAR file signature
jarsigner -verify -verbose -certs <file.jar>
Sign a JAR file
jarsigner -keystore <keystore.jks> -signedjar <signedfile.jar> <unsignedfile.jar> <alias>
Why it's useful

Verifying the integrity of JAR files can help identify if malware has replaced or modified legitimate applications. Ensuring the signature is valid confirms that the file hasn't been tampered with.

Keytool (Managing Keystores and Certificates)

keytool is a utility for managing keystores and certificates, used to handle cryptographic operations in Java applications. It's useful when analyzing applications that use certificates or encryption.

List keystore certificates
keytool -list -keystore <keystore.jks>
Generate a keystore
keytool -genkey -keyalg RSA -keystore <keystore.jks> -alias <alias> -dname "CN=YourName"
Why it's useful

In cybersecurity, checking for the presence of valid certificates and analyzing keystore contents can help determine if an application is authentic or if it's being used for malicious purposes. This is critical when analyzing Java-based malware or securing Java applications.


Incident Response Tools for Java Malware

jps (Java Process Status)

jps lists all Java processes currently running on the system, along with their Java process IDs (PID). This is useful for monitoring suspicious Java processes.

List all Java processes:
jps -l
Why it's useful

By listing running Java processes, jps helps identify which Java applications are running on a system. This can be useful during an incident when trying to find malicious or unauthorized Java applications.

jdb (Java Debugger)

jdb is a command-line debugger for Java programs. It's useful for attaching to a running Java process or analyzing specific Java code during an incident.

Debug a running Java process
jdb -attach <PID>
Why it's useful

In a forensic investigation, jdb can be used to debug a Java process and step through the execution of the application. If you're investigating malicious behavior in a running Java process, this tool is essential for performing live analysis.


Summary of Key Tools and Use Cases

  • JD-GUI, CFR, Fernflower: Use for decompiling Java bytecode and examining suspicious code.
  • JADX: Decompiles APKs, useful for Android-based Java malware analysis.
  • javap: Disassembles .class files to inspect bytecode.
  • jmap: Generates heap dumps for memory forensics and identifying potential malicious objects.
  • jstack: Provides stack traces for live Java processes to identify suspicious activity.
  • Jarsigner: Verifies the integrity of JAR files and checks for tampering.
  • Keytool: Manages certificates and keystores, useful for verifying the legitimacy of Java applications.
  • jps: Lists running Java processes, helping to identify suspicious activity in an incident.
  • jdb: Debugs running Java processes, allowing you to step through code during an investigation.