Digital Forensics

Git History

Given a backup of a git repo, what sort of information can be gleaned?

git log -p
git log --all
  • git log: Shows commits from the current branch.
  • git log -p: Shows commits from the current branch along with the changes (diffs) introduced by each commit.
  • git log --all: Shows commits from all branches, including those that are not on the current branch.

Depending on what was written in the message when something like "git commit -m 'updating passwords'" was run, you may be able to find specific commits of interest. Once you find a commit of interest and you want to see more details, you can run:

git show 20771a8fb517900793be6c8d1d9269978a0866de
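If the commit messages aren't helpful, git can search the diffs themselves. A minimal sketch, assuming you're hunting for the string "password" anywhere in the repo's history:

# List commits whose diffs add or remove the string "password"
git log --all -S 'password' --oneline

# Same search, but print the full diff for each match
git log --all -S 'password' -p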

Compressed PNG Files

Sometimes files include other files. Running exiftool helps identify if some files might be hiding inside another.

Extracting with binwalk -e creates an extraction directory where the original file's embedded files can be found. After examining the extracted files, running strings on executables like .exe or .bin files can sometimes help understand more.
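A minimal sketch of that workflow, assuming a PNG named image.png with something stashed inside (filenames are illustrative):

exiftool image.png                       # Look for metadata anomalies (odd sizes, trailing data)
binwalk image.png                        # List embedded file signatures
binwalk -e image.png                     # Carve them out into _image.png.extracted/
strings -n 8 _image.png.extracted/*.bin  # Pull longer readable strings from any carved binaries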


File Forensics Commands

file file.pdf
Identifies the file type by analyzing its magic number (signature). It determines whether the file is a PDF, image, binary, etc., based on its internal structure, not just its file extension.


exiftool file.pdf
Extracts metadata from a file (such as a PDF). This can include author information, creation date, software used, GPS coordinates, and other details that may be hidden in the file’s properties.


strings file.pdf
Searches for and extracts human-readable strings from a file. This can help reveal hidden text, URLs, error messages, or passwords embedded within the file.


binwalk -e file.pdf
Analyzes and extracts embedded files or data from a binary (like a PDF). It’s commonly used to detect and extract hidden files or compressed data that are embedded within the file.


strace ./binaryName
Traces system calls and signals made by a binary during execution. It provides insight into the program's interaction with the OS, such as file access, network connections, and memory-related calls like mmap.


ltrace ./binaryName
Traces library calls (e.g., malloc, free, printf) made by a binary. It helps to analyze how the program interacts with shared libraries and which functions are called during execution.


gdb ./binaryName
Starts the GNU Debugger (GDB) for a binary, allowing you to debug the program. You can inspect variables, control execution flow, set breakpoints, and view assembly code to understand how the program operates.


Altered File

Does the file extension match the file type that shows when running file or exiftool? Do the magic bytes of the file match the filetype?
What sort of information does strings return?

Editing the file in HexEd.it allows you to modify magic bytes and export as a new filetype.
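If you'd rather stay on the command line, you can inspect and patch magic bytes directly. A sketch, assuming you want to restore a PNG signature on file.bin (the bytes to write depend on the real filetype):

xxd -l 16 file.bin                                             # View the current signature
printf '\x89PNG\r\n\x1a\n' | dd of=file.bin bs=1 conv=notrunc  # Overwrite the first 8 bytes in place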


Hidden File

.docx files are actually zip archives containing the .xml documents that support Word's features and functions, so you can unzip a .docx file in order to see the XML files it holds.
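A quick sketch, assuming a file named report.docx:

unzip report.docx -d report_contents    # Expand the archive
cat report_contents/word/document.xml   # The main body text lives here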


Windows Memory Forensics

memdump files are memory dump files from a Windows OS. Sometimes these files are gzip-compressed, while other times they are xz-compressed. In order to examine the contents, you'll first have to decompress them.

xz -d memdump.mem.xz
or
gunzip memdump.raw.gz

Using Volatility3

Finding the right command for what you need to answer (example: finding the operating system info)

vol -h | grep "OS"
vol -f memdump.mem windows.info.Info
Tells you about the host environment:
vol -f memdump.mem windows.envars.Envars
Note the computername, username, etc. from these results.

Files of interest

vol -f memdump.mem windows.filescan | grep liber8hacker
There will be several "AppData" file paths, but these indicate what programs were being used and not files of interest. To eliminate these, you can always:
vol -f memdump.mem windows.filescan | grep liber8hacker | grep -v "AppData"
This helps narrow the focus.

virtaddr

This next command uses the virtual address for the file of interest in order to pull the actual file contents from memory.

mkdir dump && vol -f memdump.mem -o dump windows.dumpfiles.DumpFiles --virtaddr 0xe0003d624b90
Now we can check what type of file we have extracted:
file dump/file.0xe0003d624b90.0xe0003f47b990.DataSectionObject.black_book.db.dat
dump/file.0xe0003d624b90.0xe0003f47b990.DataSectionObject.black_book.db.dat: SQLite 3.x database, last written using SQLite version 3035005, file counter 6, database pages 4, 1st free page 3, free pages 1, cookie 0x8, schema 4, UTF-8, version-valid-for 6
In this instance it's a sqlite3 file.
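To explore it, open the dumped file with the sqlite3 client (the long filename carries over from the dump step above):

sqlite3 dump/file.0xe0003d624b90.0xe0003f47b990.DataSectionObject.black_book.db.dat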

Show Tables

.tables
Results
aliases  books

Show Schema (Structure) of a Table

.schema aliases

Show the contents of each table

SELECT * from aliases;
SELECT * from books;


Binary Files

If you've got a binary file such as an ELF program, it is advisable to examine it using Linux-specific tools, many of which are baked into Kali Linux already. strace, ltrace, and gdb (GNU Debugger) are all Linux-specific tools.

  • readelf: Use this command to analyze ELF files and their structure, including headers and sections.

    readelf -h executable  # ELF file header
    readelf -S executable  # Section headers
    

  • objdump: Another tool for disassembling binaries. You can use it to view assembly instructions and dump raw section contents (hex plus ASCII) from a binary.

    objdump -d executable  # Disassemble binary
    objdump -s executable  # Dump full contents of all sections
    

  • nm: Lists symbols in the binary. This can help identify exported functions, static variables, and more.

    nm executable
    

  • strings: Extracts readable strings from a binary file, which can sometimes reveal sensitive information like hardcoded passwords or paths.

    strings executable
    


strace

strace is a tool used to trace system calls and signals made by a program. It is useful for understanding the system interactions of a binary, such as file I/O, memory allocation, and network connections. It helps in debugging and reverse engineering, particularly when trying to understand how an executable interacts with the operating system.

Trace system calls made by a program
strace ./program
Trace system calls of an already running process
strace -p <PID>
Save the strace output to a file
strace -o output.txt ./program
Show only specific system calls
strace -e trace=<syscall> ./program
Filtering and Analyzing Specific System Calls
strace -e trace=read,write ./program
Count the number of times each system call is invoked
strace -c ./program
Suppress the output of certain system calls
strace -e 'trace=!write' ./program   # quote the ! so the shell doesn't interpret it
Vulns you may see when using strace
  • Unusual File Access: Look for excessive file reads or writes that might indicate a vulnerability like privilege escalation (e.g., writing to sensitive files).
  • Network connections: Connections to unexpected IP addresses could indicate remote code execution or data exfiltration vulnerabilities.
  • Unusual system calls: System calls such as mmap with overly large regions or execve with unknown arguments could signal exploit attempts like buffer overflows.
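To watch for those patterns without drowning in output, strace's syscall classes help. A sketch (./program is a placeholder):

# Trace only file and network syscalls, following children, into a log
strace -f -e trace=%file,%network -o trace.log ./program

# Then hunt for writes to sensitive paths or unexpected destinations
grep -E '/etc/(passwd|shadow)|connect\(' trace.log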

ltrace

ltrace is similar to strace but focuses on library calls (e.g., malloc, free, printf). It's useful for debugging how a binary interacts with shared libraries and dynamically linked code. While strace traces system calls, ltrace focuses on the function calls within the program itself.

Trace function calls made by a program
ltrace ./program
Trace function calls for an already running process
ltrace -p <PID>
Filter specific functions to trace
ltrace -e "malloc,free" ./program
Save the ltrace output to a file
ltrace -o output.txt ./program
Indent nested calls by N spaces per level for readability
ltrace -n 4 ./program
Vulns you may see when using ltrace
  • Memory leaks: If a program allocates memory using malloc but never frees it with free, ltrace can be used to detect memory leaks that could lead to denial of service (DoS).
  • Improper use of library functions: Calling functions like strcpy or gets without proper bounds checking can lead to buffer overflow vulnerabilities.
  • Insecure library interactions: Look for interactions with insecure or outdated libraries which could introduce arbitrary code execution vulnerabilities.
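A quick way to spot the malloc/free imbalance described above. A sketch, assuming ltrace's default output on stderr:

# Count allocations vs. frees; a large gap suggests a leak
ltrace -e 'malloc+free' ./program 2>&1 | awk '/malloc/{m++} /free/{f++} END{print m, "mallocs,", f, "frees"}'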

GNU Debugger (GDB)

gdb is a powerful debugger that allows you to inspect and modify the execution of a program. It provides a rich set of features for disassembling, stepping through, and interacting with a program while it's running. It’s invaluable for reverse engineering, exploit development, and understanding how a binary works at a low level.

Debug an executable
gdb ./executable
Disassemble the main function
(gdb) disassemble main
List available functions
(gdb) info functions
Set a breakpoint at main
(gdb) break main
Call a function manually
(gdb) call functionName
Attach a process to gdb
gdb -p <PID>
Debug with a core file
gdb -c core ./executable
Execute given GDB commands upon start
gdb -ex "commands" ./executable
Start gdb and pass arguments to the executable
gdb --args ./executable argument1 argument2
Vulns often seen by examining binary in gdb:
  • Buffer Overflows: You can manually examine stack buffers, set breakpoints, and inspect memory to find places where unchecked buffers might overflow (e.g., through gets, strcpy).
  • Return-Oriented Programming (ROP): GDB can help identify vulnerable code paths that could be exploited through ROP chains, where an attacker hijacks the program’s control flow using small snippets of code (gadgets).
  • Heap Corruption: If the program uses dynamic memory allocation (via malloc), gdb can help identify vulnerabilities like double-free or use-after-free, which are common in heap management bugs.
  • Format String Vulnerabilities: GDB allows you to trace the format strings passed to functions like printf. If these strings are not sanitized, they can lead to format string vulnerabilities (e.g., arbitrary code execution via %n).
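A short session sketch for hunting the first two cases, assuming a local binary and illustrative input (names are placeholders):

gdb ./executable
(gdb) break main          # Stop at entry
(gdb) run < input.txt     # Feed the suspect input
(gdb) info functions      # Scan for dangerous calls like gets, strcpy, sprintf
(gdb) x/32xb $sp          # Inspect memory near the stack pointer for clobbered data
(gdb) info registers      # Check whether the crash overwrote saved registers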

Log Analysis Cheatsheet

This cheatsheet focuses on command-line tools often used in log analysis and security investigations. The commands are presented in ways that allow you to combine them for more efficient analysis.


1. grep + awk (Search and Extract Data)

  • Find error messages and print the timestamp (assumes logs have timestamps in the first column):

    grep "error" logs.txt | awk '{print $1, $2, $3}'
    

  • Filter logs for a specific pattern, then extract the 3rd and 5th columns:

    grep "login" logs.txt | awk '{print $3, $5}'
    

  • Search for lines that match "failed" in one file and extract user information from another file:

    grep "failed" logs.txt | awk '{print $2}' | while read user; do grep "$user" /etc/passwd; done
    


2. awk + sort (Summarize and Sort Data)

  • Sum all login attempts (assuming the 1st column is the IP address and the 4th is the number of attempts):

    awk '{s+=$4} END {print s}' login_attempts.txt
    

  • Sort failed login attempts by IP address:

    awk '/failed/ {print $1}' logs.txt | sort | uniq -c | sort -nr
    

  • Display top 10 IP addresses based on the number of login attempts:

    awk '{print $1}' logs.txt | sort | uniq -c | sort -nr | head -n 10
    


3. grep + sort (Search and Sort Results)

  • Find the top 10 most common error types (assuming error types are in the 3rd column):

    grep "error" logs.txt | awk '{print $3}' | sort | uniq -c | sort -nr | head -n 10
    

  • Search logs for IP addresses, then sort them by frequency of access:

    grep -oP '\d+\.\d+\.\d+\.\d+' logs.txt | sort | uniq -c | sort -nr
    


4. awk + cut (Extract and Format Data)

  • Display a user’s login history (assuming username is in the 3rd column):

    awk '{print $3}' login_attempts.log | sort | uniq -c
    

  • Extract and display specific fields, then format:

    awk '{print $1, $3, $5}' logs.txt | cut -d ' ' -f 1,3
    

  • Find and format IPs, then show number of accesses:

    awk '{print $1}' logs.txt | sort | uniq -c | awk '{printf "%-15s %s\n", $2, $1}'
    


5. grep + sed (Search, Modify, and Filter Logs)

  • Find all lines with "login", and replace "failed" with "unsuccessful":

    grep "login" logs.txt | sed 's/failed/unsuccessful/'
    

  • Search logs for specific keywords, then highlight the matches:

    grep --color=always "error" logs.txt | sed 's/error/\x1b[31m&\x1b[0m/'
    


6. sort + uniq (Counting Occurrences)

  • Count occurrences of specific error codes (assuming the error code is in the 2nd column):

    awk '{print $2}' error_logs.txt | sort | uniq -c | sort -nr
    

  • Find the most frequent IP addresses involved in failed login attempts:

    grep "failed" logs.txt | awk '{print $1}' | sort | uniq -c | sort -nr | head -n 10
    


7. awk + grep + sort (Data Filtering and Aggregation)

  • Extract and display IPs with a specific pattern, then sort by frequency:

    grep "failed" logs.txt | awk '{print $1}' | sort | uniq -c | sort -nr
    

  • Extract entries where the status code is 404, sort by the number of occurrences:

    awk '$9 == "404" {print $1}' access.log | sort | uniq -c | sort -nr
    


8. grep + head (Limit Results)

  • Get the first 10 lines with a specific keyword (e.g., "error"):

    grep "error" logs.txt | head -n 10
    

  • Show top 10 most recent login attempts by IP address:

    grep "login" logs.txt | tail -n 10 | awk '{print $1}' | sort | uniq -c | sort -nr
    


9. find + grep (Search Through Logs)

  • Search for logs containing "error" across multiple files:

    find /var/log/ -type f -name "*.log" -exec grep -H "error" {} \;
    

  • Find all log files modified in the last 24 hours, then search for a specific pattern:

    find /var/log/ -mtime -1 -type f -exec grep "suspicious_activity" {} \;
    


10. tail + grep (Monitor Logs in Real-Time)

  • Monitor a log file for real-time login failures:

    tail -f /var/log/auth.log | grep "failed"
    

  • Follow logs and only show lines with "error":

    tail -f logs.txt | grep "error"
    


11. cut + sort (Quick Field Extraction and Sorting)

  • Extract and count occurrences of a field (e.g., IP addresses in the 1st column):
    cut -d ' ' -f 1 logs.txt | sort | uniq -c | sort -nr
    

12. Combining Commands for Advanced Filtering and Analysis

  • Get the top 5 IP addresses with failed logins:

    grep "failed" logs.txt | awk '{print $1}' | sort | uniq -c | sort -nr | head -n 5
    

  • Show all unique error codes, formatted with counts:

    grep "error" logs.txt | awk '{print $2}' | sort | uniq -c | sort -nr
    

  • Extract and summarize event timestamps, showing the most frequent timestamps:

    awk '{print $1, $2}' logs.txt | sort | uniq -c | sort -nr | head -n 10
    


Windows Event IDs

Windows Event IDs are logged by the Windows Event Log system. These Event IDs are critical for incident response and cybersecurity log analysis, as they help identify specific events that may indicate suspicious or malicious activity.

Here's a table of Windows Event IDs that are often important in log analysis for cybersecurity and incident response:

| Event ID | Description | Significance |
| --- | --- | --- |
| 4624 | Successful logon | Indicates a successful user logon. Investigate suspicious accounts or logon times. |
| 4625 | Failed logon | Indicates failed logon attempts, which can indicate brute-force attacks or unauthorized access attempts. |
| 4648 | A logon was attempted using explicit credentials | Important for identifying lateral movement or pass-the-hash attacks. |
| 4688 | A new process has been created | Useful for tracking execution of new processes, including potentially malicious binaries or unauthorized scripts. |
| 4689 | A process has exited | Helps track the end of a process, useful for analyzing abnormal behavior, malware lifecycles, or fileless attacks. |
| 4670 | Permissions on an object were changed | Indicates changes to file, folder, or registry permissions, which can be a sign of tampering or privilege escalation. |
| 4732 | A member was added to a security-enabled local group | Shows when a user is added to a privileged group, which could indicate privilege escalation. |
| 4733 | A member was removed from a security-enabled local group | Could indicate removal of a user from a privileged group (useful in investigating insider threats). |
| 4740 | A user account was locked out | Can be an indicator of brute-force attacks or attempts to guess passwords. |
| 4767 | A user account was unlocked | Indicates the unlocking of a user account; relevant for identifying account compromises. |
| 5156 | The Windows Filtering Platform has allowed a connection | Shows that a network connection has been established, useful for tracking legitimate or suspicious network activity. |
| 5158 | The Windows Filtering Platform has blocked a connection | Identifies blocked network connections, helpful in spotting attempted communications with external systems. |
| 5140 | A network share object was accessed | Important for monitoring access to shared resources; potential sign of data exfiltration or unauthorized file access. |
| 5145 | A network share object was checked for the desired access | Could indicate unauthorized access attempts against shared resources. |
| 1102 | The audit log was cleared | A red flag; clearing event logs is often an attempt to cover tracks after malicious activity. |
| 5152 | The Windows Filtering Platform blocked a packet | Useful for detecting attacks blocked by firewall rules (e.g., port scans, exploit attempts). |
| 4656 | A handle to an object was requested | Shows when an object is opened, useful for detecting suspicious file or registry access. |
| 4658 | The handle to an object was closed | Can be used to track whether potentially malicious processes are interacting with system resources. |
| 4698 | A scheduled task was created | Important for identifying unauthorized tasks scheduled by attackers for persistence. |
| 4699 | A scheduled task was deleted | Useful for identifying the removal of tasks created by attackers for persistence. |
| 4769 | A Kerberos service ticket was requested | Important for monitoring Kerberos authentication activity; may help identify unauthorized access. |
| 4771 | Kerberos pre-authentication failed | Indicates failed Kerberos authentication; could suggest password-guessing against domain accounts. |
| 4776 | The domain controller attempted to validate the credentials of an account | Tracks NTLM credential validation in Active Directory; failures are often associated with unauthorized login attempts. |
| 4000 | DNS query received | Can help in identifying suspicious domain lookups or attempts to reach malicious IPs. |
| 4662 | An operation was performed on an object (e.g., a file, registry key) | Identifies when a change was made to an object, such as file modification, which could be a sign of data manipulation. |
| 4964 | Special groups have been assigned to a new logon | Flags logons by accounts in designated sensitive groups; useful for watching privileged account activity. |


Here’s a more integrated approach where commands are combined to process logs and extract meaningful information:


Windows Logs with Event IDs


1. Search for Failed Logon Attempts (Event ID 4625) and Show IPs Using grep, awk, and sed

  • Scenario: You're analyzing logs for failed logon attempts (Event ID 4625) and want to identify the source IP addresses.

    grep "4625" path/to/log | awk '{print $11}' | sort | uniq -c | sed 's/^ *//g'
    

    Explanation: - grep "4625" path/to/log: Filters logs for failed logon attempts (Event ID 4625). - awk '{print $11}': Extracts the 11th column, which typically contains the IP address. - sort | uniq -c: Sorts the IP addresses and counts unique occurrences (identifying multiple failed logins from the same IP). - sed 's/^ *//g': Removes leading spaces for a cleaner output.

    Outcome: Displays the number of failed logon attempts per IP address.


2. Check for Multiple Event IDs (4624, 4625, 4648) and Display Usernames Using grep, awk, and sed

  • Scenario: You're looking for both successful (4624) and failed (4625) logons, and also logons using explicit credentials (4648). You want to display the usernames involved.

    grep -E "4624|4625|4648" path/to/log | awk '{print $5}' | sort | uniq -c | sed 's/^ *//g'
    

    Explanation:
      • grep -E "4624|4625|4648" path/to/log: Filters for successful logons (4624), failed logons (4625), and explicit credential logons (4648).
      • awk '{print $5}': Extracts the 5th column, which typically contains the username.
      • sort | uniq -c: Sorts the usernames and counts occurrences.
      • sed 's/^ *//g': Cleans up any extra spaces before displaying the output.

    Outcome: Shows a count of successful, failed, and explicit credential logons by username.


3. Extract Process Creation Events (Event ID 4688) for Specific User Using grep, awk, and sed

  • Scenario: You want to track new processes created (Event ID 4688) by a particular user (e.g., Administrator).

    grep "4688" path/to/log | grep "Administrator" | awk '{print $0}' | sed 's/^/NEW PROCESS: /'
    

    Explanation: - grep "4688" path/to/log: Filters for process creation events (Event ID 4688). - grep "Administrator": Filters those events further to include only those related to the Administrator user. - awk '{print $0}': Outputs the full line for each matching event. - sed 's/^/NEW PROCESS: /': Adds a prefix to each line for better readability and context.

    Outcome: Displays process creation events for the Administrator user, making it easier to see what processes were launched.


4. Monitor Logon Events and Check for Concurrent Logons (Event ID 4624) Using grep, awk, sort, and uniq

  • Scenario: You are analyzing successful logon events (Event ID 4624) and want to identify any concurrent logons by the same user.

    grep "4624" path/to/log | awk '{print $5, $1}' | sort | uniq -d
    

    Explanation: - grep "4624" path/to/log: Filters for successful logon events (Event ID 4624). - awk '{print $5, $1}': Extracts both the username (5th column) and timestamp (1st column). - sort: Sorts the output by both username and timestamp. - uniq -d: Displays only the duplicate (concurrent) logons by the same user.

    Outcome: Identifies concurrent logins by the same user across different machines or sessions, which can indicate suspicious behavior.


5. Search for Logon Events with Specific IP Addresses Using grep, awk, sort, and sed

  • Scenario: You want to filter successful logon events (Event ID 4624) and find logons from a specific IP address.

    grep "4624" path/to/log | awk '{if ($11 == "192.168.1.100") print $0}' | sed 's/^/IP LOGON: /'
    

    Explanation: - grep "4624" path/to/log: Filters for successful logon events (Event ID 4624). - awk '{if ($11 == "192.168.1.100") print $0}': Filters out the lines where the IP address (11th column) matches 192.168.1.100. - sed 's/^/IP LOGON: /': Adds a prefix to each matching log entry for clarity.

    Outcome: Displays only successful logon events from the specific IP 192.168.1.100, helping to identify a specific source of access.


6. Track Audit Log Clearing (Event ID 1102) Using grep and sed

  • Scenario: You want to track when the audit log is cleared (Event ID 1102) and then look at subsequent events for further investigation.

    grep "1102" path/to/log | awk '{print $0}' | sed 's/^/LOG CLEARED: /' > cleared_logs.txt
    grep -f cleared_logs.txt path/to/log
    

    Explanation: - grep "1102" path/to/log: Finds when the audit log is cleared (Event ID 1102). - awk '{print $0}': Prints the full line for each matching event. - sed 's/^/LOG CLEARED: /': Adds a prefix to identify log-clearing events. - > cleared_logs.txt: Saves the filtered log-clearing events to a file. - grep -f cleared_logs.txt path/to/log: Uses the saved file of cleared logs to find subsequent events that may have been tampered with.

    Outcome: Tracks log clearing events and then allows you to follow up with any suspicious activity after the logs were cleared.


7. Combine and Filter Multiple Event IDs (4624, 4688, 4689) for User Activity Using grep, awk, and sort

  • Scenario: You're analyzing user activity and want to check for successful logons, process creation, and process termination.

    grep -E "4624|4688|4689" path/to/log | awk '{print $5, $1, $0}' | sort | uniq
    

    Explanation: - grep -E "4624|4688|4689" path/to/log: Filters logs for successful logons (4624), process creation (4688), and process termination (4689). - awk '{print $5, $1, $0}': Extracts username (5th column), timestamp (1st column), and the full log entry. - sort | uniq: Sorts the logs and removes any duplicates.

    Outcome: Provides a chronological sequence of user logins, processes created, and processes terminated, helping you analyze user activity.


Password Cracking Cheatsheet

Straight-up cheatsheet for CTFs. There's a flow sometimes where challenges start easy and progress to harder, so I'm making a cheatsheet of the likely stages.

Online Resources

LM, MD5, SHA1, and SHA256 hashes of RockYou passwords are most easily found via online hash-lookup services (ntlm.pw, which comes up again below, is one example), so you don't have to bother running them through john or hashcat if the cracking has already been done for you.

Simple Hashcat Mask

hashcat -m 0 -a 6 pass2.txt knownBeginningWord.txt "?d?d?d?d"
This takes the known beginning of the password and tries every candidate with 4 digits appended to it.

Simple Wordlists

john --wordlist=pokemon.txt --format=Raw-MD5 hashes.txt
cat ~/.john/john.pot

Ophcrack

Rainbow tables are necessary for some of the Windows NTLM password hashes if they aren't in the ntlm.pw website. This can require downloading the correct rainbow tables into the /usr/share/ophcrack/table/<nameofrainbowtable>/ directory and then installing that into Ophcrack. Once done, you can load each hash individually and then run the hashes against the rainbow table.

Ophcrack is preinstalled in Kali.
Rainbow tables are downloadable from https://ophcrack.sourceforge.io/tables.php

Custom Wordlists for MD5 Hashes

hashcat -m 0 -a 6 pass6SVU.txt law-and-order-svu-episode-titles.txt "?d?d"

Specific Hash Type - PDF

When given an encrypted file, we need to find the password. First, run file to get the basic details on the PDF version.

file encrypted.pdf
It returned 1.7 for the version so now we look for which hash-type that would be:
hashcat --example-hashes | grep -i 'pdf 1.7' -B 8 -A 7
Looks like we're going with 10600 or 10700.

But we don't have the hash for the pdf yet. To retrieve that, we can use:

pdf2john encrypted.pdf > pdfhash.txt
This gives us our hash, though you will need to remove the filename from the beginning of the output so that it starts with the actual hash looking something like this:
$pdf$5*6*256*-1028*1*16*1dffd5f4a85d4a2a9b632fe8b2cf400d*127*aa3f91765a570bef95ca28fc879f038707f53a4c30ae3fde2ab5a516e62f269b16028aa82b4146ad88e738376693c1f800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000*127*842c2b4513ea81921e9965c51dc0c9747ce97fb9e1b576b92a899b9a8ddd8c35fe7a4ca4e85b45a484b805ad6d0b84cd00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000*32*f2f6780f8510d8d74afa92b7ceaef61049e5b1b8120fb6fcdebc4aedc7098d3b*32*42a11c5982292ed8b495dac13ea095d0067f5de287304a11ce3085c54a86b547
Now this saved hash can be run through hashcat:
hashcat -m 10700 pdfhash.txt rockyou.txt
For whatever reason, running this on macOS threw an error, but running on Kali did not. If it seems to not be working, use Kali.

To view the password:

hashcat pdfhash.txt -m 10700 --show
The cracking can also be done through john once you have the hash:
john --wordlist=/usr/share/wordlists/rockyou.txt hash.txt
(the hash will be recognized as a pdf hash so you do not need to enter the hash type)
john --show hash.txt

Cracking $y$ YesCrypt

Cracking yescrypt can be done from a Kali Linux box. If you're given the /etc/shadow file, save the user's entry (minus the leading username:) and then run it through john like so:

john --wordlist=/usr/share/wordlists/rockyou.txt --format=crypt yescrypthash.txt 
The yescrypthash.txt would look something like this:
$y$j9T$/WzixhAsn8sdXhCquYzh01$KZlio78LilItobsx/17ecFf1e2SbsduhP1sZEWuHrL4

Proxmox Images

Whether it's metasploitable2, SIFT, or some other prepackaged image you're wanting to run from your Proxmox hypervisor, it's easy to get lost in the configuration process. Generally speaking, though, you want to get your image into qcow2 format and then move it onto the Proxmox host in a location where you keep your images. The default location is /var/lib/vz/images. Here's a walkthrough of how to do it.

Converting to qcow2

The conversion process varies slightly depending on what you're given. If it's an .ova file, you'll need to first run:

tar -vxf filename.ova
This extracts the files. From there, you've likely got a vmdk file and this is what you'd convert to qcow2 using:
qemu-img convert -f vmdk -O qcow2 ~/Downloads/sift-disk1.vmdk ~/sift.qcow2

Transferring to Proxmox Host

Now that you have the image in a format that works for Proxmox, you can transfer it over to the Proxmox host using whatever method works for you. I'll show the commands for scp using either the standard password login or key-based creds.

scp with password
scp ~/sift.qcow2 root@proxmox_ip:/var/lib/vz/images/
The command explained
  • scp: we want to transfer a file to/from the machine we're on to/from another machine
  • ~/sift.qcow2: this is the from location (in this case on our client computer)
  • root@proxmox_ip:/var/lib/vz/images: this is our to location (the Proxmox host's IP address and its images directory; note that scp rides over SSH on port 22 by default, not the web UI's port 8006)
    When you hit enter for this command, you'll be prompted for the Proxmox root password so long as you didn't change it to disallow password authentication.
scp with creds
scp -i id_rsa ~/sift.qcow2 root@proxmox_ip:/var/lib/vz/images/
The command explained
  • scp: we want to transfer a file to/from the machine we're on to/from another machine
  • -i id_rsa: points scp at your private key; this assumes your computer's public ssh key is in the authorized_keys file on the Proxmox host (a more secure way of logging in than using a password)
  • ~/sift.qcow2: this is the from location (in this case on our client computer)
  • root@proxmox_ip:/var/lib/vz/images: this is our to location (the Proxmox host's IP address and its images directory; again, scp uses SSH on port 22, not the web UI's port 8006)

Creating Skeleton VM

From the Proxmox web interface, follow these steps to create a new VM skeleton:

  1. Log in to the Proxmox portal: Open your Proxmox web interface in a browser.
  2. Click on the "Create VM" button: This option is typically at the top-right of the interface. createVM
  3. Enter the VM details: creatingVM
  4. Skip the ISO Image step: nomedia
    • Select the "Do not use any media" option since you will attach the disk image later.
  5. Configure the VM hardware:

    • System: Choose the firmware (BIOS or UEFI) appropriate for your image. If unsure, use the default.
    • Hard Disk: Skip adding a hard disk here since you will import the disk image later.
    • CPU: Allocate a reasonable number of cores (e.g., 2 cores for basic setups).
    • Memory: Assign sufficient RAM based on your VM requirements (e.g., 4096 MB for SIFT).
    • Network: Add a network device (e.g., VirtIO or E1000) as required.
    • Finalize: Review your settings and click "Finish" to create the skeleton VM.

Importing, Attaching Image

From the Proxmox host's shell, you'll now need to associate the image with the VM you created.

qm importdisk 113 /var/lib/vz/images/sift.qcow2 local-lvm

Once this finishes successfully, you'll go back into the portal's view of the VM's settings. You'll notice in the Hardware tab there is now a hard disk that says Unused Disk 0 (or something similar). Double-click it, select the appropriate Bus/Device option and then click add.
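If you'd rather attach the disk from the shell instead of the portal, qm set can do the same thing. A sketch, assuming VM 113 and that importdisk named the volume vm-113-disk-0 on local-lvm (check the importdisk output for the real name):

qm set 113 --scsi0 local-lvm:vm-113-disk-0   # Attach the imported volume as the VM's first SCSI disk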

How do I know which Bus/Device to select?
  1. Check the Documentation: Most prepackaged images specify their recommended Bus/Device type.
  2. Trial and Error: If documentation isn't clear, try adding the disk using a likely candidate (e.g., SCSI or VirtIO) and boot the VM. Adjust if the image fails to boot or the OS doesn't detect the disk.
  3. Inspect the OS: If you can mount the image or access the OS configuration, check what drivers are pre-installed (e.g., VirtIO drivers for Linux-based VMs).


    Known:
    • Metasploitable2: VirtIO
    • SIFT Workstation: SCSI
    • REMnux: SCSI
    • Kali Linux: VirtIO
    • Parrot Security OS: VirtIO

Setting the Boot Order

Ensure the VM knows to look for the hard disk when it boots:

  1. Go to the Options tab in the VM settings.
  2. Double-click Boot Order, then check the box for the newly attached disk and move it to the top of the priority list.

Start the VM

Fire that puppy up.

Encrypted Client Hello

Background

When you want to visit a website, you might click a link or type in the domain name and boom, in an instant, the page loads on your screen. Behind the scenes, however, a complex exchange occurs between your browser (client) and the website’s server to establish a secure connection. This process, known as the TLS handshake, negotiates how information will be encrypted and transmitted between the two parties. While different versions of TLS (Transport Layer Security) implement this handshake slightly differently, the overall goal remains the same: agree on secure parameters for communication.

The TLS handshake begins when the client sends a Client Hello message to the server. This message contains critical information, such as supported versions of TLS, a list of preferred cryptographic algorithms (cipher suites), and any additional capabilities or settings required to establish a secure channel. These capabilities are communicated through a series of extensions, which serve as modular options that allow the client and server to enable or disable specific features based on what they support.

Client Hello and Extensions

The Client Hello message includes a set of extensions that help tailor the connection to the client’s needs and the server’s capabilities. Some commonly used extensions include:

  • Supported Versions: Indicates the versions of TLS the client can support, such as TLS 1.2 or TLS 1.3.
  • Server Name Indication (SNI): Allows the client to specify the exact hostname it wants to connect to, useful for servers that host multiple domains on a single IP address.
  • Key Share: Provides the client’s public key for Diffie-Hellman or elliptic curve key exchanges, which are used to securely agree on a shared secret.
  • Signature Algorithms: Lists the signature algorithms that the client supports, enabling the server to choose a compatible one for signing handshake messages.
  • Supported Groups: Specifies the elliptic curve groups that the client supports for key exchange.

These extensions offer flexibility and extensibility to the handshake process, enabling advanced features like session resumption, early data transmission, and post-quantum cryptography.
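To see these extensions on the wire, capture a handshake and filter for the Client Hello. A sketch using tshark (the interface name and port are assumptions; adjust for your network):

# Handshake type 1 = Client Hello; -f is the capture filter, -Y the display filter
sudo tshark -i eth0 -f "tcp port 443" -Y "tls.handshake.type == 1" -V | grep -E "Extension:|server_name"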

Cryptography 101


Cryptography is designed to support key security objectives in cybersecurity:

  • Confidentiality,
  • Integrity,
  • Authentication, and
  • Non-repudiation.

Understanding these principles is crucial for implementing secure systems and communications. This is a walkthrough of implementing these goals with a classic example that demonstrates the use of asymmetric key exchange, symmetric encryption, and digital signatures.

Alice and Bob have been demoing cryptography for decades. No reason to retire them now.

Confidentiality

Confidentiality ensures that information is only accessible to those authorized to see it. This is achieved through encryption techniques, which transform readable data (plaintext) into unreadable data (ciphertext). We’ll illustrate this concept through an example involving two parties: Alice and Bob.

Step 1: Generating Asymmetric Keys

Before Alice can securely send data to Bob, they need to establish a secure channel for sharing a symmetric key. Bob first needs to generate an RSA key pair:

# Generate Bob's RSA private key (4096 bits)
openssl genpkey -algorithm RSA -out bob_private.pem -pkeyopt rsa_keygen_bits:4096

# Extract Bob's public key from the private key
openssl rsa -in bob_private.pem -pubout -out bob_public.pem

Now, Bob shares his bob_public.pem file with Alice. This public key will be used to encrypt a symmetric key, ensuring that only Bob can decrypt it using his private key.

Why doesn't Alice encrypt it with her private key and let Bob decrypt with Alice's public key?

In this confidentiality portion of the scenario, Alice wants to ensure only Bob can decrypt. If Alice encrypted the message with her own private key, anyone could use her public key to decrypt it. Using Bob's public key to encrypt ensures that only Bob, who keeps his private key on lockdown, can decrypt. That said, her private key will be used to address the integrity portion of the cryptography objectives.
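A sketch of the next step under those assumptions: Alice generates a random symmetric key and locks it with Bob's public key (filenames are illustrative):

# Alice generates a 256-bit symmetric key
openssl rand -hex 32 > sym.key

# Alice encrypts the symmetric key with Bob's public key; only Bob's private key can recover it
openssl pkeyutl -encrypt -pubin -inkey bob_public.pem -in sym.key -out sym.key.enc

# Bob decrypts it on his side
openssl pkeyutl -decrypt -inkey bob_private.pem -in sym.key.enc -out sym.key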

John-the-Ripper & Hashcat


John-the-Ripper & Hashcat: Password-Cracking 101

Is your password policy truly as strong as you think? There's only one sure way to find out—by testing it! In the world of penetration testing, password cracking is an essential skill for uncovering weak credentials and assessing overall security. This walkthrough will guide you through the use of two powerful password-cracking tools, John the Ripper and Hashcat, and how to leverage them with wordlists generated by CeWL, a tool that scrapes websites for likely passwords. We'll explore how to gather targeted wordlists, feed them into cracking tools, and ultimately test the strength of your password policies.

John the Ripper

A versatile password cracking tool for identifying weak passwords in various encrypted formats.

Hashcat

A powerful, GPU-accelerated password cracker that supports a wide range of hashing algorithms. Where the rubber meets the road (but mostly if you have the GPU to harness the power).

CeWL

A web scraper tool that generates custom wordlists by crawling websites for potential password phrases.
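A sketch of the CeWL-to-cracker flow, with the URL, hash file, and hash type as placeholders:

# Crawl the target site two links deep, keeping words of 6+ characters
cewl https://example.com -d 2 -m 6 -w custom_wordlist.txt

# Feed the scraped wordlist to John the Ripper...
john --wordlist=custom_wordlist.txt --format=Raw-MD5 hashes.txt

# ...or to hashcat (mode 0 = raw MD5)
hashcat -m 0 hashes.txt custom_wordlist.txt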

Setting Up

Kali or Parrot

If you are running Kali Linux or Parrot Sec OS, you may not need to install much, but check to ensure you're running the latest versions.
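On either distro, a quick refresh (package names are the usual Kali/Parrot ones):

sudo apt update
sudo apt install john hashcat cewl   # Installs them, or upgrades to the repo's latest if already present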

Magic Bytes & File Forensics


Here's a walkthrough that includes basic checks for file integrity, metadata analysis, content inspection, and security intelligence gathering. Each step is fleshed out with context to guide the triage process:


1. Basic String Extraction

Use strings to extract readable text from the DOCX file. This helps identify hidden or encoded text, suspicious URLs, or any embedded messages without unzipping the file.

strings suspiciousfile.docx

What to Look For

Scan the output for suspicious phrases, URLs, encoded strings (e.g., base64), or unusual content that doesn't fit the context of the document.


2. File Identification and Format Check

Use file to confirm that the file type is consistent with its extension and to get a basic idea of its format. If the DOCX file claims to be something else, this is a red flag.

file suspiciousfile.docx

What to Look For

Verify that the output indicates the file is a valid ZIP archive, which is the expected format for DOCX files.

Note

When it is some other format type (e.g., JPG), it may end up being helpful to do a Google search on "jpg file structure." This will help you find documentation on the magic bytes and other pertinent file-structure components specific to that file type.
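A quick way to check the magic bytes yourself. A sketch, assuming the DOCX from above (a valid one begins with the ZIP signature "PK", bytes 50 4b):

xxd -l 8 suspiciousfile.docx   # Show the first 8 bytes; expect 504b ("PK") for a real DOCX/ZIP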

VM Packet Capture Considerations

When running a virtual machine (VM) to capture network data, there's a potential problem that can arise due to network offloading. This is a specific issue related to capturing network data in a virtualized environment. Network offloading can be likened to missing pages in a notebook; when you capture packets without network offloading, it's like having a complete set of notes with all the necessary information. But when network offloading occurs, it's as if some crucial pages are missing or not present, making it harder to understand the original context and intent.


What Happens?

  1. Network Offloading: Many modern network cards and VM hosts support network offloading, allowing the NIC or host to handle certain aspects of the TCP protocol stack on behalf of the guest VM.

  2. MSS (Maximum Segment Size): The MSS is a parameter that specifies the largest amount of data that a device is willing to receive in a single TCP segment. While MSS itself is not altered by offloading, offloading can impact how TCP segments are handled, potentially obscuring details in captured traffic.

  3. VM Network Stack: The VM’s network stack, specifically the virtual NIC, handles TCP segments. Disabling offloading within the VM ensures that packet capture tools see traffic as it is truly handled by the VM without NIC-induced alterations.

  4. Impact on Captured Traffic: When offloading is enabled, the virtual NIC or the host might process packets differently, altering their structure in ways that are not reflected accurately in packet captures.

The Impact

  • Altered Packet Handling: Network offloading features like TSO and GRO can segment or reassemble packets in ways that make captured data appear differently than the actual traffic flow, which can complicate analysis and debugging.
  • Obscured TCP Details: Important TCP details, including segment boundaries, can be masked by offloading, making the captured data less representative of true network behavior.

This can lead to challenges when trying to analyze or debug network problems, as the captured data may not accurately represent the true network behavior.

Disabling Offloading

To disable offloading within a VM, use the following command:
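For a Linux guest, a typical invocation looks like this (eth0 is a placeholder; check your interface name with ip link):

sudo ethtool -K eth0 tso off gso off gro off   # Disable TCP segmentation, generic segmentation, and generic receive offload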

RegEx Primer



Finding Patterns in Logs (or Other Files)

What is Regex?

Regular Expressions (regex) are sequences of characters that form search patterns. They are used for matching, searching, and manipulating text, making them incredibly useful for analyzing data, detecting patterns, and automating tasks. In cybersecurity, regex can help identify sensitive information, extract useful data from logs, and detect anomalies.

Different Regex Formats

Regex patterns come in several different formats, each suited to specific use cases:

  • Basic Regular Expressions (BRE): Simple, portable expressions that match literal text or basic patterns. Use when you need simplicity without advanced matching requirements.
  • Extended Regular Expressions (ERE): Adds flexibility with operators like +, ?, and {}, useful for moderately complex patterns.
  • Perl-Compatible Regular Expressions (PCRE): Highly versatile, supporting lookaheads, lookbehinds, and more. Ideal for complex patterns and advanced searches.
  • POSIX Regular Expressions: Found in POSIX tools (like awk), with specific character classes like [[:alnum:]]. Choose for cross-platform consistency.

Basic Concepts of Regex

Literal Characters

Match exactly what you type (e.g., abc matches "abc").

Metacharacters

Special characters with unique functions:

  • .: Matches any character except a newline.
  • ^: Anchors the match to the start of a line.
  • $: Anchors the match to the end of a line.
  • \: Escapes a metacharacter to treat it as a literal.

Character Classes

Define a set of characters:

  • [0-9] or \d: Matches any digit.
  • [a-zA-Z]: Matches any letter (uppercase or lowercase).

Quantifiers

Define how many times an element must appear:

  • *: Matches 0 or more times.
  • +: Matches 1 or more times.
  • ?: Matches 0 or 1 time.
  • {n,m}: Matches between n (minimum) and m (maximum) times.

Grouping and Capturing

Parentheses () group patterns and capture matched text.
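Pulling these pieces together, a sketch that combines anchors, character classes, quantifiers, and grouping (logs.txt is a placeholder):

# Match lines that start with an ISO-style date, grouping the date portion
grep -E '^([0-9]{4}-[0-9]{2}-[0-9]{2})' logs.txt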

Why Use Regex in Cybersecurity?

  • Log Analysis: Quickly search and filter through logs to find specific events, IP addresses, error codes, or patterns.
  • Data Extraction: Extract sensitive information like credit card numbers, email addresses, or phone numbers.
  • Intrusion Detection: Identify patterns indicative of malicious activity, like SQL injection attempts, XSS payloads, or anomalous user behavior.
  • Data Sanitization: Validate and sanitize inputs to prevent injection attacks.
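A couple of sketches of those use cases with grep -E (log names are placeholders):

# Log analysis: pull every IPv4-looking address out of an auth log
grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' auth.log | sort -u

# Intrusion detection: flag crude SQL injection attempts in a web access log
grep -Ei "union(.*)select|' or 1=1" access.log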

Choosing a Regex Format

Basic Regular Expressions (BRE)

When to Use:

Use BRE when working with simple patterns and in cases where compatibility with various systems is a factor.

Example:
grep -il 'secret\|confidential\|sensitive' /path/to/file.txt

This command uses BRE to search for "secret," "confidential," or "sensitive" in the file: -i makes the match case-insensitive, -l prints just the matching filenames, and the escaped \| is GNU grep's BRE alternation.