In certain scenarios, it may be necessary to collect artifacts from target machines in an offline mode, ensuring minimal alteration or contamination of the digital environment. Various tools are available for this purpose, designed to extract information discreetly and effectively. These tools enable forensic investigators to gather crucial data without compromising the integrity of the evidence.
Importing an offline collection can be done via the Server.Utils.ImportCollection artifact. This artifact will inspect the zip file from a path specified on the server and import it as a new collection (with a new collection ID) into either a specified client or a new randomly generated client.
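The artifact is normally launched from the GUI as a server artifact, but it can also be triggered with a VQL query from the command line. A minimal sketch, assuming a server config at server.config.yaml and the default ClientId/Path parameter names (verify them against your Velociraptor version):

velociraptor --config server.config.yaml query "SELECT * FROM Artifact.Server.Utils.ImportCollection(ClientId='auto', Path='/tmp/Collection-HOSTNAME.zip')"

With ClientId set to auto, a new randomly generated client is created to hold the imported collection.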
Using the KAPE GUI to analyze the artifacts
Instead of the Velociraptor GUI, you can use the KAPE GUI to analyze and process all the artifacts.
Next, you can use Timeline Explorer to analyze the results.
KAPE Agent Collector
The KAPE agent can also be used to collect the data.
The GUI can be used to select what kind of collection we want to do:
To collect the artifacts on a remote machine, you just need the kape.exe collector together with its Modules and Targets folders. See below:
To execute it, open a new command line with administrator rights and paste the command obtained from step 4 of the GUI.
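The exact command depends on the targets and modules selected in the GUI. A typical local triage collection looks something like the following sketch (the target name and paths are only illustrative):

kape.exe --tsource C: --tdest C:\Temp\kape_out --target KapeTriage --zip triage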
Alternatively, you can access the target machine, mount the KAPE server folder, and run KAPE remotely without touching the binary on disk:
net use k: \\kape-server-vm\triage /user:kape-server-vm\analyst
k:\kape\kape.exe --tsource C --target RegistryHives --tdest k:\kape_out\tdest --vss
Building your agent collector
By developing your own agent, you can also collect the raw files from the target Windows machine. This can be very useful for in-depth post-analysis.
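Keep in mind that locked files such as registry hives cannot be copied with a plain file copy, so a custom agent usually relies on Volume Shadow Copies. As an illustrative sketch (the destination path is an assumption), esentutl can copy a locked hive through VSS on recent Windows versions:

esentutl.exe /y C:\Windows\AppCompat\Programs\Amcache.hve /vss /d D:\triage\Amcache.hve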
Next, see some of the raw artifacts collected.
Example of the contents of the Amcache directory:
Full disk image
In order to obtain a complete snapshot of the target machines, the following tools can be used:
FTK IMAGER
Disk2vhd
More details on how to use it can be found here:
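As a quick sketch, Disk2vhd can also be run from the command line; the output path below is only illustrative, so check the Sysinternals documentation for the exact options of your version:

disk2vhd64.exe -accepteula * D:\images\target-host.vhdx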
chntpw
chntpw can also be used to reset local Windows administrator passwords before starting an analysis.
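A quick sketch of typical usage against an offline SAM hive (the mount path is illustrative): list the local accounts, then edit a specific one interactively:

chntpw -l /mnt/windows/Windows/System32/config/SAM
chntpw -u Administrator /mnt/windows/Windows/System32/config/SAM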
Autopsy
TestDisk & PhotoRec
These tools can be used to recover files from damaged devices.
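For example, PhotoRec can carve files from a disk image into a recovery directory; in this sketch the image and output paths are only illustrative:

photorec /log /d /cases/recovered/ /cases/images/usb.dd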
Velociraptor Analysis
After getting the ZIP files with all the artifacts, they must be imported into the Velociraptor GUI using the Server.Utils.ImportCollection artifact described above.
After that, click on "Search" and select the target machine ID you want to analyze.
In addition, you can create a new hunt and add the notebook logs to it. This is simply a way to split the results and perform a better analysis.
$ ./chainsaw analyse srum --software ./SOFTWARE ./SRUDB.dat -o ./output.json
██████╗██╗ ██╗ █████╗ ██╗███╗ ██╗███████╗ █████╗ ██╗ ██╗
██╔════╝██║ ██║██╔══██╗██║████╗ ██║██╔════╝██╔══██╗██║ ██║
██║ ███████║███████║██║██╔██╗ ██║███████╗███████║██║ █╗ ██║
██║ ██╔══██║██╔══██║██║██║╚██╗██║╚════██║██╔══██║██║███╗██║
╚██████╗██║ ██║██║ ██║██║██║ ╚████║███████║██║ ██║╚███╔███╔╝
╚═════╝╚═╝ ╚═╝╚═╝ ╚═╝╚═╝╚═╝ ╚═══╝╚══════╝╚═╝ ╚═╝ ╚══╝╚══╝
By WithSecure Countercept (@FranticTyping, @AlexKornitzer)
[+] ESE database file loaded from "/home/user/Documents/SRUDB.dat"
[+] Parsing the ESE database...
[+] SOFTWARE hive loaded from "/home/user/Documents/SOFTWARE"
[+] Parsing the SOFTWARE registry hive...
[+] Analysing the SRUM database...
[+] Details about the tables related to the SRUM extensions:
+------------------------------------------+--------------------------------------------+--------------------------------------+-------------------------+-------------------------+
| Table GUID | Table Name | DLL Path | Timeframe of the data | Expected Retention Time |
+------------------------------------------+--------------------------------------------+--------------------------------------+-------------------------+-------------------------+
| {5C8CF1C7-7257-4F13-B223-970EF5939312} | App Timeline Provider | %SystemRoot%\System32\eeprov.dll | 2022-03-10 16:34:59 UTC | 7 days |
| | | | 2022-03-10 21:10:00 UTC | |
+------------------------------------------+--------------------------------------------+--------------------------------------+-------------------------+-------------------------+
| {B6D82AF1-F780-4E17-8077-6CB9AD8A6FC4} | Tagged Energy Provider | %SystemRoot%\System32\eeprov.dll | No records | 3 days |
+------------------------------------------+--------------------------------------------+--------------------------------------+-------------------------+-------------------------+
| {D10CA2FE-6FCF-4F6D-848E-B2E99266FA86} | WPN SRUM Provider | %SystemRoot%\System32\wpnsruprov.dll | 2022-03-10 20:09:00 UTC | 60 days |
| | | | 2022-03-10 21:09:00 UTC | |
+------------------------------------------+--------------------------------------------+--------------------------------------+-------------------------+-------------------------+
| {D10CA2FE-6FCF-4F6D-848E-B2E99266FA89} | Application Resource Usage Provider | %SystemRoot%\System32\appsruprov.dll | 2022-03-10 16:34:59 UTC | 60 days |
| | | | 2022-03-10 21:10:00 UTC | |
+------------------------------------------+--------------------------------------------+--------------------------------------+-------------------------+-------------------------+
| {FEE4E14F-02A9-4550-B5CE-5FA2DA202E37} | Energy Usage Provider | %SystemRoot%\System32\energyprov.dll | No records | 60 days |
+------------------------------------------+--------------------------------------------+--------------------------------------+-------------------------+-------------------------+
| {FEE4E14F-02A9-4550-B5CE-5FA2DA202E37}LT | Energy Usage Provider (Long Term) | %SystemRoot%\System32\energyprov.dll | No records | 1820 days |
+------------------------------------------+--------------------------------------------+--------------------------------------+-------------------------+-------------------------+
| {973F5D5C-1D90-4944-BE8E-24B94231A174} | Windows Network Data Usage Monitor | %SystemRoot%\System32\nduprov.dll | 2022-03-10 16:34:59 UTC | 60 days |
| | | | 2022-03-10 21:10:00 UTC | |
+------------------------------------------+--------------------------------------------+--------------------------------------+-------------------------+-------------------------+
| {7ACBBAA3-D029-4BE4-9A7A-0885927F1D8F} | vfuprov | %SystemRoot%\System32\vfuprov.dll | 2022-03-10 20:09:00 UTC | 60 days |
| | | | 2022-03-10 21:10:00 UTC | |
+------------------------------------------+--------------------------------------------+--------------------------------------+-------------------------+-------------------------+
| {DA73FB89-2BEA-4DDC-86B8-6E048C6DA477} | Energy Estimation Provider | %SystemRoot%\System32\eeprov.dll | No records | 7 days |
+------------------------------------------+--------------------------------------------+--------------------------------------+-------------------------+-------------------------+
| {DD6636C4-8929-4683-974E-22C046A43763} | Windows Network Connectivity Usage Monitor | %SystemRoot%\System32\ncuprov.dll | 2022-03-10 16:34:59 UTC | 60 days |
| | | | 2022-03-10 21:10:00 UTC | |
+------------------------------------------+--------------------------------------------+--------------------------------------+-------------------------+-------------------------+
[+] SRUM database parsed successfully
[+] Saving output to "/home/user/Documents/output.json"
[+] Saved output to "/home/user/Documents/output.json"
Zircolite
python3 zircolite.py --evtx Logs/ --package
Hayabusa
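Hayabusa runs Sigma-based detections over a directory of EVTX files and produces a CSV timeline. A minimal sketch for recent Hayabusa releases (paths are illustrative):

hayabusa.exe csv-timeline -d .\Logs -o results.csv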
Import the logs into Timeline Explorer and add a visualization filter by LEVEL > COMPUTER > RULE.
DeepBlueCLI & WELA & APT-Hunter
These tools are also useful for collecting evidence from EVTX files. For example, WELA can detail authentication events on the machine by user and authentication type.
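DeepBlueCLI, in turn, can simply be pointed at a saved Security log (the path below is illustrative):

.\DeepBlue.ps1 .\Security.evtx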
Manual Analysis with Eric Zimmerman's Tools
Each facet of analysis is delineated into subsections accessible through the navigation menu on this page.
The files under analysis are:
Timeline Explorer, EZViewer, and Hasher are handy tools for examining all of the artifacts side by side.
Timeline Explorer
Open all the CSV files after they have been normalized by the individual parsing tools.
EZViewer
Open individual files (docx, csv, pdf, etc.).
Hasher
Hash everything.
Dissect
Dissect is an incident response framework built from various parsers and implementations of file formats. Tying this all together, Dissect allows you to work with tools named target-query and target-shell to quickly gain access to forensic artefacts, such as Runkeys, Prefetch files, and Windows Event Logs, just to name a few!
And the best thing: all in a singular way, regardless of underlying container (E01, VMDK, QCoW), filesystem (NTFS, ExtFS, FFS), or Operating System (Windows, Linux, ESXi) structure / combination. You no longer have to bother extracting files from your forensic container, mounting them (in the case of VMDKs and such), retrieving the MFT, and parsing it using a separate tool, to finally create a timeline to analyse. This is all handled under the hood by Dissect in a user-friendly manner.
If we take the example above, you can start analysing parsed MFT entries by just using a command like target-query -f mft <PATH_TO_YOUR_IMAGE>!
target-shell <PATH_TO_YOUR_IMAGE>
Download artifacts from a raw image (VMDK, E01, RAW, etc.)
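One way to pull individual files out of the image is through Dissect's filesystem utilities. The sketch below assumes your Dissect version ships target-fs with a cp subcommand and an -o output flag (check target-fs --help); the paths are illustrative:

target-fs xxx-flat.vmdk cp c:/Windows/System32/config/SOFTWARE -o ./extracted/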
After that, convert all the JSONL files from the Dissect output into CSV files to import them into Timeline Explorer!
import os
import csv
import json
import sys

# Function to convert JSONL to CSV, skipping the first line
def convert_jsonl_to_csv(jsonl_file, csv_file):
    with open(jsonl_file, 'r') as json_file:
        with open(csv_file, 'w', newline='') as csv_out:
            csv_writer = csv.writer(csv_out)
            # Skip the first line
            next(json_file)
            for line in json_file:
                data = json.loads(line)
                if csv_out.tell() == 0:
                    # Write the header in the first line
                    csv_writer.writerow(data.keys())
                csv_writer.writerow(data.values())

# Ask the user for the base directory
base_dir = input('Please enter the base directory path: ')

# Check if the directory exists
if not os.path.isdir(base_dir):
    print(f"The path {base_dir} does not exist. Please try again.")
else:
    # Count total number of .jsonl files to process
    total_files = sum(
        sum(1 for file in files if file.endswith('.jsonl'))
        for _, _, files in os.walk(base_dir)
    )
    progress = 0
    finished_files = []  # List to keep track of finished files

    # Walk through the directories and subdirectories recursively
    for root, dirs, files in os.walk(base_dir):
        for file in files:
            if file.endswith('.jsonl'):
                jsonl_path = os.path.join(root, file)
                csv_path = os.path.join(root, file.replace('.jsonl', '.csv'))
                convert_jsonl_to_csv(jsonl_path, csv_path)
                progress += 1
                finished_files.append(file)

                # Show files converted so far
                print("\nFiles converted so far:")
                for finished in finished_files:
                    print(finished)

                # Print progress bar with file name
                bar_length = 40  # Length of the progress bar
                progress_bar = '#' * int(bar_length * progress / total_files)
                sys.stdout.write(f'\rConverting {file}: [{progress_bar:<{bar_length}}] {progress}/{total_files} files converted')
                sys.stdout.flush()

    print("\nAll files converted!")
Extract all the Security event logs from the collector ZIP files
import zipfile
import os
import shutil

# List of collector folders
folders = [
    "cxxxx000031/",
    "Cxxxx00010CV/"
]

# Output directory
output_dir = "output_evtx"
os.makedirs(output_dir, exist_ok=True)

# Loop over each folder and process its ZIP file
for folder in folders:
    # Locate any ZIP file whose name starts with "Collection-"
    for filename in os.listdir(folder):
        if filename.startswith("Collection-") and filename.endswith(".zip"):
            zip_path = os.path.join(folder, filename)
            prefix = os.path.basename(os.path.normpath(folder))  # Prefix with the folder name
            with zipfile.ZipFile(zip_path, 'r') as zip_ref:
                # Look for the Security.evtx file inside the ZIP
                for file in zip_ref.namelist():
                    if "Security.evtx" in file:
                        # Extract and rename the file using the folder prefix
                        output_file_path = os.path.join(output_dir, f"{prefix}_Security.evtx")
                        with zip_ref.open(file) as source, open(output_file_path, "wb") as target:
                            shutil.copyfileobj(source, target)
                        print(f"Extracted: {output_file_path}")
                        break  # Stop after finding Security.evtx
            break  # Move on to the next folder after finding a ZIP starting with "Collection-"
target-yara xxx-flat.vmdk -p 'c:\Users' --check -r yara/ | tee -a output.log
target-yara xxx-flat.vmdk -p 'c:\Windows\Temp' --check -r yara/ | tee -a output.log
target-yara xxx-flat.vmdk -p 'c:\ProgramData' --check -r yara/ | tee -a output.log
My bundle:
Convert from yara output into CSV
import csv
import re

def parse_yara_output(line):
    pattern = r"hostname='(?P<hostname>.*?)' domain='(?P<domain>.*?)' path='(?P<path>.*?)' digest=\(md5=(?P<md5>.*?), sha1=(?P<sha1>.*?), sha256=(?P<sha256>.*?)\) rule='(?P<rule>.*?)' tags=\[(?P<tags>.*?)\] namespace='(?P<namespace>.*?)'"
    match = re.search(pattern, line)
    if match:
        return match.groupdict()
    return None

def main():
    input_filename = input("Enter the name of the YARA output file: ")
    output_filename = input_filename.split('.')[0] + ".csv"

    with open(input_filename, 'r') as infile:
        yara_data = infile.readlines()

    parsed_data = [parse_yara_output(line) for line in yara_data if parse_yara_output(line)]

    fields = ["hostname", "domain", "path", "md5", "sha1", "sha256", "rule", "tags", "namespace"]
    with open(output_filename, 'w', newline='') as csvfile:
        csvwriter = csv.DictWriter(csvfile, fieldnames=fields)
        csvwriter.writeheader()
        for data in parsed_data:
            # Convert the tags list into a string, if present
            if data['tags']:
                data['tags'] = ", ".join(data['tags'].split(", "))
            csvwriter.writerow(data)

    print(f"CSV file generated: {output_filename}")

if __name__ == "__main__":
    main()
After that, you can import the CSV into Timeline Explorer and group the output by matched rules ;)
Yara Repositories:
Loki Yara
Awesome Yara
Dissect Documentation:
Dissect Tutorials
gKAPE (offline parser)
Using the gkape tool to parse the telemetry obtained from the collector (raw log files from Windows).
After getting all the zip outputs from the target machines, the following procedure should be executed:
The files should be prepared for analysis
The following Python script should be executed in the root folder:
import os
import shutil

def list_folders(directory):
    try:
        # Construct the full path
        full_path = os.path.abspath(directory)
        # Check if the directory exists
        if not os.path.exists(full_path):
            raise FileNotFoundError(f"Directory '{directory}' does not exist.")
        # List all entries in the directory
        entries = os.listdir(full_path)
        # Filter only directories
        folders = [entry for entry in entries if os.path.isdir(os.path.join(full_path, entry))]
        return folders
    except Exception as e:
        print(f"Error: {e}")
        return []

def copy_thumbcache_files(root_folder, output_folder):
    try:
        # Create output folder if it doesn't exist
        if not os.path.exists(output_folder):
            os.makedirs(output_folder)
        # List all folders in the root folder
        folders = list_folders(root_folder)
        # Iterate through each folder
        for folder in folders:
            folder_path = os.path.join(root_folder, folder)
            explorer_path = os.path.join(folder_path, 'AppData', 'Local', 'Microsoft', 'Windows', 'Explorer')
            # Check if the Explorer folder exists
            if os.path.exists(explorer_path):
                # Iterate through files in the Explorer folder
                files = os.listdir(explorer_path)
                # Copy thumbcache files to the output folder
                for file in files:
                    if file.startswith("thumbcache"):
                        file_path = os.path.join(explorer_path, file)
                        shutil.copy(file_path, os.path.join(output_folder, file))
                        print(f"Copied '{file}' to '{output_folder}'")
    except Exception as e:
        print(f"Error: {e}")

# Root folder to search for user folders
root_folder = 'uploads\\auto\\C%3A\\Users'
# Output folder for copied thumbcache files
output_folder = 'output_thumbcache'

# Copy thumbcache files to the output folder
copy_thumbcache_files(root_folder, output_folder)
Change the following .mkape module files:
Description: 'thumbcache_viewer_cmd.exe: process Windows Thumbcache files'
Category: FileKnowledge
Author: Dennis Reneau, Kevin Pagano
Version: 2.0
Id: 8896483c-563a-4a28-ad8a-07ba74a54a63
BinaryUrl: https://github.com/thumbcacheviewer/thumbcacheviewer/releases/download/v1.0.1.8/thumbcache_viewer_cmd.zip
ExportFormat: html
Processors:
    -
        Executable: thumbcache_viewer_cmd.exe
        CommandLine: -o %destinationDirectory%\ThumbCache_Results -w -c -z -d %sourceDirectory%\output_thumbcache
        ExportFormat: html
        ExportFile: thumbcache_results.csv
# Documentation
# Uses Thumbcache Viewer (https://github.com/thumbcacheviewer)
# Designed to work with the Thumbcache DB Target collection created by Eric Zimmerman.
# Executable author Eric Kutcher.
# Point msource (Module Source) to the Thumbcache folder or use the Target/Module option of KAPE.
# Options -w HTML Report | -c CSV Report | -z Exclude 0 byte files | -n Prevent Thumbnail extraction | -o Output
# 2023-06-27 Updated by Kevin Pagano: Updated binary URL, changed source to directory for parsing to HTML properly if more than one DB file
Description: Tool to parse Windows Background Intelligent Transfer Service database files
Category: GitHub
Author: Pedro Sanchez Cordero (conexioninversa)
Version: 1.0
Id: acdc62ed-b1a1-426f-8d5e-e53687284410
BinaryUrl: https://github.com/conexioninversa/BitsParser/blob/master/BitsParser.exe
ExportFormat: json
Processors:
    -
        Executable: BitsParser.exe
        CommandLine: -i %sourceDirectory%\uploads\auto\C%3A\ProgramData\Microsoft\Network\Downloader\ -o %destinationDirectory%\BitsParser_Results.json
        ExportFormat: json
# Documentation
# https://github.com/fireeye/BitsParser
# By default BitsParser will process files in the %ALLUSERSPROFILE%\Microsoft\Network\Downloader. The script can be used with offline files from alternate operating systems.
# By default BitsParser will only parse and output active jobs and files. To carve deleted entries from the database use --carvedb. To carve entries from all file types, including transaction logs, use --carveall
# https://www.sans.org/reading-room/whitepapers/forensics/bits-forensics-39195
# https://cyberforensicator.com/2019/05/12/using-mitre-attck-for-forensics-bits-jobs-t1197/
Description: 'Ese2csv: Parsing SRUM Database'
Category: SRUMDatabase
Author: Max Ye
Version: 1.0
Id: 852b64c1-fd0e-47ec-8aa4-0994dbf5d8d1
BinaryUrl: https://github.com/MarkBaggett/ese-analyst/archive/master.zip
ExportFormat: csv
Processors:
    -
        Executable: ese-analyst\ese2csv.exe
        CommandLine: -o %destinationDirectory% -p srudb_plugin --plugin-args "%sourceDirectory%\uploads\auto\C%3A\Windows\System32\config\SOFTWARE" -- "%sourceDirectory%\uploads\auto\C%3A\Windows\System32\sru\SRUDB.dat"
        ExportFormat: csv
# Documentation
# https://github.com/MarkBaggett/ese-analyst
# Create a folder "ese-analyst" within the ".\KAPE\Modules\bin" folder
# Place both files "ese2csv.exe" and "srudb_plugin.py" into ".\KAPE\Modules\bin\ese-analyst"
# When using this Module, the Module source should be set to OS drive root directory (e.g. C:\), because parameters use absolute paths