In certain scenarios, it may be necessary to collect artifacts from target machines in an offline mode, ensuring minimal alteration or contamination of the digital environment. Various tools are available for this purpose, designed to extract information discreetly and effectively. These tools enable forensic investigators to gather crucial data without compromising the integrity of the evidence.
Using the KAPE GUI to analyze the artifacts
Instead of the Velociraptor GUI, you can use the KAPE GUI (gkape) to analyze and process all the artifacts.
If you are running it locally, the "Module Source" should be the folder where the obtained artifacts are. 😎
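If you prefer the command line over gkape, a roughly equivalent run looks like the sketch below (the paths are placeholders; !EZParser is the compound module from the standard KapeFiles distribution that runs the EZ Tools parsers):

kape.exe --msource C:\collected_artifacts --mdest C:\kape_out --module !EZParser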
Next, you can use "Timeline Explorer" to analyze the result.
This tool can be used to recover files from damaged devices.
Velociraptor Analysis
After getting the ZIP files with all the artifacts, they must be imported into the Velociraptor GUI.
Importing an offline collection can be done via the Server.Utils.ImportCollection artifact. This artifact will inspect the zip file from a path specified on the server and import it as a new collection (with a new collection ID) into either a specified client or a new randomly generated client.
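As a sketch, the import can also be launched from a server notebook with a VQL query along the following lines (parameter names may vary between Velociraptor versions; setting ClientId to "auto" imports into a new randomly generated client):

SELECT * FROM Artifact.Server.Utils.ImportCollection(
    ClientId="auto",
    Path="/path/to/Collection-HOSTNAME.zip")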
After that, click on "Search" and select the target machine ID you want to analyze.
Then select the flow ID of the imported collection, click on "Notebook", and all the data is presented! 👍
In addition, you can create a new hunt and add the notebook logs to it. This is just a way to split the results in order to perform a better analysis.
These tools are also useful for collecting evidence from EVTX event logs. For example, WELA can detail the authentications on the machine per user and logon type.
Each part of the analysis is split into subsections accessible through the navigation menu on this page.
The files under analysis are:
Timeline Explorer, EZViewer, and Hasher are handy tools for examining all the artifacts together.
Timeline Explorer
Opens all the CSV files after they have been normalized by the specific tools.
EZViewer
Opens single files (docx, csv, pdf, etc.).
Hasher
Hashes everything.
Dissect
Dissect is an incident response framework built from various parsers and implementations of file formats. Tying this all together, Dissect allows you to work with tools named target-query and target-shell to quickly gain access to forensic artefacts, such as Runkeys, Prefetch files, and Windows Event Logs, just to name a few!
And the best thing: all in a singular way, regardless of underlying container (E01, VMDK, QCoW), filesystem (NTFS, ExtFS, FFS), or Operating System (Windows, Linux, ESXi) structure / combination. You no longer have to bother extracting files from your forensic container, mount them (in case of VMDKs and such), retrieve the MFT, and parse it using a separate tool, to finally create a timeline to analyse. This is all handled under the hood by Dissect in a user-friendly manner.
If we take the example above, you can start analysing parsed MFT entries by just using a command like target-query -f mft <PATH_TO_YOUR_IMAGE>!
target-shell <PATH_TO_YOUR_IMAGE>
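Inside the shell you can browse the image's virtual filesystem with familiar commands; a short, hedged example (the system drive may be exposed as c: or sysvol depending on the target):

ls c:/Users
ls c:/Windows/System32/winevt/Logs
exit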
Download artifacts from the raw image (VMDK, E01, RAW, etc.):
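One hedged way to produce those artifacts as JSONL is to pipe target-query into rdump (the function names follow the Dissect documentation; the jsonfile:// writer syntax may differ between flow.record versions, so check rdump -h):

target-query -f mft xxx-flat.vmdk | rdump -w jsonfile://mft.jsonl
target-query -f evtx xxx-flat.vmdk | rdump -w jsonfile://evtx.jsonl
target-query -f prefetch xxx-flat.vmdk | rdump -w jsonfile://prefetch.jsonl
target-query -f runkeys xxx-flat.vmdk | rdump -w jsonfile://runkeys.jsonl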
After that, convert all the JSONL files from the Dissect output into CSV files so they can be imported into Timeline Explorer!
import os
import csv
import json
import sys

# Function to convert JSONL to CSV, skipping the first line
def convert_jsonl_to_csv(jsonl_file, csv_file):
    with open(jsonl_file, 'r') as json_file:
        with open(csv_file, 'w', newline='') as csv_out:
            csv_writer = csv.writer(csv_out)
            # Skip the first line
            next(json_file)
            for line in json_file:
                data = json.loads(line)
                if csv_out.tell() == 0:
                    # Write the header on the first line
                    csv_writer.writerow(data.keys())
                csv_writer.writerow(data.values())

# Ask the user for the base directory
base_dir = input('Please enter the base directory path: ')

# Check if the directory exists
if not os.path.isdir(base_dir):
    print(f"The path {base_dir} does not exist. Please try again.")
else:
    # Count the total number of .jsonl files to process
    total_files = sum(
        sum(1 for file in files if file.endswith('.jsonl'))
        for _, _, files in os.walk(base_dir)
    )

    progress = 0
    finished_files = []  # List to keep track of finished files

    # Walk through the directories and subdirectories recursively
    for root, dirs, files in os.walk(base_dir):
        for file in files:
            if file.endswith('.jsonl'):
                jsonl_path = os.path.join(root, file)
                csv_path = os.path.join(root, file.replace('.jsonl', '.csv'))
                convert_jsonl_to_csv(jsonl_path, csv_path)
                progress += 1
                finished_files.append(file)

                # Show the files converted so far
                print("\nFiles converted so far:")
                for finished in finished_files:
                    print(finished)

                # Print a progress bar with the current file name
                bar_length = 40  # Length of the progress bar
                progress_bar = '#' * int(bar_length * progress / total_files)
                sys.stdout.write(f'\rConverting {file}: [{progress_bar:<{bar_length}}] {progress}/{total_files} files converted')
                sys.stdout.flush()

    print("\nAll files converted!")
Dump all the Security event logs from the Collector ZIP files
import zipfile
import os
import shutil

# List of collector output folders to process
folders = [
    "cxxxx000031/",
    "Cxxxx00010CV/"
]

# Output directory
output_dir = "output_evtx"
os.makedirs(output_dir, exist_ok=True)

# Loop through each folder to process its ZIP file
for folder in folders:
    # Locate any ZIP file whose name starts with "Collection-"
    for filename in os.listdir(folder):
        if filename.startswith("Collection-") and filename.endswith(".zip"):
            zip_path = os.path.join(folder, filename)
            prefix = os.path.basename(os.path.normpath(folder))  # Prefix with the folder name
            with zipfile.ZipFile(zip_path, 'r') as zip_ref:
                # Look for the Security.evtx file inside the ZIP
                for file in zip_ref.namelist():
                    if "Security.evtx" in file:
                        # Extract and rename the file with the folder prefix
                        output_file_path = os.path.join(output_dir, f"{prefix}_Security.evtx")
                        with zip_ref.open(file) as source, open(output_file_path, "wb") as target:
                            shutil.copyfileobj(source, target)
                        print(f"Extracted: {output_file_path}")
                        break  # Stop after finding Security.evtx
            break  # Move on to the next folder after finding a ZIP starting with "Collection-"
# Run the YARA rules in yara/ against the paths most likely to contain dropped files
target-yara xxx-flat.vmdk -p 'c:\Users' --check -r yara/ | tee -a output.log
target-yara xxx-flat.vmdk -p 'c:\Windows\Temp' --check -r yara/ | tee -a output.log
target-yara xxx-flat.vmdk -p 'c:\ProgramData' --check -r yara/ | tee -a output.log
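The yara/ directory simply holds standard YARA rule files; a minimal, purely illustrative rule (the strings are hypothetical, not a detection to rely on):

rule Suspicious_Temp_Executable
{
    meta:
        description = "Illustrative only: PE file referencing Invoke-Mimikatz"
    strings:
        $mz = { 4D 5A }
        $s1 = "Invoke-Mimikatz" ascii wide nocase
    condition:
        $mz at 0 and $s1
}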
Using the gkape tool to parse the telemetry obtained from the collector (raw log files from Windows).
After getting all the ZIP outputs from the target machines, the following procedure should be executed:
The files should be prepared for analysis.
The following Python script should be executed from the root folder:
import os
import shutil

def list_folders(directory):
    try:
        # Construct the full path
        full_path = os.path.abspath(directory)
        # Check if the directory exists
        if not os.path.exists(full_path):
            raise FileNotFoundError(f"Directory '{directory}' does not exist.")
        # List all entries in the directory
        entries = os.listdir(full_path)
        # Filter only directories
        folders = [entry for entry in entries if os.path.isdir(os.path.join(full_path, entry))]
        return folders
    except Exception as e:
        print(f"Error: {e}")
        return []

def copy_thumbcache_files(root_folder, output_folder):
    try:
        # Create output folder if it doesn't exist
        if not os.path.exists(output_folder):
            os.makedirs(output_folder)
        # List all folders in the root folder
        folders = list_folders(root_folder)
        # Iterate through each folder
        for folder in folders:
            folder_path = os.path.join(root_folder, folder)
            explorer_path = os.path.join(folder_path, 'AppData', 'Local', 'Microsoft', 'Windows', 'Explorer')
            # Check if the Explorer folder exists
            if os.path.exists(explorer_path):
                # Iterate through files in the Explorer folder
                files = os.listdir(explorer_path)
                # Copy thumbcache files to the output folder
                for file in files:
                    if file.startswith("thumbcache"):
                        file_path = os.path.join(explorer_path, file)
                        shutil.copy(file_path, os.path.join(output_folder, file))
                        print(f"Copied '{file}' to '{output_folder}'")
    except Exception as e:
        print(f"Error: {e}")

# Root folder to search for user folders
root_folder = 'uploads\\auto\\C%3A\\Users'
# Output folder for copied thumbcache files
output_folder = 'output_thumbcache'

# Copy thumbcache files to the output folder
copy_thumbcache_files(root_folder, output_folder)
Changing the following mkape files:
Description: 'thumbcache_viewer_cmd.exe: process Windows Thumbcache files'
Category: FileKnowledge
Author: Dennis Reneau, Kevin Pagano
Version: 2.0
Id: 8896483c-563a-4a28-ad8a-07ba74a54a63
BinaryUrl: https://github.com/thumbcacheviewer/thumbcacheviewer/releases/download/v1.0.1.8/thumbcache_viewer_cmd.zip
ExportFormat: html
Processors:
-
Executable: thumbcache_viewer_cmd.exe
CommandLine: -o %destinationDirectory%\ThumbCache_Results -w -c -z -d %sourceDirectory%\output_thumbcache
ExportFormat: html
ExportFile: thumbcache_results.csv
# Documentation
# Uses Thumbcache Viewer (https://github.com/thumbcacheviewer)
# Designed to work with the Thumbcache DB Target collection created by Eric Zimmerman.
# Executable author Eric Kutcher.
# Point msource (Module Source) to the Thumbcache folder or use the Target/Module option of KAPE.
# Options -w HTML Report | -c CSV Report | -z Exclude 0 byte files | -n Prevent Thumbnail extraction | -o Output
# 2023-06-27 Updated by Kevin Pagano: Updated binary URL, changed source to directory for parsing to HTML properly if more than one DB file
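A hedged example of invoking the modified module from the KAPE command line (assuming it was saved as Thumbcache_Viewer_CMD.mkape and that --msource points at the folder containing output_thumbcache):

kape.exe --msource C:\collector_root --mdest C:\kape_out --module Thumbcache_Viewer_CMD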
Description: Tool to parse Windows Background Intelligent Transfer Service database files
Category: GitHub
Author: Pedro Sanchez Cordero (conexioninversa)
Version: 1.0
Id: acdc62ed-b1a1-426f-8d5e-e53687284410
BinaryUrl: https://github.com/conexioninversa/BitsParser/blob/master/BitsParser.exe
ExportFormat: json
Processors:
-
Executable: BitsParser.exe
CommandLine: -i %sourceDirectory%\uploads\auto\C%3A\ProgramData\Microsoft\Network\Downloader\ -o %destinationDirectory%\BitsParser_Results.json
ExportFormat: json
# Documentation
# https://github.com/fireeye/BitsParser
# By default BitsParser will process files in the %ALLUSERSPROFILE%\Microsoft\Network\Downloader. The script can be used with offline files from alternate operating systems.
# By default BitsParser will only parse and output active jobs and files. To carve deleted entries from the database use --carvedb. To carve entries from all file types, including transaction logs, use --carveall
# https://www.sans.org/reading-room/whitepapers/forensics/bits-forensics-39195
# https://cyberforensicator.com/2019/05/12/using-mitre-attck-for-forensics-bits-jobs-t1197/
Description: 'Ese2csv: Parsing SRUM Database'
Category: SRUMDatabase
Author: Max Ye
Version: 1.0
Id: 852b64c1-fd0e-47ec-8aa4-0994dbf5d8d1
BinaryUrl: https://github.com/MarkBaggett/ese-analyst/archive/master.zip
ExportFormat: csv
Processors:
-
Executable: ese-analyst\ese2csv.exe
CommandLine: -o %destinationDirectory% -p srudb_plugin --plugin-args "%sourceDirectory%\uploads\auto\C%3A\Windows\System32\config\SOFTWARE" -- "%sourceDirectory%\uploads\auto\C%3A\Windows\System32\sru\SRUDB.dat"
ExportFormat: csv
# Documentation
# https://github.com/MarkBaggett/ese-analyst
# Create a folder "ese-analyst" within the ".\KAPE\Modules\bin" folder
# Place both files "ese2csv.exe" and "srudb_plugin.py" into ".\KAPE\Modules\bin\ese-analyst"
# When using this Module, the Module source should be set to OS drive root directory (e.g. C:\), because parameters use absolute paths
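A hedged example of running the BitsParser and SRUM modules above from the KAPE command line (the module names are assumed to match the .mkape file names; --msource points at the extracted collection folder that contains uploads\auto):

kape.exe --msource C:\extracted_collection --mdest C:\kape_out --module BitsParser,ese2csv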