Windows Logs Automation
Offline Artifacts Collection
Velociraptor Offline Agent
In certain scenarios, it may be necessary to collect artifacts from target machines in an offline mode, ensuring minimal alteration or contamination of the digital environment. Various tools are available for this purpose, designed to extract information discreetly and effectively. These tools enable forensic investigators to gather crucial data without compromising the integrity of the evidence.

Importing an offline collection can be done via the Server.Utils.ImportCollection artifact. This artifact will inspect the zip file from a path specified on the server and import it as a new collection (with a new collection ID) into either a specified client or a new randomly generated client.
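Under the hood this is driven by VQL. A rough equivalent from the server command line, assuming a recent Velociraptor release that ships the import_collection() VQL function (paths are illustrative):

velociraptor --config server.config.yaml query "SELECT import_collection(client_id='auto', filename='/path/to/Collection-HOSTNAME.zip') FROM scope()"

Passing client_id='auto' creates the new randomly generated client mentioned above.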
Using the KAPE GUI to analyze the artifacts
Instead of the Velociraptor GUI, you can use the KAPE GUI to analyze and process all the artifacts.
If you are running it locally, the "Module Source" should be the folder containing the collected artifacts. 😎

Next, you can use "Timeline Explorer" to analyze the results.

KAPE Agent Collector
The KAPE agent can also be used to collect the data.

The GUI can be used to select the type of collection to perform:

To collect the artifacts on a remote machine, you just need the kape.exe collector together with its Modules and Targets folders. See below:

To execute it, open a new command line with administrator rights and paste the command generated in step 4 of the GUI.
Next, the data will be available in C:\temp\kap as a ZIP file.
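For illustration, a command generated this way typically looks like the following (!SANS_Triage is one of the compound targets shipped with KAPE; adjust the paths to your case):

kape.exe --tsource C: --tdest C:\temp\kap --target !SANS_Triage --zip collection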
Run modules on the target machine
In addition, you can also run the modules on the target machine, or locally. The procedure is the same as described above: point the "Module Source" to the folder containing the collected artifacts and analyze the results with Timeline Explorer.

Remote Collections with KAPE
Alternatively, instead of copying the collector to the target machine, you can mount the KAPE server folder as a network share and run it remotely, without the binary ever touching the target's disk.
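A minimal sketch, assuming the KAPE folder is shared from a hypothetical forensics server:

net use K: \\forensics-server\kape
K:\kape.exe --tsource C: --tdest K:\collections\%COMPUTERNAME% --target !SANS_Triage --zip %COMPUTERNAME%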
Building your agent collector
By developing your own agent, you can also collect the raw files from the target Windows machine. This can be very useful for in-depth post-analysis.
Below are some of the raw artifacts collected.

Example of the contents of the Amcache directory:
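Many of these raw files (Amcache.hve, registry hives, etc.) are locked while Windows is running. One way a custom collector can copy them is esentutl's Volume Shadow Copy mode, available on recent Windows versions (paths are illustrative):

esentutl.exe /y C:\Windows\AppCompat\Programs\Amcache.hve /vss /d C:\temp\Amcache.hve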


Full disk image
In order to obtain a complete snapshot of the target machine, the following tools can be used:
FTK IMAGER
Disk2vhd
More details on how to use them can be found here:
chntpw
You can also use chntpw to reset Windows local administrator passwords before starting an analysis.
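For example, listing users and then resetting the Administrator account of a SAM hive on a mounted Windows volume (mount point illustrative):

chntpw -l /mnt/windows/Windows/System32/config/SAM
chntpw -u Administrator /mnt/windows/Windows/System32/config/SAM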

Autopsy
TestDisk & PhotoRec
These tools can be used to recover files from damaged devices.




Velociraptor Analysis
After getting the ZIP files with all the artifacts, import them into the GUI via the Server.Utils.ImportCollection artifact, as described above.
After that, click on "Search" and select the target machine ID you want to analyze.

After that, select the artifact's FlowId, click on Notebook, and all the data is presented! 👍

In addition, you can also create a new hunt and add the notebook logs to it. This is just a way to split the results and perform a better analysis.
Evtx Analysis
chainsaw
Search all .evtx files for the case-insensitive string "mimikatz"
Search all .evtx files for PowerShell script block events (Event ID 4104)
Search a specific evtx log for logon events, with a matching regex pattern, output in JSON format
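Example invocations for the three searches above, assuming Chainsaw v2 syntax (paths and the regex are illustrative):

chainsaw search mimikatz -i ./evtx/
chainsaw search -t 'Event.System.EventID: =4104' ./evtx/
chainsaw search -e "DC[0-9]\.example\.local" ./evtx/Security.evtx --json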
Hunting
Hunt through all evtx files using Sigma rules for detection logic
Hunt through all evtx files using Sigma rules and Chainsaw rules for detection logic and output in CSV format to the results folder
Hunt through all evtx files using Sigma rules for detection logic, only search between specific timestamps, and output the results in JSON format
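Example invocations for the three hunts above, assuming Chainsaw v2 (the sigma/, rules/, and mappings/ folders ship with the Chainsaw release; timestamps are illustrative):

chainsaw hunt ./evtx/ -s sigma/ --mapping mappings/sigma-event-logs-all.yml
chainsaw hunt ./evtx/ -s sigma/ --mapping mappings/sigma-event-logs-all.yml -r rules/ --csv --output results
chainsaw hunt ./evtx/ -s sigma/ --mapping mappings/sigma-event-logs-all.yml --from "2024-01-01T00:00:00" --to "2024-01-31T00:00:00" --json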
Shimcache
Among the collected raw files there is the SYSTEM hive, which contains the Shimcache (AppCompatCache) data.
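It can be parsed with Eric Zimmerman's AppCompatCacheParser, for example (output folder illustrative):

AppCompatCacheParser.exe -f .\SYSTEM --csv .\out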
SRUM
Analyse the SRUM database, SRUDB.dat (the SOFTWARE hive is mandatory):
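For example, with Eric Zimmerman's SrumECmd (paths illustrative; SRUDB.dat normally lives under C:\Windows\System32\sru\):

SrumECmd.exe -f .\SRUDB.dat -r .\SOFTWARE --csv .\out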
zircolite
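Zircolite applies Sigma rules directly to EVTX files. A typical invocation, using one of the rulesets bundled with the Zircolite repository (log path illustrative):

python3 zircolite.py --evtx ./evtx/ --ruleset rules/rules_windows_generic.json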

Hayabusa
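Recent Hayabusa releases (2.x) generate the CSV timeline with the csv-timeline command (paths illustrative; older releases used plain -d/-o flags):

hayabusa.exe csv-timeline -d .\evtx -o timeline.csv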
Import the logs into Timeline Explorer and group the view by: Level > Computer > RuleTitle

DeepBlueCLI & WELA & APT-Hunter
These tools are also useful for collecting pieces of evidence from evtx files. For example, WELA can detail authentication on the machine by user and logon type.
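For instance, DeepBlueCLI is a PowerShell script run against a saved log (path illustrative):

.\DeepBlue.ps1 .\evtx\Security.evtx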

Manual Analysis with ericzimmerman Tools
Each facet of the analysis is split into subsections, accessible through the navigation menu on this page.
The files under analysis are:

Timeline Explorer, EZViewer, and Hasher are convenient tools for examining all of the artifacts.
Timeline Explorer
Open all the CSV files after they have been normalized by the specific tools.

EZViewer
Open individual files (docx, csv, pdf, etc.).

Hasher
Hash everything.

Dissect
Dissect is an incident response framework built from various parsers and implementations of file formats. Tying this all together, Dissect allows you to work with tools named target-query and target-shell to quickly gain access to forensic artefacts, such as Runkeys, Prefetch files, and Windows Event Logs, just to name a few!
And the best thing: all in a singular way, regardless of underlying container (E01, VMDK, QCoW), filesystem (NTFS, ExtFS, FFS), or Operating System (Windows, Linux, ESXi) structure / combination. You no longer have to bother extracting files from your forensic container, mount them (in case of VMDKs and such), retrieve the MFT, and parse it using a separate tool, to finally create a timeline to analyse. This is all handled under the hood by Dissect in a user-friendly manner.
If we take the example above, you can start analysing parsed MFT entries by just using a command like target-query -f mft <PATH_TO_YOUR_IMAGE>!
target-shell <PATH_TO_YOUR_IMAGE>
Download artifacts from a raw image (VMDK, E01, RAW, etc.)
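A minimal sketch with Dissect's target-fs and target-shell, assuming the Windows system volume is mapped under /sysvol (image name and paths are illustrative):

target-fs evidence.E01 ls /sysvol/Windows/System32/config
target-shell evidence.E01

Inside the interactive shell, a command along the lines of save /sysvol/Windows/System32/config/SOFTWARE /tmp/extracted copies files out to the local disk.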
After that, convert all the JSONL files from the Dissect output into CSV files to import them into Timeline Explorer!
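A sketch of the conversion, assuming jq is installed and the records are flat (nested fields would need flattening first):

for f in *.jsonl; do
  jq -rs '(map(keys) | add | unique) as $cols | ($cols | @csv), (.[] | [.[$cols[]]] | @csv)' "$f" > "${f%.jsonl}.csv"
done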
Extract all the Security logs from the collector ZIP files
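A sketch, assuming one ZIP per host in a collections/ folder (the internal layout of your collector ZIPs may differ):

for z in collections/*.zip; do
  host="$(basename "$z" .zip)"
  mkdir -p "evtx/$host"
  unzip -j "$z" "*Security.evtx" -d "evtx/$host"
done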
Yara scan in raw formats
My bundle:
Convert the yara output into CSV
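A sketch, assuming the classic yara CLI and its default output of one "<rule> <file>" pair per line (paths illustrative; file paths containing commas would need extra care):

yara -r rules/index.yar /mnt/evidence > yara_hits.txt
awk 'BEGIN { print "rule,file" } { r=$1; $1=""; sub(/^ +/, ""); print r "," $0 }' yara_hits.txt > yara_hits.csv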
After that, you can import the CSV into Timeline Explorer and group the output by matched rules ;)

Yara Repositories:
Loki Yara
Awesome Yara
Dissect Documentation:
Dissect Tutorials
gKAPE (offline parser)
Using the gkape tool to parse the telemetry obtained from the collector (raw log files from Windows).

After getting all the ZIP outputs from the target machines, the following procedure should be executed:
First, the files should be prepared for analysis.

Then, run the following Python script from the root folder:

Change the following mkape files:
Use the following modules:
Also, run the following manually:
Happy hunting! 😎
GRR
GRR consists of a Python client (agent) that is installed on target systems, and a Python server infrastructure that can manage and talk to clients.
