# Tips

## NTLM cracking with --remove

```
cat .\hash.txt | .\cut.exe -d ":" "-f1,4" | Out-File -FilePath ntlm.txt -Encoding utf8
```
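The resulting `ntlm.txt` (user:NT format) can then be fed to hashcat with `-m 1000 --username --remove`, so cracked hashes are dropped from the file as they fall. If `cut.exe` is not at hand, the same extraction is a one-liner in Python (the sample line below is a hypothetical NTDS-style entry):

```python
# Keep fields 1 (DOMAIN\user) and 4 (NT hash) of a DOMAIN\user:RID:LM:NT line
def to_user_ntlm(line):
    parts = line.strip().split(":")
    return f"{parts[0]}:{parts[3]}"

sample = r"CORP\alice:1104:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0"
print(to_user_ntlm(sample))
# -> CORP\alice:31d6cfe0d16ae931b73c59d7e0c089c0
```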

## Script to sort cracked NTLM hashes

```
python3 script_passwords.py hashes.txt cracked.txt
```

```
#!/usr/bin/env python3

import sys

print("---start process---")

file_hashes = sys.argv[1]
file_cracked = sys.argv[2]

with open(file_hashes) as f:
    hashes = [line.rstrip() for line in f]

with open(file_cracked) as f:
    cracked = [line.rstrip() for line in f]

# Write every cracked entry (hash:password) once per hash line that contains its NT hash
with open("output.txt", "w") as out:
    for h in hashes:
        for crack in cracked:
            if crack.split(":")[0] in h:
                out.write(crack + "\n")
```

The output.txt file is generated with all the cracked NTLM entries, including repetitions.

Finally, the top passwords can be seen:

```
# Linux
cat output.txt | sort | uniq -c | sort -nr

# Windows (PowerShell)
Get-Content .\output.txt | Group-Object | Sort-Object Count -Descending | Select-Object Name, Count
```

![](https://4052868066-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MWd-VcvRHVgUtkahm85%2Fuploads%2FkdGGrmBDnF1f5coWNb9U%2Fimage.png?alt=media\&token=75a1427a-86f5-4807-973d-56bd66eeaa20)

## Create customized dic from rockyou

```
# grep -Ei 'batman|arkham|joker|alfred|bruce' /usr/share/wordlists/rockyou.txt > batman.txt
# wc -l batman.txt
5532 batman.txt
```

## Password Profiling / Skweez && CEWL

```
.\skweez.exe https://xxxx/pt-pt https://xxx/pilotos -n 16 -m 1 -o teste.txt
```

{% embed url="<https://github.com/edermi/skweez>" %}

```
cewl www.megacorpone.com -m 6 -w megacorp-cewl.txt
```

## Custom hashcat charsets

![](https://4052868066-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MWd-VcvRHVgUtkahm85%2Fuploads%2FA2jlDGzShv4lzMQVylWo%2Fimage.png?alt=media\&token=b8a848f9-d98b-4394-83c2-be26b57df891)

{% embed url="<https://hashcat.net/wiki/doku.php?id=mask_attack#custom_charsets>" %}

## Hashcat cracking methodology

* **Wordlist-based cracking**
* **Cracking with rules**
* **Hybrid cracking:** wordlist + mask && mask + wordlist.

{% hint style="info" %}
Hint: append digit and special-character masks to the wordlist entries, e.g. ?d, ?d?d, ... up to ?s?d?d...?d (n positions).
{% endhint %}

* **Brute-force cracking:** `?sTarget?s?d?d?d?d` (incremental, lengths 7-15)
* **Pure brute-force with a custom charset:** `-3 ?l?u ?3?3?3?3?3?3?3`

![](https://4052868066-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MWd-VcvRHVgUtkahm85%2Fuploads%2FjAx3ZK66RKQHTYbuWepW%2Fimage.png?alt=media\&token=05935930-b653-4fdb-9a3e-ee41d74e2d62)

* Download a target dictionary via [#password-profiling-skweez-and-and-cewl](#password-profiling-skweez-and-and-cewl "mention") and use it with rules
* Create a target dictionary via [#create-customized-dic-from-rockyou](#create-customized-dic-from-rockyou "mention")
* **BONUS:** add the cracked passwords to the wordlists and crack the remaining (not-found) hashes again with rules ;)
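Before launching a brute-force run, it helps to estimate the keyspace of a mask from the built-in charset sizes (?l=26, ?u=26, ?d=10, ?s=33). A rough sketch, with ?3 assumed to be the custom charset ?l?u:

```python
# Keyspace estimate for a pure hashcat mask (no literal characters)
CHARSET_SIZES = {"l": 26, "u": 26, "d": 10, "s": 33, "3": 52}  # ?3 = ?l?u (assumption)

def keyspace(mask):
    size = 1
    for token in mask.split("?")[1:]:  # "?s?d?d" -> ["s", "d", "d"]
        size *= CHARSET_SIZES[token]
    return size

print(keyspace("?s?d?d?d?d"))      # 330000 candidates
print(keyspace("?3?3?3?3?3?3?3"))  # 52**7 = 1028071702528 candidates
```

Anything in the low billions falls quickly against NTLM; add positions and the keyspace multiplies by the charset size each time.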

## Password analysis (Active Directory)

{% hint style="info" %}
Normalize the dump when it contains several "domain\user" prefixes for the same domain.
{% endhint %}
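For example, a quick normalization pass before feeding the dump to DPAT (the domain aliases below are hypothetical; adjust them to your environment):

```python
# Map alternative spellings of the same domain to one canonical prefix
ALIASES = {"CORP.LOCAL": "CORP", "corp": "CORP"}  # hypothetical aliases

def normalize(line):
    domain, sep, rest = line.partition("\\")
    return ALIASES.get(domain, domain) + sep + rest

print(normalize("corp\\alice:1104:aad3b435b51404ee:31d6cfe0d16ae9"))
# -> CORP\alice:1104:aad3b435b51404ee:31d6cfe0d16ae9
```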

![](https://4052868066-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MWd-VcvRHVgUtkahm85%2Fuploads%2F96687NXOIIBwQeyT8LX7%2Fimage.png?alt=media\&token=6bd3937e-1cd3-468d-8338-2154369dbb31)

{% embed url="<https://github.com/clr2of8/DPAT>" %}

### Bonus: run a PowerShell script to get the target groups ;)

```
# Import the PowerView module first!
# Pick the target groups manually, or filter on *admin*
Get-DomainGroup -Properties name | Out-File -FilePath domaingroups.txt

# Create the target file with the chosen domain groups: groups.txt

# Adjust the variables below, then run the script
#--------------------------------------------------------------------------
$domain = "org_domain_xpto.pt\"
$workdir_files = ".\workdir_name\"
$dc_dump_file = "hash.txt"
$cracked_file = "cracked.txt"

foreach ($group in Get-Content .\groups.txt) {
    Write-Host "building group file: $group" -ForegroundColor red -BackgroundColor white
    Get-DomainGroupMember $group |
        Select-Object -Property MemberDomain, MemberName, MemberObjectClass -ExpandProperty MemberName |
        ForEach-Object { if ($_.MemberObjectClass -eq "user") { $domain + $_.MemberName } } |
        Out-File -FilePath "$group.txt" -Encoding utf8
}

Write-Host "DPAT groups:"
Write-Host "python.exe .\dpat.py -n $workdir_files$dc_dump_file -c $workdir_files$cracked_file -g " -NoNewline
foreach ($group in Get-Content .\groups.txt) {
    if (Get-Content "$group.txt") {
        Write-Host " '$workdir_files$group.txt'" -NoNewline
    }
}
Write-Host ""
Write-Host "Done! ;)" -ForegroundColor red -BackgroundColor white
```

## Via BloodHound or Neo4J

```
//get all groups
MATCH (g:Group) WHERE g.name CONTAINS 'ADMIN'
RETURN g.name AS GroupName

//get all users from a group
MATCH (u:User)-[:MemberOf]->(g:Group) WHERE g.name CONTAINS "DOMAIN ADMINS" RETURN u.name, g.name

//I like this one: get all users from groups whose name contains "ADMIN"
MATCH (u:User)-[:MemberOf]->(g:Group) WHERE g.name CONTAINS "ADMIN" RETURN u.name, g.name
```

The Neo4J output can be exported as CSV: user,group.

The following script generates one file per group containing the target users; we just need to run it:

```
python3 script.py export.csv domain_ntds.txt
```

```
import sys
import os
import shutil

def copy_file_to_folder(source_file, destination_folder):
    # Copy the file into the destination folder
    file_name = os.path.basename(source_file)
    shutil.copy2(source_file, destination_folder)
    print(f"File '{file_name}' copied to '{destination_folder}'")

def load_hashes_users_domains(file_path):
    # Parse the NTDS dump: DOMAIN\user:RID:LM:NT -> (domain, user, nt_hash)
    hashes_users_domains = []
    with open(file_path, 'r') as file:
        for line in file:
            if '\\' not in line:
                continue
            parts = line.strip().split(':')
            domain, user = parts[0].split('\\')
            hash_value = parts[3]
            hashes_users_domains.append((domain, user, hash_value))
    return hashes_users_domains

def change_domain_prefix(input_folder, hashes_users_domains):
    # Rewrite each group file so every user carries the domain seen in the NTDS dump
    for file_name in os.listdir(input_folder):
        if not file_name.endswith('.txt'):
            continue
        file_path = os.path.join(input_folder, file_name)
        with open(file_path, 'r') as file:
            lines = file.readlines()

        with open(file_path, 'w') as file:
            for line in lines:
                parts = line.strip().split('\\')
                for domain, user, hash_value in hashes_users_domains:
                    if parts[1].strip().lower() == user.strip().lower():
                        file.write(domain + '\\' + parts[1] + '\n')
                        break

def group_users(source_file, folder_name):
    # Dictionary to store users grouped by group name
    group_users_dict = {}
    created_files = []  # names of the created files

    with open(source_file, 'r') as file:
        for line in file:
            # Each CSV line is: user,group
            user, group = line.strip().split(',')

            # Remove everything after the "@" character
            user = user.split('@')[0]

            # Add the domain prefix to the user
            user_with_prefix = f"{prefix}\\{user}"

            group_users_dict.setdefault(group, []).append(user_with_prefix)

    # Write the users of each group to a separate file inside the folder
    for group, users in group_users_dict.items():
        output_file = os.path.join(folder_name, f"{group}.txt")
        with open(output_file, 'w') as out:
            for user in users:
                out.write(user + '\n')
        created_files.append(output_file)

    # Generate the DPAT command with the -g parameter listing the created files
    command_files = ' '.join(f'"{file}"' for file in created_files)
    command = f"python3 dpat.py -n '{folder_name}/{dump_file}' -c '{folder_name}/cracked.txt' -g {command_files}"
    print("\n\n\n" + command + "\n\n\n")

    print(f"All users grouped by the prefix '{prefix}' have been saved to the '{folder_name}' folder.")

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: python script.py <csv_neo4j> <dump_ntds>")
        sys.exit(1)

    input_file = sys.argv[1]
    dump_file = sys.argv[2]
    prefix = input("Enter DOMAIN prefix for all users: ")
    folder_name = f"{prefix}_user_files"
    os.makedirs(folder_name, exist_ok=True)

    group_users(input_file, folder_name)

    hashes_users_domains = load_hashes_users_domains(dump_file)
    change_domain_prefix(folder_name, hashes_users_domains)
    print("Domain prefixes have been updated in the files.")

    copy_file_to_folder(dump_file, folder_name)
```

## NTDS Active Users

Once we have the NTDS file, we can extract the active users from the Neo4j database and produce a new NTDS file containing only those active users. For this, we can use the following script.

```
# Define the file paths
file1_path = 'ntds_file.txt'
file2_path = 'sam_active_users.txt'
output_path = 'matching_users.txt'

# Read the file with the password dump
with open(file1_path, 'r') as file1:
    dump_data = file1.readlines()

# Read the file with the active SAM users
with open(file2_path, 'r') as file2:
    active_users = [line.strip() for line in file2.readlines()]

# Open the output file to write the results
with open(output_path, 'w') as output_file:
    # Iterate over each line of the password dump
    for line in dump_data:
        # Extract the SAM user from each dump line
        parts = line.split(':')
        if len(parts) >= 2:
            domain_user = parts[0]
            sam_user = domain_user.split('\\')[-1]
            # Check whether the SAM user is in the list of active SAM users
            if sam_user in active_users:
                # Write the matching line to the output file
                output_file.write(line)

print(f"Matching results were saved to {output_path}")
```

## AD\_Miner

AD Miner is an Active Directory audit tool that leverages Cypher queries to crunch data from the BloodHound graph database and uncover security weaknesses.

<figure><img src="https://4052868066-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MWd-VcvRHVgUtkahm85%2Fuploads%2FVdbSzKDdMfnIXFfAiRTp%2Fimage.png?alt=media&#x26;token=a8695a93-f2ab-484e-b30a-33347d9c2547" alt=""><figcaption></figcaption></figure>

{% embed url="<https://github.com/Mazars-Tech/AD_Miner.git>" %}

## BONUS

{% embed url="<https://hunter2.gitbook.io/darthsidious/credential-access/password-cracking-and-auditing>" %}

{% embed url="<https://github.com/clr2of8/DPAT/>" %}

