Cobalt Strike and Tradecraft

It’s well known that some built-in commands in Cobalt Strike are major op-sec no-no’s, but why are they bad? The goal of this post isn’t to teach you “good” op-sec, as I feel that is a bit subjective and dependent on the maturity of the target’s environment, nor is it “how to detect Cobalt Strike”. The purpose of this post is to document what some Cobalt Strike techniques look like under the hood, from a defender’s point of view. Realistically, this post is just breaking down a page straight from Cobalt Strike’s website, which can be found here. I won’t be able to cover all techniques and commands in one article, so this will probably be a two-part series.

Before jumping into techniques and the logs associated with them, a baseline question must be answered: “What is bad op-sec?”. Again, this is extremely subjective. If you’re operating in an environment with zero defensive and detection capabilities, there is no bad op-sec. While the goal of this article isn’t to teach “good op-sec”, it still has a bias towards somewhat mature environments, and certain techniques will be called out where they tend to trigger baseline or low-effort/default alerts and detections. My detection lab for this blog post is extremely simple: just an ELK stack with Winlogbeat and Sysmon on the endpoints, so I’m not covering “advanced” detections here.

Referencing the op-sec article from Cobalt Strike, the first set of built-in commands I’d like to point out are the ‘Process Execution’ techniques, which are run, shell, and pth.

These three commands tend to trigger several baseline alerts. Let’s investigate why.


When an operator uses the shell command in Cobalt Strike, it’s usually to execute a DOS command directly, such as dir, copy, move, etc. Under the hood, the shell command calls cmd.exe /c.

With Sysmon logging, this leaves a sequence of events, all around Event Code 1, Process Create.

We can see here that the shell command spawns cmd.exe under the parent process. whoami, though, is also an executable within System32, so cmd.exe spawns it as a child process. But before that occurs, conhost.exe is called in tandem with cmd.exe. Conhost.exe is a process that’s required for cmd.exe to interface with Explorer.exe. What is unique is how Conhost.exe is created:

In this case, Conhost.exe’s arguments are 0xffffffff -ForceV1, which tells Conhost which application ID it should connect to. Per Microsoft:

“The session identifier of the session that is attached to the physical console. If there is no session attached to the physical console, (for example, if the physical console session is in the process of being attached or detached), this function returns 0xFFFFFFFF.”
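
As a side note on that sentinel: 0xFFFFFFFF is just -1 represented as an unsigned 32-bit DWORD, which is why the documentation treats it as the “no session attached” value. A quick illustration:

```python
import struct

# Pack -1 as a signed 32-bit value, then reinterpret the same four
# bytes as unsigned: the result is the 0xFFFFFFFF sentinel that
# conhost.exe receives when no console session ID applies.
raw = struct.pack("<i", -1)
sentinel = struct.unpack("<I", raw)[0]
print(hex(sentinel))  # 0xffffffff
```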

A goal of op-sec is to always minimize the amount of traffic, or “footprints”, that your activities leave behind. As you can see, shell generates quite a few artifacts, and it’s common for detections to pick it up since cmd.exe /c is seldom used legitimately in environments.
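
To see why this is easy to flag, a baseline detection only has to look for the cmd.exe /c pattern in process-creation command lines. This is a minimal sketch; the event dicts are illustrative stand-ins for parsed Sysmon Event ID 1 records, not real log output:

```python
# Minimal sketch of a command-line detection for the `shell` pattern.
# The events below are illustrative stand-ins for parsed Sysmon
# Event ID 1 (Process Create) records, not real log output.

def is_shell_like(command_line: str) -> bool:
    """Flag command lines that invoke cmd.exe with /c, the telltale of `shell`."""
    normalized = command_line.lower()
    return "cmd.exe" in normalized and "/c" in normalized.split()

events = [
    {"CommandLine": r"C:\Windows\System32\cmd.exe /c whoami"},
    {"CommandLine": r"C:\Windows\explorer.exe"},
]

flagged = [e for e in events if is_shell_like(e["CommandLine"])]
print(len(flagged))  # 1
```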


The PTH, or pass-the-hash, command has even more indicators than shell.

From Cobalt Strike’s blog

“The pth command asks mimikatz to: (1) create a new Logon Session, (2) update the credential material in that Logon Session with the domain, username, and password hash you provided, and (3) copy your Access Token and make the copy refer to the new Logon Session. Beacon then impersonates the token made by these steps and you’re ready to pass-the-hash.”

This creates several events.

First, the ‘spawnto’ process that is dictated in the Cobalt Strike profile is created, which in my case is dllhost.exe. This becomes a child process of the current process and is used as a sacrificial process in order to “patch” in the new logon session and credentials.

Then a new logon session is created, generating event ID 4672 (special privileges assigned to new logon).

The account then logs on to that new session and another event is created with the ID of 4624.

In this new logon session, cmd.exe is spawned as a child process of dllhost.exe and a string is passed into a named pipe as a unique identifier.

Now, according to the logon session attached to the parent process (dllhost.exe), ADMAlice is the logged in user.

Finally, Conhost.exe is again called since cmd.exe is called. The unique arguments that hide the cmd.exe window are passed into Conhost.

Now, whenever the operator attempts to log in to a remote host, the new logon session’s credentials will be attempted first.


The run command is a bit different from pth and shell: it does not spawn cmd.exe and instead calls the target executable directly.

Once again though, Conhost is called with the unique arguments.

While the arguments for Conhost aren’t inherently malicious, it is a common identifier for these commands.

execute works similarly to run; however, no output is returned.


The powershell command, as you can probably guess, runs a command through PowerShell. Powershell.exe is spawned as a child process but the parent PID can be changed with the ppid command. In this case, though, the ppid is kept to the original parent process.

Conhost is again called.

The major problem with the powershell command is that it always adds unique arguments to the command and encodes the command in base64.

This results in a highly signature-able technique, as it is not common to see legitimate PowerShell scripts run base64-encoded with the -exec bypass flag.
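
To illustrate what that looks like on the wire: PowerShell’s -EncodedCommand flag takes base64 over UTF-16LE text, so even a short command becomes a long, distinctive blob. The exact flags shown are illustrative of the commonly observed pattern, not captured from a live beacon:

```python
import base64

def to_encoded_command(command: str) -> str:
    """PowerShell's -EncodedCommand expects base64 over UTF-16LE text."""
    return base64.b64encode(command.encode("utf-16-le")).decode()

encoded = to_encoded_command("whoami")
# Illustrative command line, not real beacon output:
print(f"powershell.exe -nop -exec bypass -EncodedCommand {encoded}")
# Base64 of UTF-16LE text has a distinctive A-heavy pattern (every
# other byte is 0x00), which detection rules commonly key on.
```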


Powerpick is a command that uses the “fork-and-run” technique, meaning Cobalt Strike creates a sacrificial process to run the command under, returns the output, then kills the process. The name of the spawnto process is defined in the Cobalt Strike profile on the teamserver. In my case, it’s dllhost.exe.

When running a powerpick command, such as powerpick whoami, three processes are created: Dllhost.exe (SpawnTo process), Conhost.exe, and whoami.exe.

While Powerpick does not spawn powershell.exe, there are still op-sec considerations. In this case, the behavior would look somewhat suspicious because the parent process of ‘whoami.exe’ is ‘dllhost.exe’. Typically, when a user runs ‘whoami’, it’s going to be in the context of cmd.exe or powershell.exe.

Figure 1: What a normal use of ‘whoami’ looks like

The op-sec consideration here is to be aware of what your parent process is and what process you’ll be spawning. Always try to keep parent-child process relationships as ‘normal’ looking as possible. Dllhost.exe with a child process of ‘whoami.exe’ is not normal.
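
That guidance can be expressed as a trivial allow-list check on parent-child pairs. The baseline below contains only illustrative entries, not a complete model of “normal”:

```python
# Sketch of a parent/child process-relationship check. The "normal"
# parent set is an illustrative example, not an exhaustive baseline.
NORMAL_PARENTS = {
    "whoami.exe": {"cmd.exe", "powershell.exe"},
}

def is_suspicious(parent: str, child: str) -> bool:
    """Flag a child process spawned by a parent outside its baseline."""
    expected = NORMAL_PARENTS.get(child.lower())
    if expected is None:
        return False  # no baseline recorded for this child; stay quiet
    return parent.lower() not in expected

print(is_suspicious("dllhost.exe", "whoami.exe"))  # True
print(is_suspicious("cmd.exe", "whoami.exe"))      # False
```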

Similarly, these other commands utilize the “fork-and-run” technique and you can expect similar events:

  • chromedump
  • covertvpn
  • dcsync
  • execute-assembly
  • hashdump
  • logonpasswords
  • mimikatz
  • net *
  • portscan
  • pth
  • ssh
  • ssh-key


The spawnas command will create a new session as another user by supplying their credentials and a listener.

Since this is effectively just re-deploying a payload on the host, there are several events associated with it.

First, a special logon session is created

If the spawnas command is run as an elevated user, the new session will have a split token, meaning two sessions are created: One privileged and another unprivileged.

Next, a 4648 event will be created, notifying of a logon with explicitly provided credentials

Then a new process will be created under that new session: whatever the spawnto process is set to in the profile.

That process is now the beacon process for that logon session and user. It’s a child process of the original beacon’s process.

There are several techniques that were not covered in this post that are considered more “op-sec” friendly, as they do not leave behind glaringly obvious events like the ones covered so far. Some examples of these are:

  • Beacon Object Files (BOF)
  • Shinject
  • API-Only calls such as upload, mkdir, downloads, etc.

I do plan on covering detection for these in a later post.

Creating a Red & Blue Team Homelab

Over the years of penetration testing, red teaming, and teaching, I (and I’m sure a lot of others) have often been asked how to get started in infosec. More specifically, how to become a pentester/red teamer or threat hunter/blue teamer. One of the things I always recommend is to build out a lab so you can test TTPs (tactics, techniques, and procedures) and generate IOCs (indicators of compromise), so that you can understand how an attack works and what noise it generates, with the aim of either detecting that attack or modifying it so it’s harder to detect. It’s not really an opinion, but a matter of fact, that being a better blue teamer will make you a better red teamer and vice-versa. In addition, one of the things that I ask in an interview, and have always been asked in interviews, is to describe what your home lab looks like. It’s almost an expectation, as it is so crucial to be able to experiment with TTPs in a non-production environment. This post is aimed at helping you create a home lab that will allow you to do both red team and blue team activity.


One of the first questions that’s asked about a home lab is the cost. There are a few ways to answer this.

  1. Host everything locally on your PC/laptop.
  2. Host everything on a dedicated server
  3. Host everything in the cloud

The other question is: what is the necessary size of the lab? Home labs do not have to replicate the size of an enterprise company. My home lab is set up as shown below, and it will act as the template for this post.

Figure 1: One of many ways to set up a home lab

In my personal lab I run two Windows servers and three Windows workstations. You could absolutely get by with just one server and one workstation; it’s just a matter of what you’re trying to accomplish. So, to answer the question of “what will it cost”, the answer is “it depends”. Personally, I use a computer that acts as a server, which cost me about $400 to build and runs ESXi 7 to host all the VMs. Cloud could initially be cheaper, but in the long run it will probably cost more. I used to run everything locally on my work PC, but I started to run out of disk space with all the VMs. As far as this guide goes, however you choose to host your VMs is up to you.

Hosting OS links:

Server Operating Systems:



Workstation Applications:

VMWare Workstation Player

Virtual Box





How your lab is architected/laid out is a big deal. You want to mimic a real environment as much as possible, which is why I suggest building a lab that runs Windows Active Directory (AD). I don’t think I’ve ever been in an environment where AD was not being used. We will start by using Windows evaluation licenses.

Windows 10

Windows Server 2019

And we will use Debian 10 to build an ELK (Elasticsearch, Logstash, Kibana) server.

Debian 10

Finally, for our attacking machine and for simplicity we will just use Kali

Kali Linux

ELK Setup

Before setting up Windows, we will set up an ELK server. ELK (Elasticsearch, Logstash, Kibana) is a widely used platform for log processing. As a blue teamer, you want this because digging through logs is a key piece to threat hunting. As a red teamer, you want this to know what IOCs are generated from the TTPs you use.

Keep in mind this lab is meant to be for internal, private use only. The setup of these servers will not be secure and should not be used in a production environment.

Start off by downloading the Debian 10 ISO and then create a VM that boots off the ISO. I won’t go into the specifics of creating a VM as it’s platform-specific (e.g. VirtualBox, VMWare, etc.), but there’s a good article here for VMWare.

Once you install Debian and log in, you’ll want to first add your current user to the sudo group. First, switch to root (Debian does not add your user to sudo by default, so use the root password you set during install; sudo itself won’t work yet):

su -

Then add your user to the sudo group.

usermod -aG sudo [username]

Then switch back to your user (a fresh login shell picks up the new group membership)

su - [username]

Next, add the GPG key for the Elastic repository

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

And add the repository definition

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Now update apt

sudo apt-get update

And install logstash

sudo apt-get install logstash

Then Java

sudo apt-get install openjdk-11-jre

Then install Elasticsearch

sudo apt-get install elasticsearch

and finally Kibana

sudo apt-get install kibana

Next, enable the services

sudo /bin/systemctl enable elasticsearch.service && sudo /bin/systemctl enable kibana.service && sudo /bin/systemctl enable logstash.service

Before we start the services, there’s a few config changes we need to make.

sudo nano /etc/kibana/kibana.yml

Uncomment server.host and set it to "" so Kibana listens on all interfaces, and uncomment server.port. You can leave the port at 5601.

Figure 2: Setting the Kibana Config file

Save the file (ctrl+O, Enter, ctrl+x)

and now edit the elasticsearch config file

sudo nano /etc/elasticsearch/elasticsearch.yml

Set network.host to and http.port to 9200

Figure 3: Setting the Elasticsearch config settings

And add an additional line at the bottom

discovery.type: single-node

Save the file (ctrl+O, Enter, ctrl+x)

And start the services

sudo service logstash start
sudo service elasticsearch start
sudo service kibana start

Now if you browse to your Debian machine’s IP on port 5601 you should see Kibana.
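
If the page doesn’t load, a quick sanity check from another machine is whether the ELK ports are actually reachable. The IP below is a placeholder for your Debian VM’s address; 5601 is Kibana and 9200 is Elasticsearch, per the config above:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with your Debian VM's IP address.
for port in (5601, 9200):
    print(port, is_port_open("192.168.1.50", port))
```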

ip addr
Figure 4: Viewing the host’s IP
Figure 5: Kibana/Elastic homepage

Windows Setup

Once again, I will not be showing how to deploy a VM as I want this post to be platform agnostic. So for setting up Windows, this assumes you have stood up a Windows Server 2019 and a Windows 10 machine.

In this section we will create an Active Directory lab by making a Domain Controller and Workstation.

Windows Server 2019

Once Server 2019 is stood up, the first thing you should do is set a static IP. If you don’t, the machine’s IP can change, which will break the environment. For reference, these are my settings.

Figure 6: Server’s IP settings

The important part here is setting the DNS servers. The preferred DNS will be localhost (, as we will install the DNS service on this machine in a moment. Then set Google’s DNS server ( as a secondary so the server can reach the internet (optional; it’s completely OK if you do not want your lab to reach the internet).

Next, rename the server to something more conventional. I named mine PRIMARY as it will act as the primary domain controller in the environment.

Figure 7: Go to Start>Type in “rename” and this is the screen that will be brought up
Figure 8: Renaming the server to PRIMARY

Reboot the server for the new settings to take effect. Once rebooted, you should have the server manager dashboard. Click on ‘Add roles and features’

Figure 9: Server Manager Dashboard

Click next until you get to ‘Server Roles’. Add DNS and Active Directory Domain Services

Figure 10: Adding DNS and ADDS services

Click next until it asks for confirmation, then click install.

Figure 11: Installing features

After it installs, the server dashboard will have a notification. Click on it and click ‘Promote this server to a domain controller’

Figure 12: Promote the machine to a DC

Once you click promote, it will bring up another window. Click ‘Add a new forest’ and give the domain a name. I named mine ‘LAB.LOCAL’

Figure 13: Give your domain a name

Next, leave the default functional levels (Unless you’re adding a 2012, 2008, or 2003 server, then change it to those). Then set the DSRM password to something you’ll remember.

Figure 14: Setting the DSRM password

Click next until you get to the prerequisite check, then click ‘install’.

Figure 15: Prereq check will give warnings

Once done, reboot the server.

Once rebooted, in the Server Dashboard, click on Tools>ADUC (Active Directory Users and Computers)

Figure 16: Tools>ADUC

ADUC is used to manage users, groups, and computers (among other things). In this instance we just want to create a new user and assign them to the Domain Administrator role.

In ADUC, click on your domain on the left then select ‘Users’. At the top, click the icon shown below to create a new user.

Figure 17: Creating a new user in ADUC

Give the account a name that identifies it as an administrator. Commonly in environments, privileged accounts have ADM, ADMIN, -A, or some similar moniker to signify their status. Once created, right click on that user in ADUC and click ‘Add to a group’

Figure 18: Adding the user to a group

Then type in ‘Domain Admins’ and select ‘OK’.

Figure 19: Adding the user to the Domain Admins group

Once the user is added to the Domain Admins group, switch over to the Windows 10 workstation. Once again, I will assume the provisioning of the machine was already done and it is able to communicate with the domain controller. A simple test is to ping the Domain Controller’s IP and ensure they can talk to each other on the network.

On the Windows 10 machine, edit the DNS settings to include your Domain Controller’s IP address. An example is shown below.

Figure 20: Networking settings for Windows 10

Click on the Windows icon, type in ‘join domain’ and open up ‘Access work or school’. Click on the ‘Connect’ button and then click ‘join this device to a local Active Directory domain’.

Figure 21: Select the highlighted box

Enter the FQDN (Fully qualified domain name) of your domain and click ‘next’.

Figure 22: Enter the FQDN of your domain.

Note: If you get an error saying the domain was unable to be found, double check your DNS settings and ensure the Windows 10 machine can reach the Domain Controller.

You will then be prompted for credentials. This is where you will input your newly created Domain Administrator’s credentials.

Figure 23: Enter your DA’s credentials to join the domain

Reboot the PC and it will then be joined to the domain.


Now that we have a workstation and domain controller as well as an ELK server, we need to configure our two Windows machines to send logs to the ELK server. To do this, we need a program called ‘Winlogbeat’. In addition, I recommend also installing Sysmon. Download the .zip file for Winlogbeat and unzip it to a folder on one of the Windows machines. Open a PowerShell window, navigate to the Winlogbeat directory, and run the following command

Set-ExecutionPolicy bypass

Select [a] when prompted

Then run the install script that ships with Winlogbeat

.\install-service-winlogbeat.ps1

Figure 24: Installing Winlogbeat

Next, open “winlogbeat.yml” in Notepad. Copy and paste the following, changing the “hosts” IPs to match your ELK server’s IP.

# ======================= Winlogbeat specific options ==========================

winlogbeat.event_logs:
  - name: Application
    ignore_older: 30m
  - name: Security
    ignore_older: 30m
  - name: System
    ignore_older: 30m
  - name: Microsoft-Windows-Sysmon/Operational
    ignore_older: 30m
  - name: Microsoft-Windows-PowerShell/Operational
    ignore_older: 30m
    event_id: 4103, 4104
  - name: Windows PowerShell
    event_id: 400, 600
    ignore_older: 30m
  - name: Microsoft-Windows-WMI-Activity/Operational
    event_id: 5857, 5858, 5859, 5860, 5861

output.elasticsearch:
  hosts: ["ELKIPHERE:9200"]
  username: "elastic"
  password: "changeme"

setup.kibana:
  host: "ELKIPHERE"

Then start the winlogbeat service

Start-Service winlogbeat
Figure 25: Starting winlogbeat

Once the service is started you can verify that the connection works by running

.\winlogbeat setup -e
Figure 26: Checking winlogbeat config file

In Kibana, click the drop-down menu on the side and, under ‘Analytics’, go to ‘Discover’. You should now see Windows logs.

Figure 28: Viewing Windows Logs

Ensure that the time is synchronized properly within your lab, as those are the times that will be reflected in the logs in Kibana. Otherwise, you can set your time filter to a different range.

Your basic detection lab is now ready to go! As said earlier, I recommend installing Sysmon on the Windows hosts to get detailed events out of them.

AzureHound Cypher Cheatsheet

List of Cypher queries to help analyze AzureHound data. Queries under ‘GUI’ are intended for the BloodHound GUI (Settings>Query Debug Mode). Queries under ‘Console’ are intended for the Neo4j console (usually located at http://localhost:7474). Download the ‘Custom Queries’ json file here:


Return All Azure Users that are part of the ‘Global Administrator’ Role

MATCH p =(n)-[r:AZGlobalAdmin*1..]->(m) RETURN p

Return All On-Prem users with edges to Azure

MATCH p=(m:User)-[r:AZResetPassword|AZOwns|AZUserAccessAdministrator|AZContributor|AZAddMembers|AZGlobalAdmin|AZVMContributor|AZAvereContributor]->(n) WHERE m.objectid CONTAINS 'S-1-5-21' RETURN p

Find all paths to an Azure VM

MATCH p = (n)-[r]->(g:AZVM) RETURN p

Find all paths to an Azure KeyVault

MATCH p = (n)-[r]->(g:AZKeyVault) RETURN p

Return All Azure Users and their Groups

MATCH p=(m:AZUser)-[r:MemberOf]->(n) WHERE NOT m.objectid CONTAINS 'S-1-5' RETURN p

Return All Azure AD Groups that are synchronized with On-Premise AD

MATCH (n:Group) WHERE n.objectid CONTAINS 'S-1-5' AND n.azsyncid IS NOT NULL RETURN n

Find all Privileged Service Principals

MATCH p = (g:AZServicePrincipal)-[r]->(n) RETURN p

Find all Owners of Azure Applications

MATCH p = (n)-[r:AZOwns]->(g:AZApp) RETURN p


Return All Azure Users

MATCH (n:AZUser) return n.name

Return All Azure Applications

MATCH (n:AZApp) return n.objectid

Return All Azure Devices

MATCH (n:AZDevice) return n.name

Return All Azure Groups

MATCH (n:AZGroup) return n.name

Return all Azure Key Vaults

MATCH (n:AZKeyVault) return n.name

Return all Azure Resource Groups

MATCH (n:AZResourceGroup) return n.name

Return all Azure Service Principals

MATCH (n:AZServicePrincipal) return n.objectid

Return all Azure Virtual Machines

MATCH (n:AZVM) return n.name

Find All Principals with the ‘Contributor’ role

MATCH p = (n)-[r:AZContributor]->(g) RETURN p

Using a C# Shellcode Runner and ConfuserEx to Bypass UAC

I was recently on an engagement where we phished in and ran into UAC, which gave me more trouble than I expected. When a user logs onto Windows, a logon session is created and the credentials are tied to an authentication package inside of the logon session. Whenever a process wants to act as a user or use the user’s credentials, it uses a token. These tokens are tied to the logon sessions and ultimately determine how the credentials are used. In the case of User Account Control (UAC) and administrative users, the token is effectively split into two levels. Tokens have different integrity levels:

  • Low – Very restrictive, commonly used in sandboxing
  • Medium – Regular user
  • High – Administrative privileges
  • System – SYSTEM privileges

UAC splits the administrative user’s token into a medium and a high integrity token. When that user tries to run something as an administrator, a prompt is shown which they must accept; the high integrity token is then applied to that process or thread.
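
These four levels correspond to the well-known mandatory integrity SIDs (S-1-16-&lt;level&gt;), which is a handy way to recognize them in tooling output. The parsing helper below is just an illustration:

```python
# Well-known Windows mandatory integrity level RIDs (SID S-1-16-<rid>).
INTEGRITY_LEVELS = {
    0x1000: "Low",      # 4096  - very restrictive, sandboxing
    0x2000: "Medium",   # 8192  - regular user
    0x3000: "High",     # 12288 - administrative privileges
    0x4000: "System",   # 16384 - SYSTEM privileges
}

def integrity_from_sid(sid: str) -> str:
    """Map a mandatory-label SID like 'S-1-16-12288' to its level name."""
    rid = int(sid.rsplit("-", 1)[1])
    return INTEGRITY_LEVELS.get(rid, "Unknown")

print(integrity_from_sid("S-1-16-8192"))   # Medium
print(integrity_from_sid("S-1-16-12288"))  # High
```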

Figure 1: UAC Prompt

A UAC bypass is going from the administrative user’s medium integrity token to the high integrity token without having to interact with the prompt. In Cobalt Strike, a medium integrity context can be observed by typing getprivs. In addition, the user will not have an asterisk (*) next to their name.

Figure 2: What a Medium “Normal User” Integrity token looks like in Cobalt Strike
Figure 3: What a high “Administrator” integrity token looks like in Cobalt Strike

UAC bypasses are not anything new and have existed for quite some time; an exhaustive list can be found here:

Tons of PowerShell scripts and some C# tools exist that use these techniques to bypass UAC. There are even Cobalt Strike Aggressor scripts to automate it for you. A lot of the UAC bypasses on the aforementioned page have been remediated, however there are still a few that work. One that I’m particularly fond of is from my co-workers’ (@enigma0x3 & @mattifestation) research, which DLL hijacks the ‘SilentCleanup’ scheduled task; more can be read about it here:

The problem I ran into was that these tools and scripts leveraged the built-in payload generator in Cobalt Strike, which always gets picked up immediately by AV and EDRs. My goal, then, was to find a way to generate a DLL that can run shellcode and not be picked up by AV. My search for a shellcode runner that spits out a DLL came up short; however, since EXEs and DLLs are both PEs, I figured I could just modify an existing shellcode runner to compile into a DLL.

After a bit of tinkering with quite a few shellcode runners, I ended up using one of my co-worker @djhohenstein’s projects, CSharpSetThreadContext, which stemmed from @_xpn_’s work here. The beauty of this project is that it automatically determines an executable to spawn into, which avoids Get-InjectedThread.

After struggling for a bit on how to get the project to compile a working .DLL instead of an .EXE, I (once again) stumbled upon @_xpn_’s work here. The TL;DR of it is that there is a NuGet package that will automatically export a function from the project so an entry point is available when called with rundll32.exe. Keep in mind this is for 64-bit architecture.

First, generate the payload using Cobalt Strike (or whatever C2 you prefer).

Figure 4: Generating a stageless payload for Cobalt Strike. This will output in .bin format.

Follow Dwight’s instructions on generating the encrypted.bin file here. Build the .EXE, which is the default output type, and ensure that it successfully works and establishes a beacon. Next, install the NuGet package ‘DllExport’ by right clicking on the solution and selecting ‘Manage NuGet Packages for Solution…’

Figure 5: Managing the NuGet Packages

Click on ‘Browse’ and search for ‘DllExport’, then install the package. Once it finishes installing, it will run a .bat file to set up the DLL Export type. Ensure that the ‘Runner’ project is selected, then ensure the settings match the ones pictured below.

Figure 6: Using DllExport

Once set up, it will ask to reload the project; choose ‘Yes’. Next, add the DllExport attribute right before the Main function. At the same time, clear the Main function’s arguments.

Figure 7: Adding the [DllExport] attribute to the project & clearing the “Main” function of arguments.

The next steps are important to ensure the DLL is generated for the correct architecture, or else the code will not run. I probably spent more time on this part than was necessary.

At the top of Visual Studio, click the drop down menu where it says ‘Any CPU’ and click on ‘Configuration Manager’

Figure 8: Click on Configuration Manager.

Change the ‘Active Solution platform’ to x64

Figure 9: Configuration Manager settings

To the left of the configuration manager menu, change the build to ‘Release’.

Next, right click on ‘Runner’ in the solution explorer and click on ‘Properties’. Change the Output type to ‘Class Library’.

Figure 10: Changing output of the project to DLL instead of EXE

Next, click on the ‘Build’ tab on the left, and check “Optimize code”

Figure 11: Settings for the build

Now you can right click on ‘Runner’ in the solution explorer and select ‘Build’ to build the .DLL.

The working DLL will be placed in \CSharpSetThreadContext\Runner\bin\Release\x64\Runner.dll

Author’s Note: I’m not sure why it requires so much finessing, but I’m open to any optimizations or explanations if anyone knows. Specifically, only the DLL in the \x64\ directory will work, for some reason the one that’s under \Release\ does not contain the entrypoint that should be generated by [DllExport], even though it’s built at the same time as the one in \x64\.

You can then run the DLL (ensure it’s the one in the x64 directory)

rundll32.exe .\Runner.dll,Main

Figure 12: Running the DLL and getting a beacon back.

Now that we have a working DLL shellcode runner, we can run it through ConfuserEx to perform basic AV evasion on it.

Figure 13: Bypassing Defender with ConfuserEx. ConfuserEx settings part snipped out, that’s for you to find out 🙂

With a working DLL shellcode runner that will bypass AV (Defender, at least), we can then use it for a UAC bypass. For the actual bypass, I use @chryzsh’s Aggressor script here, which includes an edited version of the C# binary located here. I once again use ConfuserEx on that binary to evade AV (again, at least Defender). The last step is to edit the Aggressor script to not create the built-in Cobalt Strike payload and upload it. Also change the function after \\temp.dll from ‘Start’ to ‘Main’.

Figure 14: Editing uac-silentcleanup.cna

Rename Runner.dll to temp.dll (Or edit the Aggressor script to execute whatever name you want) and upload it to “C:\Users\[User]”. Finally, string it all together to form a UAC bypass.

Credits & Acknowledgements

Kerberosity Killed the Domain: An Offensive Kerberos Overview

Kerberos is the preferred way of authentication in a Windows domain, with NTLM being the alternative. Kerberos authentication is a very complex topic that can easily confuse people, but is sometimes heavily leveraged in red team or penetration testing engagements, as well as in actual attacks carried out by adversaries. Understanding how Kerberos works legitimately is essential to understanding the potential attack primitives against Kerberos and how attackers can leverage it to compromise a domain. This article is intended to give an overview of how Kerberos works and some of the more common attacks associated with it.


Kerberos revolves around objects called ‘tickets’ for authentication. There are two types of tickets:

  • Ticket-Granting-Ticket (TGT)
  • Ticket-Granting-Service (TGS, also called a ‘service ticket’)

When a user logs into Windows on a domain-joined computer, the password they enter is hashed and used as a key to encrypt a timestamp, which is sent to the Key Distribution Center (KDC) located on the domain controller. This encrypted timestamp is sent as an AS-REQ (Authentication Server Request). The KDC then verifies the user’s credentials by decrypting the request with the user’s password hash stored in AD and verifying that the timestamp is within acceptable limits. The KDC then responds with an AS-REP (Authentication Server Reply).


The AS-REP contains the TGT encrypted with the KRBTGT’s key (password hash) as well as some other data encrypted with the user’s key. The KRBTGT account is an account that is created when promoting a DC for the first time and is used by Kerberos for authentication. Compromising the KRBTGT account password has very serious implications to it, which will be covered later.

Now that the user is authenticated to the domain, they still need access to services on the computer they’re logging into. This is accomplished by requesting a service ticket (TGS) for a service principal via a TGS-REQ. A service principal is represented through its service principal name (SPN). There are many SPNs, a majority of which can be found here. For accessing the actual machine, the ‘HOST’ SPN is requested. HOST is the principal that contains all the built-in services for Windows.


A TGS contains a Privileged Attribute Certificate (PAC). The PAC is what contains information about the user and their memberships, as shown in figure 3.


The GroupIDs are what the service looks at to determine if that user has access to it. In order to prevent tampering, the TGS is encrypted using the target service’s password hash. In the case of HOST/ComputerName, this would be the machine account password hash. The reason account password hashes are used to encrypt/decrypt tickets is because those are the only shared secrets between the account and the KDC/Domain controller.


Once the TGS is received via TGS-REP, the target service decrypts the ticket with its password hash (in this case, the computer account’s password hash) and looks in the TGS’s PAC to see if the appropriate group SIDs are present, which determines access. The key distinction with service tickets is that the KDC does the authentication (TGT) whereas the service does the authorization (PAC in the TGS). Once confirmed, the user is allowed to access the HOST service principal and is then logged into their computer.
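
Conceptually, that service-side check boils down to intersecting the PAC’s group SIDs with the groups the service allows. The sketch below uses made-up field names and SIDs purely to show the shape of the authorization step, not the real PAC structure:

```python
# Illustrative-only model of the service-side authorization check:
# the service reads group SIDs out of the decrypted PAC and compares
# them against the groups it permits. Field names and SIDs are
# invented for clarity and do not reflect the real PAC layout.
pac = {
    "username": "alice",
    "group_sids": {
        "S-1-5-21-1111-2222-3333-513",  # e.g. Domain Users
        "S-1-5-21-1111-2222-3333-512",  # e.g. Domain Admins
    },
}

allowed_groups = {"S-1-5-21-1111-2222-3333-512"}  # groups the service permits

def authorized(pac: dict, allowed: set) -> bool:
    """Grant access if any PAC group SID appears in the allowed set."""
    return bool(pac["group_sids"] & allowed)

print(authorized(pac, allowed_groups))  # True
```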

This entire logon process can be viewed in Wireshark when capturing a login process for another user.


In figure 5, the first AS-REQ is answered with ‘KRB Error: KRB5KDC_ERR_PREAUTH_REQUIRED’. Prior to Kerberos version 5, Kerberos would allow authentication without first proving knowledge of the password. Version 5 requires that proof, which is called Pre-Authentication. Presumably for backwards compatibility reasons, Kerberos first tries to authenticate without Pre-Authentication, which is why there’s always an error after the initial AS-REQ during a logon. This leads into the first attack that will be covered, AS-REP Roasting.

AS-REP Roasting


There is a setting in the Account options of a user within AD to not require Kerberos Pre-Authentication.

During Pre-Authentication, the AS-REQ contains a timestamp encrypted with the user’s password hash. If the KDC can decrypt that timestamp with its copy of the user’s hash, and the timestamp falls within a few minutes of the KDC’s time, it issues the TGT via AS-REP. When Pre-Authentication is not required, an attacker can send an AS-REQ for the user and the KDC will immediately grant a TGT, because no proof of the password is needed. Since part of the AS-REP (apart from the TGT) contains data (a session key, TGT expiration time, and a nonce) encrypted with the user’s key (password hash), that encrypted blob can be pulled from the response and cracked offline. More can be read here.
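The offline attack can be sketched as a toy model. Here an HMAC over fields of known structure stands in for the real encrypted AS-REP enc-part, and the password, key derivation, and wordlist are all invented:

```python
import hashlib, hmac

def toy_key(password: str) -> bytes:
    # Stand-in for the real NT-hash / Kerberos etype key derivation.
    return hashlib.sha256(password.encode()).digest()

# KDC side: with Pre-Authentication disabled, anyone receives an AS-REP whose
# enc-part is protected only by the user's key (modeled as an HMAC over fields
# whose structure an attacker knows, so a guess can be verified offline).
enc_part_plain = b"session-key|tgt-expiry|nonce"
asrep_enc_part = hmac.new(toy_key("Summer2019!"), enc_part_plain, hashlib.sha256).digest()

# Attacker side: offline dictionary attack against the captured blob.
cracked = None
for guess in ["Password1", "letmein", "Summer2019!", "hunter2"]:
    candidate = hmac.new(toy_key(guess), enc_part_plain, hashlib.sha256).digest()
    if hmac.compare_digest(candidate, asrep_enc_part):
        cracked = guess
        break
print(cracked)  # Summer2019!
```

No traffic touches the KDC during the cracking loop, which is what makes the attack quiet once the single AS-REP has been captured.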

In Rubeus, this can be accomplished with the asreproast function.


Kerberoasting

When a TGS is issued, a timestamp + the service account’s password hash is used to encrypt the TGS, since the password is the shared secret between the service account and the KDC/DC. This is most commonly a service (such as HOST or CIFS) that is controlled by the computer, so the computer account password hash is used. In some cases, user accounts are created to be “service accounts” and registered under a service principal name. Since the KDC does not perform authorization for services, as that is the service’s job, any user can request a TGS for any service. This means that if a user “service account” is registered under an SPN, any user can request a TGS for that account, which will be encrypted with the user account’s password hash. That hash can be extracted from the ticket and cracked offline.

With Rubeus, this can be accomplished using the kerberoast function.

Golden Ticket

As briefly mentioned earlier, when a TGT is issued, it is encrypted with the KRBTGT account’s password hash. The KRBTGT password is, by default, never set manually and is therefore as complex as a machine account’s password. A golden ticket attack is when the KRBTGT password is compromised and an attacker forges a TGT. The RC4 hash of the KRBTGT password can be used with mimikatz to forge a ticket for any user without needing their password.
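A toy sketch of why this works: the KDC trusts anything sealed under the KRBTGT key and keeps no record of issuance, so holding that key means minting arbitrary TGTs (the key, user, and SID are all made up; HMAC stands in for real ticket encryption):

```python
import hashlib, hmac, json

# The compromised long-term secret of the KRBTGT account (toy value).
krbtgt_key = hashlib.sha256(b"krbtgt-secret").digest()

def kdc_verify_tgt(tgt: bytes) -> dict:
    """KDC side: any TGT sealed with the krbtgt key is accepted as genuine."""
    mac, blob = tgt[:32], tgt[32:]
    assert hmac.compare_digest(mac, hmac.new(krbtgt_key, blob, hashlib.sha256).digest())
    return json.loads(blob)

# Attacker mints a TGT for an arbitrary user with an arbitrary group SID
# (the SID below is invented; 512 is the Domain Admins RID).
pac = {"User": "FakeAdmin", "GroupIds": ["S-1-5-21-1111111111-2222222222-3333333333-512"]}
blob = json.dumps(pac, sort_keys=True).encode()
golden = hmac.new(krbtgt_key, blob, hashlib.sha256).digest() + blob

print(kdc_verify_tgt(golden)["User"])  # FakeAdmin -- accepted without a password
```

Note that ‘FakeAdmin’ need not even exist in the directory; the KDC only checks the seal.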




Silver Ticket

Where a golden ticket is a forged TGT, a silver ticket is a forged TGS. The major opsec consideration with golden tickets is that a transaction occurs at the KDC (a TGT is issued), which allows defenders to alert on these transactions and potentially catch golden ticket attacks. Silver tickets are much stealthier because they never touch the KDC. Since a service ticket is being forged, knowledge of the target service’s password hash is needed, which in most cases is the machine account password hash. In the case of service accounts with an SPN set, a silver ticket can be generated for that SPN.

For example, if a service account is created under the username ‘MSSQLServiceAcct’ and registered for the MSSQLSVC principal, the SPN would be MSSQLSVC/MSSQLServiceAcct. If an attacker obtained that account’s password hash (via Kerberoasting or other means), they could then forge a TGS for that SPN and access the service that utilizes it (MSSQLSVC).

In the case of certain services, such as CIFS, where an SPN for a user account (e.g. CIFS/Alice) is made, a silver ticket for CIFS using the user’s password hash will not work, because the user does not control access to that service; the machine account does.

In the example shown below, an attacker gained knowledge of the domain controller’s computer account hash and generated a silver ticket for CIFS to access its file system.
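The stealth property can be shown with the same toy sealing scheme used earlier, but keyed with the service’s secret: verification is entirely local to the service, so no KDC transaction exists to alert on (all secrets and names are invented):

```python
import hashlib, hmac, json

# Stolen machine account secret of the domain controller (toy value).
machine_key = hashlib.sha256(b"dc-machine-account-password").digest()

def cifs_service_check(ticket: bytes) -> str:
    """Service side: the file service validates tickets with its own key only;
    the KDC is never contacted (absent PAC validation)."""
    mac, blob = ticket[:32], ticket[32:]
    assert hmac.compare_digest(mac, hmac.new(machine_key, blob, hashlib.sha256).digest())
    return json.loads(blob)["User"]

# Attacker forges a TGS for CIFS directly, skipping the KDC entirely.
blob = json.dumps({"User": "Administrator", "GroupIds": ["S-1-5-32-544"]}).encode()
silver = hmac.new(machine_key, blob, hashlib.sha256).digest() + blob

print(cifs_service_check(silver))  # Administrator
```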


One caveat to this attack is PAC validation, a feature where the service sends the ticket’s PAC to the KDC for verification, which can cause the attack to fail.

Delegation Attacks

Kerberos utilizes something called ‘delegation’, which is when an account can essentially re-use, or “forward”, a ticket to another host or application.

For example, in figure X, a user is logged into a web application which uses a SQL DB on another server. Instead of the web application’s service account having full access to the entire server the SQL DB is running on, delegation can be configured so that the service account on the web application server can only access the SQL service on the SQL server. In addition, the service account will be used for delegation, meaning it will access the SQL server on the user’s behalf and with that user’s ticket. This limits both the service account from having complete access to the SQL server, as well as ensuring only authorized users can access the SQL DB through the web application.


There are three main types of delegation, each with their own attack primitives:

  • Unconstrained
  • Constrained
  • Resource-Based Constrained (RBCD)

Unconstrained Delegation

Unconstrained delegation is the original form of delegation, dating back to Windows 2000. It is configured on the ‘Delegation’ tab of a computer object within AD.


When a machine is configured for unconstrained delegation, any TGS sent to the host is accompanied by the user’s TGT, and that TGT is kept in memory for impersonation. The security implication is that if an attacker on the host is monitoring memory for Kerberos ticket activity, then as soon as a TGS is sent to the host, the attacker can extract the accompanying TGT and re-use it.


This can be taken a step further by coercing authentication from any machine in the domain to the unconstrained delegation host via the printer bug. The printer bug is a “feature” of the Windows Print System Remote Protocol that allows a host to ask another host for an update on a print job. The target host responds by authenticating to the host that initiated the request via TGS (which, in the case of unconstrained delegation, is accompanied by a TGT).

This means that if an attacker controls a machine with unconstrained delegation, they can use the printer bug to coerce a domain controller into authenticating to the controlled machine and then extract the domain controller’s computer account TGT.


This is possible using Rubeus and SpoolSample.

An important final note is that domain controllers will always have unconstrained delegation enabled by default.

Constrained Delegation

Constrained delegation was introduced with Windows Server 2003 as an improvement over unconstrained delegation. The major change is that an account/machine is limited to a specified list of services when impersonating a user (i.e. when delegating). Constrained delegation settings are located in the ‘Delegation’ tab of an object within Active Directory Users and Computers.


This can also be checked across the domain by looking for the msDS-AllowedToDelegateTo property in accounts/machines via the PowerView function:

Get-DomainUser USERNAME -Properties msds-allowedtodelegateto,useraccountcontrol

Before using the attack, it’s essential to understand how constrained delegation legitimately works. Constrained delegation uses two main Kerberos extensions: S4U2Self and S4U2Proxy. @harmj0y covered the technical details here, but at a high level, S4U2Self allows an account to request a service ticket to itself on behalf of any other user (without needing their password). If the TRUSTED_TO_AUTH_FOR_DELEGATION bit is set, the resulting TGS is marked as forwardable.


S4U2Proxy is then leveraged by the delegating account, using the forwardable TGS to request a TGS for the specified SPN. This is accomplished via the MS-SFU Kerberos extension, which allows a TGS to be requested using another TGS.


Now that service 1 (HTTP/WebServiceAcct) has a ticket for service 2 (MSSQLSvc/SQLSA), service 1 presents that ticket to service 2, who then verifies if the user is allowed to access the service via SIDs within the PAC of the TGS.


The attack primitive abuses the S4U2Self and S4U2Proxy extensions. If there is an SPN set in the msDS-AllowedToDelegateTo property of an account and the userAccountControl property contains the ‘TRUSTED_TO_AUTH_FOR_DELEGATION’ flag, that account can impersonate any user to any service on the host in that SPN. While S4U2Self allows a service to request a TGS to itself on behalf of any user, the additional part of the attack is that the sname (service name) field of the SPN in the (second) TGS is not protected, which allows an attacker to change it to any service they desire.

The full attack path then looks as follows:

  • Attacker Kerberoasts an account (WebSA) that has the msds-AllowedToDelegateTo property set with the SPN of MSSQLSvc/LABWIN10.LAB.local in the property, meaning WebSA can delegate other accounts to access the MSSQLSvc on LABWIN10.LAB.local.
  • Rubeus is used to automatically use the S4U2Self extension to request a TGS for the current user, WebSA, on behalf of the user ‘Admin’. The returned TGS is marked “forwardable”.
  • Rubeus then automatically uses the S4U2Proxy extension to use the MS-SFU extension and request a TGS for the delegated SPN, but changes the service portion to whatever the user specifies, e.g. instead of MSSQLSvc/LABWIN10.LAB.local, it requests a TGS for HOST/LABWIN10.LAB.local. Since the service part is not verified, the TGS is returned for the user ‘Admin’ and SPN HOST/LABWIN10.LAB.local
  • The ticket is imported into memory and the user can now access HOST/LABWIN10 as the ‘Admin’ user, as specified in the TGS.
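The S4U checks in the steps above can be modeled as a toy sketch: no crypto, just the forwardable flag and the unchecked service class (all names, flags, and the host-matching logic are simplified stand-ins for the real KDC behavior):

```python
# Toy model of the two S4U extensions used in constrained delegation abuse.
TRUSTED_TO_AUTH_FOR_DELEGATION = True  # set on the WebSA account in this scenario
ALLOWED_TO_DELEGATE_TO = {"MSSQLSvc/LABWIN10.LAB.local"}

def s4u2self(account_trusted: bool, impersonate: str) -> dict:
    # A service may always request a ticket to itself for any user; the ticket
    # is only marked forwardable if TRUSTED_TO_AUTH_FOR_DELEGATION is set.
    return {"user": impersonate, "forwardable": account_trusted}

def s4u2proxy(evidence: dict, requested_spn: str) -> dict:
    assert evidence["forwardable"], "KRB-ERR: ticket not forwardable"
    # Simplification: only the host part is checked against msDS-AllowedToDelegateTo;
    # the service class in the sname is not protected and can be swapped.
    host = requested_spn.split("/", 1)[1]
    assert any(spn.split("/", 1)[1] == host for spn in ALLOWED_TO_DELEGATE_TO)
    return {"user": evidence["user"], "spn": requested_spn}

tgs = s4u2self(TRUSTED_TO_AUTH_FOR_DELEGATION, "Admin")
ticket = s4u2proxy(tgs, "HOST/LABWIN10.LAB.local")  # service class swapped from MSSQLSvc
print(ticket)
```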


One final note on constrained delegation:

TRUSTED_TO_AUTH_FOR_DELEGATION is needed for S4U2Self, but is not present by default when adding an SPN for delegation on an account. It can be modified/added if you have the SeEnableDelegationPrivilege on a domain controller.

Resource-Based Constrained Delegation

Resource-Based Constrained Delegation (RBCD) is an improvement on constrained delegation, introduced with Windows Server 2012. The major change is that instead of specifying an SPN in the ‘Delegation’ tab of the delegating account, the delegation settings are now controlled by the resource. In the previous constrained delegation example, this means delegation would be configured on the backend SQL service instead of on the web service account that delegates to it.

Where constrained delegation sets the SPN in the msDS-AllowedToDelegateTo property, RBCD uses the msDS-AllowedToActOnBehalfOfOtherIdentity property on the computer object. Elad Shamir did an excellent write-up on how this can be abused. The summary of the article is that if the TRUSTED_TO_AUTH_FOR_DELEGATION userAccountControl flag is not present, S4U2Self still works, but the returned service ticket is not marked forwardable. In traditional constrained delegation, that ticket could not be used with the S4U2Proxy extension. With RBCD, however, the KDC accepts the non-forwardable ticket anyway.
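The difference in the forwardable check can be sketched as toy logic (simplified; account names and the check structure are invented for illustration):

```python
# Toy comparison: traditional constrained delegation vs RBCD handling of a
# non-forwardable S4U2Self ticket. Only the check that differs is modeled.
def s4u2proxy_constrained(evidence: dict) -> dict:
    if not evidence["forwardable"]:
        raise PermissionError("KDC rejects: ticket not forwardable")
    return {"user": evidence["user"]}

def s4u2proxy_rbcd(evidence: dict, acting_account: str, allowed_to_act: set) -> dict:
    # Resource-side check: is the requesting account listed in the target's
    # msDS-AllowedToActOnBehalfOfOtherIdentity? The forwardable flag is ignored.
    if acting_account not in allowed_to_act:
        raise PermissionError("KDC rejects: account not allowed to act")
    return {"user": evidence["user"]}

# S4U2Self result for an account without TRUSTED_TO_AUTH_FOR_DELEGATION.
evidence = {"user": "Administrator", "forwardable": False}

try:
    s4u2proxy_constrained(evidence)
except PermissionError as e:
    print("constrained:", e)          # rejected
print("rbcd:", s4u2proxy_rbcd(evidence, "newmachine$", {"newmachine$"}))  # accepted
```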

The attack primitive, then, is that if an attacker controls an account with an SPN set and that account is listed in a computer’s msDS-AllowedToActOnBehalfOfOtherIdentity property, the computer can be compromised. In addition, if the attacker has GenericWrite privileges over the computer account, they can compromise the computer by modifying the ‘AllowedToAct’ attribute to point at an account they control.

If an attacker does not have an account with an SPN set, they can get one by creating a computer object. By default, standard users in AD can create up to 10 computer objects (the ms-DS-MachineAccountQuota attribute). This can be done with Kevin Robertson’s PowerMad project.

The attack path then looks like this:

  • Attacker discovers they have GenericWrite privileges over a computer
  • If the attacker doesn’t have an account with an SPN set, PowerMad is used to create a machine account, so now the attacker has an account with an SPN.


  • Attacker discovers the msDS-AllowedToActOnBehalfOfOtherIdentity property on a computer is set for an account (with an SPN) that the attacker has already compromised


  • Rubeus’ S4U function is used to request a ticket on behalf of any user, via S4U2Self, from the account with an SPN set (e.g. getting a TGS for Administrator to newmachine$)
  • Rubeus’ S4U function then uses S4U2Proxy to request a TGS as Administrator (using the TGS from S4U2Self) for the target machine. The ticket is not marked as forwardable, which would fail under traditional constrained delegation, but under RBCD it does not matter and the request succeeds.
  • The attacker now is able to access the target computer

In figure X, the attacker has compromised Bob, a user account with an SPN set.

For a transcript of the commands used, reference @harmj0y’s gist here (I slightly modified some commands for the User SPN instead of computer account).