All About Azure Penetration Testing

What is Azure Penetration Testing?

Security practices and technologies cannot guarantee protection against cyber attacks. Given today’s threat landscape, organizations must assume a breach has already occurred or is likely to occur in the future. 

 

Microsoft uses a methodology called Assume Breach, along with the concept of red and blue teams, to conduct penetration testing against its infrastructure, services, and applications. It applies this same methodology in regular penetration tests against Azure cloud systems. 

 

In addition, Microsoft allows Azure users to perform penetration tests against their own applications. Azure is convenient for penetration testing because you can quickly provision a duplicate of your production environment and test against it.

 

Microsoft limits the types of penetration testing you can perform on your Azure services. Here are common types of penetration tests that are allowed:

 

  • Testing for OWASP vulnerabilities—for example, the OWASP Top Ten application vulnerabilities.
  • Endpoint fuzz testing—trying random inputs to find vulnerabilities.
  • Endpoint port scanning—identifying unnecessary or vulnerable open ports (see the sketch after this list).
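
To make these concrete, here is a minimal Python sketch of an endpoint port scan of the kind permitted against resources you own. The target address and port list are hypothetical placeholders; run it only against your own Azure VMs, within Microsoft's rules of engagement.

import socket

# Hypothetical public IP of an Azure VM you own (documentation address, replace with your own).
TARGET = "203.0.113.10"
COMMON_PORTS = [21, 22, 80, 443, 3389, 8080]

def scan(host, ports, timeout=1.0):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the TCP handshake succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print("Open ports:", scan(TARGET, COMMON_PORTS))

Any unexpected open port found this way is a candidate for closing or restricting, for example with a network security group rule.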

 


Microsoft Pentesting Methodology: Assume Breach

When planning penetration testing in Azure, you can take advantage of penetration testing methodologies tried and tested by Microsoft’s security teams. These are also the methods Microsoft itself uses to test the Azure cloud.

 

Microsoft follows the Assume Breach methodology, where the goal of security is to close gaps in any of the following capabilities:

 

  • Attack and intrusion detection
  • Rapid response to attacks and intrusions
  • Recovery after compromise or data leak
  • Preventing future attacks

 

Assume Breach security testing starts with a paper-based war game, and then proceeds to a realistic breach attempt by a Red Team. The Red Team’s goal is to test Microsoft’s ability to respond to breaches, with the aim of reducing mean time to detection (MTTD) and mean time to recovery (MTTR).
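
As a simple illustration of these metrics, the Python sketch below computes MTTD and MTTR from a set of hypothetical incident records; the timestamps and field names are invented for the example.

from datetime import datetime
from statistics import mean

# Hypothetical incident records: when the intrusion began, when it was detected,
# and when the affected systems were fully recovered.
incidents = [
    {"breach": datetime(2023, 5, 1, 9, 0), "detected": datetime(2023, 5, 1, 10, 30), "recovered": datetime(2023, 5, 1, 14, 0)},
    {"breach": datetime(2023, 6, 3, 22, 0), "detected": datetime(2023, 6, 4, 1, 0), "recovered": datetime(2023, 6, 4, 9, 0)},
]

# Mean time to detection: average gap between breach and detection.
mttd_hours = mean((i["detected"] - i["breach"]).total_seconds() / 3600 for i in incidents)
# Mean time to recovery: average gap between detection and full recovery.
mttr_hours = mean((i["recovered"] - i["detected"]).total_seconds() / 3600 for i in incidents)

print(f"MTTD: {mttd_hours:.1f} h, MTTR: {mttr_hours:.1f} h")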

 

The Red Team

Microsoft maintains a Red Team of full-time ethical hackers, who constantly launch cyberattacks against Microsoft infrastructure, services, and applications. They do not target applications and data belonging to end customers, focusing mainly on Microsoft-owned and managed services.

 

The Blue Team

The Blue Team is composed of employees dedicated to responding to Red Team attacks; they are sometimes joined by members of Microsoft’s incident response and operations organizations. The Blue Team is completely independent of the Red Team and uses cutting-edge security practices and technologies to defend against attacks.

 

To simulate real threats, the Blue Team does not know where or how the Red Team will strike. They are on-call around the clock, 365 days a year, and must respond to all security incidents, whether originating from the Red Team or real threat actors. 

 

When the Blue Team discovers an environment has been breached, they must:

 

  • Collect evidence and indications of compromise (IoC)
  • Notify engineering and operations teams
  • Classify alerts to determine if further investigation is needed
  • Add context about the environment to determine severity (see the sketch after this list)
  • Create a plan to mitigate the threat
  • Execute the plan and recover the affected systems
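
The classification and context steps above can be thought of as a simple triage function. The following Python sketch is a toy model of that idea; the severity rules and field names are invented for illustration and are not Microsoft's actual process.

from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Alert:
    source: str       # system that raised the alert
    indicator: str    # indicator of compromise (IoC), e.g. a suspicious hash or IP
    environment: str  # context used to weigh severity, e.g. "production" or "test"
    severity: Severity = Severity.LOW

def triage(alert: Alert) -> Alert:
    """Classify an alert and add environment context to decide whether it needs investigation."""
    if alert.environment == "production":
        alert.severity = Severity.HIGH
    elif "known-bad" in alert.indicator:
        alert.severity = Severity.MEDIUM
    return alert

# Example: an alert from a production system is escalated for further investigation.
print(triage(Alert("vm-frontend-01", "known-bad-ip:198.51.100.7", "production")).severity)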

 

Red Team Breach Post-Mortem

At the end of each Red Team attack, the Red and Blue Teams meet to conduct a post-mortem analysis, to assess the attack and Microsoft’s response. The two teams share their strategy and lessons learned. The Red Team provides valuable details about:

 

  • When the breach happened
  • How the breach took place
  • Which systems or assets were compromised
  • Whether the Blue Team managed to eradicate the threat completely
  • Whether recovery was effective or if some systems are still compromised

 

Conducting Your Own Penetration Test on Azure: Rules of Engagement

Microsoft has defined rules of engagement for penetration tests, which allow you to test applications hosted in Microsoft cloud services without harming other Microsoft customers.

 

The following acts are prohibited as part of a penetration test:

 

  • Analyzing or testing assets of other Microsoft Cloud customers.
  • Accessing or using any data that is not owned by your organization.
  • Running denial of service (DoS) attacks, or any test that generates large amounts of traffic.
  • Performing fuzz testing that may use extensive network bandwidth (except on your own VMs).
  • Taking action after the proof of concept (POC) stage of the penetration test—for example, you can prove you have root access on a system, but not execute root commands.
  • Violating any part of the Acceptable Use Policy.
  • Performing phishing or other social engineering attacks against Microsoft employees.

 

The following activities are allowed:

 

  • Creating a number of test accounts or tenants to demonstrate and test access and data transfer between accounts and tenants.
  • Running port scans, fuzz testing, or other vulnerability testing tools against your own Azure VMs.
  • Testing load on an application by generating traffic expected from a typical business process, including surge tests (see the sketch after this list).
  • Running tests that check security monitoring capabilities—for example, generating unusual logs.
  • Attempting to break out of an Azure service container such as Azure Functions—if you succeed, report it to Microsoft immediately and do not make use of any access rights gained as a result.
  • Testing the enforcement of restrictions applied by Mobile Application Management (MAM) or conditional access policies in Microsoft Intune.
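
For example, a surge test that stays within expected business traffic might look like the following Python sketch. The URL and request counts are hypothetical placeholders; keep the volume well below anything resembling a denial of service, per the prohibited activities above.

import concurrent.futures
import urllib.request

# Hypothetical endpoint on an application you own; keep traffic within what a
# normal business process would generate (no DoS, per the rules of engagement).
URL = "https://myapp.example.com/api/orders"
REQUESTS = 50     # a modest surge, not a flood
CONCURRENCY = 5

def hit(url):
    """Send one request and return the HTTP status code."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.status

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    statuses = list(pool.map(hit, [URL] * REQUESTS))

print("Responses by status code:", {code: statuses.count(code) for code in set(statuses)})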

 

 

Azure Pentesting Tools

Azure pentesting tools can help you to identify security weaknesses in your Azure deployment.

The following are some open source tools you can use:

 

  • Azucar — this is a multi-threaded, plugin-based tool for auditing Azure environments. It automatically collects configuration data associated with a particular Azure subscription and uses that information to reveal the security risks present (a minimal sketch of this kind of subscription-wide enumeration appears after this list).

 

  • MicroBurst — this is a collection of PowerShell functions and scripts for attacking Azure environments and assessing their security. It supports weak configuration auditing, Azure services discovery, and various post-exploitation activities such as credential dumping.

 

  • PowerZure — this is a PowerShell script designed to undertake both reconnaissance and exploitation actions on Microsoft Azure. It comes with a variety of attack components and functions for various tasks, including operational activities, information gathering, credential dumping, and data exfiltration.

 

  • Stormspotter — this is a tool for generating an “attack graph” of Azure and Azure Active Directory objects. It increases visibility into the attack surface, allowing pentesters and red teams to identify security vulnerabilities easily.

 

  • Cloud Security Suite (cs-suite) — this is a comprehensive tool for assessing the security posture of various cloud computing services, including Microsoft Azure.
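
To show the kind of subscription-wide enumeration these tools automate, here is a minimal sketch using the official Azure SDK for Python (azure-identity and azure-mgmt-resource). It only lists resources in a subscription you control; the subscription ID is a placeholder, and the sketch assumes you have already authenticated, for example via the Azure CLI.

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder: your own subscription

# DefaultAzureCredential picks up Azure CLI, environment, or managed identity credentials.
credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, SUBSCRIPTION_ID)

# Enumerate every resource in the subscription: the raw material an auditing
# tool would then check for risky configurations.
for resource in client.resources.list():
    print(resource.type, resource.name, resource.location)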

 

Conclusion

Azure penetration testing practices can help detect security gaps before they are exploited by threat actors. Microsoft uses a penetration testing methodology called “Assume Breach”, implemented using a red team and a blue team. 

 

The red team is composed of ethical hackers who use various Azure pentesting tools to try to detect vulnerabilities, without impacting end customer accounts. The blue team is in charge of incident response, constantly defending against attacks launched by the red team.

 

To help organizations conduct their own penetration testing on Azure, Microsoft has defined rules of engagement. These rules outline what actions are allowed or not allowed when implementing Azure penetration testing.