Azure Security Document
Overview
Network security
Database security
Storage security
Compute security
Operational security
Security management and monitoring
Service Fabric security
Identity management
IoT security
Azure encryption overview
Security architecture
Enabling operational security
Advanced threat detection
Logging and auditing
Isolation in the public cloud
Security technical capabilities
Governance in Azure
Data encryption at rest
Get Started
Getting started with Azure security
Security best practices and patterns
Security services and technologies
Network security
Network security best practices
Azure network security
Boundary security
Secure hybrid network architecture
Storage security
Data security and encryption best practices
Storage security guide
Compute security
Best practices for Azure VMs
Best practices for IaaS workloads
Microsoft Antimalware
Disk encryption for IaaS VMs
Encrypt an Azure VM
Operational security
Best practices for operational security
Security management and monitoring
Security management
Azure Security Center
Introduction to Azure log integration
Service Fabric
Service Fabric best practices
Service Fabric checklist
Identity management
Identity management security best practices
PaaS services
Securing PaaS deployments
Internet of Things
Secure your IoT deployment
IoT security best practices
Security architecture
Data classification
Disaster recovery and high availability for applications built on Azure
Related
Trust Center
Microsoft Security Response Center
Pen testing
Security Center
Key Vault
Log Analytics
Multi-Factor Authentication
Azure Active Directory
Operations Management Suite
Resources
Azure Roadmap
Azure security MVP program
Cybersecurity consulting
Pricing calculator
Scenarios
Manage personal data in Azure
Discover, identify, and classify personal data in Azure
Protect personal data in Azure
Protect personal data with Security Center
Protect personal data with Application Gateway
Protect personal data by using identity and access controls
Protect personal data at rest by using encryption
Protect personal data in transit by using encryption
Protect personal data by using Azure reporting tools
Security and Compliance blog
Security courses from Virtual Academy
Security videos on Channel 9
Threat Modeling Tool
Getting started
Feature overview
Threats
Mitigations
Introduction to Azure Security
9/8/2017 • 30 min to read
Overview
We know that security is job one in the cloud, and we know how important it is that you find accurate and timely
information about Azure security. One of the best reasons to use Azure for your applications and services is to take
advantage of its wide array of security tools and capabilities. These tools and capabilities help make it possible to
create secure solutions on the secure Azure platform. Microsoft Azure provides confidentiality, integrity, and
availability of customer data, while also enabling transparent accountability.
To help you better understand the collection of security controls implemented within Microsoft Azure from both
the customer's and Microsoft operations' perspectives, this white paper, "Introduction to Azure Security", is written
to provide a comprehensive look at the security available with Microsoft Azure.
Azure Platform
Azure is a public cloud service platform that supports a broad selection of operating systems, programming
languages, frameworks, tools, databases, and devices. It can run Linux containers with Docker integration; build
apps with JavaScript, Python, .NET, PHP, Java, and Node.js; build back-ends for iOS, Android, and Windows devices.
Azure public cloud services support the same technologies millions of developers and IT professionals already rely
on and trust. When you build on, or migrate IT assets to, a public cloud service provider, you are relying on that
organization’s abilities to protect your applications and data with the services and the controls they provide to
manage the security of your cloud-based assets.
Azure’s infrastructure is designed from facility to applications for hosting millions of customers simultaneously,
and it provides a trustworthy foundation upon which businesses can meet their security requirements.
In addition, Azure provides you with a wide array of configurable security options and the ability to control them so
that you can customize security to meet the unique requirements of your organization’s deployments. This
document helps you understand how Azure security capabilities can help you fulfill these requirements.
NOTE
The primary focus of this document is on customer-facing controls that you can use to customize and increase security for
your applications and services.
We do provide some overview information, but for detailed information on how Microsoft secures the Azure platform itself,
see information provided in the Microsoft Trust Center.
Abstract
Initially, public cloud migrations were driven by cost savings and the agility to innovate. For some time, security was
considered a major concern, and even a showstopper, for public cloud migration. However, public cloud security has
since transitioned from a major concern to one of the drivers for cloud migration. The rationale is the superior
ability of large public cloud service providers to protect the applications and data of cloud-based assets.
Azure’s infrastructure is designed from the facility to applications for hosting millions of customers simultaneously,
and it provides a trustworthy foundation upon which businesses can meet their security needs. In addition, Azure
provides you with a wide array of configurable security options and the ability to control them, so that you can
customize security for the unique requirements of your deployments, satisfy your IT control policies, and adhere to
external regulations.
This paper outlines Microsoft’s approach to security within the Microsoft Azure cloud platform:
Security features implemented by Microsoft to secure the Azure infrastructure, customer data, and applications.
Azure services and security features available to you to manage the security of the services and your data
within your Azure subscriptions.
| Secure platform | Privacy & controls | Compliance | Transparency |
| --- | --- | --- | --- |
| Security Development Lifecycle, internal audits | Manage your data all the time | Trust Center | How Microsoft secures customer data in Azure services |
| Mandatory security training, background checks | Control over data location | Common Controls Hub | How Microsoft manages data location in Azure services |
| Penetration testing, intrusion detection, DDoS protection, audits and logging | Provide data access on your terms | The Cloud Services Due Diligence Checklist | Who in Microsoft can access your data, and on what terms |
| State-of-the-art datacenters, physical security, secure network | Responding to law enforcement | Compliance by service, location, and industry | How Microsoft secures customer data in Azure services |
| Security incident response, shared responsibility | Stringent privacy standards | Review certifications for Azure services | Transparency hub |
Operations
This section provides additional information regarding key features in security operations and summary
information about these capabilities.
Operations Management Suite Security and Audit Dashboard
The OMS Security and Audit solution provides a comprehensive view into your organization’s IT security posture
with built-in search queries for notable issues that require your attention. The Security and Audit dashboard is the
home screen for everything related to security in OMS. It provides high-level insight into the Security state of your
computers. It also includes the ability to view all events from the past 24 hours, 7 days, or any other custom time
frame.
In addition, you can configure OMS Security & Compliance to automatically carry out specific actions when a
specific event is detected.
Azure Resource Manager
Azure Resource Manager enables you to work with the resources in your solution as a group. You can deploy,
update, or delete all the resources for your solution in a single, coordinated operation. You use an Azure Resource
Manager template for deployment and that template can work for different environments such as testing, staging,
and production. Resource Manager provides security, auditing, and tagging features to help you manage your
resources after deployment.
Azure Resource Manager template-based deployments help improve the security of solutions deployed in Azure
because standard security control settings can be integrated into standardized template-based deployments. This
reduces the risk of security configuration errors that might take place during manual deployments.
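To make this concrete, here is a minimal sketch of a Resource Manager template built as a Python dictionary and serialized to the JSON you would hand to a deployment. The resource, parameter name, and API version are illustrative assumptions, not values from this document; the point is that a security setting (HTTPS-only traffic) is baked into the template rather than applied by hand after deployment.

```python
import json

# Hypothetical minimal ARM template: the parameter name, API version, and
# SKU below are assumptions chosen for illustration.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "storageAccountName": {"type": "string"}
    },
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "name": "[parameters('storageAccountName')]",
            "apiVersion": "2017-06-01",
            "location": "[resourceGroup().location]",
            "sku": {"name": "Standard_LRS"},
            "kind": "Storage",
            # A standard security control carried by the template itself,
            # so every deployment from it inherits the setting.
            "properties": {"supportsHttpsTrafficOnly": True},
        }
    ],
}

# Serialize to the JSON document used for deployment.
template_json = json.dumps(template, indent=2)
```

Because every environment (testing, staging, production) is deployed from the same template, the control travels with the template instead of depending on a manual checklist.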
Application Insights
Application Insights is an extensible Application Performance Management (APM) service for web developers. With
Application Insights, you can monitor your live web applications and automatically detect performance anomalies.
It includes powerful analytics tools to help you diagnose issues and to understand what users actually do with your
apps. It monitors your application all the time it's running, both during testing and after you've published or
deployed it.
Application Insights creates charts and tables that show you, for example, what times of day you get most users,
how responsive the app is, and how well it is served by any external services that it depends on.
If there are crashes, failures, or performance issues, you can search through the telemetry data in detail to diagnose
the cause. And the service sends you emails if there are any changes in the availability and performance of your
app. Application Insights thus becomes a valuable security tool, because it helps with availability, one of the three
elements of the confidentiality, integrity, and availability security triad.
Azure Monitor
Azure Monitor offers visualization, query, routing, alerting, auto scale, and automation on data both from the Azure
infrastructure (Activity Log) and each individual Azure resource (Diagnostic Logs). You can use Azure Monitor to
alert you on security-related events that are generated in Azure logs.
Log Analytics
Log Analytics, part of Operations Management Suite, provides an IT management solution for both on-premises
and third-party cloud-based infrastructure (such as AWS) in addition to Azure resources. Data from Azure Monitor
can be routed directly to Log Analytics, so you can see metrics and logs for your entire environment in one place.
Log Analytics can be a useful tool in forensic and other security analysis, as the tool enables you to quickly search
through large amounts of security-related entries with a flexible query approach. In addition, on-premises firewall
and proxy logs can be exported into Azure and made available for analysis using Log Analytics.
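The kind of search this describes can be illustrated with a toy in-memory model: filter a set of security log entries by event ID and time window, the way a Log Analytics query scopes results. The record fields and event IDs below are assumptions for the example (4625 is the Windows event ID for a failed logon), not a Log Analytics API.

```python
from datetime import datetime, timedelta

def search_logs(records, event_id=None, since=None):
    """Return entries matching an event ID within a time window (toy model)."""
    results = []
    for rec in records:
        if event_id is not None and rec["EventID"] != event_id:
            continue
        if since is not None and rec["TimeGenerated"] < since:
            continue
        results.append(rec)
    return results

now = datetime(2017, 9, 8, 12, 0, 0)
records = [
    {"TimeGenerated": now - timedelta(hours=1), "EventID": 4625, "Computer": "web01"},  # failed logon
    {"TimeGenerated": now - timedelta(days=9), "EventID": 4625, "Computer": "web01"},   # too old
    {"TimeGenerated": now - timedelta(hours=2), "EventID": 4624, "Computer": "web02"},  # successful logon
]

# Failed logons (event 4625) from the last 7 days.
failed_recent = search_logs(records, event_id=4625, since=now - timedelta(days=7))
```

In the real service you would express this as a query over collected data rather than a Python loop; the sketch only shows the filtering idea.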
Azure Advisor
Azure Advisor is a personalized cloud consultant that helps you to optimize your Azure deployments. It analyzes
your resource configuration and usage telemetry. It then recommends solutions to help improve the performance,
security, and high availability of your resources while looking for opportunities to reduce your overall Azure spend.
Azure Advisor provides security recommendations, which can significantly improve your overall security posture for
solutions you deploy in Azure. These recommendations are drawn from security analysis performed by Azure
Security Center.
Azure Security Center
Azure Security Center helps you prevent, detect, and respond to threats with increased visibility into and control
over the security of your Azure resources. It provides integrated security monitoring and policy management
across your Azure subscriptions, helps detect threats that might otherwise go unnoticed, and works with a broad
ecosystem of security solutions.
In addition, Azure Security Center helps with security operations by providing you a single dashboard that surfaces
alerts and recommendations that can be acted upon immediately. Often, you can remediate issues with a single
click within the Azure Security Center console.
Applications
This section provides additional information regarding key features in application security and summary
information about these capabilities.
Web Application vulnerability scanning
One of the easiest ways to get started with testing for vulnerabilities on your App Service app is to use the
integration with Tinfoil Security to perform one-click vulnerability scanning on your app. You can view the test
results in an easy-to-understand report, and learn how to fix each vulnerability with step-by-step instructions.
Penetration Testing
If you prefer to perform your own penetration tests or want to use another scanner suite or provider, you must
follow the Azure penetration testing approval process and obtain prior approval to perform the desired penetration
tests.
Web Application firewall
The web application firewall (WAF) in Azure Application Gateway helps protect web applications from common
web-based attacks like SQL injection, cross-site scripting attacks, and session hijacking. It comes preconfigured with
protection from threats identified by the Open Web Application Security Project (OWASP) as the top 10 common
vulnerabilities.
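To illustrate the idea behind signature-based inspection, here is a deliberately naive sketch that checks a request's query string against a couple of patterns in the spirit of those OWASP categories. A real WAF such as the one in Application Gateway uses a far more sophisticated rule set; this toy code is for explanation only and must not be used for actual protection.

```python
import re

# Toy rule set, loosely modeled on two OWASP top-10 categories. The
# patterns are illustrative assumptions, not real WAF rules.
RULES = [
    ("SQL injection", re.compile(r"('|%27)\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE)),
    ("Cross-site scripting", re.compile(r"<\s*script", re.IGNORECASE)),
]

def inspect(query_string):
    """Return the names of rules the query string triggers."""
    return [name for name, pattern in RULES if pattern.search(query_string)]
```

A request like `id=1' OR 1=1` trips the first rule, while an ordinary query string passes through untouched; a production WAF layers many such checks with normalization, scoring, and tuning.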
Authentication and authorization in Azure App Service
App Service Authentication / Authorization is a feature that provides a way for your application to sign in users so
that you don't have to change code on the app backend. It provides an easy way to protect your application and
work with per-user data.
Layered Security Architecture
Since App Service Environments provide an isolated runtime environment deployed into an Azure Virtual Network,
developers can create a layered security architecture providing differing levels of network access for each
application tier. A common desire is to hide API back-ends from general Internet access, and only allow APIs to be
called by upstream web apps. Network Security groups (NSGs) can be used on Azure Virtual Network subnets
containing App Service Environments to restrict public access to API applications.
Web server diagnostics and application diagnostics
App Service web apps provide diagnostic functionality for logging information from both the web server and the
web application. These are logically separated into web server diagnostics and application diagnostics. Web server
diagnostics includes two major advances in diagnosing and troubleshooting sites and applications.
The first new feature is real-time state information about application pools, worker processes, sites, application
domains, and running requests. The second new feature is the detailed trace events that track a request
throughout the complete request-and-response process.
To enable the collection of these trace events, IIS 7 can be configured to automatically capture full trace logs, in XML
format, for any particular request based on elapsed time or error response codes.
Web server diagnostics
You can enable or disable the following kinds of logs:
Detailed Error Logging - Detailed error information for HTTP status codes that indicate a failure (status code
400 or greater). This may contain information that can help determine why the server returned the error
code.
Failed Request Tracing - Detailed information on failed requests, including a trace of the IIS components
used to process the request and the time taken in each component. This can be useful if you are attempting
to increase site performance or isolate what is causing a specific HTTP error to be returned.
Web Server Logging - Information about HTTP transactions using the W3C extended log file format. This is
useful when determining overall site metrics such as the number of requests handled or how many requests
are from a specific IP address.
Application diagnostics
Application diagnostics allows you to capture information produced by a web application. ASP.NET applications can
use the System.Diagnostics.Trace class to log information to the application diagnostics log. In Application
Diagnostics, there are two major types of events, those related to application performance and those related to
application failures and errors. The failures and errors can be divided further into connectivity, security, and failure
issues. Failure issues are typically related to a problem with the application code.
In Application Diagnostics, you can view events grouped in these ways:
All (displays all events)
Application Errors (displays exception events)
Performance (displays performance events)
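The grouping above can be sketched as a simple filter over captured events. The event shape here is an assumption made for the example, not the actual Application Diagnostics schema.

```python
# Toy model of the Application Diagnostics views: All, Application Errors,
# and Performance. Event records are illustrative assumptions.
events = [
    {"type": "performance", "message": "Request took 2300 ms"},
    {"type": "error", "message": "Unhandled NullReferenceException"},
    {"type": "performance", "message": "Request took 180 ms"},
]

def view(events, kind=None):
    """Return all events, or only those of one kind ('error'/'performance')."""
    if kind is None:
        return list(events)  # the "All" view
    return [e for e in events if e["type"] == kind]

all_events = view(events)
application_errors = view(events, "error")
performance = view(events, "performance")
```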
Storage
This section provides additional information regarding key features in Azure storage security and summary
information about these capabilities.
Role-Based Access Control (RBAC)
You can secure your storage account with Role-Based Access Control (RBAC). Restricting access based on the need-
to-know and least-privilege security principles is imperative for organizations that want to enforce security policies
for data access. These access rights are granted by assigning the appropriate RBAC role to groups and applications
at a certain scope. You can use built-in RBAC roles, such as Storage Account Contributor, to assign privileges to
users. Access to the storage keys for a storage account using the Azure Resource Manager model can be controlled
through Role-Based Access Control (RBAC).
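The key property of scoped assignments is inheritance: a role granted at a resource group applies to every resource beneath that group. The sketch below models that check with simple scope strings; the principal and scope values are hypothetical, and only the role name mirrors a real built-in role.

```python
# Toy model of RBAC scope inheritance. The principal name and scope paths
# are illustrative assumptions.
assignments = [
    # (principal, role, scope at which the role was assigned)
    ("app-team", "Storage Account Contributor",
     "/subscriptions/sub1/resourceGroups/rg-app"),
]

def has_role(principal, role, resource_scope):
    """An assignment applies at its scope and at every scope nested below it."""
    for who, what, scope in assignments:
        if who == principal and what == role and (
                resource_scope == scope or resource_scope.startswith(scope + "/")):
            return True
    return False
```

Assigning at the narrowest scope that still covers the need is how the least-privilege principle mentioned above is put into practice.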
Shared Access Signature
A shared access signature (SAS) provides delegated access to resources in your storage account. The SAS means
that you can grant a client limited permissions to objects in your storage account for a specified period and with a
specified set of permissions. You can grant these limited permissions without having to share your account access
keys.
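Under the hood, a SAS is a signed set of query parameters: the service recomputes an HMAC-SHA256 over a newline-separated "string to sign" using the account key and compares it to the signature in the URL. The sketch below shows that core signing step in simplified form; the exact fields and their order depend on the service version, and the key and resource path here are hypothetical demo values, not real credentials.

```python
import base64
import hashlib
import hmac

def sign_sas(account_key_b64, fields):
    """Simplified service-SAS signature: HMAC-SHA256 over the string to sign,
    keyed with the (base64-decoded) account key, then base64-encoded."""
    string_to_sign = "\n".join(fields)
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Hypothetical demo key (NOT a real storage account key).
demo_key = base64.b64encode(b"not-a-real-storage-account-key").decode()

# A read-only token for one blob, valid for a fixed window.
fields = [
    "r",                                   # permissions: read only
    "2017-09-08T00:00:00Z",                # start time
    "2017-09-09T00:00:00Z",                # expiry time
    "/blob/myaccount/container/file.txt",  # canonicalized resource
    "", "", "https", "2015-04-05",         # identifier, IP range, protocol, version
]
signature = sign_sas(demo_key, fields)
raw_len = len(base64.b64decode(signature))  # SHA-256 digests are 32 bytes
```

Because only the holder of the account key can produce a valid signature, you can hand out the signed token without ever sharing the key itself, and the permissions and expiry are fixed at signing time.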
Encryption in Transit
Encryption in transit is a mechanism of protecting data when it is transmitted across networks. With Azure Storage,
you can secure data using:
Transport-level encryption, such as HTTPS when you transfer data into or out of Azure Storage.
Wire encryption, such as SMB 3.0 encryption for Azure File shares.
Client-side encryption, to encrypt the data before it is transferred into storage and to decrypt the data after it
is transferred out of storage.
Encryption at rest
For many organizations, data encryption at rest is a mandatory step towards data privacy, compliance, and data
sovereignty. There are three Azure storage security features that provide encryption of data that is “at rest”:
Storage Service Encryption allows you to request that the storage service automatically encrypt data when
writing it to Azure Storage.
Client-side Encryption also provides the feature of encryption at rest.
Azure Disk Encryption allows you to encrypt the OS disks and data disks used by an IaaS virtual machine.
Storage Analytics
Azure Storage Analytics performs logging and provides metrics data for a storage account. You can use this data to
trace requests, analyze usage trends, and diagnose issues with your storage account. Storage Analytics logs
detailed information about successful and failed requests to a storage service. This information can be used to
monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort
basis. The following types of authenticated requests are logged:
Successful requests.
Failed requests, including timeout, throttling, network, authorization, and other errors.
Requests using a Shared Access Signature (SAS), including failed and successful requests.
Requests to analytics data.
Enabling Browser-Based Clients Using CORS
Cross-Origin Resource Sharing (CORS) is a mechanism that allows domains to give each other permission for
accessing each other’s resources. The User Agent sends extra headers to ensure that the JavaScript code loaded
from a certain domain is allowed to access resources located at another domain. The latter domain then replies
with extra headers allowing or denying the original domain access to its resources.
Azure storage services now support CORS so that once you set the CORS rules for the service, a properly
authenticated request made against the service from a different domain is evaluated to determine whether it is
allowed according to the rules you have specified.
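That evaluation can be modeled as a lookup against the configured rules: match the request's origin and method, and if a rule allows them, echo the corresponding response headers. The rule shape and origin below are assumptions for illustration, not the storage service's actual rule schema.

```python
# Toy model of CORS rule evaluation on a storage service. The configured
# origin and allowed methods are illustrative assumptions.
cors_rules = [
    {
        "allowed_origins": {"https://contoso.com"},
        "allowed_methods": {"GET", "HEAD"},
    }
]

def evaluate_cors(origin, method):
    """Return the CORS response headers for an allowed request, or None."""
    for rule in cors_rules:
        if origin in rule["allowed_origins"] and method in rule["allowed_methods"]:
            return {
                "Access-Control-Allow-Origin": origin,
                "Access-Control-Allow-Methods": ", ".join(sorted(rule["allowed_methods"])),
            }
    return None
```

A request from an origin you never listed simply gets no CORS headers back, and the browser refuses to expose the response to the page's JavaScript.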
Networking
This section provides additional information regarding key features in Azure network security and summary
information about these capabilities.
Network Layer Controls
Network access control is the act of limiting connectivity to and from specific devices or subnets, and it represents
the core of network security. The goal of network access control is to make sure that your virtual machines and
services are accessible only to the users and devices that you want them to be accessible to.
Network Security Groups
A Network Security Group (NSG) is a basic stateful packet filtering firewall and it enables you to control access
based on a 5-tuple. NSGs do not provide application layer inspection or authenticated access controls. They can be
used to control traffic moving between subnets within an Azure Virtual Network and traffic between an Azure
Virtual Network and the Internet.
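The 5-tuple evaluation can be sketched as first-match filtering over rules sorted by priority (lower numbers win), which is how NSG rules are ordered. This is a deliberately minimal, stateless model with made-up addresses; real NSGs are stateful and support constructs such as service tags that this toy ignores.

```python
import ipaddress

# Toy NSG rule table. Prefixes, ports, and priorities are illustrative
# assumptions: allow HTTPS into the 10.0.1.0/24 subnet, deny the rest.
rules = [
    # (priority, src_prefix, src_port, dst_prefix, dst_port, protocol, action)
    (100, "0.0.0.0/0", "*", "10.0.1.0/24", "443", "Tcp", "Allow"),
    (4096, "0.0.0.0/0", "*", "0.0.0.0/0", "*", "*", "Deny"),  # catch-all deny
]

def evaluate(src_ip, src_port, dst_ip, dst_port, protocol):
    """First matching rule wins; lowest priority number is evaluated first."""
    for rule in sorted(rules):
        _, src_pfx, s_port, dst_pfx, d_port, proto, action = rule
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(src_pfx)
                and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(dst_pfx)
                and s_port in ("*", str(src_port))
                and d_port in ("*", str(dst_port))
                and proto in ("*", protocol)):
            return action
    return "Deny"
```

With this table, HTTPS from the Internet reaches the subnet while SSH, for example, falls through to the catch-all deny.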
Route Control and Forced Tunneling
The ability to control routing behavior on your Azure Virtual Networks is a critical network security and access
control capability. For example, if you want to make sure that all traffic to and from your Azure Virtual Network
goes through a virtual security appliance, you need to be able to control and customize routing behavior. You
can do this by configuring User-Defined Routes in Azure.
User-Defined Routes allow you to customize inbound and outbound paths for traffic moving into and out of
individual virtual machines or subnets to ensure the most secure route possible. Forced tunneling is a mechanism
you can use to ensure that your services are not allowed to initiate a connection to devices on the Internet.
This is different from being able to accept incoming connections and then responding to them. Front-end web
servers need to respond to requests from Internet hosts, and so Internet-sourced traffic is allowed inbound to these
web servers and the web servers can respond.
Forced tunneling is commonly used to force outbound traffic to the Internet to go through on-premises security
proxies and firewalls.
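Route selection follows the longest-prefix-match rule: the most specific route that covers the destination wins. That is what lets a user-defined default route (0.0.0.0/0) steer Internet-bound traffic to a virtual appliance while traffic inside the VNet's own address space still uses the more specific system route. The address ranges and next-hop labels below are hypothetical.

```python
import ipaddress

# Toy route table: a system route for the VNet plus a user-defined default
# route that forces Internet-bound traffic through an appliance. Addresses
# are illustrative assumptions.
routes = [
    ("10.0.0.0/16", "VnetLocal"),                 # system route for the VNet
    ("0.0.0.0/0", "VirtualAppliance 10.0.2.4"),   # user-defined forced tunnel
]

def next_hop(destination_ip):
    """The most specific (longest) matching prefix decides the next hop."""
    dest = ipaddress.ip_address(destination_ip)
    matches = [(ipaddress.ip_network(p), hop) for p, hop in routes
               if dest in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

A destination inside 10.0.0.0/16 stays on the VNet, while anything else falls to the 0.0.0.0/0 route and is handed to the appliance.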
Virtual Network Security Appliances
While Network Security Groups, User-Defined Routes, and forced tunneling provide you a level of security at the
network and transport layers of the OSI model, there may be times when you want to enable security at higher
levels of the stack. You can access these enhanced network security features by using an Azure partner network
security appliance solution. You can find the most current Azure partner network security solutions by visiting the
Azure Marketplace and searching for “security” and “network security.”
Azure Virtual Network
An Azure virtual network (VNet) is a representation of your own network in the cloud. It is a logical isolation of the
Azure network fabric dedicated to your subscription. You can fully control the IP address blocks, DNS settings,
security policies, and route tables within this network. You can segment your VNet into subnets and place Azure
IaaS virtual machines (VMs) and/or Cloud services (PaaS role instances) on Azure Virtual Networks.
Additionally, you can connect the virtual network to your on-premises network using one of the connectivity
options available in Azure. In essence, you can expand your network to Azure, with complete control on IP address
blocks with the benefit of enterprise scale Azure provides.
Azure networking supports various secure remote access scenarios. Some of these include:
Connect individual workstations to an Azure Virtual Network
Connect on-premises network to an Azure Virtual Network with a VPN
Connect on-premises network to an Azure Virtual Network with a dedicated WAN link
Connect Azure Virtual Networks to each other
VPN Gateway
To send network traffic between your Azure Virtual Network and your on-premises site, you must create a VPN
gateway for your Azure Virtual Network. A VPN gateway is a type of virtual network gateway that sends encrypted
traffic across a public connection. You can also use VPN gateways to send traffic between Azure Virtual Networks
over the Azure network fabric.
Express Route
Microsoft Azure ExpressRoute is a dedicated WAN link that lets you extend your on-premises networks into the
Microsoft cloud over a dedicated private connection facilitated by a connectivity provider.
With ExpressRoute, you can establish connections to Microsoft cloud services, such as Microsoft Azure, Office 365,
and CRM Online. Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethernet network, or a
virtual cross-connection through a connectivity provider at a co-location facility.
ExpressRoute connections do not go over the public Internet and thus can be considered more secure than VPN-
based solutions. This allows ExpressRoute connections to offer more reliability, faster speeds, lower latencies, and
higher security than typical connections over the Internet.
Application Gateway
Microsoft Azure Application Gateway provides an Application Delivery Controller (ADC) as a service, offering
various layer 7 load balancing capabilities for your application.
It allows you to optimize web farm productivity by offloading CPU intensive SSL termination to the Application
Gateway (also known as “SSL offload” or “SSL bridging”). It also provides other Layer 7 routing capabilities
including round-robin distribution of incoming traffic, cookie-based session affinity, URL path-based routing, and
the ability to host multiple websites behind a single Application Gateway. As a layer-7 load balancer, Azure
Application Gateway provides failover and performance-based routing of HTTP requests between different servers,
whether they are in the cloud or on-premises.
Application Gateway provides many Application Delivery Controller (ADC) features, including HTTP load balancing,
cookie-based session affinity, Secure Sockets Layer (SSL) offload, custom health probes, multi-site support, and
many others.
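Two of those behaviors, round-robin distribution and cookie-based session affinity, can be sketched together: a new client is assigned a backend in rotation and given an affinity cookie, and a returning client presenting that cookie sticks to the same backend. The backend names and cookie name are assumptions for the example, not gateway configuration values.

```python
import itertools

# Toy model of round-robin distribution plus cookie-based session affinity.
# Backend pool and cookie name are illustrative assumptions.
backends = ["web-1", "web-2", "web-3"]
_rr = itertools.cycle(backends)

def route(cookies):
    """Honor an existing affinity cookie; otherwise round-robin and set one."""
    if "ApplicationGatewayAffinity" in cookies:
        return cookies["ApplicationGatewayAffinity"], cookies
    backend = next(_rr)
    return backend, {**cookies, "ApplicationGatewayAffinity": backend}

# A new client gets a backend and an affinity cookie...
first_backend, session = route({})
# ...and later requests carrying that cookie stick to the same backend.
second_backend, _ = route(session)
```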
Web Application Firewall
Web Application Firewall is a feature of Azure Application Gateway that provides protection to web applications
that use application gateway for standard Application Delivery Control (ADC) functions. Web application firewall
does this by protecting them against most of the OWASP top 10 common web vulnerabilities.
Compute
This section provides additional information regarding key features in this area and summary information about
these capabilities.
Antimalware & Antivirus
With Azure IaaS, you can use antimalware software from security vendors such as Microsoft, Symantec, Trend
Micro, McAfee, and Kaspersky to protect your virtual machines from malicious files, adware, and other threats.
Microsoft Antimalware for Azure Cloud Services and Virtual Machines is a protection capability that helps identify
and remove viruses, spyware, and other malicious software. Microsoft Antimalware provides configurable alerts
when known malicious or unwanted software attempts to install itself or run on your Azure systems. Microsoft
Antimalware can also be deployed using Azure Security Center.
Hardware Security Module
Encryption and authentication do not improve security unless the keys themselves are protected. You can simplify
the management and security of your critical secrets and keys by storing them in Azure Key Vault. Key Vault
provides the option to store your keys in hardware security modules (HSMs) certified to FIPS 140-2 Level 2
standards. Your SQL Server encryption keys for backup or transparent data encryption can all be stored in Key
Vault with any keys or secrets from your applications. Permissions and access to these protected items are
managed through Azure Active Directory.
Virtual machine backup
Azure Backup is a solution that protects your application data with zero capital investment and minimal operating
costs. Application errors can corrupt your data, and human errors can introduce bugs into your applications that
can lead to security issues. With Azure Backup, your virtual machines running Windows and Linux are protected.
Azure Site Recovery
An important part of your organization's business continuity/disaster recovery (BCDR) strategy is figuring out how
to keep corporate workloads and apps up and running when planned and unplanned outages occur. Azure Site
Recovery helps orchestrate replication, failover, and recovery of workloads and apps so that they are available from
a secondary location if your primary location goes down.
SQL VM TDE
Transparent data encryption (TDE) and column-level encryption (CLE) are SQL Server encryption features. This form
of encryption requires you to manage and store the cryptographic keys that you use for encryption.
The Azure Key Vault (AKV) service is designed to improve the security and management of these keys in a secure
and highly available location. The SQL Server Connector enables SQL Server to use these keys from Azure Key
Vault.
If you are running SQL Server with on-premises machines, there are steps you can follow to access Azure Key Vault
from your on-premises SQL Server machine. But for SQL Server in Azure VMs, you can save time by using the
Azure Key Vault Integration feature. With a few Azure PowerShell cmdlets to enable this feature, you can automate
the configuration necessary for a SQL VM to access your key vault.
VM Disk Encryption
Azure Disk Encryption is a new capability that helps you encrypt your Windows and Linux IaaS virtual machine
disks. It applies the industry standard BitLocker feature of Windows and the DM-Crypt feature of Linux to provide
volume encryption for the OS and the data disks. The solution is integrated with Azure Key Vault to help you
control and manage the disk-encryption keys and secrets in your Key Vault subscription. The solution also ensures
that all data on the virtual machine disks are encrypted at rest in your Azure storage.
Virtual networking
Virtual machines need network connectivity. To support that requirement, Azure requires virtual machines to be
connected to an Azure Virtual Network. An Azure Virtual Network is a logical construct built on top of the physical
Azure network fabric. Each logical Azure Virtual Network is isolated from all other Azure Virtual Networks. This
isolation helps ensure that network traffic in your deployments is not accessible to other Microsoft Azure customers.
Patch Updates
Patch Updates provide the basis for finding and fixing potential problems and simplify the software update
management process, both by reducing the number of software updates you must deploy in your enterprise and
by increasing your ability to monitor compliance.
Security policy management and reporting
Azure Security Center helps you prevent, detect, and respond to threats, and provides you increased visibility into,
and control over, the security of your Azure resources. It provides integrated security monitoring and policy
management across your Azure subscriptions, helps detect threats that might otherwise go unnoticed, and works
with a broad ecosystem of security solutions.
Azure Active Directory capabilities by edition include:
Azure Active Directory Free: Directory objects, user/group management (add/update/delete), user-based
provisioning, device registration, Single Sign-On (SSO), self-service password change for cloud users, Connect
(the sync engine that extends on-premises directories to Azure Active Directory), and security/usage reports.
Azure Active Directory Basic: Group-based access management/provisioning, self-service password reset for
cloud users, company branding (logon pages/Access Panel customization), Application Proxy, and a 99.9% SLA.
Azure Active Directory Premium P1: Self-service group and app management/self-service application
additions/dynamic groups, self-service password reset/change/unlock with on-premises write-back, Multi-Factor
Authentication (cloud and on-premises (MFA Server)), MIM CAL + MIM Server, Cloud App Discovery, Connect
Health, and automatic password rollover for group accounts.
Azure Active Directory Premium P2: Identity Protection and Privileged Identity Management.
Azure Active Directory Join (Windows 10 only): Join a device to Azure AD, desktop SSO, Microsoft Passport
for Azure AD, administrator BitLocker recovery, MDM auto-enrollment, self-service BitLocker recovery, and
additional local administrators on Windows 10 devices via Azure AD Join.
Cloud App Discovery is a premium feature of Azure Active Directory that enables you to identify cloud
applications that are used by the employees in your organization.
Azure Active Directory Identity Protection is a security service that uses Azure Active Directory anomaly
detection capabilities to provide a consolidated view into risk events and potential vulnerabilities that could
affect your organization’s identities.
Azure Active Directory Domain Services enables you to join Azure VMs to a domain without the need to
deploy domain controllers. Users sign in to these VMs by using their corporate Active Directory credentials,
and can seamlessly access resources.
Azure Active Directory B2C is a highly available, global identity management service for consumer-facing
apps that can scale to hundreds of millions of identities and integrate across mobile and web platforms. Your
customers can sign in to all your apps through customizable experiences that use existing social media
accounts, or you can create new standalone credentials.
Azure Active Directory B2B Collaboration is a secure partner integration solution that supports your cross-
company relationships by enabling partners to access your corporate applications and data selectively by
using their self-managed identities.
Azure Active Directory Join enables you to extend cloud capabilities to Windows 10 devices for centralized
management. It makes it possible for users to connect to the corporate or organizational cloud through
Azure Active Directory and simplifies access to apps and resources.
Azure Active Directory Application Proxy provides SSO and secure remote access for web applications
hosted on-premises.
Next Steps
Getting started with Microsoft Azure Security
Azure services and features you can use to help secure your services and data within Azure
Azure Security Center
Prevent, detect, and respond to threats with increased visibility and control over the security of your Azure
resources
Security health monitoring in Azure Security Center
Learn how the monitoring capabilities in Azure Security Center help you monitor compliance with policies.
Azure Network Security Overview
6/27/2017 • 17 min to read
Microsoft Azure includes a robust networking infrastructure to support your application and service connectivity
requirements. Network connectivity is possible between resources located in Azure, between on-premises and
Azure hosted resources, and to and from the Internet and Azure.
The goal of this article is to make it easier for you to understand what Microsoft Azure has to offer in the area of
network security. Here we provide basic explanations for core network security concepts and requirements. We
also provide you information on what Azure has to offer in each of these areas as well as links to help you gain a
deeper understanding of interesting areas.
This Azure Network Security Overview article focuses on the following areas:
Azure networking
Network access control
Secure remote access and cross-premises connectivity
Availability
Name resolution
DMZ architecture
Monitoring and threat detection
Azure Networking
Virtual machines need network connectivity. To support that requirement, Azure requires virtual machines to be
connected to an Azure Virtual Network. An Azure Virtual Network is a logical construct built on top of the physical
Azure network fabric. Each logical Azure Virtual Network is isolated from all other Azure Virtual Networks. This
helps ensure that network traffic in your deployments is not accessible to other Microsoft Azure customers.
Learn more:
Virtual Network Overview
Availability
Availability is a key component of any security program. If your users and systems can’t access what they need to
access over the network, the service can be considered compromised. Azure has networking technologies that
support the following high-availability mechanisms:
HTTP-based load balancing
Network level load balancing
Global load balancing
Load balancing is a mechanism designed to equally distribute connections among multiple devices. The goals of
load balancing are:
Increase availability – when you load balance connections across multiple devices, one or more of the devices
can become unavailable and the services running on the remaining online devices can continue to serve the
content from the service
Increase performance – when you load balance connections across multiple devices, a single device doesn’t
have to take the processor hit. Instead, the processing and memory demands for serving the content are spread
across multiple devices.
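The distribution described above can be sketched with a minimal round-robin balancer. This is an illustration of the concept only, not an Azure component; the backend names are hypothetical.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distribute incoming connections evenly across a pool of backends."""

    def __init__(self, backends):
        self._next = cycle(list(backends))

    def pick(self):
        # Each call hands the next connection to the next device in turn,
        # so no single backend absorbs all of the processing load.
        return next(self._next)

balancer = RoundRobinBalancer(["web-1", "web-2", "web-3"])
assigned = [balancer.pick() for _ in range(6)]
```

After six picks, each of the three backends has received exactly two connections, which is the availability and performance benefit described above.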
HTTP-based Load Balancing
Organizations that run web-based services often want an HTTP-based load balancer in front of those web
services to help ensure adequate levels of performance and high availability. In contrast to traditional network-
based load balancers, the load balancing decisions made by HTTP-based load balancers are based on
characteristics of the HTTP protocol, not on the network and transport layer protocols.
To provide you HTTP-based load balancing for your web-based services, Azure provides you the Azure Application
Gateway. The Azure Application Gateway supports:
HTTP-based load balancing – load balancing decisions are made based on characteristics specific to the HTTP
protocol
Cookie-based session affinity – this capability makes sure that connections established to one of the servers
behind that load balancer stay intact between the client and server. This ensures stability of transactions.
SSL offload – when a client connection is established with the load balancer, that session between the client and
the load balancer is encrypted using the HTTPS (SSL/TLS) protocol. However, in order to increase performance, you
have the option to have the connection between the load balancer and the web server behind the load balancer
use the HTTP (unencrypted) protocol. This is referred to as “SSL offload” because the web servers behind the
load balancer don’t experience the processor overhead involved with encryption, and therefore should be able
to service requests more quickly.
URL-based content routing – this feature makes it possible for the load balancer to make decisions on where to
forward connections based on the target URL. This provides a lot more flexibility than solutions that make load
balancing decisions based on IP addresses.
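URL-based content routing and cookie-based affinity can be sketched together in a few lines. This is a conceptual illustration of the decision logic, not the Application Gateway implementation; the path prefixes and server names are hypothetical.

```python
import zlib

# Hypothetical routing table: URL path prefix -> backend pool.
BACKEND_POOLS = {
    "/images/": ["img-1", "img-2"],
    "/video/": ["vid-1"],
}
DEFAULT_POOL = ["web-1", "web-2"]

def route(url_path, session_cookie=None):
    """Pick a backend pool from the target URL, then honor session affinity."""
    pool = DEFAULT_POOL
    for prefix, servers in BACKEND_POOLS.items():
        if url_path.startswith(prefix):
            pool = servers
            break
    if session_cookie:
        # Cookie-based session affinity: the same cookie always maps to the
        # same server, so a client's session stays on one backend.
        return pool[zlib.crc32(session_cookie.encode()) % len(pool)]
    return pool[0]
```

Routing on the URL rather than on IP addresses is what gives this approach its flexibility: requests for `/images/` and `/video/` can land on pools tuned for those workloads.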
Learn more:
Application Gateway Overview
Network Level Load Balancing
In contrast to HTTP-based load balancing, network level load balancing makes load balancing decisions based on IP
address and port (TCP or UDP) numbers. You can gain the benefits of network level load balancing in Azure by
using the Azure Load Balancer. Some key characteristics of the Azure Load Balancer include:
Network level load balancing based on IP address and port numbers
Support for any application layer protocol
Load balances to Azure virtual machines and cloud services role instances
Can be used for both Internet-facing (external load balancing) and non-Internet facing (internal load balancing)
applications and virtual machines
Endpoint monitoring, which is used to determine if any of the services behind the load balancer have become
unavailable
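A network-level decision of the kind described above can be sketched by hashing the connection's IP/port tuple. This is an illustration of the layer 3/4 approach, not the Azure Load Balancer's actual algorithm; the addresses are hypothetical.

```python
import zlib

BACKENDS = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol="tcp"):
    """Hash the connection tuple to choose a backend.

    Because the decision uses only IP addresses and port numbers, any
    application-layer protocol carried over TCP or UDP can be balanced
    this way, and the same flow always reaches the same backend.
    """
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    return BACKENDS[zlib.crc32(key) % len(BACKENDS)]
```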
Learn more:
Internet Facing load balancer between multiple Virtual Machines or services
Internal Load Balancer Overview
Global Load Balancing
Some organizations will want the highest level of availability possible. One way to reach this goal is to host
applications in globally distributed datacenters. When an application is hosted in data centers located throughout
the world, it’s possible for an entire geopolitical region to become unavailable and still have the application up and
running.
In addition to the availability advantages you get by hosting applications in globally distributed datacenters, you
also can get performance benefits. These performance benefits can be obtained by using a mechanism that directs
requests for the service to the datacenter that is nearest to the device that is making the request.
Global load balancing can provide you both of these benefits. In Azure, you can gain the benefits of global load
balancing by using Azure Traffic Manager.
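The performance-routing idea above can be sketched as picking the region with the lowest measured latency, skipping regions that are down. This is a conceptual sketch only, not how Traffic Manager is implemented; the region names and latency figures are illustrative.

```python
# Hypothetical measured round-trip times (ms) from a client to each region;
# None marks a region that is currently unavailable.
LATENCY_MS = {
    "eastus": 120,
    "westeurope": 35,
    "southeastasia": 210,
}

def nearest_region(latencies):
    """Direct the request to the datacenter nearest to the requesting
    device, i.e. the available region with the lowest latency."""
    available = {r: ms for r, ms in latencies.items() if ms is not None}
    return min(available, key=available.get)
```

If `westeurope` becomes unavailable, the same logic fails the client over to the next-nearest region, which is the availability benefit of globally distributed hosting.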
Learn more:
What is Traffic Manager?
Name Resolution
Name resolution is a critical function for all services you host in Azure. From a security perspective, compromise of
the name resolution function can lead to an attacker redirecting requests from your sites to an attacker’s site.
Secure name resolution is a requirement for all your cloud hosted services.
There are two types of name resolution you need to address:
Internal name resolution – internal name resolution is used by services on your Azure Virtual Networks, your
on-premises networks, or both. Names used for internal name resolution are not accessible over the Internet.
For optimal security, it’s important that your internal name resolution scheme is not accessible to external users.
External name resolution – external name resolution is used by people and devices outside of your on-premises
and Azure Virtual Networks. These are the names that are visible to the Internet and are used to direct
connection to your cloud-based services.
For internal name resolution, you have two options:
An Azure Virtual Network DNS server – when you create a new Azure Virtual Network, a DNS server is created
for you. This DNS server can resolve the names of the machines located on that Azure Virtual Network. This
DNS server is not configurable and is managed by the Azure fabric manager, thus making it a secure name
resolution solution.
Bring your own DNS server – you have the option of putting a DNS server of your own choosing on your Azure
Virtual Network. This DNS server could be an Active Directory integrated DNS server, or a dedicated DNS server
solution provided by an Azure partner, which you can obtain from the Azure Marketplace.
Learn more:
Virtual Network Overview
Manage DNS Servers used by a Virtual Network (VNet)
For external DNS resolution, you have two options:
Host your own external DNS server on-premises
Host your own external DNS server with a service provider
Many large organizations will host their own DNS servers on-premises. They can do this because they have the
networking expertise and global presence to do so.
In most cases, it’s better to host your DNS name resolution services with a service provider. These service
providers have the network expertise and global presence to ensure very high availability for your name resolution
services. Availability is essential for DNS services because if your name resolution services fail, no one will be able
to reach your Internet facing services.
Azure provides you a highly available and performant external DNS solution in the form of Azure DNS. This
external name resolution solution takes advantage of the worldwide Azure DNS infrastructure. It allows you to host
your domain in Azure using the same credentials, APIs, tools, and billing as your other Azure services. As part of
Azure, it also inherits the strong security controls built into the platform.
Learn more:
Azure DNS Overview
DMZ Architecture
Many enterprise organizations use DMZs to segment their networks to create a buffer-zone between the Internet
and their services. The DMZ portion of the network is considered a low-security zone and no high-value assets are
placed in that network segment. You’ll typically see network security devices that have a network interface on the
DMZ segment and another network interface connected to a network that has virtual machines and services that
accept inbound connections from the Internet.
There are a number of variations of DMZ design and the decision to deploy a DMZ, and then what type of DMZ to
use if you decide to use one, is based on your network security requirements.
Learn more:
Microsoft Cloud Services and Network Security
NOTE
Azure Network Watcher is still in public preview, so it may not have the same level of availability and reliability as services that
are in general availability release. Certain features may not be supported, may have constrained capabilities, and may not be
available in all Azure locations. For the most up-to-date notifications on availability and status of this service, check the Azure
updates page.
Security is a top concern when managing databases, and it has always been a priority for Azure SQL Database.
Azure SQL Database supports connection security with firewall rules and connection encryption. It supports
authentication with username and password and Azure Active Directory Authentication, which uses identities
managed by Azure Active Directory. Authorization uses role-based access control.
Azure SQL Database supports encryption by performing real-time encryption and decryption of databases,
associated backups, and transaction log files at rest without requiring changes to the application.
Microsoft provides additional ways to encrypt enterprise data:
Cell-level encryption to encrypt specific columns or even cells of data with different encryption keys.
If you need a Hardware Security Module or central management of your encryption key hierarchy, consider
using Azure Key Vault with SQL Server in an Azure VM.
Always Encrypted (currently in preview) makes encryption transparent to applications and allows clients to
encrypt sensitive data inside client applications without sharing the encryption keys with SQL Database.
Azure SQL Database Auditing allows enterprises to record events to an audit log in Azure Storage. SQL Database
Auditing also integrates with Microsoft Power BI to facilitate drill-down reports and analyses.
SQL Azure databases can be tightly secured to satisfy most regulatory or security requirements, including HIPAA,
ISO 27001/27002, and PCI DSS Level 1, among others. A current list of security compliance certifications is
available at the Microsoft Azure Trust Center site.
This article walks through the basics of securing Microsoft Azure SQL Database for structured, tabular, and
relational data. In particular, this article gets you started with resources for protecting data, controlling access,
and proactive monitoring.
This Azure Database Security Overview article focuses on the following areas:
Protect data
Access control
Proactive monitoring
Centralized security management
Azure marketplace
Protect data
SQL Database secures your data by providing encryption for data in motion using Transport Layer Security, for
data at rest using Transparent Data Encryption, and for data in use using Always Encrypted.
In this section, we talk about:
Encryption in motion
Encryption at rest
Encryption in use (Client)
For other ways to encrypt your data, consider:
Cell-level encryption to encrypt specific columns or even cells of data with different encryption keys.
If you need a Hardware Security Module or central management of your encryption key hierarchy, consider
using Azure Key Vault with SQL Server in an Azure VM.
Encryption in motion
A common problem for all client/server applications is the need for privacy as data moves over public and private
networks. If data moving over a network is not encrypted, there’s the chance that it can be captured and stolen by
unauthorized users. When dealing with database services, you need to make sure that data is encrypted between
the database client and server, as well as between database servers that communicate with each other and with
middle-tier applications.
One problem when you administer a network is securing data that is being sent between applications across an
untrusted network. You can use TLS/SSL to authenticate servers and clients and then use it to encrypt messages
between the authenticated parties.
In the authentication process, a TLS/SSL client sends a message to a TLS/SSL server, and the server responds with
the information that the server needs to authenticate itself. The client and server perform an additional exchange of
session keys, and the authentication dialog ends. When authentication is completed, SSL-secured communication
can begin between the server and the client using the symmetric encryption keys that are established during the
authentication process.
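The verification properties of this handshake can be shown with Python's standard ssl module. This is a minimal sketch, not Azure-specific code; the commented connection at the end assumes a hypothetical `host`.

```python
import ssl

# The default client context verifies the server's certificate chain and
# hostname during the handshake, and the two parties establish symmetric
# session keys for the encrypted channel that follows.
context = ssl.create_default_context()

# These two settings are what authenticate the server to the client.
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# A real connection would then wrap a TCP socket, e.g.:
# with socket.create_connection((host, 443)) as sock:
#     with context.wrap_socket(sock, server_hostname=host) as tls:
#         ...  # symmetric encryption now protects the messages
```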
All connections to Azure SQL Database require encryption (SSL/TLS) at all times while data is "in transit" to and
from the database. SQL Azure uses TLS/SSL to authenticate servers and clients and then uses it to encrypt messages
between the authenticated parties. In your application's connection string, you must specify parameters to encrypt
the connection and not to trust the server certificate (this is done for you if you copy your connection string out of
the Azure Classic Portal), otherwise the connection will not verify the identity of the server and will be susceptible
to "man-in-the-middle" attacks. For the ADO.NET driver, for instance, these connection string parameters are
Encrypt=True and TrustServerCertificate=False.
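A small helper can check that a connection string carries both parameters. This is a sketch for illustration; the server name in the sample string is hypothetical, and the parsing covers only the simple `key=value;` form shown above.

```python
def parse_connection_string(conn_str):
    """Split an ADO.NET-style connection string into a dict of settings."""
    parts = (p for p in conn_str.split(";") if p.strip())
    return {k.strip().lower(): v.strip()
            for k, v in (p.split("=", 1) for p in parts)}

def enforces_tls(conn_str):
    """True only when the string both encrypts the connection and
    verifies the server certificate, guarding against MITM attacks."""
    s = parse_connection_string(conn_str)
    return (s.get("encrypt", "").lower() == "true"
            and s.get("trustservercertificate", "").lower() == "false")

good = ("Server=tcp:myserver.database.windows.net,1433;"
        "Encrypt=True;TrustServerCertificate=False;")
bad = "Server=tcp:myserver.database.windows.net,1433;Encrypt=False;"
```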
Encryption at rest
You can take several precautions to help secure the database such as designing a secure system, encrypting
confidential assets, and building a firewall around the database servers. However, in a scenario where the physical
media (such as drives or backup tapes) are stolen, a malicious party can just restore or attach the database and
browse the data.
One solution is to encrypt the sensitive data in the database and protect the keys that are used to encrypt the data
with a certificate. This prevents anyone without the keys from using the data, but this kind of protection must be
planned.
To solve this problem, SQL Server and Azure SQL support Transparent Data Encryption (TDE). TDE encrypts SQL
Server and Azure SQL Database data files, known as encrypting data at rest.
Azure SQL Database transparent data encryption helps protect against the threat of malicious activity by
performing real-time encryption and decryption of the database, associated backups, and transaction log files at
rest without requiring changes to the application.
TDE encrypts the storage of an entire database by using a symmetric key called the database encryption key. In SQL
Database, the database encryption key is protected by a built-in server certificate. The built-in server certificate is
unique for each SQL Database server.
If a database is in a GeoDR relationship, it is protected by a different key on each server. If two databases are
connected to the same server, they share the same built-in certificate. Microsoft automatically rotates these
certificates at least every 90 days. For a general description of TDE, see Transparent Data Encryption (TDE).
Encryption in use (client)
Most data breaches involve the theft of critical data such as credit card numbers or personally identifiable
information. Databases can be treasure troves of sensitive information. They can contain customers' personal data,
confidential competitive information, and intellectual property. Lost or stolen data, especially customer data, can
result in brand damage, competitive disadvantage, and serious fines—even lawsuits.
Always Encrypted is a feature designed to protect sensitive data, such as credit card numbers or national
identification numbers (for example, U.S. social security numbers), stored in Azure SQL Database or SQL Server
databases. Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the
encryption keys to the Database Engine (SQL Database or SQL Server).
Always Encrypted provides a separation between those who own the data (and can view it) and those who manage
the data (but should have no access), by ensuring that on-premises database administrators, cloud database
operators, or other high-privileged but unauthorized users cannot access the encrypted data.
In addition, Always Encrypted makes encryption transparent to applications. An Always Encrypted-enabled driver
is installed on the client computer so that it can automatically encrypt and decrypt sensitive data in the client
application. The driver encrypts the data in sensitive columns before passing the data to the Database Engine, and
automatically rewrites queries so that the semantics to the application are preserved. Similarly, the driver
transparently decrypts data, stored in encrypted database columns, contained in query results.
Access control
To provide security, SQL Database controls access with firewall rules limiting connectivity by IP address,
authentication mechanisms requiring users to prove their identity, and authorization mechanisms limiting users to
specific actions and data.
Database access
Data protection begins with controlling access to your data. The datacenter hosting your data manages physical
access, while you can configure a firewall to manage security at the network layer. You also control access by
configuring logins for authentication and defining permissions for server and database roles.
In this section, we talk about:
Firewall and firewall rules
Authentication
Authorization
Firewall and firewall rules
Microsoft Azure SQL Database provides a relational database service for Azure and other Internet-based
applications. To help protect your data, firewalls prevent all access to your database server until you specify which
computers have permission. The firewall grants access to databases based on the originating IP address of each
request. For more information, see Overview of Azure SQL Database firewall rules.
The Azure SQL Database service is only available through TCP port 1433. To access a SQL Database from your
computer, ensure that your client computer firewall allows outgoing TCP communication on TCP port 1433. If not
needed for other applications, block inbound connections on TCP port 1433.
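The originating-IP check described above can be sketched with the standard ipaddress module. This mimics the concept of server-level firewall rules only; the rule ranges below are documentation-example addresses, not real configuration.

```python
import ipaddress

# Hypothetical server-level firewall rules: (start IP, end IP) ranges.
FIREWALL_RULES = [
    ("203.0.113.0", "203.0.113.255"),    # office network (example range)
    ("198.51.100.10", "198.51.100.10"),  # a single build agent
]

def is_allowed(client_ip, rules=FIREWALL_RULES):
    """Grant access only when the originating IP address of the request
    falls inside one of the configured ranges; deny everything else."""
    ip = ipaddress.ip_address(client_ip)
    return any(ipaddress.ip_address(lo) <= ip <= ipaddress.ip_address(hi)
               for lo, hi in rules)
```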
Authentication
SQL database authentication refers to how you prove your identity when connecting to the database. SQL Database
supports two types of authentication:
SQL Authentication: A single login account is created when a logical SQL instance is created, called the SQL
Database Subscriber Account. This account connects using SQL Server authentication (user name and
password). This account is an administrator on the logical server instance and on all user databases attached to
that instance. The permissions of the Subscriber Account cannot be restricted. Only one of these accounts can
exist.
Azure Active Directory Authentication: Azure Active Directory authentication is a mechanism of connecting
to Microsoft Azure SQL Database and SQL Data Warehouse by using identities in Azure Active Directory (Azure
AD). This enables you to centrally manage identities of database users.
NOTE
Dynamic data masking can be configured by the Azure Database admin, server admin, or security officer roles.
The access restriction logic is located in the database tier rather than away from the data in another application tier.
The database system applies the access restrictions every time that data access is attempted from any tier. This
makes your security system more reliable and robust by reducing the surface area of your security system.
Row level security introduces predicate based access control. It features a flexible, centralized, predicate-based
evaluation that can take into consideration metadata or any other criteria the administrator determines as
appropriate. The predicate is used as a criterion to determine whether or not the user has the appropriate access to
the data based on user attributes. Label-based access control can be implemented by using predicate-based access
control.
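Predicate-based row filtering can be sketched as a function evaluated on every access, using user attributes to decide visibility. This is a conceptual illustration, not the SQL row-level security feature itself; the table, roles, and names are hypothetical.

```python
# Toy rows: each order is tagged with the sales rep who owns it.
ORDERS = [
    {"id": 1, "rep": "alice", "amount": 100},
    {"id": 2, "rep": "bob", "amount": 250},
    {"id": 3, "rep": "alice", "amount": 75},
]

def security_predicate(row, user):
    """Return True when the user may see the row: a manager sees every
    row, while a rep sees only rows they own (a user attribute, as in
    predicate- or label-based access control)."""
    return user["role"] == "manager" or row["rep"] == user["name"]

def query(rows, user):
    # The filter runs inside the data tier, so every access path from
    # any application tier is subject to the same predicate.
    return [r for r in rows if security_predicate(r, user)]
```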
Proactive monitoring
SQL Database secures your data by providing auditing and threat detection capabilities.
Auditing
SQL Database Auditing increases your ability to gain insight into events and changes that occur within the
database, including updates and queries against the data.
Azure SQL Database Auditing tracks database events and writes them to an audit log in your Azure Storage
account. Auditing can help you maintain regulatory compliance, understand database activity, and gain insight into
discrepancies and anomalies that could indicate business concerns or suspected security violations. Auditing
enables and facilitates adherence to compliance standards but doesn't guarantee compliance.
SQL Database Auditing allows you to:
Retain an audit trail of selected events. You can define categories of database actions to be audited.
Report on database activity. You can use preconfigured reports and a dashboard to get started quickly with
activity and event reporting.
Analyze reports. You can find suspicious events, unusual activity, and trends.
There are two Auditing methods:
Blob auditing - logs are written to Azure Blob Storage. This is a newer auditing method, which provides higher
performance, supports higher granularity object-level auditing, and is more cost effective.
Table auditing - logs are written to Azure Table Storage.
Threat detection
Azure SQL Database threat detection detects suspicious activities that indicate potential security threats. Threat
detection enables you to respond to suspicious events in the database, such as SQL Injections, as they occur. It
provides alerts and allows the use of Azure SQL Database Auditing to explore the suspicious events.
For example, SQL injection is one of the common Web application security issues on the Internet, used to attack
data-driven applications. Attackers take advantage of application vulnerabilities to inject malicious SQL statements
into application entry fields, breaching or modifying data in the database.
Security officers or other designated administrators can get an immediate notification about suspicious database
activities as they occur. Each notification provides details of the suspicious activity and recommends how to further
investigate and mitigate the threat.
Centralized security management
Azure Security Center helps you prevent, detect, and respond to threats. It provides integrated security monitoring
and policy management across your Azure subscriptions, helps detect threats that might otherwise go unnoticed,
and works with a broad ecosystem of security solutions.
Security Center helps you safeguard data in SQL Database by providing visibility into the security of all your
servers and databases. With Security Center, you can:
Define policies for SQL Database encryption and auditing.
Monitor the security of SQL Database resources across all your subscriptions.
Quickly identify and remediate security issues.
Integrate alerts from Azure SQL Database threat detection.
Security Center supports role-based access.
Azure Marketplace
The Azure Marketplace is an online applications and services marketplace that enables start-ups and independent
software vendors (ISVs) to offer their solutions to Azure customers around the world. The Azure Marketplace
combines Microsoft Azure partner ecosystems into a single, unified platform to better serve our customers and
partners. Click here to browse the database security products available in the Azure Marketplace.
Next steps
Learn more about Secure your Azure SQL Database.
Learn more about Azure Security Center and Azure SQL Database service.
To learn more about threat detection, see SQL Database Threat Detection.
To learn more, see Improve SQL database performance.
Azure storage security overview
8/21/2017 • 4 min to read
Azure Storage is the cloud storage solution for modern applications that rely on durability, availability, and
scalability to meet the needs of their customers. Azure Storage provides a comprehensive set of security
capabilities:
The storage account can be secured using Role-Based Access Control and Azure Active Directory.
Data can be secured in transit between an application and Azure by using Client-Side Encryption, HTTPS, or SMB
3.0.
Data can be set to be automatically encrypted when written to Azure Storage using Storage Service Encryption.
OS and Data disks used by virtual machines can be set to be encrypted using Azure Disk Encryption.
Delegated access to the data objects in Azure Storage can be granted using Shared Access Signatures.
The authentication method used by someone when they access storage can be tracked using Storage analytics.
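The Shared Access Signature mechanism mentioned above rests on HMAC-SHA256: the client signs a string describing the granted permissions with the account key, and the service recomputes and compares the signature. The sketch below shows only that signing step with stdlib crypto; the string-to-sign format and the key are illustrative, not the exact Azure Storage wire format.

```python
import base64
import hashlib
import hmac

def sign_sas(string_to_sign, account_key_b64):
    """HMAC-SHA256 the string-to-sign with the (base64) account key and
    return the base64 signature that would be appended to the SAS token."""
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

# Hypothetical values for illustration only.
account_key = base64.b64encode(b"not-a-real-storage-key").decode()
token_sig = sign_sas("r\n2017-09-01T00:00:00Z\n/blob/acct/container",
                     account_key)
```

Because only holders of the account key can produce a valid signature, the signed token delegates exactly the permissions and expiry encoded in the string-to-sign without revealing the key itself.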
For a more detailed look at security in Azure Storage, see the Azure Storage security guide. This guide provides a
deep dive into the security features of Azure Storage such as storage account keys, data encryption in transit and at
rest, and storage analytics.
This article provides an overview of Azure security features that can be used with Azure Storage. Links are provided
to articles that give details of each feature so you can learn more.
Here are the core features to be covered in this article:
Role-Based Access Control
Delegated access to storage objects
Encryption in transit
Encryption at rest/Storage Service Encryption
Azure Disk Encryption
Azure Key Vault
Encryption in transit
Encryption in transit is a mechanism of protecting data when it is transmitted across networks. With Azure Storage
you can secure data using:
Transport-level encryption, such as HTTPS when you transfer data into or out of Azure Storage.
Wire encryption, such as SMB 3.0 encryption for Azure File shares.
Client-side encryption, to encrypt the data before it is transferred into storage and to decrypt the data after it is
transferred out of storage.
Learn more about client-side encryption:
Client-Side Encryption for Microsoft Azure Storage
Cloud security controls series: Encrypting Data in Transit
Encryption at rest
For many organizations, data encryption at rest is a mandatory step towards data privacy, compliance, and data
sovereignty. There are three Azure features that provide encryption of data that is “at rest”:
Storage Service Encryption allows you to request that the storage service automatically encrypt data when
writing it to Azure Storage.
Client-side Encryption also provides the feature of encryption at rest.
Azure Disk Encryption allows you to encrypt the OS disks and data disks used by an IaaS virtual machine.
Learn more about Storage Service Encryption:
Azure Storage Service Encryption is available for Azure Blob Storage. For details on other Azure storage types,
see File, Disk (Premium Storage), Table, and Queue.
Azure Storage Service Encryption for Data at Rest
Azure Virtual Machines lets you deploy a wide range of computing solutions in an agile way. With support for
Microsoft Windows, Linux, Microsoft SQL Server, Oracle, IBM, SAP, and Azure BizTalk Services, you can deploy any
workload and any language on nearly any operating system.
An Azure virtual machine gives you the flexibility of virtualization without having to buy and maintain the physical
hardware that runs the virtual machine. You can build and deploy your applications with the assurance that your
data is protected and safe in our highly secure datacenters.
With Azure, you can build security-enhanced, compliant solutions that:
Protect your virtual machines from viruses and malware
Encrypt your sensitive data
Secure network traffic
Identify and detect threats
Meet compliance requirements
The goal of this article is to provide an overview of the core Azure security features that can be used with virtual
machines. We also provide links to articles that give details of each feature so you can learn more.
The core Azure Virtual Machine security capabilities to be covered in this article:
Antimalware
Hardware Security Module
Virtual machine disk encryption
Virtual machine backup
Azure Site Recovery
Virtual networking
Security policy management and reporting
Compliance
Antimalware
With Azure, you can use antimalware software from security vendors such as Microsoft, Symantec, Trend Micro,
and Kaspersky to protect your virtual machines from malicious files, adware, and other threats. See the Learn More
section below to find articles on partner solutions.
Microsoft Antimalware for Azure Cloud Services and Virtual Machines is a real-time protection capability that helps
identify and remove viruses, spyware, and other malicious software. Microsoft Antimalware provides configurable
alerts when known malicious or unwanted software attempts to install itself or run on your Azure systems.
Microsoft Antimalware is a single-agent solution for applications and tenant environments, designed to run in the
background without human intervention. You can deploy protection based on the needs of your application
workloads, with either basic secure-by-default or advanced custom configuration, including antimalware
monitoring.
When you deploy and enable Microsoft Antimalware, the following core features are available:
Real-time protection - monitors activity in Cloud Services and on Virtual Machines to detect and block malware
execution.
Scheduled scanning - periodically performs targeted scanning to detect malware, including actively running
programs.
Malware remediation - automatically takes action on detected malware, such as deleting or quarantining
malicious files and cleaning up malicious registry entries.
Signature updates - automatically installs the latest protection signatures (virus definitions) at a predetermined frequency to ensure protection is up to date.
Antimalware Engine updates – automatically updates the Microsoft Antimalware engine.
Antimalware Platform updates – automatically updates the Microsoft Antimalware platform.
Active protection - reports telemetry metadata about detected threats and suspicious resources to Microsoft Azure to ensure rapid response, and enables real-time synchronous signature delivery through the Microsoft Active Protection System (MAPS).
Samples reporting - provides and reports samples to the Microsoft Antimalware service to help refine the
service and enable troubleshooting.
Exclusions – allows application and service administrators to configure certain files, processes, and drives to
exclude them from protection and scanning for performance and other reasons.
Antimalware event collection - records the antimalware service health, suspicious activities, and remediation
actions taken in the operating system event log and collects them into the customer’s Azure Storage account.
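These features surface in the extension's public settings. As a rough sketch (property names follow the documented IaaSAntimalware extension schema, but verify against the current reference before use), a configuration that enables real-time protection, a weekly quick scan, and a few exclusions might look like:

```json
{
  "AntimalwareEnabled": true,
  "RealtimeProtectionEnabled": true,
  "ScheduledScanSettings": {
    "isEnabled": true,
    "day": 7,
    "time": 120,
    "scanType": "Quick"
  },
  "Exclusions": {
    "Extensions": ".log;.ldf",
    "Paths": "D:\\IISlogs;D:\\DatabaseLogs",
    "Processes": "mssence.svc"
  }
}
```

Here `day` selects the day of the week for the scheduled scan and `time` is minutes after midnight; exclusion lists are semicolon-delimited strings.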
Learn more: To learn more about antimalware software to protect your virtual machines, see:
Microsoft Antimalware for Azure Cloud Services and Virtual Machines
Deploying Antimalware Solutions on Azure Virtual Machines
How to install and configure Trend Micro Deep Security as a Service on a Windows VM
How to install and configure Symantec Endpoint Protection on a Windows VM
Security solutions in the Azure Marketplace
Virtual networking
Virtual machines need network connectivity. To support that requirement, Azure requires virtual machines to be
connected to an Azure Virtual Network. An Azure Virtual Network is a logical construct built on top of the physical
Azure network fabric. Each logical Azure Virtual Network is isolated from all other Azure Virtual Networks. This
isolation helps ensure that network traffic in your deployments is not accessible to other Microsoft Azure
customers.
Learn more:
Azure Network Security Overview
Virtual Network Overview
Networking features and partnerships for Enterprise scenarios
Compliance
Azure Virtual Machines is certified for FISMA, FedRAMP, HIPAA, PCI DSS Level 1, and other key compliance
programs. This certification makes it easier for your own Azure applications to meet compliance requirements and
for your business to address a wide range of domestic and international regulatory requirements.
Learn more:
Microsoft Trust Center: Compliance
Trusted Cloud: Microsoft Azure Security, Privacy, and Compliance
Azure operational security overview
8/9/2017
Azure Operational Security refers to the services, controls, and features available to users for protecting their data,
applications, and other assets in Microsoft Azure. Azure Operational Security is a framework that incorporates the
knowledge gained through a variety of capabilities that are unique to Microsoft, including the Microsoft Security
Development Lifecycle (SDL), the Microsoft Security Response Center program, and deep awareness of the cyber
security threat landscape.
This Azure Operational Security Overview article focuses on the following areas:
Azure Operations Management Suite
Azure Security Center
Azure Monitor
Azure Network watcher
Azure Storage analytics
Azure Active directory
NOTE
See Permissions in Azure Security Center to learn more about roles and allowed actions in Security Center.
Security Center uses the Microsoft Monitoring Agent, the same agent used by the Operations Management Suite and the Log Analytics service. Data collected by this agent is stored either in an existing Log Analytics workspace associated with your Azure subscription or in a new workspace, taking into account the geolocation of the VM.
Azure Monitor
Performance issues in your cloud app can impact your business. With multiple interconnected components and
frequent releases, degradations can happen at any time. And if you’re developing an app, your users usually
discover issues that you didn’t find in testing. You should know about these issues immediately, and have tools for
diagnosing and fixing the problems.
Azure Monitor is the basic tool for monitoring services running on Azure. It gives you infrastructure-level data about the throughput of a service and the surrounding environment. If you manage your apps entirely in Azure and are deciding whether to scale resources up or down, Azure Monitor gives you the data you need to start.
In addition, you can use monitoring data to gain deep insights about your application. That knowledge can help you
to improve application performance or maintainability, or automate actions that would otherwise require manual
intervention. It includes:
Azure Activity Log
Azure Diagnostic Logs
Metrics
Azure Diagnostics
Azure Activity Log
The Azure Activity Log provides insight into the operations that were performed on resources in your subscription. The Activity Log was previously known as “Audit Logs” or “Operational Logs,” because it reports control-plane events for your subscriptions.
Azure Diagnostic Logs
Azure Diagnostic Logs are emitted by a resource and provide rich, frequent data about the operation of that
resource. The content of these logs varies by resource type.
For example, Windows event system logs are one category of Diagnostic Log for VMs and blob, table, and queue
logs are categories of Diagnostic Logs for storage accounts.
Diagnostics Logs differ from the Activity Log (formerly known as Audit Log or Operational Log). The Activity log
provides insight into the operations that were performed on resources in your subscription. Diagnostics logs
provide insight into operations that your resource performed itself.
Metrics
Azure Monitor enables you to consume telemetry to gain visibility into the performance and health of your
workloads on Azure. The most important type of Azure telemetry data is the metrics (also called performance
counters) emitted by most Azure resources. Azure Monitor provides several ways to configure and consume these
metrics for monitoring and troubleshooting.
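As a simple illustration of consuming metrics, the sketch below evaluates an alert condition the way a metric alert rule does: aggregate recent samples over a window and compare the result against a threshold. The function and parameter names are illustrative, not an Azure SDK API.

```python
from statistics import mean

def metric_alert_fires(samples: list[float], threshold: float,
                       window: int = 5) -> bool:
    """Fire when the average of the most recent `window` samples exceeds
    the threshold, mimicking an 'Average > threshold' metric alert rule
    evaluated over a sliding time window."""
    recent = samples[-window:]
    return len(recent) == window and mean(recent) > threshold
```

A real alert rule would also specify the metric name, aggregation type, and evaluation frequency; this sketch only captures the windowed comparison.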
Azure Diagnostics
Azure Diagnostics is the capability within Azure that enables the collection of diagnostic data from a deployed application. You can use the diagnostics extension with various sources. Currently supported are Azure Cloud Services web and worker roles, Azure Virtual Machines running Microsoft Windows, and Service Fabric.
Network Watcher
Customers build an end-to-end network in Azure by orchestrating and composing various individual network
resources such as VNet, ExpressRoute, Application Gateway, Load balancers, and more. Monitoring is available on
each of the network resources.
The end-to-end network can have complex configurations and interactions between resources, creating scenarios that need scenario-based monitoring through Network Watcher.
Network Watcher simplifies monitoring and diagnosing of your Azure network. Diagnostic and visualization
tools available with Network Watcher enable you to take remote packet captures on an Azure Virtual Machine, gain
insights into your network traffic using flow logs, and diagnose VPN Gateway and Connections.
Network Watcher currently has the following capabilities:
Topology - Provides a network level view showing the various interconnections and associations between
network resources in a resource group.
Variable Packet capture - Captures packet data in and out of a virtual machine. Advanced filtering options and
fine-tuned controls such as being able to set time and size limitations provide versatility. The packet data can be
stored in a blob store or on the local disk in .cap format.
IP flow verify - Checks if a packet is allowed or denied based on 5-tuple flow information (destination IP, source IP, destination port, source port, and protocol). If the packet is denied by a security group, the rule and group that denied the packet are returned.
Next hop - Determines the next hop for packets being routed in the Azure Network Fabric, enabling you to
diagnose any misconfigured user-defined routes.
Security group view - Gets the effective and applied security rules that are applied on a VM.
NSG flow logging - Flow logs for network security groups enable you to capture logs of traffic that is allowed or denied by the security rules in the group. The flow is defined by 5-tuple information: source IP, destination IP, source port, destination port, and protocol.
Virtual Network Gateway and Connection troubleshooting - Provides the ability to troubleshoot Virtual Network
Gateways and Connections.
Network subscription limits - Enables you to view network resource usage against limits.
Configuring Diagnostics Log – Provides a single pane to enable or disable Diagnostics logs for network
resources in a resource group.
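The rule evaluation behind IP flow verify can be approximated in a few lines: security rules are tried in priority order, and the first match decides the verdict. The Python sketch below shows the idea; the types and defaults are illustrative, not the service's actual implementation.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class SecurityRule:
    name: str
    priority: int             # lower number = evaluated first
    access: str               # "Allow" or "Deny"
    protocol: str             # "Tcp", "Udp", or "*"
    source: str               # CIDR prefix or "*"
    dest_port: Optional[int]  # None means any port

def _matches(rule: SecurityRule, protocol: str,
             src_ip: str, dst_port: int) -> bool:
    if rule.protocol not in ("*", protocol):
        return False
    if rule.source != "*" and ip_address(src_ip) not in ip_network(rule.source):
        return False
    return rule.dest_port in (None, dst_port)

def verify_ip_flow(rules, protocol, src_ip, dst_port):
    """Return (access, rule_name) for the first matching rule, the way
    IP flow verify reports which rule allowed or denied a packet."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if _matches(rule, protocol, src_ip, dst_port):
            return rule.access, rule.name
    return "Deny", "DefaultDeny"  # NSGs deny unmatched inbound traffic
```

For example, with an "AllowSSH" rule scoped to 10.0.0.0/24 port 22 and a catch-all deny at priority 4096, a flow from 10.0.0.4 to port 22 returns the allow rule, while one from outside the prefix returns the deny rule.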
To learn more about how to configure Network Watcher, see Configure Network Watcher.
Next steps
To learn more about OMS Security and Audit solution, see the following articles:
Operations Management Suite | Security & Compliance.
Monitoring and Responding to Security Alerts in Operations Management Suite Security and Audit Solution.
Monitoring Resources in Operations Management Suite Security and Audit Solution.
Azure Security Management and Monitoring
Overview
8/11/2017
Azure provides security mechanisms to aid in the management and monitoring of Azure cloud services and virtual
machines. This article provides an overview of these core security features and services. Links are provided to
articles that give details of each so you can learn more.
The security of your Microsoft cloud services is a partnership and shared responsibility between you and Microsoft. Shared responsibility means that Microsoft is responsible for the Azure platform and the physical security of its datacenters (using protections such as locked badge-entry doors, fences, and guards). In addition, Azure provides strong cloud security at the software layer that meets the security, privacy, and compliance needs of its demanding customers.
You own your data and identities, the responsibility for protecting them, the security of your on-premises
resources, and the security of cloud components over which you have control. Microsoft provides you with security
controls and capabilities to help you protect your data and applications. Your degree of responsibility for security is
based on the type of cloud service.
The following chart summarizes the balance of responsibility for both Microsoft and the customer.
For a deeper dive into security management, see Security management in Azure.
Here are the core features to be covered in this article:
Role-Based Access Control
Antimalware
Multi-Factor Authentication
ExpressRoute
Virtual network gateways
Privileged identity management
Identity protection
Security Center
Antimalware
With Azure, you can use antimalware software from major security vendors such as Microsoft, Symantec, Trend
Micro, McAfee, and Kaspersky to help protect your virtual machines from malicious files, adware, and other threats.
Microsoft Antimalware offers you the ability to install an antimalware agent for both PaaS roles and virtual
machines. Based on System Center Endpoint Protection, this feature brings proven on-premises security
technology to the cloud.
We also offer deep integration for Trend Micro's Deep Security™ and SecureCloud™ products in the Azure platform. Deep Security is an antivirus solution and SecureCloud is an encryption solution. Deep Security is deployed inside VMs using an extension model. Using the portal UI or PowerShell, you can choose to use Deep Security inside new VMs that are being spun up, or in existing VMs that are already deployed.
Symantec Endpoint Protection (SEP) is also supported on Azure. Through portal integration, customers can specify
that they intend to use SEP within a VM. SEP can be installed on a brand new VM via the Azure portal or can be
installed on an existing VM using PowerShell.
Learn more:
Deploying Antimalware Solutions on Azure Virtual Machines
Microsoft Antimalware for Azure Cloud Services and Virtual Machines
How to install and configure Trend Micro Deep Security as a Service on a Windows VM
How to install and configure Symantec Endpoint Protection on a Windows VM
New Antimalware Options for Protecting Azure Virtual Machines – McAfee Endpoint Protection
Multi-Factor Authentication
Azure Multi-factor authentication (MFA) is a method of authentication that requires the use of more than one
verification method and adds a critical second layer of security to user sign-ins and transactions. MFA helps
safeguard access to data and applications while meeting user demand for a simple sign-in process. It delivers
strong authentication via a range of verification options: phone call, text message, mobile app notification, verification code, or third-party OATH tokens.
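The "third-party OATH tokens" option refers to standard time-based one-time passwords (TOTP, RFC 6238). As a minimal sketch, any conforming token generates its six- or eight-digit codes like this; the code below is generic RFC logic, not an Azure-specific API.

```python
import base64
import hashlib
import hmac
import struct
import time
from typing import Optional

def hotp(secret_b32: str, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password over a counter value."""
    key = base64.b32decode(secret_b32, casefold=True)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, at_time: Optional[float] = None,
         step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password: HOTP over a 30-second counter."""
    now = time.time() if at_time is None else at_time
    return hotp(secret_b32, int(now // step), digits)
```

Because the code depends only on the shared secret and the current 30-second interval, the verifying service can recompute and compare it without any network round-trip to the token.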
Learn more:
Multi-factor authentication
What is Azure Multi-Factor Authentication?
How Azure Multi-Factor Authentication works
ExpressRoute
Microsoft Azure ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a dedicated
private connection facilitated by a connectivity provider. With ExpressRoute, you can establish connections to
Microsoft cloud services, such as Microsoft Azure, Office 365, and CRM Online. Connectivity can be from an any-to-
any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection through a connectivity
provider at a co-location facility. ExpressRoute connections do not go over the public Internet. This allows
ExpressRoute connections to offer more reliability, faster speeds, lower latencies, and higher security than typical
connections over the Internet.
Learn more:
ExpressRoute technical overview
Virtual network gateways
VPN Gateways, also called Azure Virtual Network Gateways, are used to send network traffic between virtual
networks and on-premises locations. They are also used to send traffic between multiple virtual networks within
Azure (VNet-to-VNet). VPN gateways provide secure cross-premises connectivity between Azure and your
infrastructure.
Learn more:
About VPN gateways
Azure Network Security Overview
Identity Protection
Azure Active Directory (AD) Identity Protection provides a consolidated view of suspicious sign-in activities and
potential vulnerabilities to help protect your business. Identity Protection detects suspicious activities for users and
privileged (admin) identities, based on signals like brute-force attacks, leaked credentials, and sign-ins from
unfamiliar locations and infected devices.
By providing notifications and recommended remediation, Identity Protection helps to mitigate risks in real time. It
calculates user risk severity, and you can configure risk-based policies to automatically help safeguard application
access from future threats.
Learn more:
Azure Active Directory Identity Protection
Channel 9: Azure AD and Identity Show: Identity Protection Preview
Security Center
Azure Security Center helps you prevent, detect, and respond to threats, and provides you with increased visibility into, and control over, the security of your Azure resources. It provides integrated security monitoring and policy
management across your Azure subscriptions, helps detect threats that might otherwise go unnoticed, and works
with a broad ecosystem of security solutions.
Security Center helps you optimize and monitor the security of your Azure resources by:
Enabling you to define policies for your Azure subscription resources according to your company’s security
needs and the type of applications or sensitivity of the data in each subscription.
Monitoring the state of your Azure virtual machines, networking, and applications.
Providing a list of prioritized security alerts, including alerts from integrated partner solutions, along with the
information you need to quickly investigate and recommendations on how to remediate an attack.
Learn more:
Introduction to Azure Security Center
Azure Service Fabric security overview
8/29/2017
Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable
and reliable microservices. Service Fabric addresses the significant challenges of developing and managing cloud
applications. Developers and administrators can avoid complex infrastructure problems and focus on implementing
mission-critical, demanding workloads that are scalable, reliable, and manageable.
This Azure Service Fabric Security overview article focuses on the following areas:
Securing your cluster
Understanding monitoring and diagnostics
Creating more secure environments by using certificates
Using Role-Based Access Control (RBAC)
Securing clusters by using Windows security
Configuring application security in Service Fabric
Securing communication for services in Azure Service Fabric
ClientCertificateCommonNames: The common name of the first client certificate for CertificateCommonName. CertificateIssuerThumbprint is the thumbprint for the issuer of this certificate.
For more information about securing certificates, see Secure a standalone cluster on Windows using X.509
certificates.
NOTE
Learn more about managing secrets in Service Fabric applications.
Next steps
For conceptual information about cluster security, see Create a Service Fabric cluster by using Azure Resource
Manager and Azure portal.
To learn more about cluster security in Service Fabric, see Service Fabric cluster security.
Azure identity management security overview
8/11/2017
Microsoft identity and access management solutions help IT protect access to applications and resources across the
corporate datacenter and into the cloud, enabling additional levels of validation such as multi-factor authentication
and conditional access policies. Monitoring suspicious activity through advanced security reporting, auditing and
alerting helps mitigate potential security issues. Azure Active Directory Premium provides single sign-on to
thousands of cloud (SaaS) apps and access to web apps you run on-premises.
Security benefits of Azure Active Directory (AD) include the ability to:
Create and manage a single identity for each user across your hybrid enterprise, keeping users, groups, and
devices in sync
Provide single sign-on access to your applications including thousands of pre-integrated SaaS apps
Enable application access security by enforcing rules-based Multi-Factor Authentication for both on-premises
and cloud applications
Provision secure remote access to on-premises web applications through Azure AD Application Proxy
The goal of this article is to provide an overview of the core Azure security features that help with identity
management. We also provide links to articles that give details of each feature so you can learn more.
The article focuses on the following core Azure Identity management capabilities:
Single sign-on
Reverse proxy
Multi-factor authentication
Security monitoring, alerts, and machine learning-based reports
Consumer identity and access management
Device registration
Privileged identity management
Identity protection
Hybrid identity management
Single sign-on
Single sign-on (SSO) means being able to access all the applications and resources that you need to do business, by
signing in only once using a single user account. Once signed in, you can access all of the applications you need
without being required to authenticate (for example, type a password) a second time.
Many organizations rely upon software as a service (SaaS) applications such as Office 365, Box and Salesforce for
end user productivity. Historically, IT staff needed to individually create and update user accounts in each SaaS
application, and users had to remember a password for each SaaS application.
Azure AD extends on-premises Active Directory environments into the cloud, enabling users to use their primary
organizational account to not only sign in to their domain-joined devices and company resources, but also all the
web and SaaS applications needed for their job.
Not only are users freed from managing multiple sets of usernames and passwords; application access can also be automatically provisioned or deprovisioned based on organizational groups and their status as employees.
Azure AD introduces security and access governance controls that enable you to centrally manage users' access
across SaaS applications.
Learn more:
Overview of Single Sign-On
What is application access and single sign-on with Azure Active Directory?
Integrate Azure Active Directory single sign-on with SaaS apps
Reverse proxy
Azure AD Application Proxy lets you publish on-premises applications, such as SharePoint sites, Outlook Web App,
and IIS-based apps inside your private network and provides secure access to users outside your network.
Application Proxy provides remote access and single sign-on (SSO) for many types of on-premises web
applications with the thousands of SaaS applications that Azure AD supports. Employees can log in to your apps
from home on their own devices and authenticate through this cloud-based proxy.
Learn more:
Enabling Azure AD Application Proxy
Publish applications using Azure AD Application Proxy
Single-sign-on with Application Proxy
Working with conditional access
Multi-factor authentication
Azure Multi-factor authentication (MFA) is a method of authentication that requires the use of more than one
verification method and adds a critical second layer of security to user sign-ins and transactions. MFA helps
safeguard access to data and applications while meeting user demand for a simple sign-in process. It delivers
strong authentication via a range of verification options: phone call, text message, mobile app notification, verification code, or third-party OATH tokens.
Learn more:
Multi-factor authentication
What is Azure Multi-Factor Authentication?
How Azure Multi-Factor Authentication works
Device registration
Azure AD Device Registration is the foundation for device-based conditional access scenarios. When a device is
registered, Azure Active Directory Device Registration provides the device with an identity that is used to
authenticate the device when the user signs in. The authenticated device, and the attributes of the device, can then
be used to enforce conditional access policies for applications that are hosted in the cloud and on-premises.
When combined with a mobile device management (MDM) solution such as Intune, the device attributes in Azure
Active Directory are updated with additional information about the device. This allows you to create conditional
access rules that enforce access from devices to meet your standards for security and compliance.
Learn more:
Get started with Azure Active Directory Device Registration
Automatic device registration with Azure Active Directory for Windows domain-joined devices
Set up automatic registration of Windows domain-joined devices with Azure Active Directory
Identity protection
Azure AD Identity Protection is a security service that provides a consolidated view into risk events and potential
vulnerabilities affecting your organization’s identities. Identity Protection leverages Azure Active Directory’s existing anomaly detection capabilities (available through Azure AD’s Anomalous Activity reports), and introduces new risk event types that can detect anomalies in real time.
Learn more:
Azure Active Directory Identity Protection
Channel 9: Azure AD and Identity Show: Identity Protection Preview
When designing a system, it is important to understand the potential threats to that system, and add appropriate
defenses accordingly, as the system is designed and architected. It is particularly important to design the product
from the start with security in mind because understanding how an attacker might be able to compromise a system
helps make sure appropriate mitigations are in place from the beginning.
Security in IoT
Connected special-purpose devices have a significant number of potential interaction surface areas and interaction
patterns, all of which must be considered to provide a framework for securing digital access to those devices. The
term “digital access” is used here to distinguish from any operations that are carried out through direct device interaction, where access security is provided through physical access control (for example, putting the device into a room with a lock on the door). While physical access cannot be denied by using software and hardware, measures can be taken to prevent physical access from leading to system interference.
As we explore the interaction patterns, we will look at “device control” and “device data” with the same level of
attention. “Device control” can be classified as any information that is provided to a device by any party with the
goal of changing or influencing its behavior towards its state or the state of its environment. “Device data” can be
classified as any information that a device emits to any other party about its state and the observed state of its
environment.
To optimize security best practices, it is recommended that a typical IoT architecture be divided into several components/zones as part of the threat modeling exercise. These zones are described fully throughout this section and include:
Device,
Field Gateway,
Cloud gateways, and
Services.
Zones are a broad way to segment a solution; each zone often has its own data and its own authentication and authorization requirements. Zones can also be used to isolate damage and restrict the impact of low-trust zones on higher-trust zones.
Each zone is separated by a Trust Boundary, which is noted as the dotted red line in the diagram below. It
represents a transition of data/information from one source to another. During this transition, the data/information
could be subject to Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service and Elevation of
Privilege (STRIDE).
The components depicted within each boundary are also subject to STRIDE, enabling a full 360-degree threat-modeling view of the solution. The sections below elaborate on each of the components and the specific security concerns and solutions that should be put into place.
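This zone/boundary decomposition lends itself to a simple checklist: every trust boundary between adjacent zones gets all six STRIDE categories to review. A small Python sketch makes that explicit (the zone names come from this section; the code itself is an illustrative aid, not a tool the article prescribes):

```python
STRIDE = (
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
)

ZONES = ("Device", "Field gateway", "Cloud gateway", "Services")

def trust_boundaries(zones=ZONES):
    """Each transition between adjacent zones crosses one trust boundary."""
    return [(zones[i], zones[i + 1]) for i in range(len(zones) - 1)]

def threat_checklist(zones=ZONES):
    """One review item per (boundary, STRIDE category) pair."""
    return [(f"{src} -> {dst}", threat)
            for src, dst in trust_boundaries(zones)
            for threat in STRIDE]
```

For the four zones above this yields 18 review items (3 boundaries, 6 categories each), which is the minimum coverage a threat model of the solution should address.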
The sections that follow will discuss standard components typically found in these zones.
The Device Zone
The device environment is the immediate physical space around the device where physical access and/or “local
network” peer-to-peer digital access to the device is feasible. A “local network” is assumed to be a network that is
distinct and insulated from – but potentially bridged to – the public Internet, and includes any short-range wireless
radio technology that permits peer-to-peer communication of devices. It does not include any network
virtualization technology creating the illusion of such a local network and it does also not include public operator
networks that require any two devices to communicate across public network space if they were to enter a peer-to-
peer communication relationship.
The Field Gateway Zone
Field gateway is a device/appliance or some general-purpose server computer software that acts as communication
enabler and, potentially, as a device control system and device data processing hub. The field gateway zone includes
the field gateway itself and all devices that are attached to it. As the name implies, field gateways act outside
dedicated data processing facilities, are usually location bound, are potentially subject to physical intrusion, and will
have limited operational redundancy. All to say that a field gateway is commonly a thing one can touch and
sabotage while knowing what its function is.
A field gateway is different from a mere traffic router in that it has an active role in managing access and information flow, meaning it is an application-addressed entity and a network connection or session terminal. A NAT device or firewall, in contrast, does not qualify as a field gateway because it is not an explicit connection or session terminal; rather, it routes (or blocks) connections or sessions made through it. The field gateway has two distinct surface areas. One faces the devices that are attached to it and represents the inside of the zone; the other faces all external parties and is the edge of the zone.
The cloud gateway zone
A cloud gateway is a system that enables remote communication from and to devices or field gateways deployed at several different sites across public network space, typically toward a cloud-based control and data analysis system or a federation of such systems. In some cases, a cloud gateway may immediately facilitate access to special-purpose devices from terminals such as tablets or phones. In the context discussed here, “cloud” refers to a dedicated data processing system that is not bound to the same site as the attached devices or field gateways. In a cloud zone, operational measures prevent targeted physical access, and the zone is not necessarily exposed to a “public cloud” infrastructure.
A cloud gateway may potentially be mapped into a network virtualization overlay to insulate the cloud gateway and
all of its attached devices or field gateways from any other network traffic. The cloud gateway itself is neither a
device control system nor a processing or storage facility for device data; those facilities interface with the cloud
gateway. The cloud gateway zone includes the cloud gateway itself along with all field gateways and devices
directly or indirectly attached to it. The edge of the zone is a distinct surface area through which all external parties communicate.
The services zone
A “service” is defined for this context as any software component or module that is interfacing with devices through
a field- or cloud gateway for data collection and analysis, as well as for command and control. Services are
mediators. They act under their identity towards gateways and other subsystems, store and analyze data,
autonomously issue commands to devices based on data insights or schedules and expose information and control
capabilities to authorized end users.
Information-devices vs. special-purpose devices
PCs, phones, and tablets are primarily interactive information devices. Phones and tablets are explicitly optimized
around maximizing battery lifetime. They preferentially turn off partially when not immediately interacting with a
person, or when not providing services like playing music or guiding their owner to a particular location. From a
systems perspective, these information technology devices are mainly acting as proxies towards people. They are
“people actuators” suggesting actions and “people sensors” collecting input.
Special-purpose devices, from simple temperature sensors to complex factory production lines with thousands of
components inside them, are different. These devices are much more scoped in purpose and even if they provide
some user interface, they are largely scoped to interfacing with or being integrated into assets in the physical world.
They measure and report environmental circumstances, turn valves, control servos, sound alarms, switch lights, and
do many other tasks. They help to do work for which an information device is either too generic, too expensive, too
big, or too brittle. The concrete purpose immediately dictates their technical design as well as the available monetary
budget for their production and scheduled lifetime operation. The combination of these two key factors constrains
the available operational energy budget, physical footprint, and thus available storage, compute, and security
capabilities.
If something “goes wrong” with automated or remotely controllable devices, whether through physical defects,
control logic defects, or willful unauthorized intrusion and manipulation, production lots may be destroyed, buildings
may be looted or burned down, and people may be injured or even die. This is, of course, a whole different class of
damage than someone maxing out a stolen credit card's limit. The security bar for devices that make things move,
and also for sensor data that eventually results in commands that cause things to move, must be higher than in any
e-commerce or banking scenario.
Device control and device data interactions
Connected special-purpose devices have a significant number of potential interaction surface areas and interaction
patterns, all of which must be considered to provide a framework for securing digital access to those devices. The
term “digital access” is used here to distinguish from any operations that are carried out through direct device
interaction where access security is provided through physical access control. For example, putting the device into a
room with a lock on the door. While physical access cannot be denied using software and hardware, measures can
be taken to prevent physical access from leading to system interference.
As we explore the interaction patterns, we will look at “device control” and “device data” with the same level of
attention while threat modeling. “Device control” can be classified as any information that is provided to a device by
any party with the goal of changing or influencing its behavior towards its state or the state of its environment.
“Device data” can be classified as any information that a device emits to any other party about its state and the
observed state of its environment.
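The control/data distinction above can be sketched as a small data model. This is an illustrative sketch only; the class and field names are assumptions for the example and are not part of any Azure SDK:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class DeviceMessage:
    """A message crossing a device's digital interaction surface."""
    source: str    # identity of the emitting party
    target: str    # identity of the receiving party
    payload: dict
    kind: Literal["control", "data"]

def classify(msg: DeviceMessage) -> str:
    # "Device control": information provided to a device to influence its
    # behavior. "Device data": information a device emits about its state
    # and the observed state of its environment.
    if msg.kind == "control":
        return f"control: {msg.source} influences the behavior of {msg.target}"
    return f"data: {msg.source} reports observed state to {msg.target}"

command = DeviceMessage("cloud-gateway", "valve-17", {"set_open_pct": 50}, "control")
telemetry = DeviceMessage("valve-17", "cloud-gateway", {"open_pct": 50}, "data")
```

Modeling both directions with the same structure makes it natural to threat-model them with the same level of attention, as the text recommends.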
The diagram below provides a simplified view of Microsoft’s IoT Architecture using a Data Flow Diagram model
that is used by the Microsoft Threat Modeling Tool:
It is important to note that the architecture separates the device and gateway capabilities. This enables the user to
leverage gateway devices that are more secure: they are capable of communicating with the cloud gateway using
secure protocols, which typically require greater processing overhead than a native device, such as a thermostat,
could provide on its own. In the Azure services zone, we assume that the cloud gateway is represented by the
Azure IoT Hub service.
Device and data sources/data transport
This section explores the architecture outlined above through the lens of threat modeling and gives an overview of
how we are addressing some of the inherent concerns. We will focus on the core elements of a threat model:
Processes (those under our control and external items)
Communication (also called data flows)
Storage (also called data stores)
Processes
In each of the categories outlined in the Azure IoT architecture, we try to mitigate a number of different threats
across the different stages data/information exists in: process, communication, and storage. Below we give an
overview of the most common ones for the “process” category, followed by an overview of how these could be best
mitigated:
Spoofing (S): An attacker may extract cryptographic key material from a device, either at the software or hardware
level, and subsequently access the system with a different physical or virtual device under the identity of the device
the key material has been taken from. A good illustration is universal remote controls that can turn off any TV and
that are popular prankster tools.
Denial of Service (D): A device can be rendered incapable of functioning or communicating by interfering with
radio frequencies or cutting wires. For example, a surveillance camera that has had its power or network connection
intentionally knocked out cannot report data at all.
Tampering (T): An attacker may partially or wholly replace the software running on the device, potentially allowing
the replaced software to leverage the genuine identity of the device if the key material or the cryptographic facilities
holding key materials were available to the illicit program. For example, an attacker may leverage extracted key
material to intercept and suppress data from the device on the communication path and replace it with false data
that is authenticated with the stolen key material.
Information Disclosure (I): If the device is running manipulated software, such manipulated software could
potentially leak data to unauthorized parties. For example, an attacker may leverage extracted key material to inject
itself into the communication path between the device and a controller or field gateway or cloud gateway to siphon
off information.
Elevation of Privilege (E): A device that performs a specific function can be forced to do something else. For example,
a valve that is programmed to open halfway can be tricked into opening all the way.
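Several of the tampering and spoofing threats above hinge on whether a receiver can verify that telemetry really came from the claimed device and was not altered in transit. A common mitigation is a message authentication code (MAC) computed with a per-device key. The following is a minimal sketch using Python's standard `hmac` module; the key, field names, and message shape are illustrative assumptions, not an IoT Hub protocol:

```python
import hashlib
import hmac
import json

# Illustrative per-device secret; real deployments provision and protect
# such keys in hardware (for example, a TPM) rather than in source code.
DEVICE_KEY = b"per-device secret provisioned at manufacture"

def sign_telemetry(reading: dict, key: bytes = DEVICE_KEY) -> dict:
    """Attach an HMAC-SHA256 tag over a canonical encoding of the reading."""
    body = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"body": reading, "mac": tag}

def verify_telemetry(message: dict, key: bytes = DEVICE_KEY) -> bool:
    """Recompute the tag and compare in constant time to resist timing attacks."""
    body = json.dumps(message["body"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])

message = sign_telemetry({"device": "sensor-01", "temp_c": 21.5})
tampered = {"body": {"device": "sensor-01", "temp_c": 99.0}, "mac": message["mac"]}
```

Note that a MAC only helps while the key remains secret; if the key material is extracted from the device, the spoofing and tampering scenarios described above become possible again, which is why the mitigation table below emphasizes hardware key storage.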
The following summarizes the component, threat, mitigation, risk, and implementation for each element:

Component: Field Gateway
Threat: S (Spoofing)
Mitigation: Authenticating the field gateway to the cloud gateway (certificate based, PSK, claim based, ...)
Risk: If someone can spoof the field gateway, then it can present itself as any device.
Implementation: TLS RSA/PSK, IPSec, RFC 4279. All the same key storage and attestation concerns of devices in
general apply; the best case is to use a TPM. 6LoWPAN extension for IPSec to support Wireless Sensor Networks
(WSN).

Component: Field Gateway
Threat: E (Elevation of Privilege)
Mitigation: Access control mechanism for the field gateway

Component: External Entity (paired to the device)
Threat: TID
Mitigation: Strong pairing of the external entity to the device
Risk: Eavesdropping on the connection to the device; interfering with the communication with the device
Implementation: Securely pairing the external entity to the device over NFC/Bluetooth LE; controlling the
operational panel of the device (physical)

Component: Device storage
Threat: TRID
Mitigation: Storage encryption, signing the logs
Risk: Reading data from the storage (PII data), tampering with telemetry data, tampering with queued or cached
command and control data. Tampering with configuration or firmware update packages while cached or queued
locally can lead to OS and/or system components being compromised.
Implementation: Encryption, message authentication code (MAC) or digital signature. Where possible, strong access
control through resource access control lists (ACLs) or permissions.

Component: Field Gateway storage (queuing the data)
Threat: TRID
Mitigation: Storage encryption, signing the logs
Risk: Reading data from the storage (PII data), tampering with telemetry data, tampering with queued or cached
command and control data. Tampering with configuration or firmware update packages (destined for devices or the
field gateway) while cached or queued locally can lead to OS and/or system components being compromised.
Implementation: BitLocker
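The "signing the logs" mitigation listed for device and field gateway storage can be approximated with a hash chain, in which each log entry commits to the digest of its predecessor, so that any in-place edit invalidates every later entry. This is an illustrative sketch using only the Python standard library; a production scheme would additionally sign or externally anchor the chain head:

```python
import hashlib
import json

def append_entry(log: list, record: dict) -> None:
    """Append a record whose digest covers the previous entry's digest."""
    prev = log[-1]["digest"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"record": record, "digest": digest})

def verify_chain(log: list) -> bool:
    """Recompute every digest; any tampered entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True

log = []
append_entry(log, {"event": "valve opened", "pct": 50})
append_entry(log, {"event": "valve closed", "pct": 0})
```

Because each digest depends on all earlier entries, an attacker who modifies a cached command or firmware record cannot do so silently without also recomputing, and therefore detectably replacing, every subsequent digest.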
Additional resources
Refer to the following articles for additional information:
SDL Threat Modeling Tool
Microsoft Azure IoT reference architecture
See also
To learn more about securing your IoT solution, see Secure your IoT deployment.
You can also explore some of the other features and capabilities of the IoT Suite preconfigured solutions:
Predictive maintenance preconfigured solution overview
Frequently asked questions for IoT Suite
You can read about IoT Hub security in Control access to IoT Hub in the IoT Hub developer guide.
Azure encryption overview
8/22/2017 • 11 min to read
This article provides an overview of how encryption is used in Microsoft Azure. It covers the major areas of
encryption, including encryption at rest, encryption in flight, and key management with Key Vault. Each section
includes links for more detailed information.
Next steps
Azure security overview
Azure network security overview
Azure database security overview
Azure virtual machines security overview
Data encryption at rest
Data security and encryption best practices
Security architecture overview
6/27/2017 • 1 min to read
Having a strong architectural foundation is one of the keys to success when it comes to secure solution
deployments in Azure. With this knowledge, you are better able to understand your requirements by knowing the
right questions to ask, and better equipped to find the right answers to those questions. Getting the right answers to
the right questions goes a long way toward optimizing the security of your deployments.
In this section you’ll see articles on Azure Security Architecture that will help you build secure solutions. A popular
collection of Azure security best practices and patterns is also included. At this time, we have the following articles –
make sure to visit our site and the Azure Security Team blog for updates on a regular basis:
Data Classification for Cloud Readiness
Application Architecture on Microsoft Azure
Azure Security Best Practices and Patterns
Azure Operational Security
7/19/2017 • 19 min to read
Introduction
Overview
We know that security is job one in the cloud and how important it is that you find accurate and timely information
about Azure security. One of the best reasons to use Azure for your applications and services is to take advantage
of the wide array of security tools and capabilities available. These tools and capabilities help make it possible to
create secure solutions on the secure Azure platform. Microsoft Azure must provide confidentiality, integrity, and
availability of customer data, while also enabling transparent accountability.
To help customers better understand the array of security controls implemented within Microsoft Azure, from both
the customer's and Microsoft's operational perspectives, this white paper, “Azure Operational Security,” provides a
comprehensive look at the operational security available with Microsoft Azure.
Azure Platform
Azure is a public cloud service platform that supports a broad selection of operating systems, programming
languages, frameworks, tools, databases, and devices. It can run Linux containers with Docker integration; build apps
with JavaScript, Python, .NET, PHP, Java, and Node.js; and build back-ends for iOS, Android, and Windows devices.
Azure cloud services support the same technologies millions of developers and IT professionals already rely on and trust.
When you build on, or migrate IT assets to, a public cloud service provider you are relying on that organization’s
abilities to protect your applications and data with the services and the controls they provide to manage the
security of your cloud-based assets.
Azure’s infrastructure is designed from the facility to applications for hosting millions of customers simultaneously,
and it provides a trustworthy foundation upon which businesses can meet their security requirements. In addition,
Azure provides you with a wide array of configurable security options and the ability to control them so that you
can customize security to meet the unique requirements of your organization’s deployments. This document helps
you understand how Azure security capabilities can help you fulfill these requirements.
Abstract
Azure Operational Security refers to the services, controls, and features available to users for protecting their data,
applications, and other assets in Microsoft Azure. Azure Operational Security is built on a framework that
incorporates the knowledge gained through various capabilities that are unique to Microsoft, including the
Microsoft Security Development Lifecycle (SDL), the Microsoft Security Response Center program, and deep
awareness of the cybersecurity threat landscape.
This white paper outlines Microsoft’s approach to Azure Operational Security within the Microsoft Azure cloud
platform and covers the following services:
1. Azure Operations Management Suite
2. Azure Security Center
3. Azure Monitor
4. Azure Network Watcher
5. Azure Storage Analytics
6. Azure Active Directory
Microsoft Operations Management Suite
Microsoft Operations Management Suite (OMS) is the IT management solution for the hybrid cloud. Used alone or
to extend your existing System Center deployment, OMS gives you the maximum flexibility and control for cloud-
based management of your infrastructure.
With OMS, you can manage any instance in any cloud, including on-premises, Azure, AWS, Windows Server, Linux,
VMware, and OpenStack, at a lower cost than competitive solutions. Built for the cloud-first world, OMS offers a
new approach to managing your enterprise that is the fastest, most cost-effective way to meet new business
challenges and accommodate new workloads, applications and cloud environments.
OMS services
The core functionality of OMS is provided by a set of services that run in Azure. Each service provides a specific
management function, and you can combine services to achieve different management scenarios.
SERVICE DESCRIPTION
Log Analytics
Log Analytics provides monitoring services for OMS by collecting data from managed resources into a central
repository. This data could include events, performance data, or custom data provided through the API. Once
collected, the data is available for alerting, analysis, and export.
This method allows you to consolidate data from various sources, so you can combine data from your Azure
services with your existing on-premises environment. It also clearly separates the collection of the data from the
action taken on that data so that all actions are available to all kinds of data.
The Log Analytics service manages your cloud-based data securely by using the following methods:
data segregation
data retention
physical security
incident management
compliance
security standards certifications
Azure Backup
Azure Backup provides data backup and restore services and is part of the OMS suite of products and services. It
protects your application data and retains it for years without any capital investment and with minimal operating
costs. It can back up data from physical and virtual Windows servers in addition to application workloads such as
SQL Server and SharePoint. It can also be used by System Center Data Protection Manager (DPM) to replicate
protected data to Azure for redundancy and long-term storage.
Protected data in Azure Backup is stored in a backup vault located in a particular geographic region. The data is
replicated within the same region and, depending on the type of vault, may also be replicated to another region for
further resiliency.
Management Solutions
Microsoft Operations Management Suite (OMS) is Microsoft's cloud-based IT management solution that helps you
manage and protect your on-premises and cloud infrastructure.
Management solutions are prepackaged sets of logic that implement a particular management scenario using one
or more OMS services. Different solutions are available from Microsoft and from partners that you can easily add
to your Azure subscription to increase the value of your investment in OMS. As a partner, you can create your own
solutions to support your applications and services and provide them to users through the Azure Marketplace or
Quick Start Templates.
A good example of a solution that uses multiple services to provide additional functionality is the Update
Management solution. This solution uses the Log Analytics agent for Windows and Linux to collect information
about required updates on each agent. It writes this data to the Log Analytics repository where you can analyze it
with an included dashboard.
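The collect-then-analyze flow described above can be sketched as follows: each agent reports the updates it is missing into a shared repository, and a dashboard-style query aggregates the reports. The record shape and KB identifiers are illustrative assumptions, not the actual Log Analytics record format:

```python
from collections import Counter

# Hypothetical per-agent reports, as an update-assessment agent might record them.
reports = [
    {"computer": "web01", "missing": ["KB500001", "KB500002"]},
    {"computer": "web02", "missing": ["KB500002"]},
    {"computer": "db01",  "missing": []},
]

def updates_by_prevalence(reports):
    """Rank missing updates by how many machines need them."""
    counts = Counter(kb for r in reports for kb in r["missing"])
    return counts.most_common()

def machines_needing(reports, kb):
    """List the computers that still require a given update."""
    return [r["computer"] for r in reports if kb in r["missing"]]
```

Separating the collection (the reports) from the analysis (the queries) mirrors the Log Analytics design the article describes, where any action can be applied to any kind of collected data.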
When you create a deployment, runbooks in Azure Automation are used to install required updates. You manage
this entire process in the portal and don’t need to worry about the underlying details.
Policies that are enabled at the subscription level automatically propagate to all resource groups within the
subscription, as shown in the diagram at the right side:
Data collection
Security Center collects data from your virtual machines (VMs) to assess their security state, provide security
recommendations, and alert you to threats. When you first access Security Center, data collection is enabled on all
VMs in your subscription. Data collection is recommended, but you can opt out by turning off data collection in the
Security Center policy.
Data sources
Azure Security Center analyzes data from the following sources to provide visibility into your security state,
identify vulnerabilities and recommend mitigations, and detect active threats:
Azure Services: Uses information about the configuration of Azure services you have deployed by
communicating with that service’s resource provider.
Network Traffic: Uses sampled network traffic metadata from Microsoft’s infrastructure, such as
source/destination IP/port, packet size, and network protocol.
Partner Solutions: Uses security alerts from integrated partner solutions, such as firewalls and antimalware
solutions.
Your Virtual Machines: Uses configuration information and information about security events, such as
Windows event and audit logs, IIS logs, syslog messages, and crash dump files from your virtual machines.
Data protection
To help customers prevent, detect, and respond to threats, Azure Security Center collects and processes security-
related data, including configuration information, metadata, event logs, crash dump files, and more. Microsoft
adheres to strict compliance and security guidelines—from coding to operating a service.
Data segregation: Data is kept logically separate on each component throughout the service. All data is
tagged per organization. This tagging persists throughout the data lifecycle, and it is enforced at each layer
of the service.
Data access: To provide security recommendations and investigate potential security threats, Microsoft
personnel may access information collected or analyzed by Azure services, including crash dump files,
process creation events, VM disk snapshots and artifacts, which may unintentionally include Customer Data
or personal data from your virtual machines. We adhere to the Microsoft Online Services Terms and Privacy
Statement, which state that Microsoft does not use Customer Data or derive information from it for any
advertising or similar commercial purposes.
Data use: Microsoft uses patterns and threat intelligence seen across multiple tenants to enhance our
prevention and detection capabilities; we do so in accordance with the privacy commitments described in
our Privacy Statement.
Data location
Azure Security Center collects ephemeral copies of your crash dump files and analyzes them for evidence of exploit
attempts and successful compromises. Azure Security Center performs this analysis within the same Geo as the
workspace, and deletes the ephemeral copies when analysis is complete. Machine artifacts are stored centrally in
the same region as the VM.
Your Storage Accounts: A storage account is specified for each region where virtual machines are running.
This enables you to store data in the same region as the virtual machine from which the data is collected.
Azure Security Center Storage: Information about security alerts, including partner alerts,
recommendations, and security health status is stored centrally, currently in the United States. This
information may include related configuration information and security events collected from your virtual
machines as needed to provide you with the security alert, recommendation, or security health status.
Azure Monitor
The OMS Security and Audit solution enables IT to actively monitor all resources, which can help minimize the
impact of security incidents. OMS Security and Audit has security domains that can be used for monitoring
resources. The security domains provide quick access to options; for security monitoring, the following domains are
covered in more detail:
Malware assessment
Update assessment
Identity and Access.
Azure Monitor provides pointers to information on specific types of resources. It offers visualization, query, routing,
alerting, auto scale, and automation on data both from the Azure infrastructure (Activity Log) and each individual
Azure resource (Diagnostic Logs).
Cloud applications are complex with many moving parts. Monitoring provides data to ensure that your application
stays up and running in a healthy state. It also helps you to stave off potential problems or troubleshoot past ones.
In addition, you can use monitoring data to gain deep insights about your application. That knowledge can help you
to improve application performance or maintainability, or automate actions that would otherwise require manual
intervention.
Azure Activity Log
The Activity Log provides insight into the operations that were performed on resources in your subscription. The
Activity Log was previously known as “Audit Logs” or “Operational Logs,” since it reports control-plane events for
your subscriptions.
Using the Activity Log, you can determine the ‘what, who, and when’ for any write operations (PUT, POST, DELETE)
taken on the resources in your subscription. You can also understand the status of the operation and other relevant
properties. The Activity Log does not include read (GET) operations or operations for resources that use the Classic
model.
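The 'what, who, and when' query described above amounts to filtering control-plane events down to write operations. The sketch below illustrates that filtering over a list of hypothetical event records; the field names are assumptions for the example, not the exact Activity Log schema:

```python
# Hypothetical Activity Log events; field names are illustrative.
events = [
    {"caller": "alice@contoso.com", "httpMethod": "PUT",
     "operation": "Microsoft.Compute/virtualMachines/write",
     "timestamp": "2017-08-01T10:00:00Z"},
    {"caller": "bob@contoso.com", "httpMethod": "GET",
     "operation": "Microsoft.Compute/virtualMachines/read",
     "timestamp": "2017-08-01T10:05:00Z"},
    {"caller": "alice@contoso.com", "httpMethod": "DELETE",
     "operation": "Microsoft.Network/networkSecurityGroups/delete",
     "timestamp": "2017-08-01T11:00:00Z"},
]

WRITE_METHODS = {"PUT", "POST", "DELETE"}

def write_operations(events):
    """Return (what, who, when) for each write operation, skipping reads."""
    return [(e["operation"], e["caller"], e["timestamp"])
            for e in events if e["httpMethod"] in WRITE_METHODS]

ops = write_operations(events)
```

Read (GET) operations are excluded, matching the Activity Log's behavior described above.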
Azure Diagnostic Logs
These logs are emitted by a resource and provide rich, frequent data about the operation of that resource. The
content of these logs varies by resource type.
For example, Windows event system logs are one category of Diagnostic Log for VMs and blob, table, and queue
logs are categories of Diagnostic Logs for storage accounts.
Diagnostics Logs differ from the Activity Log (formerly known as Audit Log or Operational Log). The Activity log
provides insight into the operations that were performed on resources in your subscription. Diagnostics logs
provide insight into operations that your resource performed itself.
Metrics
Azure Monitor enables you to consume telemetry to gain visibility into the performance and health of your
workloads on Azure. The most important type of Azure telemetry data is the metrics (also called performance
counters) emitted by most Azure resources. Azure Monitor provides several ways to configure and consume these
metrics for monitoring and troubleshooting. Metrics are a valuable source of telemetry and enable you to do the
following tasks:
Track the performance of your resource (such as a VM, website, or logic app) by plotting its metrics on a
portal chart and pinning that chart to a dashboard.
Get notified of an issue that impacts the performance of your resource when a metric crosses a certain
threshold.
Configure automated actions, such as auto scaling a resource or firing a runbook when a metric crosses a
certain threshold.
Perform advanced analytics or reporting on performance or usage trends of your resource.
Archive the performance or health history of your resource for compliance or auditing purposes.
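The threshold-based notifications and automated actions described above reduce to comparing a metric stream against a limit, often over a sliding window to avoid alerting on single spikes. A minimal sketch, with invented sample data:

```python
from collections import deque

def threshold_alerts(samples, limit, window=3):
    """Fire an alert when the average of the last `window` samples exceeds `limit`.

    `samples` is a sequence of (timestamp, value) pairs; returns the
    timestamps at which an alert condition was met.
    """
    recent = deque(maxlen=window)
    alerts = []
    for t, value in samples:
        recent.append(value)
        if len(recent) == window and sum(recent) / window > limit:
            alerts.append(t)
    return alerts

# Illustrative CPU-percentage samples: a sustained spike from t=2 to t=4.
cpu = [(0, 40), (1, 50), (2, 95), (3, 96), (4, 97), (5, 30)]
```

Averaging over a window is the same design choice behind most metric alert rules: a single hot sample at t=2 does not fire, but the sustained breach does.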
Azure Diagnostics
Azure Diagnostics is the capability within Azure that enables the collection of diagnostic data on a deployed
application. You can use the diagnostics extension with various sources. Currently supported are Azure Cloud
Services web and worker roles, Azure Virtual Machines running Microsoft Windows, and Service Fabric. Other Azure services have
their own separate diagnostics.
NOTE
For more information on billing and data retention policies, see Storage Analytics and Billing. For optimal performance, you
want to limit the number of highly utilized disks attached to the virtual machine to avoid possible throttling. If all disks are
not being highly utilized at the same time, the storage account can support a larger number of disks.
NOTE
For more information on storage account limits, see Azure Storage Scalability and Performance Targets.
The following types of authenticated requests are logged:
Failed requests, including timeout, throttling, network, authorization, and other errors
Requests using a Shared Access Signature (SAS), including failed and successful requests
Requests to analytics data
Requests made by Storage Analytics itself, such as log creation or deletion, are not logged.
The following types of anonymous requests are logged:
Time out errors for both client and server
Failed GET requests with error code 304 (Not Modified)
All other failed anonymous requests are not logged.
A full list of the logged data is documented in the Storage Analytics Logged Operations and Status Messages and
Storage Analytics Log Format topics.
Azure AD reporting includes reports such as sign-ins from unknown sources, the application usage summary, and
the directory audit report.
The data of these reports can be useful to your applications, such as SIEM systems, audit, and business intelligence
tools. The Azure AD reporting APIs provide programmatic access to the data through a set of REST-based APIs. You
can call these APIs from various programming languages and tools.
Events in the Azure AD Audit report are retained for 180 days.
NOTE
For more information about retention on reports, see Azure Active Directory Report Retention Policies.
For customers interested in storing their audit events for longer retention periods, the Reporting API can be used to
regularly pull audit events into a separate data store.
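Pulling audit events into a separate store typically means draining a paged API until no continuation token remains. The sketch below keeps that loop generic: `fetch_page` is a hypothetical stand-in for a call to the Reporting API, not a real client function, so the paging logic can be shown without inventing endpoint details:

```python
def pull_all_audit_events(fetch_page):
    """Drain a paged reporting API into a local list.

    `fetch_page(skip_token)` is a hypothetical stand-in for one Reporting API
    call; it returns (events, next_skip_token), where the token is None on
    the final page.
    """
    store, token = [], None
    while True:
        events, token = fetch_page(token)
        store.extend(events)
        if token is None:
            return store

# Stub pages simulating two API responses for demonstration.
pages = {
    None: ([{"id": 1}, {"id": 2}], "page2"),
    "page2": ([{"id": 3}], None),
}
events = pull_all_audit_events(lambda tok: pages[tok])
```

Run on a schedule shorter than the 180-day retention window mentioned above, a loop like this lets you keep audit events for as long as your own store retains them.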
Summary
This article summarizes how Microsoft protects your privacy and secures your data while delivering software and
services that help you manage the IT infrastructure of your organization. Microsoft recognizes that when customers
entrust their data to others, that trust requires rigorous security. Microsoft adheres to strict compliance and security
guidelines, from coding to operating a service. Securing and protecting data is a top priority at Microsoft.
This article explains
How data is collected, processed, and secured in the Operations Management Suite (OMS).
Quickly analyze events across multiple data sources. Identify security risks and understand the scope and
impact of threats and attacks to mitigate the damage of a security breach.
Identify attack patterns by visualizing outbound malicious IP traffic and malicious threat types. Understand
the security posture of your entire environment regardless of platform.
Capture all the log and event data required for a security or compliance audit. Slash the time and resources
needed to supply a security audit with a complete, searchable, and exportable log and event data set.
Collect security-related events, audit data, and breach analysis to keep a close eye on your assets:
Security posture
Notable issues
Threat summaries
Next Steps
Design and operational security
Microsoft designs its services and software with security in mind to help ensure that its cloud infrastructure is
resilient and defended from attacks.
Operations Management Suite | Security & Compliance
Use Microsoft security data and analysis to perform more intelligent and effective threat detection.
Azure Security Center planning and operations A set of steps and tasks that you can follow to optimize your use
of Security Center based on your organization’s security requirements and cloud management model.
Azure Advanced Threat Detection
8/21/2017 • 23 min to read
Introduction
Overview
Microsoft has developed a series of white papers, security overviews, best practices, and checklists to inform Azure
customers about the various security-related capabilities available in and surrounding the Azure platform. The
topics range in breadth and depth and are updated periodically. This document is part of that series, as
summarized in the following abstract section.
Azure Platform
Azure is a public cloud service platform that supports the broadest selection of operating systems, programming
languages, frameworks, tools, databases, and devices. For example, with Azure you can:
Run Linux containers with Docker integration.
Build apps with JavaScript, Python, .NET, PHP, Java, and Node.js
Build back-ends for iOS, Android, and Windows devices.
Azure public cloud services support the same technologies millions of developers and IT professionals already rely
on and trust.
When you migrate to a public cloud, you rely on that cloud provider’s ability to protect your data and to provide
security and governance around the system.
Azure’s infrastructure is designed from the facility to applications for hosting millions of customers simultaneously,
and it provides a trustworthy foundation upon which businesses can meet their security needs. Azure provides a
wide array of options to configure and customize security to meet the requirements of your app deployments. This
document helps you meet these requirements.
Abstract
Microsoft Azure offers built in advanced threat detection functionality through services like Azure Active Directory,
Azure Operations Management Suite (OMS), and Azure Security Center. This collection of security services and
capabilities provides a simple and fast way to understand what is happening within your Azure deployments.
This white paper guides you through the Microsoft Azure approaches to diagnosing threats and vulnerabilities and
analyzing the risk associated with malicious activities targeted against servers and other Azure resources. It helps
you identify methods of threat identification and vulnerability management, using solutions optimized for the
Azure platform and customer-facing security services and technologies.
This white paper focuses on the technology of Azure platform and customer-facing controls, and does not attempt
to address SLAs, pricing models, and DevOps practice considerations.
With Azure AD Privileged Identity Management, you can manage, control, and monitor access within your
organization. This includes access to resources in Azure AD and other Microsoft online services like Office 365 or
Microsoft Intune.
Azure AD Privileged Identity Management helps you:
Get alerts and reports about Azure AD administrators and “just in time” administrative access to Microsoft
online services like Office 365 and Intune
Get reports about administrator access history and changes in administrator assignments
Get alerts about access to a privileged role
Security Domains: in this area, you can further explore security records over time; access malware assessment,
update assessment, network security, and identity and access information; see computers with security events;
and quickly access the Azure Security Center dashboard.
Notable Issues: this option allows you to quickly identify the number of active issues and the severity of
these issues.
Detections (Preview): enables you to identify attack patterns by visualizing security alerts as they take
place against your resources.
Threat Intelligence: enables you to identify attack patterns by visualizing the total number of servers with
outbound malicious IP traffic, the malicious threat type, and a map that shows where these IPs are coming
from.
Common security queries: this option provides a list of the most common security queries that you can use to monitor your environment. When you click one of these queries, it opens the Search blade with the results for that query.
Insight and Analytics
At the center of Log Analytics is the OMS repository, which is hosted in the Azure cloud.
Data is collected into the repository from connected sources by configuring data sources and adding solutions to
your subscription.
Data sources and solutions will each create different record types that have their own set of properties but may still
be analyzed together in queries to the repository. This allows you to use the same tools and methods to work with
different kinds of data collected by different sources.
Most of your interaction with Log Analytics is through the OMS portal, which runs in any browser and provides you
with access to configuration settings and multiple tools to analyze and act on collected data. From the portal, you
can use log searches where you construct queries to analyze collected data, dashboards, which you can customize
with graphical views of your most valuable searches, and solutions, which provide additional functionality and
analysis tools.
Solutions add functionality to Log Analytics. They primarily run in the cloud and provide analysis of data collected
in the OMS repository. They may also define new record types to be collected that can be analyzed with log searches or by an additional user interface that the solution provides in the OMS dashboard. The Security and Audit solution is an example of these types of solutions.
Automation & Control: Alert on security configuration drifts
Azure Automation automates administrative processes with runbooks that are based on PowerShell and run in the
Azure cloud. Runbooks can also be executed on a server in your local data center to manage local resources. Azure
Automation provides configuration management with PowerShell DSC (Desired State Configuration).
You can create and manage DSC resources hosted in Azure and apply them to cloud and on-premises systems to
define and automatically enforce their configuration, or get reports on drift to help ensure that security
configurations remain within policy.
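The drift-reporting idea can be sketched in a few lines: compare a desired-state baseline against the observed configuration of a machine and report every deviation. This is an illustrative sketch only, not DSC syntax; the setting names and values are hypothetical.

```python
def find_drift(desired, observed):
    """Return settings whose observed value deviates from the desired baseline."""
    drift = {}
    for setting, want in desired.items():
        have = observed.get(setting)
        if have != want:
            drift[setting] = {"desired": want, "observed": have}
    return drift

# Hypothetical security baseline for a server.
baseline = {"firewall_enabled": True, "rdp_port": 3389, "audit_logging": "on"}
current = {"firewall_enabled": False, "rdp_port": 3389, "audit_logging": "on"}

drifted = find_drift(baseline, current)
# A non-empty result is what would raise an alert or trigger remediation.
```

In the DSC model, the same comparison runs on each node against the compiled configuration, and the drift report is surfaced centrally.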
Azure Security Center
Azure Security Center helps protect your Azure resources. It provides integrated security monitoring and policy
management across your Azure subscriptions. Within the service, you can define policies not only against your Azure subscriptions, but also against resource groups, for more granular control.
Microsoft security researchers are constantly on the lookout for threats. They have access to an expansive set of
telemetry gained from Microsoft’s global presence in the cloud and on-premises. This wide-reaching and diverse
collection of datasets enables Microsoft to discover new attack patterns and trends across its on-premises
consumer and enterprise products, as well as its online services.
Thus, Security Center can rapidly update its detection algorithms as attackers release new and increasingly
sophisticated exploits. This approach helps you keep pace with a fast-moving threat environment.
Security Center threat detection works by automatically collecting security information from your Azure resources,
the network, and connected partner solutions. It analyzes this information, correlating information from multiple
sources, to identify threats. Security alerts are prioritized in Security Center along with recommendations on how to
remediate the threat.
Security Center employs advanced security analytics, which go far beyond signature-based approaches.
Breakthroughs in big data and machine learning technologies are used to evaluate events across the entire cloud
fabric – detecting threats that would be impossible to identify using manual approaches and predicting the
evolution of attacks. These security analytics include the following.
Threat Intelligence
Microsoft has an immense amount of global threat intelligence. Telemetry flows in from multiple sources, such as
Azure, Office 365, Microsoft CRM online, Microsoft Dynamics AX, outlook.com, MSN.com, the Microsoft Digital
Crimes Unit (DCU), and Microsoft Security Response Center (MSRC).
Researchers also receive threat intelligence information that is shared among major cloud service providers, and they subscribe to threat intelligence feeds from third parties. Azure Security Center can use this information to alert you
to threats from known bad actors. Some examples include:
Harnessing the Power of Machine Learning - Azure Security Center has access to a vast amount of data
about cloud network activity, which can be used to detect threats targeting your Azure deployments. For
example:
Brute Force Detections - Machine learning is used to create a historical pattern of remote access attempts,
which allows it to detect brute force attacks against SSH, RDP, and SQL ports.
Outbound DDoS and Botnet Detection - A common objective of attacks targeting cloud resources is to
use the compute power of these resources to execute other attacks.
New Behavioral Analytics Servers and VMs - Once a server or virtual machine is compromised, attackers
employ a wide variety of techniques to execute malicious code on that system while avoiding detection,
ensuring persistence, and obviating security controls.
Azure SQL Database Threat Detection - Threat Detection for Azure SQL Database, which identifies
anomalous database activities indicating unusual and potentially harmful attempts to access or exploit
databases.
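As a much-simplified illustration of the brute force detection idea above, failed sign-in attempts can be counted per source address and flagged past a threshold. Security Center itself learns historical access patterns with machine learning rather than applying a fixed threshold; the events and threshold below are hypothetical.

```python
from collections import Counter

def detect_brute_force(failed_logins, threshold=5):
    """Flag source IPs whose failed-login count meets a fixed threshold."""
    counts = Counter(ip for ip, _account in failed_logins)
    return sorted(ip for ip, n in counts.items() if n >= threshold)

# Hypothetical RDP/SSH failure events: (source_ip, target_account)
events = [("203.0.113.9", "admin")] * 6 + [("198.51.100.2", "alice")]
suspects = detect_brute_force(events)  # → ["203.0.113.9"]
```

A learned model replaces the fixed `threshold` with a per-machine expectation, which is why the service can catch low-and-slow attempts that a static rule would miss.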
Behavioral analytics
Behavioral analytics is a technique that analyzes and compares data to a collection of known patterns. However,
these patterns are not simple signatures. They are determined through complex machine learning algorithms that
are applied to massive datasets.
They are also determined through careful analysis of malicious behaviors by expert analysts. Azure Security Center
can use behavioral analytics to identify compromised resources based on analysis of virtual machine logs, virtual
network device logs, fabric logs, crash dumps, and other sources.
In addition, there is correlation with other signals to check for supporting evidence of a widespread campaign. This
correlation helps to identify events that are consistent with established indicators of compromise.
Some examples include:
Suspicious process execution: Attackers employ several techniques to execute malicious software without
detection. For example, an attacker might give malware the same names as legitimate system files but place these files in an alternate location, use a name that is very similar to that of a benign file, or mask the file's true extension.
Security Center models processes behaviors and monitors process executions to detect outliers such as
these.
Hidden malware and exploitation attempts: Sophisticated malware can evade traditional antimalware
products by either never writing to disk or encrypting software components stored on disk. However, such
malware can be detected using memory analysis, as the malware must leave traces in memory to function.
When software crashes, a crash dump captures a portion of memory at the time of the crash. By analyzing
the memory in the crash dump, Azure Security Center can detect techniques used to exploit vulnerabilities in
software, access confidential data, and surreptitiously persist within a compromised machine without
impacting the performance of your machine.
Lateral movement and internal reconnaissance: To persist in a compromised network and
locate/harvest valuable data, attackers often attempt to move laterally from the compromised machine to
others within the same network. Security Center monitors process and login activities to discover attempts
to expand an attacker’s foothold within the network, such as remote command execution, network probing,
and account enumeration.
Malicious PowerShell Scripts: PowerShell can be used by attackers to execute malicious code on target virtual machines for various purposes. Security Center inspects PowerShell activity for evidence of suspicious activity.
Outgoing attacks: Attackers often target cloud resources with the goal of using those resources to mount
additional attacks. Compromised virtual machines, for example, might be used to launch brute force attacks
against other virtual machines, send SPAM, or scan open ports and other devices on the Internet. By
applying machine learning to network traffic, Security Center can detect when outbound network
communications exceed the norm. When spam is detected, Security Center also correlates unusual email traffic with
intelligence from Office 365 to determine whether the mail is likely nefarious or the result of a legitimate
email campaign.
Anomaly Detection
Azure Security Center also uses anomaly detection to identify threats. In contrast to behavioral analytics (which
depends on known patterns derived from large data sets), anomaly detection is more “personalized” and focuses
on baselines that are specific to your deployments. Machine learning is applied to determine normal activity for
your deployments and then rules are generated to define outlier conditions that could represent a security event.
Here’s an example:
Inbound RDP/SSH brute force attacks: Your deployments may have busy virtual machines with many logins each day and other virtual machines that have few or no logins. Azure Security Center can determine baseline login activity for these virtual machines and use machine learning to define a range around normal login activity. If there is any discrepancy with the baseline defined for login-related characteristics, an alert may be generated. Again, machine learning determines what is significant.
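The baseline-and-outlier idea in this example can be sketched with simple statistics: learn a per-machine login baseline, then flag a day whose count deviates from it by more than a few standard deviations. The models Security Center applies are far richer; the counts and tolerance below are hypothetical.

```python
import statistics

def is_login_anomaly(history, today, tolerance=3.0):
    """Flag today's login count if it deviates from the learned baseline
    by more than `tolerance` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against a flat baseline
    return abs(today - mean) / stdev > tolerance

# Hypothetical per-day login counts for one virtual machine.
baseline_days = [4, 5, 6, 5, 4, 5, 6]
is_login_anomaly(baseline_days, 5)    # a normal day → False
is_login_anomaly(baseline_days, 60)   # a sudden surge → True
```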
Continuous Threat Intelligence Monitoring
Azure Security Center operates with security research and data science teams throughout the world that
continuously monitor for changes in the threat landscape. This includes the following initiatives:
Threat intelligence monitoring: Threat intelligence includes mechanisms, indicators, implications, and
actionable advice about existing or emerging threats. This information is shared in the security community
and Microsoft continuously monitors threat intelligence feeds from internal and external sources.
Signal sharing: Insights from security teams across Microsoft’s broad portfolio of cloud and on-premises
services, servers, and client endpoint devices are shared and analyzed.
Microsoft security specialists: Ongoing engagement with teams across Microsoft that work in specialized
security fields, like forensics and web attack detection.
Detection tuning: Algorithms are run against real customer data sets and security researchers work with
customers to validate the results. True and false positives are used to refine machine learning algorithms.
These combined efforts culminate in new and improved detections, which you can benefit from instantly – there’s
no action for you to take.
Next Steps
Azure Security Center detection capabilities
Azure Security Center’s advanced detection capabilities help identify active threats targeting your Microsoft Azure resources and provide you with the insights needed to respond quickly.
Azure SQL Database Threat Detection
Learn how Azure SQL Database Threat Detection helps address concerns about potential threats to your databases.
Azure Logging and Auditing
6/27/2017
Introduction
Overview
To assist current and prospective Azure customers in understanding and using the various security-related
capabilities available in and surrounding the Azure Platform, Microsoft has developed a series of white papers,
security overviews, best practices, and checklists. The topics range in terms of breadth and depth and are updated
periodically. This document is part of that series as summarized in the following Abstract section.
Azure Platform
Azure is an open and flexible cloud service platform that supports the broadest selection of operating systems,
programming languages, frameworks, tools, databases, and devices.
For example, you can:
Run Linux containers with Docker integration.
Build apps with JavaScript, Python, .NET, PHP, Java, and Node.js
Build back-ends for iOS, Android, and Windows devices.
Azure public cloud services support the same technologies millions of developers and IT professionals already rely
on and trust.
When you build on, or migrate IT assets to, a cloud provider, you are relying on that organization’s abilities to
protect your applications and data with the services and the controls they provide to manage the security of your
cloud-based assets.
Azure’s infrastructure is designed from the facility to applications for hosting millions of customers simultaneously,
and it provides a trustworthy foundation upon which businesses can meet their security needs. In addition, Azure
provides you with a wide array of configurable security options and the ability to control them so that you can
customize security to meet the unique requirements of your deployments. This document helps you meet these requirements.
Abstract
Auditing and logging of security-related events, and related alerts, are important components in an effective data
protection strategy. Security logs and reports provide you with an electronic record of suspicious activities and help
you detect patterns that may indicate attempted or successful external penetration of the network, as well as
internal attacks. You can use auditing to monitor user activity, document regulatory compliance, perform forensic
analysis, and more. Alerts provide immediate notification when security events occur.
Microsoft Azure services and products provide you with configurable security auditing and logging options to help
you identify gaps in your security policies and mechanisms, and address those gaps to help prevent breaches.
Microsoft services offer some (and in some cases, all) of the following options: centralized monitoring, logging, and
analysis systems to provide continuous visibility; timely alerts; and reports to help you manage the large amount of
information generated by devices and services.
Microsoft Azure log data can be exported to Security Incident and Event Management (SIEM) systems for analysis
and integrates with third-party auditing solutions.
This white paper provides an introduction to generating, collecting, and analyzing security logs from services
hosted on Azure, and it can help you gain security insights into your Azure deployments. The scope of this white
paper is limited to applications and services built and deployed in Azure.
NOTE
Certain recommendations contained herein may result in increased data, network, or compute resource usage, and increase
your license or subscription costs.
LOG CATEGORY | LOG TYPE | USAGES | INTEGRATION
Activity Logs | Control-plane events on Azure Resource Manager resources | Provide insight into the operations that were performed on resources in your subscription | REST API & Azure Monitor
Azure Diagnostic Logs | Frequent data about the operation of Azure Resource Manager resources in your subscription | Provide insight into operations that your resource performed itself | Azure Monitor, Stream
AAD Reporting | Logs and reports | User sign-in activities and system activity information about users and group management | Graph API
Virtual Machine & Cloud Services | Windows Event log & Linux Syslog | Captures system data and logging data on the virtual machines and transfers that data into a storage account of your choice | Windows using WAD (Windows Azure Diagnostics storage) and Linux in Azure Monitor
Storage Analytics | Storage logging; provides metrics data for a storage account | Provides insight into trace requests, analyzes usage trends, and diagnoses issues with your storage account | REST API or the client library
NSG (Network Security Group) Flow Logs | JSON format; shows outbound and inbound flows on a per-rule basis | View information about ingress and egress IP traffic through a Network Security Group | Network Watcher
Application Insights | Logs, exceptions, and custom diagnostics | Application Performance Management (APM) service for web developers on multiple platforms | REST API, Power BI
Process Data / Security Alert | Azure Security Center alert, OMS alert | Security information and alerts | REST APIs, JSON
Activity Log
The Azure Activity Log provides insight into the operations that were performed on resources in your subscription.
The Activity Log was previously known as “Audit Logs” or “Operational Logs,” since it reports control-plane events
for your subscriptions. Using the Activity Log, you can determine the “what, who, and when” for any write
operations (PUT, POST, DELETE) taken on the resources in your subscription. You can also understand the status of
the operation and other relevant properties. The Activity Log does not include read (GET) operations.
Here, PUT, POST, and DELETE refer to all the write operations the Activity Log contains for the resources. For example, you can use the Activity Log to find an error when troubleshooting, or to monitor how a user in your organization modified a resource.
You can retrieve events from your Activity Log using the Azure portal, CLI, PowerShell cmdlets, and the Azure Monitor REST API. Activity Logs have a 19-day data retention period.
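As a sketch of the REST path, a query for Activity Log events over a time range can be built as follows. The endpoint and api-version reflect the Azure Monitor REST reference as of this writing and should be verified against the current documentation; the subscription ID is a placeholder.

```python
from urllib.parse import quote

def activity_log_url(subscription_id, start_iso, end_iso,
                     api_version="2015-04-01"):
    """Build the Azure Monitor REST API request URL for Activity Log events.
    The api-version shown is an assumption; check the current REST reference."""
    base = ("https://management.azure.com/subscriptions/"
            f"{subscription_id}/providers/microsoft.insights/"
            "eventtypes/management/values")
    filt = f"eventTimestamp ge '{start_iso}' and eventTimestamp le '{end_iso}'"
    return f"{base}?api-version={api_version}&$filter={quote(filt)}"

url = activity_log_url("00000000-0000-0000-0000-000000000000",
                       "2017-06-01T00:00:00Z", "2017-06-02T00:00:00Z")
# Issue the GET with an Authorization: Bearer <token> header, for example:
#   requests.get(url, headers={"Authorization": f"Bearer {token}"})
```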
Integration Scenarios
Create an email or webhook alert that triggers off an Activity Log event.
Stream it to an Event Hub for ingestion by a third-party service or custom analytics solution such as PowerBI.
Analyze it in PowerBI using the PowerBI content pack.
Save it to a Storage Account for archival or manual inspection. You can specify the retention time (in days)
using Log Profiles.
Query and view it in the Azure portal.
Query it via PowerShell Cmdlet, CLI, or REST API.
Export the Activity Log with Log Profiles to Log Analytics.
You can use a storage account or event hub namespace that is not in the same subscription as the one emitting the log. The user who configures the setting must have the appropriate RBAC access to both subscriptions.
Azure Diagnostic Logs
Azure Diagnostic Logs are emitted by a resource and provide rich, frequent data about the operation of that resource. The content of these logs varies by resource type (for example, Windows event system logs are one category of Diagnostic Log for VMs, and blob, table, and queue logs are categories of Diagnostic Logs for storage accounts) and differs from the Activity Log, which provides insight into the operations that were performed on resources in your subscription.
Azure Diagnostic Logs can be configured in multiple ways: through the Azure portal, PowerShell, the command-line interface (CLI), or the REST API.
Integration Scenarios
Save them to a Storage Account for auditing or manual inspection. You can specify the retention time (in
days) using the Diagnostic Settings.
Stream them to Event Hubs for ingestion by a third-party service or custom analytics solution such as
PowerBI.
Analyze them with OMS Log Analytics.
Supported services, schema for Diagnostic Logs, and supported log categories per resource type:

RESOURCE TYPE | CATEGORY
Microsoft.Network/loadBalancers | LoadBalancerProbeHealthStatus
Microsoft.Network/networksecuritygroups | NetworkSecurityGroupRuleCounter
Microsoft.Network/applicationGateways | ApplicationGatewayPerformanceLog
Microsoft.Network/applicationGateways | ApplicationGatewayFirewallLog
Microsoft.DataLakeAnalytics/accounts | Requests
Microsoft.DataLakeStore/accounts | Requests
Microsoft.Logic/integrationAccounts | IntegrationAccountTrackingEvents
Microsoft.Automation/automationAccounts | JobStreams
Microsoft.EventHub/namespaces | OperationalLogs
Microsoft.StreamAnalytics/streamingjobs | Authoring
Examples of Azure AD reports include sign-ins from unknown sources, application usage summaries, and the directory audit report.
The data in these reports can be useful to applications such as SIEM systems and audit and business intelligence tools. The Azure AD reporting APIs provide programmatic access to the data through a set of REST-based APIs. You can call these APIs from various programming languages and tools.
Events in the Azure AD Audit report are retained for 180 days.
NOTE
For more information about retention on reports, see Azure Active Directory Report Retention Policies.
For customers interested in storing their audit events for longer retention periods, the Reporting API can be used to
regularly pull audit events into a separate data store.
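The pull-into-a-separate-store pattern can be sketched as a simple pagination loop: request a page of audit events, persist them, and follow the "next" link until none remains. The endpoint URL and paging contract below are hypothetical stand-ins for the authenticated Reporting API calls.

```python
def pull_all_events(fetch_page):
    """Drain a paged reporting API into a local list.
    `fetch_page(url)` returns (events, next_url_or_None); this paging
    contract mirrors REST APIs that return an OData-style next link."""
    events, url = [], "https://example.invalid/reports/auditEvents"  # hypothetical
    while url:
        page, url = fetch_page(url)
        events.extend(page)
    return events

# Stub standing in for authenticated HTTP calls:
pages = {
    "https://example.invalid/reports/auditEvents": ([{"id": 1}, {"id": 2}], "next-1"),
    "next-1": ([{"id": 3}], None),
}
archive = pull_all_events(lambda u: pages[u])  # three events, ready to persist
```

Run on a schedule, a loop like this keeps the separate data store current while Azure AD's own 180-day retention window rolls forward.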
Virtual Machine logs using Azure Diagnostics
Azure Diagnostics is the capability within Azure that enables the collection of diagnostic data on a deployed
application. You can use the diagnostics extension with several different sources; currently supported sources include Azure Cloud Service web and worker roles.
NOTE
For more information on billing and data retention policies, see Storage Analytics and Billing.
NOTE
For more information on storage account limits, see Azure Storage Scalability and Performance Targets.
The following authenticated and anonymous requests are logged:

AUTHENTICATED
Failed requests, including timeout, throttling, network, authorization, and other errors
Requests using a Shared Access Signature (SAS), including failed and successful requests
Requests to analytics data
Requests made by Storage Analytics itself, such as log creation or deletion, are not logged.

ANONYMOUS
Requests using a Shared Access Signature (SAS), including failed and successful requests
Time out errors for both client and server
Failed GET requests with error code 304 (Not Modified)
All other failed anonymous requests are not logged.

A full list of the logged data is documented in the Storage Analytics Logged Operations and Status Messages and Storage Analytics Log Format topics.
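Storage Analytics log entries are semicolon-delimited text lines, so they can be inspected without special tooling. The sketch below names only the leading fields; consult the Storage Analytics Log Format reference for the full field order, and note that quoted fields containing embedded semicolons are not handled here.

```python
def parse_storage_log_line(line):
    """Split one Storage Analytics log entry (semicolon-delimited).
    Only the first few fields are named; the full order is defined in
    the Storage Analytics Log Format reference."""
    fields = line.rstrip("\n").split(";")
    return {
        "version": fields[0],
        "request_start_time": fields[1],
        "operation_type": fields[2],
        "request_status": fields[3],
        "rest": fields[4:],  # remaining fields, left unparsed in this sketch
    }

sample = "1.0;2017-06-27T18:41:22.0Z;GetBlob;Success;200"
entry = parse_storage_log_line(sample)
```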
INTEGRATION SCENARIOS | DESCRIPTION
Application map | The components of your app, with key metrics and alerts.
Diagnostic search for instance data | Search and filter events such as requests, exceptions, dependency calls, log traces, and page views.
Metrics Explorer for aggregated data | Explore, filter, and segment aggregated data such as rates of requests, failures, and exceptions; response times; page load times.
Dashboards | Mash up data from multiple resources and share with others. Great for multi-component applications, and for continuous display in the team room.
Live Metrics Stream | When you deploy a new build, watch these near-real-time performance indicators to make sure everything works as expected.
Automatic and manual alerts | Automatic alerts adapt to your app's normal patterns of telemetry and trigger when there's something outside the usual pattern. You can also set alerts on particular levels of custom or standard metrics.
Visual Studio | See performance data in the code. Go to code from stack traces.
REST API | Write code to run queries over your metrics and raw data.
Log Analytics
Log Analytics is a service in Operations Management Suite (OMS) that helps you collect and analyze data generated
by resources in your cloud and on-premises environments. It gives you real-time insights using integrated search
and custom dashboards to readily analyze millions of records across all your workloads and servers regardless of
their physical location.
At the center of Log Analytics is the OMS repository, which is hosted in the Azure cloud. Data is collected into the
repository from connected sources by configuring data sources and adding solutions to your subscription. Data
sources and solutions will each create different record types that have their own set of properties but may still be
analyzed together in queries to the repository. This allows you to use the same tools and methods to work with
different kinds of data collected by different sources.
Connected sources are the computers and other resources that generate data collected by Log Analytics. This can
include agents installed on Windows and Linux computers that connect directly or agents in a connected System
Center Operations Manager management group. Log Analytics can also collect data from Azure storage.
Data sources are the different kinds of data collected from each connected source. This includes events and
performance data from Windows and Linux agents in addition to sources such as IIS logs, and custom text logs. You
configure each data source that you want to collect, and the configuration is automatically delivered to each
connected source.
There are four different ways of collecting logs and metrics for Azure services:
1. Azure diagnostics direct to Log Analytics (Diagnostics in the following table)
2. Azure diagnostics to Azure storage to Log Analytics (Storage in the following table)
3. Connectors for Azure services (Connectors in the following table)
4. Scripts to collect and then post data into Log Analytics (blanks in the following table and for services that are
not listed)
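Option 4 above (posting data yourself) typically uses the Log Analytics HTTP Data Collector API, which authenticates with an HMAC-SHA256 "SharedKey" header. The string-to-sign below follows the documented scheme but should be verified against the current reference; the workspace ID and key are placeholders.

```python
import base64
import hashlib
import hmac

def build_signature(workspace_id, shared_key_b64, date_rfc1123, body_len):
    """Compute the SharedKey Authorization header value used by the
    Log Analytics HTTP Data Collector API (verify the string-to-sign
    against the current documentation)."""
    string_to_sign = (f"POST\n{body_len}\napplication/json\n"
                      f"x-ms-date:{date_rfc1123}\n/api/logs")
    key = base64.b64decode(shared_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"

auth = build_signature("my-workspace",
                       base64.b64encode(b"placeholder-key").decode(),
                       "Mon, 26 Jun 2017 00:00:00 GMT", 42)
# POST the JSON body to
#   https://<workspace-id>.ods.opinsights.azure.com/api/logs?api-version=2016-04-01
# with headers: Authorization=auth, Log-Type=<custom record type>, x-ms-date=<date>.
```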
Resource types collected directly through Azure diagnostics (the Diagnostics method above) include:
Microsoft.Logic/integrationAccounts
Microsoft.Sql/servers/elasticPools
Microsoft.Compute/virtualMachineScaleSets/virtualMachines
Microsoft.Web/sites/slots
Azure log integration collects Azure Diagnostics from your Windows (WAD) virtual machines, Azure Activity Logs, Azure Security Center alerts, and Azure Resource Provider logs. This integration provides a unified dashboard for all
your assets, on-premises or in the cloud, so that you can aggregate, correlate, analyze, and alert for security events.
Azure log integration currently supports integration of Azure Activity Logs, Windows Event log from Windows
virtual machine in your Azure subscription, Azure Security Center alerts, Azure Diagnostic logs and Azure Active
Directory audit logs.
The following table explains the log category and SIEM integration detail.

LOG TYPE | LOG ANALYTICS | SUPPORTING JSON (SPLUNK, ARCSIGHT, QRADAR)
Get started with Azure log integration - Tutorial walks you through installation of Azure log integration and
integrating logs from Azure WAD storage, Azure Activity Logs, Azure Security Center alerts, and Azure Active
Directory audit logs.
Integration Scenarios
Partner configuration steps – This blog post shows you how to configure Azure log integration to work with
partner solutions Splunk, HP ArcSight, and IBM QRadar.
Azure log Integration frequently asked questions (FAQ) - This FAQ answers questions about Azure log
integration.
Integrating Security Center alerts with Azure log Integration – This document shows you how to sync
Security Center alerts, along with virtual machine security events collected by Azure Diagnostics and Azure
Audit Logs, with your log analytics or SIEM solution.
Next Steps
Auditing and logging
Protect data by maintaining visibility and responding quickly to timely security alerts
Security Logging and Audit Log Collection within Azure
Learn which settings you need to enforce to make sure your Azure instances are collecting the correct security and audit logs.
Configure audit settings for a site collection
As a site collection administrator, you can retrieve the history of actions taken by a particular user, as well as the history of actions taken during a particular date range.
Search the audit log in the Office 365 Security & Compliance Center
You can use the Office 365 Security & Compliance Center to search the unified audit log and view user and administrator activity in your Office 365 organization.
Isolation in the Azure Public Cloud
8/21/2017
Introduction
Overview
To assist current and prospective Azure customers in understanding and using the various security-related capabilities available in and surrounding the Azure platform, Microsoft has developed a series of white papers, security overviews, best practices, and checklists. The topics range in terms of breadth and depth and are updated
periodically. This document is part of that series as summarized in the Abstract section following.
Azure Platform
Azure is an open and flexible cloud service platform that supports the broadest selection of operating systems,
programming languages, frameworks, tools, databases, and devices. For example, you can:
Run Linux containers with Docker integration;
Build apps with JavaScript, Python, .NET, PHP, Java, and Node.js; and
Build back-ends for iOS, Android, and Windows devices.
Microsoft Azure supports the same technologies millions of developers and IT professionals already rely on and
trust.
When you build on, or migrate IT assets to, a public cloud service provider, you are relying on that organization’s
abilities to protect your applications and data with the services and the controls they provide to manage the
security of your cloud-based assets.
Azure’s infrastructure is designed from the facility to applications for hosting millions of customers simultaneously,
and it provides a trustworthy foundation upon which businesses can meet their security needs. In addition, Azure
provides you with a wide array of configurable security options and the ability to control them so that you can
customize security to meet the unique requirements of your deployments. This document helps you meet these
requirements.
Abstract
Microsoft Azure allows you to run applications and virtual machines (VMs) on shared physical infrastructure. One
of the prime economic motivations to running applications in a cloud environment is the ability to distribute the
cost of shared resources among multiple customers. This practice of multi-tenancy improves efficiency by
multiplexing resources among disparate customers at low cost. Unfortunately, it also introduces the risk of your sensitive applications and VMs sharing physical servers and other infrastructure resources with those of an arbitrary and potentially malicious user.
This article outlines how Microsoft Azure provides isolation against both malicious and non-malicious users and
serves as a guide for architecting cloud solutions by offering various isolation choices to architects. This white
paper focuses on the technology of Azure platform and customer-facing security controls, and does not attempt to
address SLAs, pricing models, and DevOps practice considerations.
Azure Active Directory hosts each tenant in its own protected container, with policies and permissions to and within
the container solely owned and managed by the tenant.
The concept of tenant containers is deeply ingrained in the directory service at all layers, from portals all the way to
persistent storage.
Even when metadata from multiple Azure Active Directory tenants is stored on the same physical disk, there is no
relationship between the containers other than what is defined by the directory service, which in turn is dictated by
the tenant administrator.
Azure Role-Based Access Control (RBAC)
Azure Role-Based Access Control (RBAC) helps you to share various components available within an Azure
subscription by providing fine-grained access management for Azure. Azure RBAC enables you to segregate duties
within your organization and grant access based on what users need to perform their jobs. Instead of giving
everybody unrestricted permissions in Azure subscription or resources, you can allow only certain actions.
Azure RBAC has three basic roles that apply to all resource types:
Owner has full access to all resources including the right to delegate access to others.
Contributor can create and manage all types of Azure resources but can’t grant access to others.
Reader can view existing Azure resources.
The rest of the RBAC roles in Azure allow management of specific Azure resources. For example, the Virtual
Machine Contributor role allows the user to create and manage virtual machines. It does not give them access to
the Azure Virtual Network or the subnet that the virtual machine connects to.
RBAC built-in roles list the roles available in Azure. It specifies the operations and scope that each built-in role
grants to users. If you're looking to define your own roles for even more control, see how to build Custom roles in
Azure RBAC.
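The role-based check described above can be sketched as a toy permission model. The patterns below are simplified illustrations, not the actual built-in role definitions; real Azure RBAC evaluates Actions/NotActions pattern lists against a scope hierarchy, which this sketch omits:

```python
from fnmatch import fnmatchcase

# Simplified, hypothetical model of the three basic roles. Real built-in
# roles are defined by Actions/NotActions lists evaluated against a scope.
ROLES = {
    "Owner":       {"actions": ["*"],      "not_actions": []},
    "Contributor": {"actions": ["*"],      "not_actions": ["Microsoft.Authorization/*"]},
    "Reader":      {"actions": ["*/read"], "not_actions": []},
}

def is_allowed(role: str, operation: str) -> bool:
    """An operation is permitted if some action pattern matches it
    and no not-action pattern excludes it."""
    defn = ROLES[role]
    matched = any(fnmatchcase(operation, p) for p in defn["actions"])
    excluded = any(fnmatchcase(operation, p) for p in defn["not_actions"])
    return matched and not excluded
```

Excluding `Microsoft.Authorization/*` is what captures "Contributor can create and manage resources but can't grant access to others" in this toy model.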
Some other capabilities for Azure Active Directory include:
Azure AD enables SSO to SaaS applications, regardless of where they are hosted. Some applications are
federated with Azure AD, and others use password SSO. Federated applications can also support user
provisioning and password vaulting.
Access to data in Azure Storage is controlled via authentication. Each storage account has secret keys (storage account keys, or SAKs), and scoped, time-limited access can be delegated with a shared access signature (SAS).
Azure AD provides Identity as a Service through federation by using Active Directory Federation Services,
synchronization, and replication with on-premises directories.
Azure Multi-Factor Authentication is the multi-factor authentication service that requires users to verify sign-
ins by using a mobile app, phone call, or text message. It can be used with Azure AD to help secure on-
premises resources with the Azure Multi-Factor Authentication server, and also with custom applications and
directories using the SDK.
Azure AD Domain Services lets you join Azure virtual machines to an Active Directory domain without
deploying domain controllers. You can sign in to these virtual machines with your corporate Active Directory
credentials and administer domain-joined virtual machines by using Group Policy to enforce security
baselines on all your Azure virtual machines.
Azure Active Directory B2C provides a highly available global-identity management service for consumer-
facing applications that scales to hundreds of millions of identities. It can be integrated across mobile and
web platforms. Your consumers can sign in to all your applications through customizable experiences by
using their existing social accounts or by creating credentials.
Isolation from Microsoft Administrators & Data Deletion
Microsoft takes strong measures to protect your data from inappropriate access or use by unauthorized persons.
These operational processes and controls are backed by the Online Services Terms, which offer contractual
commitments that govern access to your data.
Microsoft engineers do not have default access to your data in the cloud. Instead, they are granted access,
under management oversight, only when necessary. That access is carefully controlled and logged, and
revoked when it is no longer needed.
Microsoft may hire other companies to provide limited services on its behalf. Subcontractors may access customer data only to deliver the services we have hired them to provide, and they are prohibited from using it for any other purpose. Further, they are contractually bound to maintain the confidentiality of our customers’ information.
Business services with audited certifications such as ISO/IEC 27001 are regularly verified by Microsoft and accredited audit firms, which perform sample audits to attest that access occurs only for legitimate business purposes.
You can always access your own customer data at any time and for any reason.
If you delete any data, Microsoft Azure deletes the data, including any cached or backup copies. For in-scope
services, that deletion will occur within 90 days after the end of the retention period. (In-scope services are defined
in the Data Processing Terms section of our Online Services Terms.)
If a disk drive used for storage suffers a hardware failure, it is securely erased or destroyed before Microsoft returns
it to the manufacturer for replacement or repair. The data on the drive is overwritten to ensure that the data cannot
be recovered by any means.
Compute Isolation
Microsoft Azure provides various cloud-based computing services, including a wide selection of compute instances and services that can scale up and down automatically to meet the needs of your application or enterprise. These compute instances and services offer isolation at multiple levels to secure data without sacrificing the flexibility in configuration that customers demand.
Hyper-V & Root OS Isolation Between Root VM & Guest VMs
Azure’s compute platform is based on machine virtualization—meaning that all customer code executes in a Hyper-
V virtual machine. On each Azure node (or network endpoint), there is a Hypervisor that runs directly over the
hardware and divides a node into a variable number of Guest Virtual Machines (VMs).
Each node also has one special Root VM, which runs the Host OS. A critical boundary is the isolation of the root VM
from the guest VMs and the guest VMs from one another, managed by the hypervisor and the root OS. The
hypervisor/root OS pairing leverages Microsoft's decades of operating system security experience, and more recent
learning from Microsoft's Hyper-V, to provide strong isolation of guest VMs.
The Azure platform uses a virtualized environment. User instances operate as standalone virtual machines that do
not have access to a physical host server, and this isolation is enforced by using physical processor (ring-0/ring-3)
privilege levels.
Ring 0 is the most privileged and 3 is the least. The guest OS runs in a lesser-privileged Ring 1, and applications run
in the least privileged Ring 3. This virtualization of physical resources leads to a clear separation between guest OS
and hypervisor, resulting in additional security separation between the two.
The Azure hypervisor acts like a micro-kernel and passes all hardware access requests from guest virtual machines
to the host for processing by using a shared-memory interface called VMBus. This prevents users from obtaining
raw read/write/execute access to the system and mitigates the risk of sharing system resources.
Advanced VM placement algorithm & protection from side channel attacks
Any cross-VM attack involves two steps: placing an adversary-controlled VM on the same host as one of the victim
VMs, and then breaching the isolation boundary to either steal sensitive victim information or affect its
performance for greed or vandalism. Microsoft Azure provides protection at both steps by using an advanced VM
placement algorithm and protection from all known side channel attacks including noisy neighbor VMs.
The Azure Fabric Controller
The Azure Fabric Controller is responsible for allocating infrastructure resources to tenant workloads, and it
manages unidirectional communications from the host to virtual machines. The VM placement algorithm of the Azure fabric controller is highly sophisticated and nearly impossible to predict at the physical host level.
The Azure hypervisor enforces memory and process separation between virtual machines, and it securely routes
network traffic to guest OS tenants. This mitigates the possibility of a side-channel attack at the VM level.
In Azure, the root VM is special: it runs a hardened operating system called the root OS that hosts a fabric agent
(FA). FAs are used in turn to manage guest agents (GA) within guest OSes on customer VMs. FAs also manage
storage nodes.
The collection of Azure hypervisor, root OS/FA, and customer VMs/GAs comprises a compute node. FAs are
managed by a fabric controller (FC), which exists outside of compute and storage nodes (compute and storage
clusters are managed by separate FCs). If a customer updates their application’s configuration file while it’s
running, the FC communicates with the FA, which then contacts GAs, which notify the application of the
configuration change. In the event of a hardware failure, the FC will automatically find available hardware and
restart the VM there.
Communication from a Fabric Controller to an agent is unidirectional. The agent implements an SSL-protected
service that only responds to requests from the controller. It cannot initiate connections to the controller or other
privileged internal nodes. The FC treats all responses as if they were untrusted.
Isolation extends from the root VM to guest VMs, and from guest VMs to one another. Compute nodes are
also isolated from storage nodes for increased protection.
The hypervisor and the host OS provide network packet filters to help ensure that untrusted virtual machines cannot generate spoofed traffic, receive traffic not addressed to them, direct traffic to protected infrastructure endpoints, or send/receive inappropriate broadcast traffic.
Additional Rules Configured by Fabric Controller Agent to Isolate VM
By default, all traffic is blocked when a virtual machine is created, and then the fabric controller agent configures
the packet filter to add rules and exceptions to allow authorized traffic.
There are two categories of rules that are programmed:
Machine configuration or infrastructure rules: By default, all communication is blocked. There are
exceptions to allow a virtual machine to send and receive DHCP and DNS traffic. Virtual machines can also
send traffic to the “public” internet and send traffic to other virtual machines within the same Azure Virtual
Network and the OS activation server. The virtual machines’ list of allowed outgoing destinations does not
include Azure router subnets, Azure management, and other Microsoft properties.
Role configuration file: This defines the inbound Access Control Lists (ACLs) based on the tenant's service
model.
VLAN Isolation
There are three VLANs in each cluster:
The main VLAN, which interconnects untrusted customer nodes
The FC VLAN, which contains trusted FCs and supporting systems
The device VLAN, which contains trusted network and other infrastructure devices
Storage Isolation
Logical Isolation Between Compute and Storage
As part of its fundamental design, Microsoft Azure separates VM-based computation from storage. This separation
enables computation and storage to scale independently, making it easier to provide multi-tenancy and isolation.
Therefore, Azure Storage runs on separate hardware with no network connectivity to Azure Compute except logically. In addition, when a virtual disk is created, disk space is not allocated for its entire capacity. Instead, a table is created that maps addresses on the virtual disk to areas on the physical disk; that table is initially empty. The first time a customer writes data on the virtual disk, space on the physical disk is allocated, and a pointer to it is placed in the table.
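This lazy-allocation scheme can be sketched as follows. This is a hypothetical in-memory model to illustrate the mapping-table idea, not Azure Storage's actual on-disk format:

```python
class SparseVirtualDisk:
    """Illustrative model: physical space is allocated only on first write,
    via a table mapping virtual block index -> physical block index."""

    def __init__(self, block_size: int = 4096):
        self.block_size = block_size
        self.table = {}       # virtual block index -> physical block index
        self.physical = []    # backing "physical disk" blocks

    def write(self, vblock: int, data: bytes) -> None:
        if vblock not in self.table:
            # First write to this virtual block: allocate physical space
            # and record the pointer in the mapping table.
            self.table[vblock] = len(self.physical)
            self.physical.append(bytearray(self.block_size))
        self.physical[self.table[vblock]][: len(data)] = data

    def read(self, vblock: int) -> bytes:
        if vblock not in self.table:
            return bytes(self.block_size)  # never written: reads as zeros
        return bytes(self.physical[self.table[vblock]])

    def allocated_bytes(self) -> int:
        return len(self.physical) * self.block_size
```

A freshly created disk consumes no physical space; only the blocks actually written are backed by allocations.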
Isolation Using Storage Access Control
Azure Storage has a simple access control model. Each Azure subscription can create one or more storage accounts. Each storage account has secret keys that are used to control access to all data in that storage account.
Access to Azure Storage data (including Tables) can be controlled through a SAS (Shared Access Signature)
token, which grants scoped access. The SAS is created through a query template (URL), signed with the SAK
(Storage Account Key). That signed URL can be given to another process (that is, delegated), which can then fill in
the details of the query and make the request of the storage service. A SAS enables you to grant time-based access
to clients without revealing the storage account’s secret key.
A SAS lets you grant a client limited permissions to objects in your storage account, for a specified period of time and with a specified set of permissions, without having to share your account access keys.
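The sign-then-delegate flow can be sketched with standard HMAC primitives. The string-to-sign and query parameters below are simplified stand-ins; the real SAS format (fields, versioning, canonicalization) is defined by the Azure Storage service:

```python
import base64
import hashlib
import hmac

def make_sas(account_key_b64: str, resource: str, permissions: str, expiry: int) -> str:
    """Sign a resource URL with the account key (simplified string-to-sign)."""
    string_to_sign = f"{resource}\n{permissions}\n{expiry}"
    key = base64.b64decode(account_key_b64)
    sig = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    return f"{resource}?sp={permissions}&se={expiry}&sig={sig}"

def verify_sas(account_key_b64: str, token: str, now: int) -> bool:
    """The service recomputes the signature and checks the expiry; the
    delegated client never needs the account key itself."""
    resource, _, query = token.partition("?")
    params = dict(p.split("=", 1) for p in query.split("&"))
    if now > int(params["se"]):
        return False  # time-based access has lapsed
    expected = make_sas(account_key_b64, resource, params["sp"], int(params["se"]))
    return hmac.compare_digest(token, expected)
```

Because the signature covers the resource, permissions, and expiry, a holder of the token cannot widen its permissions or lifetime without invalidating it.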
IP Level Storage Isolation
You can establish firewalls and define an IP address range for your trusted clients. With an IP address range, only
clients that have an IP address within the defined range can connect to Azure Storage.
IP storage data can also be protected from unauthorized users via a networking mechanism that allocates a dedicated network or a dedicated tunnel of traffic to IP storage.
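The firewall-rule semantics above amount to a membership test on the client's address. The ranges below are illustrative documentation prefixes, not real rules:

```python
import ipaddress

# Hypothetical trusted-client ranges (TEST-NET documentation prefixes).
ALLOWED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/26"),
]

def client_allowed(client_ip: str) -> bool:
    """Only clients whose IP falls within a defined range may connect."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in ALLOWED_RANGES)
```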
Encryption
Azure offers the following types of encryption to protect data:
Encryption in transit
Encryption at rest
Encryption in Transit
Encryption in transit is a mechanism of protecting data when it is transmitted across networks. With Azure Storage,
you can secure data using:
Transport-level encryption, such as HTTPS when you transfer data into or out of Azure Storage.
Wire encryption, such as SMB 3.0 encryption for Azure File shares.
Client-side encryption, to encrypt the data before it is transferred into storage and to decrypt the data after it
is transferred out of storage.
Encryption at Rest
For many organizations, data encryption at rest is a mandatory step towards data privacy, compliance, and data
sovereignty. There are three Azure features that provide encryption of data that is “at rest”:
Storage Service Encryption allows you to request that the storage service automatically encrypt data when
writing it to Azure Storage.
Client-side Encryption also provides the feature of encryption at rest.
Azure Disk Encryption allows you to encrypt the OS disks and data disks used by an IaaS virtual machine.
Azure Disk Encryption
Azure Disk Encryption for virtual machines (VMs) helps you address organizational security and compliance
requirements by encrypting your VM disks (including boot and data disks) with keys and policies you control in
Azure Key Vault.
The Disk Encryption solution for Windows is based on Microsoft BitLocker Drive Encryption, and the Linux solution
is based on dm-crypt.
The solution supports the following scenarios for IaaS VMs when they are enabled in Microsoft Azure:
Integration with Azure Key Vault
Standard tier VMs: A, D, DS, G, GS, and so forth, series IaaS VMs
Enabling encryption on Windows and Linux IaaS VMs
Disabling encryption on OS and data drives for Windows IaaS VMs
Disabling encryption on data drives for Linux IaaS VMs
Enabling encryption on IaaS VMs that are running Windows client OS
Enabling encryption on volumes with mount paths
Enabling encryption on Linux VMs that are configured with disk striping (RAID) by using mdadm
Enabling encryption on Linux VMs by using LVM (Logical Volume Manager) for data disks
Enabling encryption on Windows VMs that are configured by using storage spaces
All Azure public regions are supported
The solution does not support the following scenarios, features, and technology in the release:
Basic tier IaaS VMs
Disabling encryption on an OS drive for Linux IaaS VMs
IaaS VMs that are created by using the classic VM creation method
Integration with your on-premises Key Management Service
Azure Files (shared file system), Network File System (NFS), dynamic volumes, and Windows VMs that are
configured with software-based RAID systems
The account and subscription are Microsoft Azure platform concepts to associate billing and management.
Logical servers and databases are SQL Azure-specific concepts and are managed by using SQL Azure-provided OData and TSQL interfaces, or via the SQL Azure portal, which is integrated into the Azure portal.
SQL Azure servers are not physical or VM instances; instead, they are collections of databases sharing management and security policies, which are stored in the so-called “logical master” database.
The tier behind the gateways is called “back-end”. This is where all the data is stored in a highly available fashion.
Each piece of data is said to belong to a “partition” or “failover unit”, each of them having at least three replicas.
Replicas are stored and replicated by SQL Server engine and managed by a failover system often referred to as
“fabric”.
Generally, the back-end system does not communicate outbound to other systems as a security precaution. This is
reserved to the systems in the front-end (gateway) tier. The gateway tier machines have limited privileges on the
back-end machines to minimize the attack surface as a defense-in-depth mechanism.
Isolation by Machine Function and Access
SQL Azure is composed of services running on different machine functions. SQL Azure is divided into “back-end” Cloud Database and “front-end” (Gateway/Management) environments, with the general principle of traffic only going into the back-end and not out. The front-end environment can communicate with the outside world and other services and, in general, has only limited permissions in the back-end (enough to call the entry points it needs to invoke).
Networking Isolation
An Azure deployment has multiple layers of network isolation. The following diagram shows the various layers of network isolation Azure provides to customers. These layers include both features native to the Azure platform itself and customer-defined features. Inbound from the Internet, Azure DDoS protection provides isolation against large-scale attacks against
Azure. The next layer of isolation is customer-defined public IP addresses (endpoints), which are used to determine
which traffic can pass through the cloud service to the virtual network. Native Azure virtual network isolation
ensures complete isolation from all other networks, and that traffic only flows through user configured paths and
methods. These paths and methods are the next layer, where NSGs, UDR, and network virtual appliances can be
used to create isolation boundaries to protect the application deployments in the protected network.
Traffic isolation: A virtual network is the traffic isolation boundary on the Azure platform. Virtual machines (VMs)
in one virtual network cannot communicate directly to VMs in a different virtual network, even if both virtual
networks are created by the same customer. Isolation is a critical property that ensures customer VMs and
communication remains private within a virtual network.
A subnet offers an additional layer of isolation within a virtual network based on IP range. Using IP address ranges, you can divide a virtual network into multiple subnets for organization and security. VMs and PaaS role instances deployed to subnets (same or different) within a VNet can communicate with each other without any extra configuration. You can also configure network security groups (NSGs) to allow or deny network traffic to a VM instance based on rules configured in the access control list (ACL) of the NSG. NSGs can be associated with either subnets or individual VM instances within a subnet. When an NSG is associated with a subnet, the ACL rules apply to all the VM instances in that subnet.
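NSG ACL processing can be sketched as priority-ordered, first-match rule evaluation. This is a simplified model: real NSG rules also match protocol, source port, and destination prefixes, and Azure supplies default rules rather than the bare implicit deny used here:

```python
from dataclasses import dataclass
import ipaddress

@dataclass
class NsgRule:
    priority: int   # lower number = evaluated first
    action: str     # "Allow" or "Deny"
    dest_port: int  # simplified: a single destination port
    source: str     # source CIDR prefix, e.g. "0.0.0.0/0"

def evaluate(rules, src_ip: str, port: int) -> str:
    """First matching rule by priority wins; deny if nothing matches."""
    src = ipaddress.ip_address(src_ip)
    for rule in sorted(rules, key=lambda r: r.priority):
        if port == rule.dest_port and src in ipaddress.ip_network(rule.source):
            return rule.action
    return "Deny"
```

For example, an allow rule at priority 150 scoped to an internal prefix takes precedence over a broader deny rule at priority 200 for the same port.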
Next Steps
Network Isolation Options for Machines in Windows Azure Virtual Networks
This includes the classic front-end and back-end scenario where machines in a particular back-end network or
subnetwork may only allow certain clients or other computers to connect to a particular endpoint based on a
whitelist of IP addresses.
Compute Isolation
Microsoft Azure provides various cloud-based computing services that include a wide selection of compute
instances & services that can scale up and down automatically to meet the needs of your application or enterprise.
Storage Isolation
Microsoft Azure separates customer VM-based computation from storage. This separation enables computation
and storage to scale independently, making it easier to provide multi-tenancy and isolation. Therefore, Azure
Storage runs on separate hardware with no network connectivity to Azure Compute except logically. All requests
run over HTTP or HTTPS based on customer’s choice.
Azure security technical capabilities
9/7/2017 • 31 min to read
To help current and prospective Azure customers understand and utilize the various security-related capabilities available in and surrounding the Azure platform, Microsoft has developed a series of white papers, security overviews, best practices, and checklists. The topics range in terms of breadth and depth and are updated
periodically. This document is part of that series as summarized in the Abstract section below. Further information
on this Azure Security series can be found at (URL).
Azure platform
Microsoft Azure is a cloud platform comprised of infrastructure and application services, with integrated data
services and advanced analytics, and developer tools and services, hosted within Microsoft’s public cloud data
centers. Customers use Azure for many different capacities and scenarios, from basic compute, networking, and
storage, to mobile and web app services, to full cloud scenarios like Internet of Things, and can be used with open
source technologies, and deployed as hybrid cloud or hosted within a customer’s datacenter. Azure provides cloud
technology as building blocks to help companies save costs, innovate quickly, and manage systems proactively.
When you build on, or migrate IT assets to a cloud provider, you are relying on that organization’s abilities to
protect your applications and data with the services and the controls they provide to manage the security of your
cloud-based assets.
Microsoft Azure is the only cloud computing provider that offers a secure, consistent application platform and infrastructure-as-a-service for teams to work within their different cloud skill sets and levels of project complexity. It offers integrated data services and analytics that uncover intelligence from data wherever it exists, across both Microsoft and non-Microsoft platforms, along with open frameworks and tools, providing choice for integrating cloud with on-premises systems as well as deploying Azure cloud services within on-premises datacenters. As part of the Microsoft Trusted Cloud, customers rely on Azure for industry-leading security, reliability, compliance, privacy, and the vast network of people, partners, and processes to support organizations in the cloud.
With Microsoft Azure, you can:
Accelerate innovation with the cloud.
Power business decisions & apps with insights.
Build freely and deploy anywhere.
Protect your business.
Scope
The focal point of this whitepaper concerns security features and functionality supporting Microsoft Azure’s core
components, namely Microsoft Azure Storage, Microsoft Azure SQL Databases, Microsoft Azure’s virtual machine
model, and the tools and infrastructure that manage it all. This white paper focuses on the Microsoft Azure technical capabilities available to you as a customer to fulfill your role in protecting the security and privacy of your data.
The importance of understanding this shared responsibility model is essential for customers who are moving to the
cloud. Cloud providers offer considerable advantages for security and compliance efforts, but these advantages do
not absolve the customer from protecting their users, applications, and service offerings.
For IaaS solutions, the customer is responsible or has a shared responsibility for securing and managing the operating system, network configuration, applications, identity, clients, and data. PaaS solutions build on IaaS deployments; the customer is still responsible or has a shared responsibility for securing and managing applications, identity, clients, and data. For SaaS solutions, responsibility shifts further to the provider; nonetheless, the customer continues to be accountable. They must ensure that data is classified correctly, and they share a responsibility to manage their users and end-point devices.
This document does not provide detailed coverage of any of the related Microsoft Azure platform components such
as Azure Web Sites, Azure Active Directory, HDInsight, Media Services, and other services that are layered atop the
core components. Although a minimum level of general information is provided, readers are assumed familiar with
Azure basic concepts as described in other references provided by Microsoft and included in links provided in this
white paper.
Subscriptions also have an association with a directory. The directory defines a set of users. These can be users
from the work or school that created the directory, or they can be external users (that is, Microsoft Accounts).
Subscriptions are accessible by a subset of those directory users who have been assigned as either Service
Administrator (SA) or Co-Administrator (CA); the only exception is that, for legacy reasons, Microsoft Accounts
(formerly Windows Live ID) can be assigned as SA or CA without being present in the directory.
Security-oriented companies should focus on giving employees the exact permissions they need. Too many
permissions can expose an account to attackers. Too few permissions mean that employees can't get their work
done efficiently. Azure Role-Based Access Control (RBAC) helps address this problem by offering fine-grained
access management for Azure.
Using RBAC, you can segregate duties within your team and grant only the amount of access to users that they
need to perform their jobs. Instead of giving everybody unrestricted permissions in your Azure subscription or
resources, you can allow only certain actions. For example, use RBAC to let one employee manage virtual machines
in a subscription, while another can manage SQL databases within the same subscription.
ENCRYPTION MODELS
Server-side encryption, service-managed keys: Azure Resource Providers perform the encryption and decryption operations; Microsoft manages the keys; full cloud functionality.
Server-side encryption, customer-managed keys in Azure Key Vault: Azure Resource Providers perform the encryption and decryption operations; the customer controls the keys via Azure Key Vault; full cloud functionality.
Server-side encryption, customer-managed keys on-premises: Azure Resource Providers perform the encryption and decryption operations; the customer controls the keys on-premises; full cloud functionality.
Client-side encryption: Azure services cannot see decrypted data; customers keep keys on-premises (or in other secure stores), and keys are not available to Azure services; reduced cloud functionality.
NOTE
Not just "application data" or "PII" but any data relating to the application, including account metadata (subscription mappings, contract info, PII).
Consider what stores you are using to store data. For example:
External storage (for example, SQL Azure, Document DB, HDInsights, Data Lake, etc.)
Temporary storage (any local cache that includes tenant data)
In-memory cache (could be put into the page file.)
Leverage the existing encryption at rest support in Azure
For each store you use, leverage the existing Encryption at Rest support.
Azure Storage: See Azure Storage Service Encryption for Data at Rest.
SQL Azure: See Transparent Data Encryption (TDE), SQL Always Encrypted
VM & Local disk storage (Azure Disk Encryption)
For VM and Local disk storage use Azure Disk Encryption where supported:
IaaS
Services with IaaS VMs (Windows or Linux) should use Azure Disk Encryption to encrypt volumes containing
customer data.
PaaS v2
Services running on PaaS v2 using Service Fabric can use Azure disk encryption for Virtual Machine Scale Set
[VMSS] to encrypt their PaaS v2 VMs.
PaaS v1
Azure Disk Encryption currently is not supported on PaaS v1. Therefore, you must use application level encryption
to encrypt persisted data at rest. This includes, but is not limited to, application data, temporary files, logs, and crash
dumps.
Most services should attempt to leverage the encryption of a storage resource provider. Some services have to do
explicit encryption, for example, any persisted key material (Certificates, root / master keys) must be stored in Key
Vault.
If you support service-side encryption with customer-managed keys, there needs to be a way for the customer to provide the key to the service. The supported and recommended way to do that is by integrating with Azure Key Vault (AKV). Customers can add and manage their keys in Azure Key Vault. A customer can learn how to use AKV via Getting Started with Key Vault.
To integrate with Azure Key Vault, you'd add code to request a key from AKV when needed for decryption.
See Azure Key Vault – Step by Step for info on how to integrate with AKV.
If you support customer managed keys, you need to provide a UX for the customer to specify which Key Vault (or
Key Vault URI) to use.
As Encryption at Rest involves the encryption of host, infrastructure and tenant data, the loss of the keys due to
system failure or malicious activity could mean all the encrypted data is lost. It is therefore critical that your
Encryption at Rest solution has a comprehensive disaster recovery story resilient to system failures and malicious
activity.
Services that implement Encryption at Rest are usually still susceptible to the encryption keys or data being left
unencrypted on the host drive (for example, in the page file of the host OS.) Therefore, services must ensure the
host volume for their services is encrypted. To facilitate this, the Compute team has enabled the deployment of Host Encryption, which uses BitLocker NKP and extensions to the DCM service and agent to encrypt the host volume.
Most services are implemented on standard Azure VMs. Such services should get Host Encryption automatically
when Compute enables it. For services running in Compute-managed clusters, host encryption is enabled automatically as Windows Server 2016 is rolled out.
Encryption in-transit
Protecting data in transit should be an essential part of your data protection strategy. Since data is moving back and
forth from many locations, the general recommendation is that you always use SSL/TLS protocols to exchange data
across different locations. In some circumstances, you may want to isolate the entire communication channel
between your on-premises and cloud infrastructure by using a virtual private network (VPN).
For data moving between your on-premises infrastructure and Azure, you should consider appropriate safeguards
such as HTTPS or VPN.
For organizations that need to secure access from multiple workstations located on-premises to Azure, use Azure
site-to-site VPN.
For organizations that need to secure access from one workstation located on-premises to Azure, use Point-to-Site
VPN.
Larger data sets can be moved over a dedicated high-speed WAN link such as ExpressRoute. If you choose to use
ExpressRoute, you can also encrypt the data at the application-level using SSL/TLS or other protocols for added
protection.
If you are interacting with Azure Storage through the Azure Portal, all transactions occur via HTTPS. Storage REST
API over HTTPS can also be used to interact with Azure Storage and Azure SQL Database.
Organizations that fail to protect data in transit are more susceptible to man-in-the-middle attacks, eavesdropping,
and session hijacking. These attacks can be the first step in gaining access to confidential data.
You can learn more about Azure VPN options by reading the article Planning and design for VPN Gateway.
Enforce file level data encryption
Azure RMS uses encryption, identity, and authorization policies to help secure your files and email. Azure RMS works across multiple devices (phones, tablets, and PCs), protecting data both within and outside your organization. This capability is possible because Azure RMS adds a level of protection that remains with the data, even when it leaves your organization’s boundaries.
When you use Azure RMS to protect your files, you are using industry-standard cryptography with full support of
FIPS 140-2. When you leverage Azure RMS for data protection, you have the assurance that the protection stays
with the file, even if it is copied to storage that is not under the control of IT, such as a cloud storage service. The
same occurs for files shared via e-mail, the file is protected as an attachment to an email message, with instructions
how to open the protected attachment. When planning for Azure RMS adoption we recommend the following:
Install the RMS sharing app. This app integrates with Office applications by installing an Office add-in so that
users can easily protect files directly.
Configure applications and services to support Azure RMS
Create custom templates that reflect your business requirements. For example: a template for top secret data
that should be applied in all top secret related emails.
Organizations that are weak on data classification and file protection may be more susceptible to data leakage.
Without proper file protection, organizations won’t be able to obtain business insights, monitor for abuse and
prevent malicious access to files.
NOTE
You can learn more about Azure RMS by reading the article Getting Started with Azure Rights Management.
NOTE
For a more detailed list of rules and their protections see the following Core rule sets:
Azure also provides several easy-to-use features to help secure both inbound and outbound traffic for your app.
Azure also helps customers secure their application code by integrating externally provided functionality to scan your web application for vulnerabilities.
Secure your web app using various means of authentication and authorization
Set up Azure Active Directory authentication for your app
Secure traffic to your app by enabling Transport Layer Security (TLS/SSL) - HTTPS
Force all incoming traffic over HTTPS connections
Enable HTTP Strict Transport Security (HSTS)
Restrict access to your app by client's IP address
Restrict access to your app by client's behavior - request frequency and concurrency
Scan your web app code for vulnerabilities using Tinfoil Security Scanning
Configure TLS mutual authentication to require client certificates to connect to your web app
Configure a client certificate for use from your app to securely connect to external resources
Remove standard server headers to prevent tools from fingerprinting your app
Securely connect your app with resources in a private network using Point-To-Site VPN
Securely connect your app with resources in a private network using Hybrid Connections
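The "force HTTPS" and HSTS items above can be sketched as request-handling logic. The following is an illustrative WSGI middleware, not App Service's own implementation (App Service can enforce these settings natively through configuration):

```python
class HttpsEnforcementMiddleware:
    """Redirect plain-HTTP requests and add an HSTS header to HTTPS responses."""

    def __init__(self, app, max_age=31536000):
        self.app = app
        self.max_age = max_age  # HSTS lifetime in seconds (one year)

    def __call__(self, environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            # Redirect any insecure request to its HTTPS equivalent.
            location = ("https://" + environ.get("HTTP_HOST", "")
                        + environ.get("PATH_INFO", "/"))
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]

        def add_hsts(status, headers, exc_info=None):
            # Tell browsers to use HTTPS for all future requests (HSTS).
            headers.append(("Strict-Transport-Security", f"max-age={self.max_age}"))
            return start_response(status, headers, exc_info)

        return self.app(environ, add_hsts)
```

Wrapping any WSGI app in this middleware illustrates the two protections working together: insecure requests never reach the app, and secure responses instruct the browser to keep using HTTPS.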
Azure App Service uses the same antimalware solution used by Azure Cloud Services and Virtual Machines. To
learn more, refer to our Antimalware documentation.
Azure Operational Security is built on a framework that incorporates the knowledge gained through various
capabilities that are unique to Microsoft, including the Microsoft Security Development Lifecycle (SDL), the
Microsoft Security Response Center program, and deep awareness of the cybersecurity threat landscape.
Microsoft Operations Management Suite (OMS)
Microsoft Operations Management Suite (OMS) is the IT management solution for the hybrid cloud. Used alone or
to extend your existing System Center deployment, OMS gives you the maximum flexibility and control for cloud-
based management of your infrastructure.
With OMS, you can manage any instance in any cloud, including on-premises, Azure, AWS, Windows Server, Linux,
VMware, and OpenStack, at a lower cost than competitive solutions. Built for the cloud-first world, OMS offers a
new approach to managing your enterprise that is the fastest, most cost-effective way to meet new business
challenges and accommodate new workloads, applications and cloud environments.
Log analytics
Log Analytics provides monitoring services for OMS by collecting data from managed resources into a central
repository. This data could include events, performance data, or custom data provided through the API. Once
collected, the data is available for alerting, analysis, and export.
This method allows you to consolidate data from a variety of sources, so you can combine data from your Azure
services with your existing on-premises environment. It also clearly separates the collection of the data from the
action taken on that data so that all actions are available to all kinds of data.
Azure security center
Azure Security Center helps you prevent, detect, and respond to threats with increased visibility into and control
over the security of your Azure resources. It provides integrated security monitoring and policy management
across your Azure subscriptions, helps detect threats that might otherwise go unnoticed, and works with a broad
ecosystem of security solutions.
Security Center analyzes the security state of your Azure resources to identify potential security vulnerabilities. A
list of recommendations guides you through the process of configuring needed controls.
Examples include:
Provisioning antimalware to help identify and remove malicious software
Configuring network security groups and rules to control traffic to VMs
Provisioning of web application firewalls to help defend against attacks that target your web applications
Deploying missing system updates
Addressing OS configurations that do not match the recommended baselines
Security Center automatically collects, analyzes, and integrates log data from your Azure resources, the network,
and partner solutions like antimalware programs and firewalls. When threats are detected, a security alert is
created. Examples include detection of:
Compromised VMs communicating with known malicious IP addresses
Advanced malware detected by using Windows error reporting
Brute force attacks against VMs
Security alerts from integrated antimalware programs and firewalls
Azure monitor
Azure Monitor provides pointers to information on specific types of resources. It offers visualization, query, routing,
alerting, auto scale, and automation on data both from the Azure infrastructure (Activity Log) and each individual
Azure resource (Diagnostic Logs).
Cloud applications are complex with many moving parts. Monitoring provides data to ensure that your application
stays up and running in a healthy state. It also helps you to stave off potential problems or troubleshoot past ones.
In addition,
you can use monitoring data to gain deep insights about your application. That knowledge can help you to improve
application performance or maintainability, or automate actions that would otherwise require manual intervention.
Auditing your network security is vital for detecting network vulnerabilities and ensuring compliance with your IT
security and regulatory governance model. With Security Group view, you can retrieve the configured network
security group and security rules, as well as the effective security rules. With the list of rules applied, you can
determine the ports that are open and assess network vulnerability.
Network watcher
Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network level in,
to, and from Azure. The network diagnostic and visualization tools available with Network Watcher help you
understand, diagnose, and gain insights into your network in Azure. This service includes packet capture, next hop,
IP flow verify, security group view, and NSG flow logs. Scenario-level monitoring provides an end-to-end view of
network resources, in contrast to individual network resource monitoring.
Storage analytics
Storage Analytics can store metrics that include aggregated transaction statistics and capacity data about requests
to a storage service. Transactions are reported at both the API operation level and the storage service level,
and capacity is reported at the storage service level. Metrics data can be used to analyze storage service usage,
diagnose issues with requests made against the storage service, and improve the performance of applications
that use a service.
Application insights
Application Insights is an extensible Application Performance Management (APM) service for web developers on
multiple platforms. Use it to monitor your live web application. It will automatically detect performance anomalies.
It includes powerful analytics tools to help you diagnose issues and to understand what users do with your app. It's
designed to help you continuously improve performance and usability. It works for apps on a wide variety of
platforms, including .NET, Node.js, and J2EE, hosted on-premises or in the cloud. It integrates with your DevOps
process and has connection points to a variety of development tools.
It monitors:
Request rates, response times, and failure rates - Find out which pages are most popular, at what times
of day, and where your users are. See which pages perform best. If your response times and failure rates rise
when there are more requests, then perhaps you have a resourcing problem.
Dependency rates, response times, and failure rates - Find out whether external services are slowing
you down.
Exceptions - Analyze the aggregated statistics, or pick specific instances and drill into the stack trace and
related requests. Both server and browser exceptions are reported.
Page views and load performance - reported by your users' browsers.
AJAX calls from web pages - rates, response times, and failure rates.
User and session counts.
Performance counters from your Windows or Linux server machines, such as CPU, memory, and network
usage.
Host diagnostics from Docker or Azure.
Diagnostic trace logs from your app - so that you can correlate trace events with requests.
Custom events and metrics that you write yourself in the client or server code, to track business events
such as items sold, or games won.
The infrastructure for your application is typically made up of many components: maybe a virtual machine,
storage account, and virtual network, or a web app, database, database server, and third-party services. You do
not see these components as separate entities; instead, you see them as related and interdependent parts of a
single entity. You want to deploy, manage, and monitor them as a group. Azure Resource Manager enables you
to work with the resources in your solution as a group.
You can deploy, update, or delete all the resources for your solution in a single, coordinated operation. You use a
template for deployment and that template can work for different environments such as testing, staging, and
production. Resource Manager provides security, auditing, and tagging features to help you manage your
resources after deployment.
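As a sketch of how one template can serve multiple environments, the following builds a minimal Resource Manager-style template as a Python dictionary. The resource name, location, and API version are illustrative assumptions, not values from a real deployment:

```python
import json

def build_template(environment):
    """Return a minimal, parameterized ARM-style template for one environment."""
    return {
        "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "parameters": {
            "environment": {"type": "string", "defaultValue": environment}
        },
        "resources": [
            {
                "type": "Microsoft.Storage/storageAccounts",
                # Storage account names must be globally unique; the environment
                # suffix keeps testing/staging/production deployments separate.
                "name": f"contosodata{environment}",
                "apiVersion": "2016-01-01",
                "location": "westus",
                "sku": {"name": "Standard_LRS"},
                "kind": "Storage",
                "properties": {},
            }
        ],
    }

# The same template function yields a deployable JSON document per environment.
template = build_template("staging")
template_json = json.dumps(template, indent=2)
```

The point of the sketch is the declarative shape: the same template, fed a different environment value, describes testing, staging, or production without a separate script per environment.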
The benefits of using Resource Manager
Resource Manager provides several benefits:
You can deploy, manage, and monitor all the resources for your solution as a group, rather than handling
these resources individually.
You can repeatedly deploy your solution throughout the development lifecycle and have confidence your
resources are deployed in a consistent state.
You can manage your infrastructure through declarative templates rather than scripts.
You can define the dependencies between resources, so they are deployed in the correct order.
You can apply access control to all services in your resource group because Role-Based Access Control
(RBAC) is natively integrated into the management platform.
You can apply tags to resources to logically organize all the resources in your subscription.
You can clarify your organization's billing by viewing costs for a group of resources sharing the same tag.
NOTE
Resource Manager provides a new way to deploy and manage your solutions. If you used the earlier deployment model and
want to learn about the changes, see Understanding Resource Manager Deployment and classic deployment.
Next steps
Find out more about security by reading some of our in-depth security topics:
Auditing and logging
Cybercrime
Design and operational security
Encryption
Identity and access management
Network security
Threat management
Governance in Azure
8/10/2017 • 31 min to read
We know that security is job one in the cloud and how important it is that you find accurate and timely information
about Azure security. One of the best reasons to use Azure for your applications and services is to take advantage
of its wide array of security tools and capabilities. These tools and capabilities help make it possible to create secure
solutions on the secure Azure platform.
To help you better understand the array of governance controls implemented within Microsoft Azure, from both
the customer's and Microsoft operations' perspectives, this article provides a comprehensive look at the
governance features available with Microsoft Azure.
Azure platform
Azure is a public cloud service platform that supports a broad selection of operating systems, programming
languages, frameworks, tools, databases, and devices. It can run Linux containers with Docker integration; build
apps with JavaScript, Python, .NET, PHP, Java, and Node.js; and build back-ends for iOS, Android, and Windows
devices.
Azure public cloud services support the same technologies millions of developers and IT professionals already rely
on and trust.
When you build on, or migrate IT assets to, a public cloud service provider you are relying on that organization's
abilities to protect your applications and data with the services and the controls they provide to manage the
security of your cloud-based assets.
Azure's infrastructure is designed from the facility to applications for hosting millions of customers simultaneously,
and it provides a trustworthy foundation upon which businesses can meet their security requirements. In addition,
Azure provides you many security options and the ability to control them so that you can customize security to
meet the unique requirements of your organization's deployments.
This document will help you understand how Azure Governance capabilities can help you fulfill these requirements.
Abstract
Microsoft Azure cloud governance provides an integrated audit and consulting approach for reviewing and
advising organizations on their usage of the Azure platform. Microsoft Azure cloud governance refers to the
decision-making processes, criteria, and policies involved in the planning, architecture, acquisition, deployment,
operation, and management of cloud computing.
To create a plan for Microsoft Azure cloud governance, you need to take an in-depth look at the people, processes,
and technologies currently in place, and then build frameworks that make it easy for IT to consistently support
business needs while providing end users with the flexibility to use the powerful features of Microsoft Azure.
This paper describes how you can achieve an elevated level of governance of your IT resources in Microsoft Azure.
This paper can help you understand the security and governance features built in to Microsoft Azure.
The following are the main governance issues discussed in this paper:
Implementation of policies, processes and procedures as per organization goals.
Security and continuous compliance with organization standards
Alerting and Monitoring
Implementation of policies, processes and procedures
Management has established roles and responsibilities to oversee implementation of the information security
policy and operational continuity across Azure. Microsoft Azure management is responsible for overseeing security
and continuity practices within their respective teams (including third parties), and facilitating compliance with
security policies, processes and standards.
Here are the factors involved:
Account provisioning
Subscription controls
Role-based access control
Resource management
Resource tracking
Critical resource control
API access to billing information
Networking controls
Account provisioning
Defining an account hierarchy is a major step in structuring and using Azure services within a company, and it is
the core governance structure. Customers with an Enterprise Agreement can further subdivide the environment
into departments, accounts, and finally, subscriptions.
If you do not have an Enterprise Agreement, consider using Azure tags at the subscription level to define hierarchy.
An Azure subscription is the basic unit in which all resources are contained. It also defines several limits within
Azure, such as the number of cores and other resources. Subscriptions can contain resource groups, which can
contain resources. RBAC principles apply at all three levels.
Every enterprise is different. For non-enterprise customers, a hierarchy built with Azure tags allows for significant
flexibility in how Azure is organized within the company. Before deploying resources in Microsoft Azure, you
should model the hierarchy and understand the impact on billing, resource access, and complexity.
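The enterprise hierarchy described above (enrollment, departments, accounts, subscriptions) can be modeled as a simple data structure to reason about billing roll-ups. All names below are hypothetical:

```python
# Sketch of the Enterprise Agreement hierarchy:
# enrollment -> departments -> accounts -> subscriptions.
enrollment = {
    "name": "Contoso Enrollment",  # hypothetical enrollment
    "departments": {
        "IT": {"accounts": {"it-ops": ["sub-it-prod", "sub-it-dev"]}},
        "HR": {"accounts": {"hr-apps": ["sub-hr-prod"]}},
        "Marketing": {"accounts": {"mkt-web": ["sub-mkt-prod"]}},
    },
}

def subscriptions_for(enrollment, department):
    """List every subscription billed to one department, across its accounts."""
    subs = []
    for account_subs in enrollment["departments"][department]["accounts"].values():
        subs.extend(account_subs)
    return subs
```

Because each subscription rolls up to exactly one account and department, a walk like this is all that is needed to attribute usage to the right part of the organization.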
Subscription controls
A subscription controls how resource usage is reported and billed. Subscriptions can be set up for separate billing
and payment. As mentioned earlier, one Azure account can contain multiple subscriptions. Subscriptions can
be used to determine the Azure resource usage of multiple departments in a company.
For example, suppose a company has IT, HR, and Marketing departments, and these departments have different
projects running. Based on each department's usage of Azure resources such as virtual machines, the departments
can be billed accordingly. This makes it possible to control the finances of each department.
Azure subscriptions establish three parameters:
A unique subscriber ID
A billing location
A set of available resources
For an individual, that would include one Microsoft account ID, a credit card number, and the full suite of Azure
services, although Microsoft enforces consumption limits depending on the subscription type.
Azure enrollment hierarchies define how services are structured within an Enterprise Agreement. The Enterprise
Portal allows customers to divide access to Azure resources associated with an Enterprise Agreement based on
flexible hierarchies customizable to an organization's unique needs. The hierarchy pattern should match an
organization's management and geographic structure so that the associated billing and resource access can be
accurately accounted for.
The three high-level patterns are functional, business unit, and geographic, using departments as an administrative
construct for account groupings. Within each department, accounts can be assigned subscriptions, which create
silos for billing and several key limits in Azure (e.g., number of VMs, storage accounts, etc.).
For organizations with an Enterprise Agreement, Azure subscriptions follow a four-level hierarchy:
Enterprise enrollment administrator
Department administrator
Account owner
Service administrator
This hierarchy governs the following:
Billing relationship
Account administration
Role Based Access Control (RBAC) to artifacts
Boundaries/Limits
Boundaries: usage and billing (rate card based on offer numbers)
Limits: for example, the number of virtual networks
Attached to one Azure AD tenant (a single Azure AD tenant can be associated with many subscriptions)
Associated with an enterprise enrollment account
This proliferation of subscriptions is no longer needed. With role-based access control, you can assign users to
standard roles (such as common "reader" and "writer" types of roles). You can also define custom roles.
Azure Role-Based Access Control (RBAC) enables fine-grained access management for Azure. Using RBAC, you can
grant only the amount of access that users need to perform their jobs. Security-oriented companies should focus
on giving employees the exact permissions they need: too many permissions expose an account to attackers,
while too few permissions mean that employees can't get their work done efficiently. RBAC helps you segregate
duties within your team and grant users only the access they need to perform their jobs. Instead of giving
everybody unrestricted permissions in your Azure subscription or resources, you can allow only certain actions.
For example, use RBAC to let one employee manage virtual machines in a subscription, while another can manage
SQL databases within the same subscription.
Azure RBAC has three basic roles that apply to all resource types:
Owner has full access to all resources including the right to delegate access to others.
Contributor can create and manage all types of Azure resources but can't grant access to others.
Reader can view existing Azure resources.
The rest of the RBAC roles in Azure allow management of specific Azure resources. For example, the Virtual
Machine Contributor role allows the user to create and manage virtual machines. It does not give them access to
the virtual network or the subnet that the virtual machine connects to.
RBAC built-in roles lists the roles available in Azure. It specifies the operations and scope that each built-in role
grants to users.
Grant access by assigning the appropriate RBAC role to users, groups, and applications at a certain scope. The
scope of a role assignment can be a subscription, a resource group, or a single resource. A role assigned at a parent
scope also grants access to the children contained within it.
For example, a user with access to a resource group can manage all the resources it contains, like websites, virtual
machines, and subnets.
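The scope-inheritance rule above can be sketched as a prefix check over resource ID paths. This is an illustrative model with hypothetical principals and scopes, not how Azure evaluates assignments internally:

```python
# Each assignment is (principal, role, scope); a role granted at a parent
# scope also applies at every child scope beneath it.
assignments = [
    ("alice", "Virtual Machine Contributor", "/subscriptions/sub1"),
    ("bob", "Reader", "/subscriptions/sub1/resourceGroups/rg-web"),
]

def has_role(principal, role, scope):
    """True if the principal holds the role at this scope or any parent scope."""
    for p, r, s in assignments:
        if p == principal and r == role and (scope == s or scope.startswith(s + "/")):
            return True
    return False
```

Under this model, a role assigned at the subscription applies to every resource group and resource inside it, while a role assigned at a resource group grants nothing at the subscription level above it.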
Azure RBAC only supports management operations on Azure resources in the Azure portal and Azure Resource
Manager APIs. It cannot authorize all data-level operations for Azure resources. For example, you can authorize
someone to manage storage accounts, but not the blobs or tables within a storage account. Similarly, a
SQL database can be managed, but not the tables within it.
If you want more details about how RBAC helps you manage access, see What is Role-Based Access Control.
You can also create a custom role in Azure Role-Based Access Control (RBAC) if none of the built-in roles meet your
specific access needs. Custom roles can be created using Azure PowerShell, Azure Command-Line Interface (CLI),
and the REST API. Just like built-in roles, custom roles can be assigned to users, groups, and applications at
subscription, resource group, and resource scopes.
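As a sketch, a custom role definition uses a JSON shape with Name, Actions, NotActions, and AssignableScopes. The role name, actions, and subscription ID below are illustrative placeholders:

```python
import json

# Illustrative custom role: can read and restart VMs, nothing else.
custom_role = {
    "Name": "Virtual Machine Operator (custom)",
    "Description": "Can monitor and restart virtual machines.",
    "Actions": [
        "Microsoft.Compute/virtualMachines/read",
        "Microsoft.Compute/virtualMachines/restart/action",
    ],
    "NotActions": [],
    # Placeholder subscription ID; a real definition lists the scopes
    # (subscriptions, resource groups) where the role may be assigned.
    "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"],
}

role_json = json.dumps(custom_role, indent=2)
```

A JSON document of this shape is what you would hand to PowerShell, the CLI, or the REST API when creating the role; the key design decision is keeping the Actions list as narrow as the job requires.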
Within each subscription, you can grant up to 2000 role assignments.
Resource management
Azure originally provided only the classic deployment model. In this model, each resource existed independently;
there was no way to group related resources together. Instead, you had to manually track which resources made up
your solution or application, and remember to manage them in a coordinated approach.
To deploy a solution, you had to either create each resource individually through the classic portal or create a script
that deployed all the resources in the correct order. To delete a solution, you had to delete each resource
individually. You could not easily apply and update access control policies for related resources. Finally, you could
not apply tags to resources to label them with terms that help you monitor your resources and manage billing.
In 2014, Azure introduced Resource Manager, which added the concept of a resource group. A resource group is a
container for resources that share a common lifecycle. The Resource Manager deployment model provides several
benefits:
You can deploy, manage, and monitor all the services for your solution as a group, rather than handling
these services individually.
You can repeatedly deploy your solution throughout its lifecycle and have confidence your resources are
deployed in a consistent state.
You can apply access control to all resources in your resource group, and those policies are automatically
applied when new resources are added to the resource group.
You can apply tags to resources to logically organize all the resources in your subscription.
You can use JavaScript Object Notation (JSON) to define the infrastructure for your solution. The JSON file is
known as a Resource Manager template.
You can define the dependencies between resources so they are deployed in the correct order.
Resource Manager enables you to put resources into meaningful groups for management, billing, or natural
affinity. As mentioned earlier, Azure has two deployment models. In the earlier Classic model, the basic unit of
management was the subscription. It was difficult to break down resources within a subscription, which led to the
creation of large numbers of subscriptions. With the Resource Manager model, we saw the introduction of resource
groups.
A resource group is a container that holds related resources for an Azure solution. The resource group can include
all the resources for the solution, or only those resources that you want to manage as a group. You decide how you
want to allocate resources to resource groups based on what makes the most sense for your organization.
For recommendations about templates, see Best practices for creating Azure Resource Manager templates.
Azure Resource Manager analyzes dependencies to ensure resources are created in the correct order. If one
resource relies on a value from another resource (such as a virtual machine needing a storage account for disks),
you set a dependency.
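The dependency-ordering behavior can be illustrated with a small topological sort over dependsOn lists. The resource names are hypothetical, and the real Resource Manager engine does considerably more:

```python
# A VM that depends on a storage account for its disks: the storage account
# must be created first, which the dependsOn declaration expresses.
resources = [
    {"name": "myVM", "type": "Microsoft.Compute/virtualMachines",
     "dependsOn": ["myStorage"]},
    {"name": "myStorage", "type": "Microsoft.Storage/storageAccounts",
     "dependsOn": []},
]

def deployment_order(resources):
    """Simple topological sort: deploy a resource once its dependencies exist."""
    ordered, done = [], set()
    pending = list(resources)
    while pending:
        for res in pending:
            if all(dep in done for dep in res["dependsOn"]):
                ordered.append(res["name"])
                done.add(res["name"])
                pending.remove(res)
                break
        else:
            raise ValueError("circular dependency between resources")
    return ordered
```

Declaring dependencies rather than scripting an order is what lets the template stay declarative: the same dependency graph also tells the engine which resources can be deployed in parallel.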
NOTE
For more information, see Defining dependencies in Azure Resource Manager templates.
You can also use the template for updates to the infrastructure. For example, you can add a resource to your
solution and add configuration rules for the resources that are already deployed. If the template specifies creating a
resource but that resource already exists, Azure Resource Manager performs an update instead of creating a new
asset. Azure Resource Manager updates the existing asset to the same state it would have if it were newly created.
Resource Manager provides extensions for scenarios when you need additional operations such as installing
software that is not included in the setup.
Resource tracking
As users in your organization add resources to the subscription, it becomes increasingly important to associate
resources with the appropriate department, customer, and environment. You can attach metadata to resources
through tags. You use tags to provide information about the resource or the owner. Tags enable you not only to
aggregate and group resources in several ways, but also to use that data for chargeback.
Use tags when you have a complex collection of resource groups and resources, and need to visualize those assets
in the way that makes the most sense to you. For example, you could tag resources that serve a similar role in your
organization or belong to the same department.
Without tags, users in your organization can create multiple resources that may be difficult to later identify and
manage. For example, you may wish to delete all the resources for a project. If those resources are not tagged for
the project, you must manually find them. Tagging can be an important way for you to reduce unnecessary costs in
your subscription.
Resources do not need to reside in the same resource group to share a tag. You can create your own tag taxonomy
to ensure that all users in your organization use common tags rather than users inadvertently applying slightly
different tags (such as "dept" instead of "department").
Resource policies enable you to create standard rules for your organization. You can create policies that ensure
resources are tagged with the appropriate values.
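A tag taxonomy like the one described can be sketched as a validation step that maps near-duplicate names onto canonical ones. The taxonomy and aliases below are examples, not Azure-defined values:

```python
# Approved tag names and the common near-duplicates users might type instead.
TAXONOMY = {"department", "project", "environment"}
ALIASES = {"dept": "department", "env": "environment", "proj": "project"}

def normalize_tags(tags):
    """Map aliases onto canonical tag names; reject names outside the taxonomy."""
    normalized = {}
    for name, value in tags.items():
        canonical = ALIASES.get(name.lower(), name.lower())
        if canonical not in TAXONOMY:
            raise ValueError(f"tag '{name}' is not in the approved taxonomy")
        normalized[canonical] = value
    return normalized
```

Running every proposed tag through a gate like this is one way to prevent "dept" and "department" from silently splitting the same cost center into two reporting buckets.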
NOTE
For more information, see Apply resource policies for tags.
You can also view tagged resources through the Azure portal.
The usage report for your subscription includes tag names and values, which enables you to break out costs by
tags.
NOTE
For more information about tags, see Using tags to organize your Azure resources.
NOTE
For more information about programmatic access to billing information, see Gain insights into your Microsoft Azure resource
consumption. For REST API operations, see Azure Billing REST API Reference.
When you download the usage CSV for services that support tags with billing, the tags appear in the Tags column.
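Breaking out costs by tag from such a CSV can be sketched as a small aggregation. The column names and rows below are invented for illustration; the real download's schema varies by offer:

```python
import csv
import io
from collections import defaultdict

# Invented usage rows: service, cost, and a single "name:value" tag per row.
usage_csv = """Service,Cost,Tags
Virtual Machines,120.50,department:IT
Storage,30.25,department:HR
SQL Database,75.00,department:IT
"""

def cost_by_tag(csv_text, tag_name):
    """Sum the Cost column, grouped by the value of one tag name."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        name, _, value = row["Tags"].partition(":")
        if name == tag_name:
            totals[value] += float(row["Cost"])
    return dict(totals)
```

Grouping by tag value rather than by service is what turns a raw usage export into a per-department chargeback report.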
Resource Manager locks enable you to restrict operations on critical resources. For example, placing a ReadOnly
lock on an App Service resource prevents Visual Studio Server Explorer from displaying files for the resource,
because that interaction requires write access.
Unlike role-based access control, you use management locks to apply a restriction across all users and roles. To
learn about setting permissions for users and roles, see Azure Role-based Access Control.
When you apply a lock at a parent scope, all resources within that scope inherit the same lock. Even resources you
add later inherit the lock from the parent. The most restrictive lock in the inheritance takes precedence.
To create or delete management locks, you must have access to Microsoft.Authorization/* or
Microsoft.Authorization/locks/* actions. Of the built-in roles, only Owner and User Access Administrator are
granted those actions.
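The inheritance and "most restrictive lock wins" behavior can be sketched over resource ID paths. The scopes and lock placements below are illustrative; ReadOnly is treated as more restrictive than CanNotDelete:

```python
# Rank lock levels by restrictiveness: ReadOnly forbids any change,
# CanNotDelete only forbids deletion.
RESTRICTIVENESS = {"CanNotDelete": 1, "ReadOnly": 2}

locks = {
    # scope path -> lock level (illustrative scopes)
    "/subscriptions/sub1/resourceGroups/rg-prod": "CanNotDelete",
    "/subscriptions/sub1/resourceGroups/rg-prod/providers/Microsoft.Web/sites/app1": "ReadOnly",
}

def effective_lock(scope):
    """Most restrictive lock applied at this scope or inherited from a parent."""
    applied = [level for s, level in locks.items()
               if scope == s or scope.startswith(s + "/")]
    return max(applied, key=RESTRICTIVENESS.get, default=None)
```

A resource deep inside rg-prod inherits the group's CanNotDelete lock even if nothing is set on the resource itself, and a stricter child-level lock overrides the inherited one.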
Networking controls
Access to resources can be either internal (within the corporation's network) or external (through the internet). It is
easy for users in your organization to inadvertently put resources in the wrong spot, and potentially open them to
malicious access. As with on-premises devices, enterprises must add appropriate controls to ensure that Azure
users make the right decisions.
For subscription governance, we identify core resources that provide basic control of access. The core resources
consist of:
Network connectivity
Virtual networks are container objects for subnets. Though not strictly necessary, they are often used when
connecting applications to internal corporate resources. The Azure Virtual Network service enables you to securely
connect Azure resources to each other with virtual networks (VNets).
A VNet is a representation of your own network in the cloud. A VNet is a logical isolation of the Azure cloud
dedicated to your subscription. You can also connect VNets to your on-premises network.
Following are capabilities for Azure Virtual Networks:
Isolation: VNets are isolated from one another. You can create separate VNets for development, testing, and
production that use the same CIDR address blocks. Conversely, you can create multiple VNets that use
different CIDR address blocks and connect networks together. You can segment a VNet into multiple
subnets. Azure provides internal name resolution for VMs and Cloud Services role instances connected to a
VNet. You can optionally configure a VNet to use your own DNS servers, instead of using Azure internal
name resolution.
Internet connectivity: All Azure Virtual Machines (VM) and Cloud Services role instances connected to a
VNet have access to the Internet, by default. You can also enable inbound access to specific resources, as
needed.
Azure resource connectivity: Azure resources such as Cloud Services and VMs can be connected to the
same VNet. The resources can connect to each other using private IP addresses, even if they are in different
subnets. Azure provides default routing between subnets, VNets, and on-premises networks, so you don't
have to configure and manage routes.
VNet connectivity: VNets can be connected to each other, enabling resources connected to any VNet to
communicate with any resource on any other VNet.
On-premises connectivity: VNets can be connected to on-premises networks through private network
connections between your network and Azure, or through a site-to-site VPN connection over the Internet.
Traffic filtering: VM and Cloud Services role instances network traffic can be filtered inbound and
outbound by source IP address and port, destination IP address and port, and protocol.
Routing: You can optionally override Azure's default routing by configuring your own routes, or using BGP
routes through a network gateway.
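The isolation and segmentation points above can be illustrated with Python's standard ipaddress module. The address ranges are examples:

```python
import ipaddress

# Two isolated VNets may reuse the same CIDR block, because each VNet is a
# logical isolation boundary.
dev_vnet = ipaddress.ip_network("10.0.0.0/16")   # development
prod_vnet = ipaddress.ip_network("10.0.0.0/16")  # production, same CIDR, isolated

# Segment a VNet into /24 subnets (256 usable blocks of 256 addresses each).
subnets = list(dev_vnet.subnets(new_prefix=24))

# Find which subnet a given private IP belongs to; resources in different
# subnets of the same VNet can still reach each other via default routing.
vm_ip = ipaddress.ip_address("10.0.3.17")
vm_subnet = next(s for s in subnets if vm_ip in s)
```

The same module is useful when planning address space up front, for example to confirm that a proposed subnet layout fits inside the VNet's CIDR block without overlaps.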
NOTE
For more information about how to apply recommendations, read Implementing security recommendations in Azure Security
Center.
Security Center collects data from your virtual machines to assess their security state, provide security
recommendations, and alert you to threats. When you first access Security Center, data collection is enabled on all
virtual machines in your subscription. Data collection is recommended, but you can opt out by disabling data
collection in the Security Center policy.
Finally, Azure Security Center is an open platform that enables Microsoft partners and independent software
vendors to create software that plugs into Azure Security Center to enhance its capabilities.
Azure Security Center monitors the following Azure resources:
Virtual machines (VMs) (including Cloud Services)
Azure Virtual Networks
Azure SQL service
Partner solutions integrated with your Azure subscription such as a web application firewall on VMs and on
App Service Environment.
Operations Management Suite
The OMS software development and service team's information security and governance program supports its
business requirements and adheres to laws and regulations, as described at the Microsoft Azure Trust Center and
Microsoft Trust Center Compliance. How OMS establishes security requirements, identifies security controls, and
manages and monitors risks is also described there. Annually, we review policies, standards, procedures, and
guidelines.
Each OMS development team member receives formal application security training. Internally, we use a version
control system for software development. Each software project is protected by the version control system.
Microsoft has a security and compliance team that oversees and assesses all services in Microsoft. Information
security officers make up the team and they are not associated with the engineering departments that develop
OMS. The security officers have their own management chain and conduct independent assessments of products
and services to ensure security and compliance.
Operations Management Suite (also known as OMS) is a collection of management services that were designed in
the cloud from the start. Rather than deploying and managing on premises resources, OMS components are
entirely hosted in Azure. Configuration is minimal, and you can be up and running literally in a matter of minutes.
Just because OMS services run in the cloud doesn't mean that they can't effectively manage your on-premises
environment.
Put an agent on any Windows or Linux computer in your data center, and it will send data to Log Analytics where it
can be analyzed along with all other data collected from cloud or on premises services. Use Azure Backup and
Azure Site Recovery to leverage the cloud for backup and high availability for on premises resources.
Runbooks in the cloud can't typically access your on-premises resources, but you can install an agent on one or
more computers in your data center to host runbooks locally. When you start a runbook, you simply specify
whether you want it to run in the cloud or on a local worker.
The core functionality of OMS is provided by a set of services that run in Azure. Each service provides a specific
management function, and you can combine services to achieve different management scenarios.
OMS extends its functionality by providing management solutions. Management Solutions
are prepackaged sets of logic that implement a management scenario leveraging one or more OMS services.
Different solutions are available from Microsoft and from partners that you can easily add to your Azure
subscription to increase the value of your investment in OMS.
As a partner, you can create your own solutions to support your applications and services and provide them to
users through the Azure Marketplace or Quick Start Templates.
NOTE
See Set alerts in Application Insights and Monitor availability and responsiveness of any website.
Log Analytics (Operations Management Suite): Enables the routing of Activity and Diagnostic Logs to Log
Analytics. Operations Management Suite allows metric, log, and other alert types.
NOTE
For more information, see Alerts in Log Analytics.
Azure Monitor: Enables alerts based on both metric values and activity log events. You can use the Azure
Monitor REST API to manage alerts.
NOTE
For more information, see Using the Azure portal, PowerShell, or the command-line interface to create alerts.
Monitoring
Performance issues in your cloud app can impact your business. With multiple interconnected components and
frequent releases, degradations can happen at any time. And if you're developing an app, your users usually
discover issues that you didn't find in testing. You should know about these issues immediately, and have tools for
diagnosing and fixing the problems. Microsoft Azure has a range of tools for identifying these problems.
How do I monitor my Azure cloud apps?
There is a range of tools for monitoring Azure applications and services. Some of their features overlap. This is
partly for historical reasons and partly due to the blurring between development and operation of an application.
Here are the principal tools:
Azure Monitor is the basic tool for monitoring services running on Azure. It gives you infrastructure-level data
about the throughput of a service and the surrounding environment. If you are managing your apps entirely in
Azure and deciding whether to scale resources up or down, Azure Monitor gives you what you need to get started.
Application Insights can be used for development and as a production monitoring solution. It works by
installing a package into your app, and so gives you a more internal view of what's going on. Its data
includes response times of dependencies, exception traces, debugging snapshots, and execution profiles. It
provides powerful smart tools for analyzing all this telemetry both to help you debug an app and to help you
understand what users are doing with it. You can tell whether a spike in response times is due to something
in an app, or some external resourcing issue. If you use Visual Studio and the app is at fault, you can be taken
right to the problem line(s) of code so you can fix it.
Log Analytics is for those who need to tune performance and plan maintenance on applications running in
production. It is based in Azure. It collects and aggregates data from many sources, though with a delay of
10 to 15 minutes. It provides a holistic IT management solution for Azure, on-premises, and third-party
cloud-based infrastructure (such as Amazon Web Services). It provides richer tools to analyze data across
more sources, allows complex queries across all logs, and can proactively alert on specified conditions. You
can even collect custom data into its central repository so you can query and visualize it.
System Center Operations Manager (SCOM) is for managing and monitoring large cloud installations.
You might already be familiar with it as a management tool for on-premises Windows Server and Hyper-V-based
clouds, but it can also integrate with and manage Azure apps. Among other things, it can install
Application Insights on existing live apps. If an app goes down, it tells you in seconds.
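As an illustration of the condition-based alerting that Log Analytics provides, the sketch below aggregates error events per computer and flags those that cross a threshold. The record shape, computer names, and threshold are invented for the example; in the real service you would express this as a log query with an alert rule, not in application code.

```python
from collections import Counter

# Hypothetical log records, as Log Analytics might aggregate from many sources.
logs = [
    {"computer": "web01", "level": "Error"},
    {"computer": "web01", "level": "Error"},
    {"computer": "web02", "level": "Warning"},
    {"computer": "web01", "level": "Error"},
]

ALERT_THRESHOLD = 3  # alert when a computer logs this many errors in the window

def computers_to_alert(records):
    """Count error events per computer and flag those at or over the threshold."""
    errors = Counter(r["computer"] for r in records if r["level"] == "Error")
    return sorted(name for name, count in errors.items()
                  if count >= ALERT_THRESHOLD)

print(computers_to_alert(logs))  # ['web01']
```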
Next steps
Best practices for creating Azure Resource Manager templates.
Examples of implementing Azure subscription governance.
Microsoft Azure Government.
Azure Data Encryption-at-Rest
9/6/2017 • 19 min to read
There are multiple tools within Microsoft Azure to safeguard data according to your company’s security and
compliance needs. This paper focuses on how data is protected at rest across Microsoft Azure, discusses the
various components taking part in the data protection implementation, and reviews pros and cons of the different
key management protection approaches.
Encryption at Rest is a common security requirement. A benefit of Microsoft Azure is that organizations can
achieve Encryption at Rest without having the cost of implementation and management and the risk of a custom
key management solution. Organizations have the option of letting Azure completely manage Encryption at Rest.
Additionally, organizations have various options to closely manage encryption or encryption keys.
When Server-side encryption with Service Managed keys is used, the key creation, storage and service access are
all managed by the service. Typically, the foundational Azure resource providers will store the Data Encryption Keys
in a store that is close to the data and quickly available and accessible while the Key Encryption Keys are stored in a
secure internal store.
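The Data Encryption Key / Key Encryption Key arrangement described above is often called envelope encryption: the DEK encrypts the data, and the KEK encrypts (wraps) the DEK. The sketch below illustrates only the pattern; the XOR keystream function is a toy stand-in for a real cipher such as AES and must never be used for actual encryption.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR keystream standing in for a real cipher. It is symmetric:
    applying it twice with the same key returns the original data."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Key Encryption Key (KEK): held in a secure internal store (or Key Vault).
kek = secrets.token_bytes(32)
# Data Encryption Key (DEK): generated per resource, kept close to the data.
dek = secrets.token_bytes(32)

ciphertext = keystream_xor(dek, b"customer record")  # data encrypted with DEK
wrapped_dek = keystream_xor(kek, dek)                # DEK wrapped by the KEK

# To read the data, the service unwraps the DEK with the KEK, then decrypts.
recovered_dek = keystream_xor(kek, wrapped_dek)
plaintext = keystream_xor(recovered_dek, ciphertext)
print(plaintext)  # b'customer record'
```

Storing only the wrapped DEK next to the data is what lets the service keep keys "close to the data and quickly available" while the KEK stays in a separate, more tightly controlled store.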
Advantages
Simple setup
Microsoft manages key rotation, backup and redundancy
Customer does not have the cost associated with implementation or the risk of a custom key management
scheme.
Disadvantages
No customer control over the encryption keys (key specification, lifecycle, revocation, etc.)
No ability to segregate key management from overall management model for the service
Server-side encryption using customer managed keys in Azure Key Vault
For scenarios where the requirement is to encrypt the data at rest and control the encryption keys customers can
use server-side Encryption using Customer Managed Keys in Key Vault. Some services may store only the root Key
Encryption Key in Azure Key Vault and store the encrypted Data Encryption Key in an internal location closer to the
data. In that scenario customers can bring their own keys to Key Vault (BYOK – Bring Your Own Key), or generate
new ones, and use them to encrypt the desired resources. While the Resource Provider performs the encryption
and decryption operations, it uses the configured key as the root key for all encryption operations.
Key Access
The server-side encryption model with customer managed keys in Azure Key Vault involves the service accessing
the keys to encrypt and decrypt as needed. Encryption at rest keys are made accessible to a service through an
access control policy granting that service identity access to receive the key. An Azure service running on behalf of
an associated subscription can be configured with an identity for that service within that subscription. The service
can perform Azure Active Directory authentication and receive an authentication token identifying itself as that
service acting on behalf of the subscription. That token can then be presented to Key Vault to obtain a key it has
been given access to.
For operations using encryption keys, a service identity can be granted access to any of the following operations:
decrypt, encrypt, unwrapKey, wrapKey, verify, sign, get, list, update, create, import, delete, backup, and restore.
To obtain a key for use in encrypting or decrypting data at rest, the service identity that the Resource Manager
service instance runs as must have UnwrapKey (to get the key for decryption) and WrapKey (to insert a key into
Key Vault when creating a new key).
NOTE
For more detail on Key Vault authorization see the secure your key vault page in the Azure Key Vault documentation.
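A minimal sketch of how such an access control policy gates key operations appears below. The identities and granted operations are hypothetical, and a real Key Vault access policy is configured through Azure rather than modeled in application code; the point is only that each service identity receives the smallest set of operations it needs.

```python
# Hypothetical access policy mapping a service identity to the key operations
# it has been granted, mirroring a Key Vault access control policy.
ACCESS_POLICY = {
    "storage-service-identity": {"wrapKey", "unwrapKey"},
    "backup-service-identity":  {"unwrapKey"},
}

def authorize(identity: str, operation: str) -> bool:
    """Return True only if the policy grants this identity the operation."""
    return operation in ACCESS_POLICY.get(identity, set())

# The storage service can both wrap new DEKs and unwrap existing ones.
assert authorize("storage-service-identity", "wrapKey")
assert authorize("storage-service-identity", "unwrapKey")
# The backup service can unwrap but not wrap; unknown identities get nothing.
assert not authorize("backup-service-identity", "wrapKey")
assert not authorize("unknown-identity", "unwrapKey")
print("policy checks passed")
```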
Advantages
Full control over the keys used – encryption keys are managed in the customer’s Key Vault under the
customer’s control.
Ability to encrypt multiple services to one master key
Can segregate key management from overall management model for the service
Can define service and key location across regions
Disadvantages
Customer has full responsibility for key access management
Customer has full responsibility for key lifecycle management
Additional Setup & configuration overhead
Server-side encryption using service managed keys in customer controlled hardware
For scenarios where the requirement is to encrypt the data at rest and manage the keys in a proprietary repository
outside of Microsoft’s control, some Azure services enable the Host Your Own Key (HYOK) key management
model. In this model, the service must retrieve the key from an external site and therefore performance and
availability guarantees are impacted, and configuration is more complex. Additionally, since the service does have
access to the DEK during the encryption and decryption operations, the overall security guarantees of this model
are similar to when the keys are customer managed in Azure Key Vault. As a result, this model is not appropriate
for most organizations unless they have specific key management requirements necessitating it. Due to these
limitations, most Azure services do not support server-side encryption using service managed keys in
customer-controlled hardware.
Key Access
When server-side encryption using service managed keys in customer controlled hardware is used the keys are
maintained on a system configured by the customer. Azure services that support this model provide a means of
establishing a secure connection to a customer supplied key store.
Advantages
Full control over the root key used – encryption keys are managed by a customer provided store
Ability to encrypt multiple services to one master key
Can segregate key management from overall management model for the service
Can define service and key location across regions
Disadvantages
Full responsibility for key storage, security, performance and availability
Full responsibility for key access management
Full responsibility for key lifecycle management
Significant setup, configuration and ongoing maintenance costs
Increased dependency on network availability between the customer datacenter and Azure datacenters.
ENCRYPTION MODEL             SERVER-SIDE      SERVER-SIDE        SERVER-SIDE        CLIENT-SIDE
                             (SERVICE         (CUSTOMER MANAGED  (SERVICE MANAGED   (CLIENT MANAGED
                             MANAGED KEYS)    KEYS IN KEY VAULT) KEYS IN CUSTOMER   KEYS)
                                                                CONTROLLED
                                                                HARDWARE)
Storage and Databases
  Cosmos DB (Document DB)    Yes              -                  -                  -
  Backup                     -                -                  -                  Yes
Intelligence and Analytics
  HDInsights (Azure Blob
  Storage)                   Yes              -                  -                  -
  Power BI                   Yes              -                  -                  -
IoT Services
  Event Hubs                 -                -                  -                  -
Conclusion
Protection of customer data stored within Azure Services is of paramount importance to Microsoft. All Azure
hosted services are committed to providing Encryption at Rest options. Foundational services such as Azure
Storage, SQL Azure and key analytics and intelligence services already provide Encryption at Rest options. Some of
these services support customer-controlled keys and client-side encryption, as well as service managed keys and
encryption. Microsoft Azure services are broadly enhancing Encryption at Rest availability and new options are
planned for preview and general availability in the upcoming months.
Getting started with Microsoft Azure security
6/27/2017 • 16 min to read
When you build or migrate IT assets to a cloud provider, you are relying on that organization’s abilities to protect
your applications and data with the services and the controls they provide to manage the security of your cloud-
based assets.
Azure’s infrastructure is designed from the facility to applications for hosting millions of customers simultaneously,
and it provides a trustworthy foundation upon which businesses can meet their security needs. In addition, Azure
provides you with a wide array of configurable security options and the ability to control them so that you can
customize security to meet the unique requirements of your deployments.
In this overview article on Azure security, we’ll look at:
Azure services and features you can use to help secure your services and data within Azure.
How Microsoft secures the Azure infrastructure to help protect your data and applications.
Virtualization
The Azure platform uses a virtualized environment. User instances operate as standalone virtual machines that do
not have access to a physical host server, and this isolation is enforced by using physical processor (ring-0/ring-3)
privilege levels.
Ring 0 is the most privileged and 3 is the least. The guest OS runs in a lesser-privileged Ring 1, and applications run
in the least privileged Ring 3. This virtualization of physical resources leads to a clear separation between guest OS
and hypervisor, resulting in additional security separation between the two.
The Azure hypervisor acts like a micro-kernel and passes all hardware access requests from guest virtual machines
to the host for processing by using a shared-memory interface called VMBus. This prevents users from obtaining
raw read/write/execute access to the system and mitigates the risk of sharing system resources.
Isolation
Another important cloud security requirement is separation to prevent unauthorized and unintentional transfer of
information between deployments in a shared multi-tenant architecture.
Azure implements network access control and segregation through VLAN isolation, ACLs, load balancers, and IP
filters. Inbound external traffic is restricted to the ports and protocols on your virtual machines that you define. Azure
implements network filtering to prevent spoofed traffic and restrict incoming and outgoing traffic to trusted
platform components. Traffic flow policies are implemented on boundary protection devices that deny traffic by
default.
Network Address Translation (NAT) is used to separate internal network traffic from external traffic. Internal traffic
is not externally routable. Virtual IP addresses that are externally routable are translated into internal Dynamic IP
addresses that are only routable within Azure.
External traffic to Azure virtual machines is firewalled via ACLs on routers, load balancers, and Layer 3 switches.
Only specific known protocols are permitted. ACLs are in place to limit traffic originating from guest virtual
machines to other VLANs used for management. In addition, traffic filtered via IP filters on the host OS further
limits the traffic on both data link and network layers.
How Azure implements isolation
The Azure Fabric Controller is responsible for allocating infrastructure resources to tenant workloads, and it
manages unidirectional communications from the host to virtual machines. The Azure hypervisor enforces memory
and process separation between virtual machines, and it securely routes network traffic to guest OS tenants. Azure
also implements isolation for tenants, storage, and virtual networks.
Each Azure AD tenant is logically isolated by using security boundaries.
Azure storage accounts are unique to each subscription, and access must be authenticated by using a storage
account key.
Virtual networks are logically isolated through a combination of unique private IP addresses, firewalls, and IP
ACLs. Load balancers route traffic to the appropriate tenants based on endpoint definitions.
Your subscription can contain multiple isolated private networks (and include firewall, load balancing, and network
address translation).
Azure provides three primary levels of network segregation in each Azure cluster to logically segregate traffic.
Virtual local area networks (VLANs) are used to separate customer traffic from the rest of the Azure network.
Access to the Azure network from outside the cluster is restricted through load balancers.
Network traffic to and from virtual machines must pass through the hypervisor virtual switch. The IP filter
component in the root OS isolates the root virtual machine from the guest virtual machines and the guest virtual
machines from one another. It performs filtering of traffic to restrict communication between a tenant's nodes and
the public Internet (based on the customer's service configuration), segregating them from other tenants.
The IP filter helps prevent guest virtual machines from:
Generating spoofed traffic.
Receiving traffic not addressed to them.
Directing traffic to protected infrastructure endpoints.
Sending or receiving inappropriate broadcast traffic.
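The anti-spoofing portion of this filtering can be modeled simply: a packet is allowed out of a guest only if its source address matches the address the platform assigned to that guest. The VM names and addresses below are hypothetical, and the real check is enforced by the IP filter in the root OS, not by guest code.

```python
# Hypothetical mapping of guest VMs to the source addresses assigned to them.
ASSIGNED_ADDRESSES = {
    "guest-vm-1": "10.0.0.4",
    "guest-vm-2": "10.0.0.5",
}

def allow_outbound(vm: str, source_ip: str) -> bool:
    """Drop any packet whose source address is not the one assigned to the VM,
    the anti-spoofing check the IP filter performs in the root OS."""
    return ASSIGNED_ADDRESSES.get(vm) == source_ip

assert allow_outbound("guest-vm-1", "10.0.0.4")      # legitimate traffic
assert not allow_outbound("guest-vm-1", "10.0.0.5")  # spoofed source address
assert not allow_outbound("unknown-vm", "10.0.0.4")  # unknown sender
print("spoof checks passed")
```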
You can place your virtual machines onto Azure virtual networks. These virtual networks are similar to the networks
you configure in on-premises environments, where they are typically associated with a virtual switch. Virtual
machines connected to the same virtual network can communicate with one another without additional
configuration. You can also configure different subnets within your virtual network.
You can use the following Azure Virtual Network technologies to help secure communications on your virtual
network:
Network Security Groups (NSGs). You can use an NSG to control traffic to one or more virtual machine
instances in your virtual network. An NSG contains access control rules that allow or deny traffic based on traffic
direction, protocol, source address and port, and destination address and port.
User-defined routing. You can control the routing of packets through a virtual appliance by creating user-
defined routes that specify the next hop for packets flowing to a specific subnet to go to a virtual network
security appliance.
IP forwarding. A virtual network security appliance must be able to receive incoming traffic that is not
addressed to itself. To allow a virtual machine to receive traffic addressed to other destinations, you enable IP
forwarding for the virtual machine.
Forced tunneling. Forced tunneling lets you redirect or "force" all Internet-bound traffic generated by your
virtual machines in a virtual network back to your on-premises location via a site-to-site VPN tunnel for
inspection and auditing.
Endpoint ACLs. You can control which machines are allowed inbound connections from the Internet to a virtual
machine on your virtual network by defining endpoint ACLs.
Partner network security solutions. There are a number of partner network security solutions that you can
access from the Azure Marketplace.
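To make the NSG decision model concrete, the sketch below evaluates inbound rules in priority order (lowest number first), where the first matching rule decides and an implicit deny applies at the end. The rule set itself is invented for illustration; real NSG rules are configured on the subnet or network interface, not evaluated in application code.

```python
import ipaddress

# Hypothetical NSG rules, evaluated in priority order (lowest number first).
RULES = [
    {"priority": 100, "direction": "Inbound", "protocol": "Tcp",
     "source": "0.0.0.0/0", "dest_port": 443, "access": "Allow"},
    {"priority": 200, "direction": "Inbound", "protocol": "Tcp",
     "source": "10.0.0.0/16", "dest_port": 3389, "access": "Allow"},
    {"priority": 4000, "direction": "Inbound", "protocol": "*",
     "source": "0.0.0.0/0", "dest_port": None, "access": "Deny"},
]

def evaluate(direction, protocol, source_ip, dest_port):
    """Return the access decision of the first rule matching the traffic."""
    src = ipaddress.ip_address(source_ip)
    for rule in sorted(RULES, key=lambda r: r["priority"]):
        if rule["direction"] != direction:
            continue
        if rule["protocol"] not in ("*", protocol):
            continue
        if src not in ipaddress.ip_network(rule["source"]):
            continue
        if rule["dest_port"] not in (None, dest_port):
            continue
        return rule["access"]
    return "Deny"  # implicit default deny

print(evaluate("Inbound", "Tcp", "203.0.113.7", 443))   # Allow (HTTPS from anywhere)
print(evaluate("Inbound", "Tcp", "203.0.113.7", 3389))  # Deny  (RDP only from the VNet)
print(evaluate("Inbound", "Tcp", "10.0.1.9", 3389))     # Allow (RDP from inside)
```

Placing the broad deny rule at a high priority number is the conventional pattern: specific allows are matched first, and everything else falls through to the deny.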
How Azure implements virtual networks and firewalls
Azure implements packet-filtering firewalls on all host and guest virtual machines by default. Windows OS images
from the Azure Marketplace also have Windows Firewall enabled by default. Load balancers at the perimeter of
Azure public-facing networks control communications based on IP ACLs managed by customer administrators.
If Azure moves a customer’s data as part of normal operations or during a disaster, it does so over private,
encrypted communications channels. Other capabilities employed by Azure to use in virtual networks and firewalls
are:
Native host firewall: Azure Service Fabric and Azure Storage run on a native OS that has no hypervisor. Hence
the Windows Firewall is configured with the previous two sets of rules. Storage runs natively to optimize
performance.
Host firewall: The host firewall is to protect the host operating system that runs the hypervisor. The rules are
programmed to allow only the Service Fabric controller and jump boxes to talk to the host OS on a specific port.
The other exceptions are to allow DHCP response and DNS Replies. Azure uses a machine configuration file that
has the template of firewall rules for the host OS. The host itself is protected from external attack by a Windows
firewall configured to permit communication only from known, authenticated sources.
Guest firewall: Replicates the rules in the virtual machine switch packet filter, but programmed in different
software (such as the Windows Firewall component of the guest OS). The guest virtual machine firewall can be
configured to restrict communications to or from the guest virtual machine, even if the communication is
permitted by configurations at the host IP Filter. For example, you may choose to use the guest virtual machine
firewall to restrict communication between two of your VNets that have been configured to connect to one
another.
Storage firewall (FW): The firewall on the storage front end filters traffic to be only on ports 80/443 and other
necessary utility ports. The firewall on the storage back end restricts communications to come only from storage
front-end servers.
Virtual Network Gateway: The Azure Virtual Network Gateway serves as the cross-premises gateway
connecting your workloads in Azure Virtual Network to your on-premises sites. It is required to connect to on-
premises sites through IPsec site-to-site VPN tunnels, or through ExpressRoute circuits. For IPsec/IKE VPN
tunnels, the gateways perform IKE handshakes and establish the IPsec S2S VPN tunnels between the virtual
networks and on-premises sites. Virtual network gateways also terminate point-to-site VPNs.
Audit logs recording privileged user access and activities, authorized and unauthorized access attempts, system
exceptions, and information security events are retained for a set period of time. The retention of your logs is at
your discretion because you configure log collection and retention to your own requirements.
How Azure implements logging and monitoring
Azure deploys Management Agents (MA) and Azure Security Monitor (ASM) agents to each compute, storage, or
fabric node under management whether they are native or virtual. Each Management Agent is configured to
authenticate to a service team storage account with a certificate obtained from the Azure certificate store and
forward pre-configured diagnostic and event data to the storage account. These agents are not deployed to
customers’ virtual machines.
Azure administrators access logs through a web portal for authenticated and controlled access to the logs. An
administrator can parse, filter, correlate, and analyze logs. The Azure service team storage accounts for logs are
protected from direct administrator access to help prevent log tampering.
Microsoft collects logs from network devices using the Syslog protocol, and from host servers using Microsoft
Audit Collection Services (ACS). These logs are placed into a log database from which alerts for suspicious events
are generated. The administrator can access and analyze these logs.
Azure Diagnostics is a feature of Azure that enables you to collect diagnostic data from an application running in
Azure. This is diagnostic data for debugging and troubleshooting, measuring performance, monitoring resource
usage, traffic analysis, capacity planning, and auditing. After the diagnostic data is collected, it can be transferred to
an Azure storage account for persistence. Transfers can either be scheduled or on demand.
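The collect-then-transfer behavior can be sketched as a small buffer that flushes to storage either on a schedule or on demand. The class below is purely illustrative and is not the Azure Diagnostics API; the storage list stands in for an Azure storage account.

```python
import time

class DiagnosticsBuffer:
    """Toy sketch of the collect-then-transfer pattern: diagnostic records are
    buffered locally and flushed to durable storage on a schedule or on demand."""

    def __init__(self, transfer_interval_seconds: float):
        self.interval = transfer_interval_seconds
        self.buffer = []
        self.storage = []  # stands in for an Azure storage account
        self.last_transfer = time.monotonic()

    def collect(self, record: str):
        self.buffer.append(record)
        # Scheduled transfer: flush automatically once the interval has elapsed.
        if time.monotonic() - self.last_transfer >= self.interval:
            self.transfer()

    def transfer(self):
        """On-demand transfer: persist everything buffered so far."""
        self.storage.extend(self.buffer)
        self.buffer.clear()
        self.last_transfer = time.monotonic()

diag = DiagnosticsBuffer(transfer_interval_seconds=60)
diag.collect("cpu=85%")
diag.collect("requests=1200/min")
diag.transfer()      # on-demand flush
print(diag.storage)  # ['cpu=85%', 'requests=1200/min']
```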
Threat mitigation
In addition to isolation, encryption, and filtering, Azure employs a number of threat mitigation mechanisms and
processes to protect infrastructure and services. These include internal controls and technologies used to detect and
remediate advanced threats such as DDoS, privilege escalation, and the OWASP Top-10.
The security controls and risk management processes Microsoft has in place to secure its cloud infrastructure
reduce the risk of security incidents. In the event an incident occurs, the Security Incident Management (SIM) team
within the Microsoft Online Security Services and Compliance (OSSC) team is ready to respond at any time.
How Azure implements threat mitigation
Azure has security controls in place to implement threat mitigation and also to help customers mitigate potential
threats in their environments. The following list summarizes the threat mitigation capabilities offered by Azure:
Azure Antimalware is enabled by default on all infrastructure servers. You can optionally enable it within your
own virtual machines.
Microsoft maintains continuous monitoring across servers, networks, and applications to detect threats and
prevent exploits. Automated alerts notify administrators of anomalous behaviors, allowing them to take
corrective action on both internal and external threats.
You can deploy third-party security solutions within your subscriptions, such as web application firewalls from
Barracuda.
Microsoft’s approach to penetration testing includes “Red-Teaming,” which involves Microsoft security
professionals attacking (non-customer) live production systems in Azure to test defenses against real-world,
advanced, persistent threats.
Integrated deployment systems manage the distribution and installation of security patches across the Azure
platform.
Next steps
Azure Trust Center
Azure Security Team Blog
Microsoft Security Response Center
Active Directory Blog
Azure security best practices and patterns
6/27/2017 • 1 min to read
We currently have the following Azure security best practices and patterns articles. Make sure to visit this site
periodically to see updates to our growing list of Azure security best practices and patterns:
Azure network security best practices
Azure data security and encryption best practices
Identity management and access control security best practices
Internet of Things security best practices
Azure IaaS Security Best Practices
Azure boundary security best practices
Implementing a secure hybrid network architecture in Azure
Azure PaaS Best Practices
Azure provides a secure platform on which you can build your solutions. We also provide services and
technologies to make your solutions on Azure more secure. Because of the many options available to you, many of
you have voiced an interest in what Microsoft recommends as best practices and patterns for improving security.
We understand your interest and have created a collection of documents that describe things you can do, given the
right context, to improve the security of Azure deployments.
In these best practices and patterns articles, we discuss a collection of best practices and useful patterns for specific
topics. These best practices and patterns are derived from our experiences with these technologies and the
experiences of customers like yourself.
For each best practice we strive to explain:
What the best practice is
Why you want to enable that best practice
What might be the result if you fail to enable the best practice
Possible alternatives to the best practice
How you can learn to enable the best practice
We look forward to including many more articles on Azure security architecture and best practices. If there are
topics that you'd like us to include, let us know in the discussion area at the bottom of this page.
Azure Security Services and Technologies
8/21/2017 • 1 min to read
In our discussions with current and future Azure customers, we’re often asked “do you have a list of all the security
related services and technologies that Azure has to offer?”
We understand that when you’re evaluating your cloud service provider technical options, it’s helpful to have such
a list available that you can use to dig down deeper when the time is right for you.
The following is our initial effort at providing a list. Over time, this list will change and grow, just as Azure does. The
list is categorized, and the list of categories will also grow over time. Make sure to check this page on a regular
basis to stay up-to-date on our security-related services and technologies.
Azure Networking
Network Security Groups
Azure VPN Gateway
Azure Application Gateway
Azure Load Balancer
Azure ExpressRoute
Azure Traffic Manager
Azure Application Proxy
Azure Network Security Best Practices
6/27/2017 • 17 min to read
Microsoft Azure enables you to connect virtual machines and appliances to other networked devices by placing
them on Azure Virtual Networks. An Azure Virtual Network is a virtual network construct that allows you to
connect virtual network interface cards to a virtual network to allow TCP/IP-based communications between
network enabled devices. Azure Virtual Machines connected to an Azure Virtual Network are able to connect to
devices on the same Azure Virtual Network, different Azure Virtual Networks, on the Internet or even on your own
on-premises networks.
In this article we will discuss a collection of Azure network security best practices. These best practices are derived
from our experience with Azure networking and the experiences of customers like yourself.
For each best practice, we’ll explain:
What the best practice is
Why you want to enable that best practice
What might be the result if you fail to enable the best practice
Possible alternatives to the best practice
How you can learn to enable the best practice
This Azure Network Security Best Practices article is based on a consensus opinion, and Azure platform capabilities
and feature sets, as they exist at the time this article was written. Opinions and technologies change over time and
this article will be updated on a regular basis to reflect those changes.
Azure Network security best practices discussed in this article include:
Logically segment subnets
Control routing behavior
Enable Forced Tunneling
Use Virtual network appliances
Deploy DMZs for security zoning
Avoid exposure to the Internet with dedicated WAN links
Optimize uptime and performance
Use global load balancing
Disable RDP Access to Azure Virtual Machines
Enable Azure Security Center
Extend your datacenter into Azure
NOTE
User Defined Routes are not required, and the default system routes will work in most instances.
You can learn more about User Defined Routes and how to configure them by reading the article What are User
Defined Routes and IP Forwarding.
We know that security is job one in the cloud and how important it is that you find accurate and timely information
about Azure security. One of the best reasons to use Azure for your applications and services is to take advantage
of Azure’s wide array of security tools and capabilities. These tools and capabilities help make it possible to create
secure solutions on the Azure platform.
Microsoft Azure provides confidentiality, integrity, and availability of customer data, while also enabling transparent
accountability. To help you better understand the collection of network security controls implemented within
Microsoft Azure from the customer's perspective, this article, “Azure Network Security,” is written to provide a
comprehensive look at the network security controls available with Microsoft Azure.
This paper is intended to inform you about the wide range of network controls that you can configure to enhance
the security of the solutions you deploy in Azure. If you are interested in what Microsoft does to secure the network
fabric of the Azure platform itself, see the Azure security section in the Microsoft Trust Center.
Azure platform
Azure is a public cloud service platform that supports a broad selection of operating systems, programming
languages, frameworks, tools, databases, and devices. It can run Linux containers with Docker integration; build
apps with JavaScript, Python, .NET, PHP, Java, and Node.js; build back-ends for iOS, Android, and Windows devices.
Azure cloud services support the same technologies millions of developers and IT professionals already rely on and
trust.
When you build on, or migrate IT assets to, a public cloud service provider, you are relying on that organization’s
abilities to protect your applications and data with the services and the controls they provide to manage the
security of your cloud-based assets.
Azure’s infrastructure is designed from the facility to applications for hosting millions of customers simultaneously,
and it provides a trustworthy foundation upon which businesses can meet their security requirements. In addition,
Azure provides you with an extensive collection of configurable security options and the ability to control them so
that you can customize security to meet the unique requirements of your organization’s deployments.
Abstract
Microsoft public cloud services deliver hyper-scale services and infrastructure, enterprise-grade capabilities, and
many choices for hybrid connectivity. You can choose to access these services either via the Internet or with Azure
ExpressRoute, which provides private network connectivity. The Microsoft Azure platform allows you to seamlessly
extend your infrastructure into the cloud and build multi-tier architectures. Additionally, third parties can enable
enhanced capabilities by offering security services and virtual appliances.
Azure’s network services maximize flexibility, availability, resiliency, security, and integrity by design. This white
paper provides details on the networking functions of Azure and information on how customers can use Azure’s
native security features to help protect their information assets.
The intended audiences for this whitepaper include:
Technical managers, network administrators, and developers who are looking for security solutions available
and supported in Azure.
SMEs or business process executives who want a high-level overview of the Azure networking
technologies and services that are relevant in discussions around network security in the Azure public cloud.
The Azure network infrastructure enables you to securely connect Azure resources to each other with virtual
networks (VNets). A VNet is a representation of your own network in the cloud. A VNet is a logical isolation of the
Azure cloud network dedicated to your subscription. You can connect VNets to your on-premises networks.
Azure supports dedicated WAN link connectivity to your on-premises network and an Azure Virtual Network with
ExpressRoute. The link between Azure and your site uses a dedicated connection that does not go over the public
Internet. If your Azure application is running in multiple datacenters, you can use Azure Traffic Manager to route
requests from users intelligently across instances of the application. You can also route traffic to services not
running in Azure if they are accessible from the Internet.
NOTE
Not all aspects of Azure networking are described; we discuss only those considered pivotal in planning and designing
a secure network infrastructure around the services and applications you deploy in Azure.
This paper covers the following Azure networking enterprise capabilities:
Basic network connectivity
Hybrid Connectivity
Security Controls
Network Validation
Basic network connectivity
The Azure Virtual Network service enables you to securely connect Azure resources to each other with virtual
networks (VNet). A VNet is a representation of your own network in the cloud. A VNet is a logical isolation of the
Azure network infrastructure dedicated to your subscription. You can also connect VNets to each other and to your
on-premises networks using site-to-site VPNs and dedicated WAN links.
With the understanding that you use VMs to host servers in Azure, the question is how those VMs connect to a
network. The answer is that VMs connect to an Azure Virtual Network.
Azure Virtual Networks are like the virtual networks you use on-premises with your own virtualization platform
solutions, such as Microsoft Hyper-V or VMware.
Intra-VNet connectivity
You can connect VNets to each other, enabling resources connected to either VNet to communicate with each other
across VNets. You can use either or both of the following options to connect VNets to each other:
Peering: Enables resources connected to different Azure VNets within the same Azure location to
communicate with each other. The bandwidth and latency across the VNets are the same as if the resources
were connected to the same VNet. To learn more about peering, read Virtual network peering.
VNet-to-VNet connection: Enables resources connected to different Azure VNets, within the same or
different Azure locations, to communicate with each other. Unlike peering, bandwidth is limited between
VNets because traffic must flow through an Azure VPN Gateway.
To learn more about connecting VNets with a VNet-to-VNet connection, read the Configure a VNet-to-VNet
connection article.
Azure virtual network capabilities:
As you can see, an Azure Virtual Network enables virtual machines to connect to the network so that they can
reach other network resources in a secure fashion. However, basic connectivity is just the beginning. The
following capabilities of the Azure Virtual Network service expose the security characteristics of the Azure
Virtual Network:
Isolation
Internet connectivity
Azure resource connectivity
VNet connectivity
On-premises connectivity
Traffic filtering
Routing
Isolation
VNets are isolated from one another. You can create separate VNets for development, testing, and production that
use the same CIDR address blocks. Conversely, you can create multiple VNets that use different CIDR address
blocks and connect networks together. You can segment a VNet into multiple subnets.
Azure provides internal name resolution for VMs and Cloud Services role instances connected to a VNet. You can
optionally configure a VNet to use your own DNS servers, instead of using Azure internal name resolution.
You can implement multiple VNets within each Azure subscription and Azure region. Each VNet is isolated from
other VNets. For each VNet you can:
Specify a custom private IP address space using public and private (RFC 1918) addresses. Azure assigns
resources connected to the VNet a private IP address from the address space you assign.
Segment the VNet into one or more subnets and allocate a portion of the VNet address space to each
subnet.
Use Azure-provided name resolution, or specify your own DNS server for use by resources connected to a
VNet. To learn more about name resolution in VNets, read Name resolution for VMs and Cloud Services.
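The address-space planning described above can be sketched with Python's standard ipaddress module. The VNet prefix and subnet sizes here are illustrative choices, not Azure defaults:

```python
import ipaddress

# An illustrative VNet address space using an RFC 1918 range.
vnet = ipaddress.ip_network("10.1.0.0/16")

# Segment the VNet into /24 subnets and allocate a few of them
# to tiers, as you might when planning a multi-tier deployment.
subnets = list(vnet.subnets(new_prefix=24))
frontend, backend, management = subnets[0], subnets[1], subnets[2]

print(frontend)    # 10.1.0.0/24
print(backend)     # 10.1.1.0/24

# Every subnet must fall inside the VNet's address space,
# and sibling subnets must not overlap.
assert frontend.subnet_of(vnet)
assert not frontend.overlaps(backend)
```

The same checks are useful when connecting VNets: peered or connected networks must not use overlapping address blocks.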
Internet connectivity
All Azure Virtual Machines (VM) and Cloud Services role instances connected to a VNet have access to the Internet
by default. You can also enable inbound access to specific resources, as needed.
All resources connected to a VNet have outbound connectivity to the Internet by default. The private IP address of
the resource is source network address translated (SNAT) to a public IP address by the Azure infrastructure. You
can change the default connectivity by implementing custom routing and traffic filtering. To learn more about
outbound Internet connectivity, read Understanding outbound connections in Azure.
To communicate inbound to Azure resources from the Internet, or to communicate outbound to the Internet
without SNAT, a resource must be assigned a public IP address. To learn more about public IP addresses, read
Public IP addresses.
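The SNAT behavior described above can be sketched as a mapping from private source tuples to a shared public IP and an ephemeral port. The public IP, port range, and per-flow mapping below are illustrative simplifications, not the actual Azure allocation algorithm:

```python
import itertools

PUBLIC_IP = "52.0.0.10"                  # illustrative platform public IP
ephemeral_ports = itertools.count(1024)  # illustrative port allocator
snat_table = {}

def snat(private_ip, private_port):
    # Reuse the existing mapping for a known private source tuple;
    # otherwise allocate a new public port on the shared public IP.
    key = (private_ip, private_port)
    if key not in snat_table:
        snat_table[key] = (PUBLIC_IP, next(ephemeral_ports))
    return snat_table[key]

a = snat("10.1.0.4", 50000)
b = snat("10.1.0.5", 50000)
print(a)  # ('52.0.0.10', 1024)
print(b)  # ('52.0.0.10', 1025)
assert snat("10.1.0.4", 50000) == a  # mapping is stable per flow
```

Two private sources sharing one public IP is why a resource needs its own public IP address to receive unsolicited inbound traffic: without a mapping, the platform has no way to route an inbound packet to a private address.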
Azure resource connectivity
Azure resources such as Cloud Services and VMs can be connected to the same VNet. The resources can connect to
each other using private IP addresses, even if they are in different subnets. Azure provides default routing between
subnets, VNets, and on-premises networks, so you don't have to configure and manage routes.
You can connect several Azure resources to a VNet, such as Virtual Machines (VM), Cloud Services, App Service
Environments, and Virtual Machine Scale Sets. VMs connect to a subnet within a VNet through a network interface
(NIC). To learn more about NICs, read Network interfaces.
VNet connectivity
VNets can be connected to each other, enabling resources connected to any VNet to communicate with any
resource on any other VNet.
You can connect VNets to each other, enabling resources connected to either VNet to communicate with each other
across VNets. You can use either or both of the following options to connect VNets to each other:
Peering: Enables resources connected to different Azure VNets within the same Azure location to
communicate with each other. The bandwidth and latency across the VNets are the same as if the resources
were connected to the same VNet. To learn more about peering, read Virtual network peering.
VNet-to-VNet connection: Enables resources connected to different Azure VNets, within the same or
different Azure locations, to communicate with each other. Unlike peering, bandwidth is limited between
VNets because traffic must flow through an Azure VPN Gateway. To learn more about connecting VNets with
a VNet-to-VNet connection, read the Configure a VNet-to-VNet connection article.
On-premises connectivity
VNets can be connected to on-premises networks through private network connections between your network and
Azure, or through a site-to-site VPN connection over the Internet.
You can connect your on-premises network to a VNet using any combination of the following options:
Point-to-site virtual private network (VPN): Established between a single PC connected to your network
and the VNet. This connection type is great if you're just getting started with Azure, or for developers,
because it requires little or no changes to your existing network. The connection uses the SSTP protocol to
provide encrypted communication over the Internet between the PC and the VNet. The latency for a point-
to-site VPN is unpredictable since the traffic traverses the Internet.
Site-to-site VPN: Established between your VPN device and an Azure VPN Gateway. This connection type
enables any on-premises resource you authorize to access a VNet. The connection is an IPsec/IKE VPN that
provides encrypted communication over the Internet between your on-premises device and the Azure VPN
gateway. The latency for a site-to-site connection is unpredictable since the traffic traverses the Internet.
Azure ExpressRoute: Established between your network and Azure, through an ExpressRoute partner. This
connection is private. Traffic does not traverse the Internet. The latency for an ExpressRoute connection is
predictable since traffic doesn't traverse the Internet. To learn more about all the previous connection
options, read the Connection topology diagrams.
Traffic filtering
Network traffic to and from VM and Cloud Services role instances can be filtered inbound and outbound by source
IP address and port, destination IP address and port, and protocol.
You can filter network traffic between subnets using either or both of the following options:
Network security groups (NSG): Each NSG can contain multiple inbound and outbound security rules that
enable you to filter traffic by source and destination IP address, port, and protocol. You can apply an NSG to
each NIC in a VM. You can also apply an NSG to the subnet a NIC, or other Azure resource, is connected to.
To learn more about NSGs, read Network security groups.
Virtual Network Appliances: A virtual network appliance is a VM running software that performs a
network function, such as a firewall. View a list of available NVAs in the Azure Marketplace. NVAs are also
available that provide WAN optimization and other network traffic functions. NVAs are typically used with
user-defined or BGP routes. You can also use an NVA to filter traffic between VNets.
Routing
You can optionally override Azure's default routing by configuring your own routes, or using BGP routes through a
network gateway.
Azure creates route tables that enable resources connected to any subnet in any VNet to communicate with each
other, by default. You can implement either or both of the following options to override the default routes Azure
creates:
User-defined routes: You can create custom route tables with routes that control where traffic is routed to
for each subnet. To learn more about user-defined routes, read User-defined routes.
BGP routes: If you connect your VNet to your on-premises network using an Azure VPN Gateway or
ExpressRoute connection, you can propagate BGP routes to your VNets.
Hybrid internet connectivity: Connect to an on-premises network
You can connect your on-premises network to a VNet using any combination of the following options:
Internet connectivity
Point-to-site VPN (P2S VPN)
Site-to-Site VPN (S2S VPN)
ExpressRoute
Internet Connectivity
As its name suggests, Internet connectivity makes your workloads accessible from the Internet by exposing public
endpoints to workloads that live inside the virtual network. These workloads can be exposed using an
Internet-facing load balancer, or by simply assigning a public IP address to the VM. This makes it possible for
anything on the Internet to reach that virtual machine, provided the host firewall, network security groups
(NSGs), and user-defined routes allow it.
In this scenario, you could expose an application that needs to be public to the Internet and be able to connect to it
from anywhere, or from specific locations depending on the configuration of your workloads.
Point-to-Site VPN or Site-to-Site VPN
These two options fall into the same category. Both require your VNet to have a VPN gateway, and you can connect
to it either by using a VPN client on your workstation as part of a Point-to-Site configuration, or by configuring
your on-premises VPN device to terminate a site-to-site VPN. Either way, on-premises devices can connect to
resources within the VNet.
A Point-to-Site (P2S) configuration lets you create a secure connection from an individual client computer to a
virtual network. P2S is a VPN connection over SSTP (Secure Socket Tunneling Protocol).
Point-to-Site connections are useful when you want to connect to your VNet from a remote location, such as from
home or a conference center, or when you only have a few clients that need to connect to a virtual network.
P2S connections do not require a VPN device or a public-facing IP address. You establish the VPN connection from
the client computer. Therefore, P2S is not the recommended way to connect to Azure if you need a persistent
connection from many on-premises devices and computers to your Azure network.
NOTE
For more information about Point-to-Site connections, see the Point-to-Site FAQ.
A Site-to-Site VPN gateway connection is used to connect your on-premises network to an Azure virtual network
over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel.
This type of connection requires a VPN device located on-premises that has an externally facing public IP address
assigned to it. This connection takes place over the Internet and allows you to “tunnel” information inside an
encrypted link between your network and Azure. Site-to-site VPN is a secure, mature technology that has been
deployed by enterprises of all sizes for decades. Tunnel encryption is performed using IPsec tunnel mode.
While site-to-site VPN is a trusted, reliable, and established technology, traffic within the tunnel does traverse the
Internet. In addition, bandwidth is relatively constrained to a maximum of about 200 Mbps.
If you require an exceptional level of security or performance for your cross-premises connections, we recommend
that you use Azure ExpressRoute for your cross-premises connectivity. ExpressRoute is a dedicated WAN link
between Azure and your on-premises location or an Exchange hosting provider. Because this is a telco connection,
your data doesn't travel over the Internet and therefore is not exposed to the potential risks inherent in Internet
communications.
NOTE
For more information about VPN gateways, see About VPN gateway.
NOTE
For information on how to connect your network to Microsoft using ExpressRoute, see ExpressRoute connectivity models and
ExpressRoute technical overview.
As with the site-to-site VPN options, ExpressRoute also allows you to connect to resources that are not necessarily
in only one VNet. In fact, depending on the SKU, you can connect to up to 10 VNets. If you have the premium
add-on, connections to up to 100 VNets are possible, depending on bandwidth. To learn more about what these
types of connections look like, read Connection topology diagrams.
Security controls
An Azure Virtual Network provides a secure, logical network that is isolated from other virtual networks and
supports many of the security controls that you use on your on-premises networks. Customers create their own
structure by using subnets with their own private IP address ranges, and they configure route tables, network
security groups, access control lists (ACLs), gateways, and virtual appliances to run their workloads in the cloud.
The following are security controls you can use on your Azure Virtual Networks:
Network Access Controls
User-Defined Routes
Network Security Appliance
Application Gateway
Azure Web Application Firewall
Network Availability Control
Network access controls
While the Azure Virtual Network (VNet) is the cornerstone of the Azure networking model and provides isolation
and protection, Network Security Groups (NSGs) are the main tool you use to enforce and control network traffic
rules at the network level.
You can control access by permitting or denying communication between the workloads within a virtual network,
from systems on customer’s networks via cross-premises connectivity, or direct Internet communication.
In the diagram, both VNets and NSGs reside in a specific layer in the Azure overall security stack, where NSGs, UDR,
and network virtual appliances can be used to create security boundaries to protect the application deployments in
the protected network.
NSGs evaluate traffic by using a 5-tuple, the same fields you use in the rules you configure for the NSG:
Source and destination IP address
Source and destination port
Protocol: Transmission Control Protocol (TCP) or User Datagram Protocol (UDP)
This means you can control access between a single VM and a group of VMs, or a single VM to another single VM,
or between entire subnets. Again, keep in mind that this is simple stateful packet filtering, not full packet inspection.
There is no protocol validation or network level IDS or IPS capability in a Network Security Group.
An NSG comes with some built-in rules that you should be aware of. These are:
Allow all traffic within a specific virtual network: All VMs on the same Azure Virtual Network can
communicate with each other.
Allow Azure load balancing to inbound: This rule allows traffic from any source address to any
destination address for the Azure load balancer.
Deny all inbound: This rule blocks all traffic sourced from the Internet that you have not explicitly allowed.
Allow all traffic outbound to the Internet: This rule allows VMs to initiate connections to the Internet. If
you do not want these connections initiated, you need to create a rule to block those connections or enforce
forced tunneling.
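The behavior of these built-in rules can be illustrated with a small sketch: rules are evaluated in priority order (lower number first), and the first match decides the outcome, with an implicit deny at the end. The rule set, priorities, and matching fields below are simplified stand-ins, not the exact defaults Azure provisions:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    priority: int   # lower number = evaluated first
    direction: str  # "inbound" or "outbound"
    action: str     # "allow" or "deny"
    source: str     # simplified: a service tag or "*"
    dest_port: str  # simplified: a port number or "*"

# Simplified stand-ins for the built-in NSG rules described above.
rules = [
    Rule(65000, "inbound", "allow", "VirtualNetwork", "*"),
    Rule(65001, "inbound", "allow", "AzureLoadBalancer", "*"),
    Rule(65500, "inbound", "deny", "*", "*"),
    Rule(65000, "outbound", "allow", "Internet", "*"),
]

def evaluate(direction, source, dest_port):
    # First matching rule in ascending priority order decides.
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.direction != direction:
            continue
        if rule.source in ("*", source) and rule.dest_port in ("*", dest_port):
            return rule.action
    return "deny"  # implicit deny if nothing matches

print(evaluate("inbound", "VirtualNetwork", "1433"))  # allow (intra-VNet traffic)
print(evaluate("inbound", "Internet", "3389"))        # deny (falls to the deny-all rule)
```

This also shows why a custom rule with a lower priority number than the built-in rules can override them, which is how you block outbound Internet connections or admit specific inbound traffic.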
System routes and user-defined routes
When you add virtual machines (VMs) to a virtual network (VNet) in Azure, you notice that the VMs are able to
communicate with each other over the network, automatically. You do not need to specify a gateway, even though
the VMs are in different subnets.
The same is true for communication from the VMs to the public Internet, and even to your on-premises network
when a hybrid connection from Azure to your own datacenter is present.
This flow of communication is possible because Azure uses a series of system routes to define how IP traffic flows.
System routes control the flow of communication in the following scenarios:
From within the same subnet.
From a subnet to another within a VNet.
From VMs to the Internet.
From a VNet to another VNet through a VPN gateway.
From a VNet to another VNet through VNet Peering (Service Chaining).
From a VNet to your on-premises network through a VPN gateway.
Many enterprises have strict security and compliance requirements that require on-premises inspection of all
network packets to enforce specific policies. Azure provides a mechanism called forced tunneling that routes traffic
from the VMs to on-premises by creating a custom route or by Border Gateway Protocol (BGP) advertisements
through ExpressRoute or VPN.
Forced tunneling in Azure is configured via virtual network user-defined routes (UDR). Redirecting traffic to an on-
premises site is expressed as a Default Route to the Azure VPN gateway.
The following section lists the current limitations of the routing table and routes for an Azure Virtual Network:
Each virtual network subnet has a built-in, system routing table. The system routing table has the following
three groups of routes:
Local VNet routes: Directly to the destination VMs in the same virtual network
On premises routes: To the Azure VPN gateway
Default route: Directly to the Internet. Packets destined to the private IP addresses not covered by
the previous two routes are dropped.
With the release of user-defined routes, you can create a routing table to add a default route, and then
associate the routing table to your VNet subnet to enable forced tunneling on those subnets.
You need to set a "default site" among the cross-premises local sites connected to the virtual network.
Forced tunneling must be associated with a VNet that has a dynamic routing VPN gateway (not a static
gateway).
ExpressRoute forced tunneling is not configured via this mechanism, but instead, is enabled by advertising a
default route via the ExpressRoute BGP peering sessions.
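Selection among these route groups follows longest-prefix match: the most specific prefix covering the destination wins, so a user-defined 0.0.0.0/0 route (the forced-tunneling default route) only applies when nothing more specific matches. A sketch, with illustrative prefixes and next-hop names:

```python
import ipaddress

# Illustrative route table: (prefix, next hop type).
routes = [
    ("10.1.0.0/16", "VNetLocal"),      # local VNet routes
    ("192.168.0.0/16", "VPNGateway"),  # on-premises routes
    ("0.0.0.0/0", "VPNGateway"),       # forced-tunneling default route (UDR)
]

def next_hop(dest_ip):
    # Longest-prefix match: the most specific covering route wins.
    addr = ipaddress.ip_address(dest_ip)
    candidates = [
        (ipaddress.ip_network(prefix), hop)
        for prefix, hop in routes
        if addr in ipaddress.ip_network(prefix)
    ]
    return max(candidates, key=lambda c: c[0].prefixlen)[1]

print(next_hop("10.1.2.3"))  # VNetLocal: stays inside the VNet
print(next_hop("8.8.8.8"))   # VPNGateway: Internet-bound traffic is forced on-premises
```

With the default route pointed at the VPN gateway, Internet-bound traffic from the subnet is tunneled on-premises for inspection rather than leaving Azure directly.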
NOTE
For more information, see the ExpressRoute documentation.
NOTE
Resource Manager provides a new way to deploy and manage your solutions. If you used the earlier deployment model and
want to learn about the changes, see Understanding Resource Manager deployment and classic deployment.
NOTE
For more information on Audit logs, see Audit operations with Resource Manager. Audit logs are available for operations
done on all network resources.
Metrics
Metrics are performance measurements and counters collected over a period. Metrics are currently available for
Application Gateway. Metrics can be used to trigger alerts based on thresholds. Azure Application Gateway by
default monitors the health of all resources in its back-end pool and automatically removes any resource
considered unhealthy from the pool. Application Gateway continues to monitor the unhealthy instances and adds
them back to the healthy back-end pool once they become available and respond to health probes. Application
Gateway sends the health probes on the same port that is defined in the back-end HTTP settings. This
configuration ensures that the probe tests the same port that customers use to connect to the
backend.
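The probe-and-evict behavior described above can be sketched as a simple refresh loop. The probe function, pool structure, and addresses here are illustrative, not the Application Gateway implementation:

```python
def refresh_pool(backends, probe):
    """Partition backends into healthy/unhealthy based on a probe.

    probe(host, port) returns True when the backend answers a health
    probe on the same port used for backend traffic, mirroring how
    Application Gateway probes the port from the back-end HTTP settings.
    """
    healthy, unhealthy = [], []
    for host, port in backends:
        (healthy if probe(host, port) else unhealthy).append((host, port))
    return healthy, unhealthy

# Illustrative: backend "10.1.1.5" is currently down.
down = {("10.1.1.5", 80)}
probe = lambda host, port: (host, port) not in down

backends = [("10.1.1.4", 80), ("10.1.1.5", 80)]
healthy, unhealthy = refresh_pool(backends, probe)
print(healthy)  # [('10.1.1.4', 80)]

# Once the instance responds to probes again, the next refresh
# returns it to the healthy pool, as the gateway does automatically.
down.clear()
healthy, unhealthy = refresh_pool(backends, probe)
print(healthy)  # both backends are healthy again
```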
NOTE
See Application Gateway Diagnostics to view how metrics can be used to create alerts.
Diagnostic logs
Periodic and spontaneous events are created by network resources and can be logged in storage accounts, sent to
an Event Hub, or sent to Log Analytics. These logs provide insight into the health of a resource. They can be
viewed in tools such as Power BI and Log Analytics. To learn how to view diagnostic logs, visit Log Analytics.
Diagnostic logs are available for Load Balancer, Network Security Groups, Routes, and Application Gateway.
Network Watcher provides a diagnostic logs view. This view contains all networking resources that support
diagnostic logging. From this view, you can conveniently and quickly enable and disable diagnostic logging for
networking resources.
Log analytics
Log Analytics is a service in Operations Management Suite (OMS) that monitors your cloud and on-premises
environments to maintain their availability and performance. It collects data generated by resources in your cloud
and on-premises environments and from other monitoring tools to provide analysis across multiple sources.
Log Analytics offers the following solutions for monitoring your networks:
Network Performance Monitor (NPM)
Azure Application Gateway analytics
Azure Network Security Group analytics
Network performance monitor (NPM)
The Network Performance Monitor management solution is a network monitoring solution that monitors the
health, availability, and reachability of networks.
It is used to monitor connectivity between:
public cloud and on-premises
data centers and user locations (branch offices)
subnets hosting various tiers of a multi-tiered application.
Azure application gateway analytics in log analytics
The following logs are supported for Application Gateways:
ApplicationGatewayAccessLog
ApplicationGatewayPerformanceLog
ApplicationGatewayFirewallLog
The following metrics are supported for Application Gateways:
5-minute throughput
Azure network security group analytics in log analytics
The following logs are supported for network security groups:
NetworkSecurityGroupEvent: Contains entries for which NSG rules are applied to VMs and instance roles
based on MAC address. The status for these rules is collected every 60 seconds.
NetworkSecurityGroupRuleCounter: Contains entries for how many times each NSG rule is applied to
deny or allow traffic.
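To make the rule-counter idea concrete, here is a sketch that tallies allow/deny decisions per rule from log entries. The record fields are hypothetical simplifications, not the exact field names of the NetworkSecurityGroupRuleCounter schema:

```python
from collections import Counter

# Hypothetical, simplified rule-counter records.
entries = [
    {"rule": "allow-https", "type": "allow"},
    {"rule": "allow-https", "type": "allow"},
    {"rule": "deny-all-inbound", "type": "deny"},
]

# Tally how many times each NSG rule was applied, per decision,
# which is the question NetworkSecurityGroupRuleCounter data answers.
hits = Counter((e["rule"], e["type"]) for e in entries)
for (rule, decision), count in hits.most_common():
    print(f"{rule} ({decision}): {count}")
```

An unexpectedly high deny count on a rule is often the first signal of either an attack or a misconfigured workload.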
Next steps
Find out more about security by reading some of our in-depth security topics:
Log Analytics for Network Security Groups (NSGs)
Networking innovations that drive the cloud disruption
SONiC: The networking switch software that powers the Microsoft Global Cloud
How Microsoft builds its fast and reliable global network
Lighting up network innovation
Microsoft cloud services and network security
6/27/2017 • 37 min to read
Microsoft cloud services deliver hyper-scale services and infrastructure, enterprise-grade capabilities, and many
choices for hybrid connectivity. Customers can choose to access these services either via the Internet or with Azure
ExpressRoute, which provides private network connectivity. The Microsoft Azure platform allows customers to
seamlessly extend their infrastructure into the cloud and build multi-tier architectures. Additionally, third parties
can enable enhanced capabilities by offering security services and virtual appliances. This white paper provides an
overview of security and architectural issues that customers should consider when using Microsoft cloud services
accessed via ExpressRoute. It also covers creating more secure services in Azure virtual networks.
Fast start
The following logic chart can direct you to a specific example of the many security techniques available with the
Azure platform. For quick reference, find the example that best fits your case. For expanded explanations, continue
reading through the paper.
Example 1: Build a perimeter network (also known as DMZ, demilitarized zone, or screened subnet) to help protect
applications with network security groups (NSGs).
Example 2: Build a perimeter network to help protect applications with a firewall and NSGs.
Example 3: Build a perimeter network to help protect networks with a firewall, user-defined route (UDR), and NSG.
Example 4: Add a hybrid connection with a site-to-site, virtual appliance virtual private network (VPN).
Example 5: Add a hybrid connection with a site-to-site, Azure VPN gateway.
Example 6: Add a hybrid connection with ExpressRoute.
Examples for adding connections between virtual networks, high availability, and service chaining will be added to
this document over the next few months.
This approach provides a more secure foundation for customers to deploy their services in the Microsoft cloud.
The next step is for customers to design and create a security architecture to protect these services.
Inbound from the Internet, Azure DDoS helps protect against large-scale attacks against Azure. The next layer is
customer-defined public IP addresses (endpoints), which are used to determine which traffic can pass through the
cloud service to the virtual network. Native Azure virtual network isolation ensures complete isolation from all
other networks and that traffic only flows through user configured paths and methods. These paths and methods
are the next layer, where NSGs, UDR, and network virtual appliances can be used to create security boundaries to
protect the application deployments in the protected network.
The next section provides an overview of Azure virtual networks. These virtual networks are created by customers,
and are what their deployed workloads are connected to. Virtual networks are the basis of all the network security
features required to establish a perimeter network to protect customer deployments in Azure.
NOTE
Traffic isolation refers only to traffic inbound to the virtual network. By default outbound traffic from the VNet to the
internet is allowed, but can be prevented if desired by NSGs.
Multi-tier topology: Virtual networks allow customers to define multi-tier topology by allocating subnets and
designating separate address spaces for different elements or “tiers” of their workloads. These logical
groupings and topologies enable customers to define different access policies based on the workload types, and
also control traffic flows between the tiers.
Cross-premises connectivity: Customers can establish cross-premises connectivity between a virtual network
and multiple on-premises sites or other virtual networks in Azure. To construct a connection, customers can use
VNet Peering, Azure VPN Gateways, third-party network virtual appliances, or ExpressRoute. Azure supports
site-to-site (S2S) VPNs using standard IPsec/IKE protocols and ExpressRoute private connectivity.
NSGs allow customers to create rules (ACLs) at the desired level of granularity: network interfaces, individual
VMs, or virtual subnets. Customers can control access by permitting or denying communication between the
workloads within a virtual network, from systems on customer’s networks via cross-premises connectivity, or
direct Internet communication.
UDR and IP Forwarding allow customers to define the communication paths between different tiers within a
virtual network. Customers can deploy a firewall, IDS/IPS, and other virtual appliances, and route network
traffic through these security appliances for security boundary policy enforcement, auditing, and inspection.
Network virtual appliances in the Azure Marketplace: Security appliances such as firewalls, load balancers,
and IDS/IPS are available in the Azure Marketplace and the VM Image Gallery. Customers can deploy these
appliances into their virtual networks, and specifically, at their security boundaries (including the perimeter
network subnets) to complete a multi-tiered secure network environment.
With these features and capabilities, one example of how a perimeter network architecture could be constructed in
Azure is the following diagram:
TIP
Keep the following two groups separate: the individuals authorized to access the perimeter network security gear and the
individuals authorized as application development, deployment, or operations administrators. Keeping these groups separate
allows for a segregation of duties and prevents a single person from bypassing both applications security and network
security controls.
TIP
Use the smallest number of boundaries that satisfy the security requirements for a given situation. With more boundaries,
operations and troubleshooting can be more difficult, as well as the management overhead involved with managing the
multiple boundary policies over time. However, insufficient boundaries increase risk. Finding the balance is critical.
The preceding figure shows a high-level view of a three security boundary network. The boundaries are between
the perimeter network and the Internet, the Azure front-end and back-end private subnets, and the Azure back-end
subnet and the on-premises corporate network.
2) Where are the boundaries located?
Once the number of boundaries is decided, where to implement them is the next decision point. There are
generally three choices:
Using an Internet-based intermediary service (for example, a cloud-based Web application firewall, which is not
discussed in this document)
Using native features and/or network virtual appliances in Azure
Using physical devices on the on-premises network
On purely Azure networks, the options are native Azure features (for example, Azure Load Balancers) or network
virtual appliances from the rich partner ecosystem of Azure (for example, Check Point firewalls).
If a boundary is needed between Azure and an on-premises network, the security devices can reside on either side
of the connection (or both sides). Thus a decision must be made about where to place the security gear.
In the previous figure, the Internet-to-perimeter network and the front-to-back-end boundaries are entirely
contained within Azure, and must be either native Azure features or network virtual appliances. Security devices on
the boundary between Azure (back-end subnet) and the corporate network could be either on the Azure side or
the on-premises side, or even a combination of devices on both sides. There can be significant advantages and
disadvantages to either option that must be seriously considered.
For example, using existing physical security gear on the on-premises network side has the advantage that no new
gear is needed. It just needs reconfiguration. The disadvantage, however, is that all traffic must come back from
Azure to the on-premises network to be seen by the security gear. Thus Azure-to-Azure traffic could incur
significant latency, and affect application performance and user experience, if it was forced back to the on-
premises network for security policy enforcement.
3) How are the boundaries implemented?
Each security boundary will likely have different capability requirements (for example, IDS and firewall rules on the
Internet side of the perimeter network, but only ACLs between the perimeter network and back-end subnet).
Deciding on which device (or how many devices) to use depends on the scenario and security requirements. In the
following section, examples 1, 2, and 3 discuss some options that could be used. Reviewing the Azure native
network features and the devices available in Azure from the partner ecosystem shows the myriad options
available to solve virtually any scenario.
Another key implementation decision point is how to connect the on-premises network with Azure. Should you
use the Azure virtual gateway or a network virtual appliance? These options are discussed in greater detail in the
following section (examples 4, 5, and 6).
Additionally, traffic between virtual networks within Azure may be needed. These scenarios will be added in the
future.
Once you know the answers to the previous questions, the Fast Start section can help identify which examples are
most appropriate for a given scenario.
Environment description
In this example, there is a subscription that contains the following resources:
A single resource group
A virtual network with two subnets: “FrontEnd” and “BackEnd”
A Network Security Group that is applied to both subnets
A Windows server that represents an application web server (“IIS01”)
Two Windows servers that represent application back-end servers (“AppVM01”, “AppVM02”)
A Windows server that represents a DNS server (“DNS01”)
A public IP associated with the application web server
For scripts and an Azure Resource Manager template, see the detailed build instructions.
NSG description
In this example, a network security group (NSG) is built and then loaded with six rules.
TIP
Generally speaking, you should create your specific “Allow” rules first, followed by the more generic “Deny” rules. The given
priority dictates which rules are evaluated first. Once traffic is found to apply to a specific rule, no further rules are evaluated.
NSG rules can apply in either the inbound or outbound direction (from the perspective of the subnet).
Declaratively, the following rules are being built for inbound traffic:
1. Internal DNS traffic (port 53) is allowed.
2. RDP traffic (port 3389) from the Internet to any Virtual Machine is allowed.
3. HTTP traffic (port 80) from the Internet to web server (IIS01) is allowed.
4. Any traffic (all ports) from IIS01 to AppVM01 is allowed.
5. Any traffic (all ports) from the Internet to the entire virtual network (both subnets) is denied.
6. Any traffic (all ports) from the front-end subnet to the back-end subnet is denied.
With these rules bound to each subnet, if an HTTP request was inbound from the Internet to the web server, both
rules 3 (allow) and 5 (deny) would apply. But because rule 3 has a higher priority, only it would apply, and rule 5
would not come into play. Thus the HTTP request would be allowed to the web server. If that same traffic was
trying to reach the DNS01 server, rule 5 (deny) would be the first to apply, and the traffic would not be allowed to
pass to the server. Rule 6 (deny) blocks the front-end subnet from talking to the back-end subnet (except for
allowed traffic in rules 1 and 4). This rule-set protects the back-end network in case an attacker compromises the
web application on the front end. The attacker would have limited access to the back-end “protected” network
(only to resources exposed on the AppVM01 server).
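The first-match behavior described above can be sketched in code. The following is an illustrative simulation, not an Azure SDK call: rules are checked in priority order, the first matching rule's action wins, and no further rules are evaluated. The rule names, priorities, and simplified matching logic are assumptions chosen to mirror the six rules listed above.

```python
# Illustrative sketch of NSG first-match evaluation (assumed priorities).
from typing import NamedTuple, Optional

class Rule(NamedTuple):
    priority: int        # lower number = evaluated first
    source: str          # "Internet", a VM/subnet name, or "*" (any)
    dest: str            # a VM/subnet name, or "*" (any)
    port: Optional[int]  # None = all ports
    action: str          # "Allow" or "Deny"

RULES = sorted([
    Rule(100, "*",        "DNS01",   53,   "Allow"),  # 1. internal DNS
    Rule(110, "Internet", "*",       3389, "Allow"),  # 2. RDP to any VM
    Rule(120, "Internet", "IIS01",   80,   "Allow"),  # 3. HTTP to web server
    Rule(130, "IIS01",    "AppVM01", None, "Allow"),  # 4. IIS01 -> AppVM01
    Rule(200, "Internet", "*",       None, "Deny"),   # 5. deny Internet
    Rule(210, "FrontEnd", "BackEnd", None, "Deny"),   # 6. deny front->back
], key=lambda r: r.priority)

def evaluate(source: str, dest: str, port: int) -> str:
    """Return the action of the first matching rule; once a rule matches,
    no further rules are evaluated."""
    for r in RULES:
        if (r.source in (source, "*") and r.dest in (dest, "*")
                and r.port in (port, None)):
            return r.action
    return "Allow"  # the platform's default rules would apply; simplified

# HTTP from the Internet to IIS01: rule 3 matches before rule 5 -> allowed.
print(evaluate("Internet", "IIS01", 80))   # Allow
# The same traffic aimed at DNS01 matches rule 5 first -> denied.
print(evaluate("Internet", "DNS01", 80))   # Deny
```

This is only a model of the matching logic; real NSGs match on address prefixes, protocols, and port ranges rather than names.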
There is a default outbound rule that allows traffic out to the Internet. For this example, we’re allowing outbound
traffic and not modifying any outbound rules. To lock down traffic in both directions, user-defined routing is
required (see example 3).
Conclusion
This example is a relatively simple and straightforward way of isolating the back-end subnet from inbound traffic.
For more information, see the detailed build instructions. These instructions include:
How to build this perimeter network with classic PowerShell scripts.
How to build this perimeter network with an Azure Resource Manager template.
Detailed descriptions of each NSG command.
Detailed traffic flow scenarios, showing how traffic is allowed or denied in each layer.
Example 2 Build a perimeter network to help protect applications with a firewall and NSGs
Back to Fast start | Detailed build instructions for this example
Environment description
In this example, there is a subscription that contains the following resources:
A single resource group
A virtual network with two subnets: “FrontEnd” and “BackEnd”
A Network Security Group that is applied to both subnets
A network virtual appliance, in this case a firewall, connected to the front-end subnet
A Windows server that represents an application web server (“IIS01”)
Two Windows servers that represent application back-end servers (“AppVM01”, “AppVM02”)
A Windows server that represents a DNS server (“DNS01”)
For scripts and an Azure Resource Manager template, see the detailed build instructions.
NSG description
In this example, a network security group (NSG) is built and then loaded with six rules.
TIP
Generally speaking, you should create your specific “Allow” rules first, followed by the more generic “Deny” rules. The given
priority dictates which rules are evaluated first. Once traffic is found to apply to a specific rule, no further rules are evaluated.
NSG rules can apply in either the inbound or outbound direction (from the perspective of the subnet).
Declaratively, the following rules are being built for inbound traffic:
1. Internal DNS traffic (port 53) is allowed.
2. RDP traffic (port 3389) from the Internet to any Virtual Machine is allowed.
3. Any Internet traffic (all ports) to the network virtual appliance (firewall) is allowed.
4. Any traffic (all ports) from IIS01 to AppVM01 is allowed.
5. Any traffic (all ports) from the Internet to the entire virtual network (both subnets) is denied.
6. Any traffic (all ports) from the front-end subnet to the back-end subnet is denied.
With these rules bound to each subnet, if an HTTP request was inbound from the Internet to the firewall, both rules
3 (allow) and 5 (deny) would apply. But because rule 3 has a higher priority, only it would apply, and rule 5 would
not come into play. Thus the HTTP request would be allowed to the firewall. If that same traffic was trying to reach
the IIS01 server, even though it’s on the front-end subnet, rule 5 (deny) would apply, and the traffic would not be
allowed to pass to the server. Rule 6 (deny) blocks the front-end subnet from talking to the back-end subnet
(except for allowed traffic in rules 1 and 4). This rule-set protects the back-end network in case an attacker
compromises the web application on the front end. The attacker would have limited access to the back-end
“protected” network (only to resources exposed on the AppVM01 server).
There is a default outbound rule that allows traffic out to the Internet. For this example, we’re allowing outbound
traffic and not modifying any outbound rules. To lock down traffic in both directions, user-defined routing is
required (see example 3).
Firewall rule description
On the firewall, forwarding rules should be created. Since this example only routes inbound Internet traffic to the
firewall and then on to the web server, only one forwarding network address translation (NAT) rule is needed.
The forwarding rule accepts any inbound source address that hits the firewall on port 80 (HTTP) or port 443
(HTTPS). The traffic is sent out of the firewall’s local interface and redirected to the web server at IP address
10.0.1.5.
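The logic of that single destination-NAT rule can be sketched as follows. This is an illustrative model only; real appliances configure forwarding through their own management interface, and the web server address is the 10.0.1.5 value from the example above.

```python
# Illustrative sketch of the one DNAT forwarding rule described above:
# inbound connections reaching the firewall on port 80 or 443 have their
# destination rewritten to the web server; everything else has no rule.
from typing import Optional, Tuple

WEB_SERVER_IP = "10.0.1.5"     # IIS01, from the example environment
FORWARDED_PORTS = {80, 443}    # HTTP and HTTPS

def apply_nat(dst_port: int) -> Optional[Tuple[str, int]]:
    """Return the rewritten (ip, port) if the rule matches, else None
    (no forwarding rule applies and the connection is not forwarded)."""
    if dst_port in FORWARDED_PORTS:
        return (WEB_SERVER_IP, dst_port)
    return None

print(apply_nat(80))    # ('10.0.1.5', 80) -- forwarded to the web server
print(apply_nat(8080))  # None -- no forwarding rule matches
```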
Conclusion
This example is a relatively straightforward way of protecting your application with a firewall and isolating the
back-end subnet from inbound traffic. For more information, see the detailed build instructions. These instructions
include:
How to build this perimeter network with classic PowerShell scripts.
How to build this example with an Azure Resource Manager template.
Detailed descriptions of each NSG command and firewall rule.
Detailed traffic flow scenarios, showing how traffic is allowed or denied in each layer.
Example 3 Build a perimeter network to help protect networks with a firewall and UDR and NSG
Back to Fast start | Detailed build instructions for this example
Environment description
In this example, there is a subscription that contains the following resources:
A single resource group
A virtual network with three subnets: “SecNet”, “FrontEnd”, and “BackEnd”
A network virtual appliance, in this case a firewall, connected to the SecNet subnet
A Windows server that represents an application web server (“IIS01”)
Two Windows servers that represent application back-end servers (“AppVM01”, “AppVM02”)
A Windows server that represents a DNS server (“DNS01”)
For scripts and an Azure Resource Manager template, see the detailed build instructions.
UDR description
By default, the following system routes are defined as:
Effective routes :
Address Prefix Next hop type Next hop IP address Status Source
-------------- ------------- ------------------- ------ ------
{10.0.0.0/16} VNETLocal Active Default
{0.0.0.0/0} Internet Active Default
{10.0.0.0/8} Null Active Default
{100.64.0.0/10} Null Active Default
{172.16.0.0/12} Null Active Default
{192.168.0.0/16} Null Active Default
VNETLocal always represents the address prefix (or prefixes) that make up that specific virtual network (that is, it
changes from virtual network to virtual network, depending on how each virtual network is defined). The
remaining system routes are static defaults, as indicated in the table.
In this example, two routing tables are created, one each for the front-end and back-end subnets. Each table is
loaded with static routes appropriate for the given subnet. Each table has three routes:
1. Local subnet traffic, with no next hop defined, to allow local subnet traffic to bypass the firewall.
2. Virtual network traffic, with a next hop defined as the firewall. This next hop overrides the default rule that
allows local virtual network traffic to route directly.
3. All remaining traffic (0.0.0.0/0), with a next hop defined as the firewall (Next hop = virtual appliance IP
address).
TIP
Not having the local subnet entry in the UDR breaks local subnet communications.
In our example, 10.0.1.0/24 pointing to VNETLocal is critical! Without it, packets leaving the web server (10.0.1.4)
destined for another local server (for example, 10.0.1.25) will fail, because they are sent to the NVA. The NVA sends
them back to the subnet, and the subnet resends them to the NVA in an infinite loop.
The chances of a routing loop are typically higher on appliances with multiple NICs that are connected to separate
subnets, which is often the case with traditional, on-premises appliances.
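The loop the tip warns about follows from longest-prefix matching, which is how a route table picks a next hop. The following sketch uses the addresses from this example to show that with the local-subnet route present, traffic between 10.0.1.4 and 10.0.1.25 stays on the subnet, while without it the broader prefixes win and the packet is sent to the NVA.

```python
# Illustrative longest-prefix-match route lookup (addresses assumed
# from the example: front-end subnet 10.0.1.0/24, NVA at 10.0.0.4).
import ipaddress

def next_hop(dest, routes):
    """Pick the next hop of the route whose prefix matches dest with
    the longest prefix length (most specific route wins)."""
    ip = ipaddress.ip_address(dest)
    best = max(
        (p for p in routes if ip in ipaddress.ip_network(p)),
        key=lambda p: ipaddress.ip_network(p).prefixlen,
    )
    return routes[best]

with_local = {"10.0.1.0/24": "VNETLocal",
              "10.0.0.0/16": "VirtualAppliance 10.0.0.4",
              "0.0.0.0/0":   "VirtualAppliance 10.0.0.4"}
# The same table missing the critical local-subnet entry:
without_local = {"10.0.0.0/16": "VirtualAppliance 10.0.0.4",
                 "0.0.0.0/0":   "VirtualAppliance 10.0.0.4"}

print(next_hop("10.0.1.25", with_local))     # VNETLocal -- stays local
print(next_hop("10.0.1.25", without_local))  # sent to the NVA -> loop risk
```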
Once the routing tables are created, they must be bound to their subnets. The front-end subnet routing table, once
created and bound to the subnet, would look like this output:
Effective routes :
Address Prefix Next hop type Next hop IP address Status Source
-------------- ------------- ------------------- ------ ------
{10.0.1.0/24} VNETLocal Active
{10.0.0.0/16} VirtualAppliance 10.0.0.4 Active
{0.0.0.0/0} VirtualAppliance 10.0.0.4 Active
NOTE
UDR can now be applied to the gateway subnet on which the ExpressRoute circuit is connected.
Examples of how to enable your perimeter network with ExpressRoute or site-to-site networking are shown in examples 3
and 4.
IP Forwarding description
IP Forwarding is a companion feature to UDR. IP Forwarding is a setting on a virtual appliance that allows it to
receive traffic not specifically addressed to the appliance, and then forward that traffic to its ultimate destination.
For example, if AppVM01 makes a request to the DNS01 server, UDR would route this traffic to the firewall. With
IP Forwarding enabled, the traffic for the DNS01 destination (10.0.2.4) is accepted by the appliance (10.0.0.4) and
then forwarded to its ultimate destination (10.0.2.4). Without IP Forwarding enabled on the firewall, traffic would
not be accepted by the appliance even though the route table has the firewall as the next hop. To use a virtual
appliance, it’s critical to remember to enable IP Forwarding along with UDR.
NSG description
In this example, a network security group (NSG) is built and then loaded with a single rule. This group is then
bound only to the front-end and back-end subnets (not the SecNet). Declaratively, the following rule is being built:
Any traffic (all ports) from the Internet to the entire virtual network (all subnets) is denied.
Although an NSG is used in this example, its main purpose is to serve as a secondary layer of defense against
manual misconfiguration. The goal is to block all inbound traffic from the Internet to either the front-end or
back-end subnets. Traffic should only flow through the SecNet subnet to the firewall (and then, if appropriate, on
to the front-end or back-end subnets). Plus, with the UDR rules in place, any traffic that did make it into the
front-end or back-end subnets without passing through the firewall would have its return traffic routed to the
firewall. Because the firewall never saw the corresponding inbound flow, it would treat this as an asymmetric flow
and drop the outbound traffic. Thus there are three layers of security protecting the subnets:
No Public IP addresses on any FrontEnd or BackEnd NICs.
NSGs denying traffic from the Internet.
The firewall dropping asymmetric traffic.
One interesting point regarding the NSG in this example is that it contains only one rule, which denies Internet
traffic to the entire virtual network, including the Security subnet. However, because the NSG is bound only to the
front-end and back-end subnets, the rule isn’t processed on traffic inbound to the Security subnet. As a result,
Internet traffic can still reach the Security subnet, where the firewall processes it.
Firewall rules
On the firewall, forwarding rules should be created. Since the firewall is blocking or forwarding all inbound,
outbound, and intra-virtual network traffic, many firewall rules are needed. Also, all inbound traffic hits the
Security Service public IP address (on different ports), to be processed by the firewall. A best practice is to diagram
the logical flows before setting up the subnets and firewall rules, to avoid rework later. The following figure is a
logical view of the firewall rules for this example:
NOTE
Management ports vary based on the network virtual appliance used. In this example, a Barracuda NextGen Firewall is
referenced, which uses ports 22, 801, and 807. Consult the appliance vendor’s documentation to find the exact ports
used for management of the device being used.
TIP
On the second application traffic rule, to simplify this example, any port is allowed. In a real scenario, the most specific port
and address ranges should be used to reduce the attack surface of this rule.
Once the previous rules are created, it’s important to review the priority of each rule to ensure traffic is allowed or
denied as desired. For this example, the rules are in priority order.
Conclusion
This example is a more complex but complete way of protecting and isolating the network than the previous
examples. (Example 2 protects just the application, and Example 1 just isolates subnets). This design allows for
monitoring traffic in both directions, and protects not just the inbound application server but enforces network
security policy for all servers on this network. Also, depending on the appliance used, full traffic auditing and
awareness can be achieved. For more information, see the detailed build instructions. These instructions include:
How to build this example perimeter network with classic PowerShell scripts.
How to build this example with an Azure Resource Manager template.
Detailed descriptions of each UDR, NSG command, and firewall rule.
Detailed traffic flow scenarios, showing how traffic is allowed or denied in each layer.
Example 4 Add a hybrid connection with a site-to-site, virtual appliance VPN
Back to Fast start | Detailed build instructions available soon
Environment description
Hybrid networking using a network virtual appliance (NVA) can be added to any of the perimeter network types
described in examples 1, 2, or 3.
As shown in the previous figure, a VPN connection over the Internet (site-to-site) is used to connect an on-
premises network to an Azure virtual network via an NVA.
NOTE
If you use ExpressRoute with the Azure Public Peering option enabled, a static route should be created. This static route
should route to the NVA VPN IP address over your corporate Internet connection, not via the ExpressRoute connection.
The NAT required by the ExpressRoute Azure Public Peering option can break the VPN session.
Once the VPN is in place, the NVA becomes the central hub for all networks and subnets. The firewall forwarding
rules determine which traffic flows are allowed, are translated via NAT, are redirected, or are dropped (even for
traffic flows between the on-premises network and Azure).
Traffic flows should be considered carefully, as they can be optimized or degraded by this design pattern,
depending on the specific use case.
Using the environment built in example 3, and then adding a site-to-site VPN hybrid network connection,
produces the following design:
The on-premises router, or any other network device that is compatible with your NVA for VPN, would be the VPN
client. This physical device would be responsible for initiating and maintaining the VPN connection with your NVA.
Logically to the NVA, the network looks like four separate “security zones” with the rules on the NVA being the
primary director of traffic between these zones:
Conclusion
The addition of a site-to-site VPN hybrid network connection to an Azure virtual network can extend the on-
premises network into Azure in a secure manner. In using a VPN connection, your traffic is encrypted and routes
via the Internet. The NVA in this example provides a central location to enforce and manage the security policy. For
more information, see the detailed build instructions (forthcoming). These instructions include:
How to build this example perimeter network with PowerShell scripts.
How to build this example with an Azure Resource Manager template.
Detailed traffic flow scenarios, showing how traffic flows through this design.
Example 5 Add a hybrid connection with a site-to-site, Azure VPN gateway
Back to Fast start | Detailed build instructions available soon
Environment description
Hybrid networking using an Azure VPN gateway can be added to either perimeter network type described in
examples 1 or 2.
As shown in the preceding figure, a VPN connection over the Internet (site-to-site) is used to connect an on-
premises network to an Azure virtual network via an Azure VPN gateway.
NOTE
If you use ExpressRoute with the Azure Public Peering option enabled, a static route should be created. This static route
should route to the NVA VPN IP address over your corporate Internet connection, not via the ExpressRoute connection.
The NAT required by the ExpressRoute Azure Public Peering option can break the VPN session.
The following figure shows the two network edges in this example. On the first edge, the NVA and NSGs control
traffic flows for intra-Azure networks and between Azure and the Internet. The second edge is the Azure VPN
gateway, which is a separate and isolated network edge between on-premises and Azure.
Traffic flows should be considered carefully, as they can be optimized or degraded by this design pattern,
depending on the specific use case.
Using the environment built in example 1, and then adding a site-to-site VPN hybrid network connection,
produces the following design:
Conclusion
The addition of a site-to-site VPN hybrid network connection to an Azure virtual network can extend the on-
premises network into Azure in a secure manner. Using the native Azure VPN gateway, your traffic is IPSec
encrypted and routes via the Internet. Also, using the Azure VPN gateway can provide a lower-cost option (no
additional licensing cost as with third-party NVAs). This option is most economical in example 1, where no NVA is
used. For more information, see the detailed build instructions (forthcoming). These instructions include:
How to build this example perimeter network with PowerShell scripts.
How to build this example with an Azure Resource Manager template.
Detailed traffic flow scenarios, showing how traffic flows through this design.
Example 6 Add a hybrid connection with ExpressRoute
Back to Fast start | Detailed build instructions available soon
Environment description
Hybrid networking using an ExpressRoute private peering connection can be added to either perimeter network
type described in examples 1 or 2.
As shown in the preceding figure, ExpressRoute private peering provides a direct connection between your on-
premises network and the Azure virtual network. Traffic transits only the service provider network and the
Microsoft Azure network, never touching the Internet.
TIP
Using ExpressRoute keeps corporate network traffic off the Internet. It also allows for service level agreements from your
ExpressRoute provider. The Azure Gateway can pass up to 10 Gbps with ExpressRoute, whereas with site-to-site VPNs, the
Azure Gateway maximum throughput is 200 Mbps.
As seen in the following diagram, with this option the environment now has two network edges. The NVA and
NSG control traffic flows for intra-Azure networks and between Azure and the Internet, while the gateway is a
separate and isolated network edge between on-premises and Azure.
Traffic flows should be considered carefully, as they can be optimized or degraded by this design pattern,
depending on the specific use case.
Using the environment built in example 1, and then adding an ExpressRoute hybrid network connection, produces
the following design:
Conclusion
The addition of an ExpressRoute private peering network connection can extend the on-premises network into
Azure in a secure, lower-latency, higher-performing manner. Also, using the native Azure gateway, as in this
example, provides a lower-cost option (no additional licensing, as with third-party NVAs). For more information,
see the detailed build instructions (forthcoming). These instructions include:
How to build this example perimeter network with PowerShell scripts.
How to build this example with an Azure Resource Manager template.
Detailed traffic flow scenarios, showing how traffic flows through this design.
References
Helpful websites and documentation
Access Azure with Azure Resource Manager:
Accessing Azure with PowerShell: https://docs.microsoft.com/powershell/azureps-cmdlets-docs/
Virtual networking documentation: https://docs.microsoft.com/azure/virtual-network/
Network security group documentation: https://docs.microsoft.com/azure/virtual-network/virtual-networks-nsg
User-defined routing documentation: https://docs.microsoft.com/azure/virtual-network/virtual-networks-udr-overview
Azure virtual gateways: https://docs.microsoft.com/azure/vpn-gateway/
Site-to-Site VPNs: https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell
ExpressRoute documentation (be sure to check out the “Getting Started” and “How To” sections):
https://docs.microsoft.com/azure/expressroute/
Azure Data Security and Encryption Best Practices
6/27/2017
One of the keys to data protection in the cloud is accounting for the possible states your data can occupy, and
what controls are available for each state. For the purposes of Azure data security and encryption best practices,
the recommendations in this article address the following data states:
At-rest: This includes all information storage objects, containers, and types that exist statically on physical
media, be it magnetic or optical disk.
In-Transit: When data is being transferred between components, locations, or programs, such as over the
network, across a service bus (from on-premises to cloud and vice versa, including hybrid connections such as
ExpressRoute), or during an input/output process, it is considered to be in transit.
This article discusses a collection of Azure data security and encryption best practices. These best practices are
derived from our experience with Azure data security and encryption and from the experiences of customers like
you.
For each best practice, we’ll explain:
What the best practice is
Why you want to enable that best practice
What might be the result if you fail to enable the best practice
Possible alternatives to the best practice
How you can learn to enable the best practice
This Azure Data Security and Encryption Best Practices article is based on a consensus of opinion, and on Azure
platform capabilities and feature sets as they existed at the time this article was written. Opinions and technologies
change over time, and this article will be updated regularly to reflect those changes.
Azure data security and encryption best practices discussed in this article include:
Enforce multi-factor authentication
Use role based access control (RBAC)
Encrypt Azure virtual machines
Use hardware security modules
Manage with Secure Workstations
Enable SQL data encryption
Protect data in transit
Enforce file level data encryption
Overview
Azure Storage provides a comprehensive set of security capabilities that together enable developers to build
secure applications. The storage account itself can be secured using Role-Based Access Control and Azure Active
Directory. Data can be secured in transit between an application and Azure by using Client-Side Encryption, HTTPS,
or SMB 3.0. Data can be set to be automatically encrypted when written to Azure Storage using Storage Service
Encryption (SSE). OS and Data disks used by virtual machines can be set to be encrypted using Azure Disk
Encryption. Delegated access to the data objects in Azure Storage can be granted using Shared Access Signatures.
This article will provide an overview of each of these security features that can be used with Azure Storage. Links
are provided to articles that will give details of each feature so you can easily do further investigation on each
topic.
Here are the topics to be covered in this article:
Management Plane Security – Securing your Storage Account
The management plane consists of the resources used to manage your storage account. In this section, we'll
talk about the Azure Resource Manager deployment model and how to use Role-Based Access Control
(RBAC) to control access to your storage accounts. We will also talk about managing your storage account
keys and how to regenerate them.
Data Plane Security – Securing Access to Your Data
In this section, we'll look at allowing access to the actual data objects in your Storage account, such as blobs,
files, queues, and tables, using Shared Access Signatures and Stored Access Policies. We will cover both
service-level SAS and account-level SAS. We'll also see how to limit access to a specific IP address (or range
of IP addresses), how to limit the protocol used to HTTPS, and how to revoke a Shared Access Signature
without waiting for it to expire.
Encryption in Transit
This section discusses how to secure data when you transfer it into or out of Azure Storage. We'll talk about
the recommended use of HTTPS and the encryption used by SMB 3.0 for Azure File shares. We will also take
a look at Client-side Encryption, which enables you to encrypt the data before it is transferred into Storage
in a client application, and to decrypt the data after it is transferred out of Storage.
Encryption at Rest
We will talk about Storage Service Encryption (SSE), and how you can enable it for a storage account,
resulting in your block blobs, page blobs, and append blobs being automatically encrypted when written to
Azure Storage. We will also look at how you can use Azure Disk Encryption and explore the basic differences
and cases of Disk Encryption versus SSE versus Client-Side Encryption. We will briefly look at FIPS
compliance for U.S. Government computers.
Using Storage Analytics to audit access of Azure Storage
This section discusses how to find information in the storage analytics logs for a request. We'll take a look at
real storage analytics log data and see how to discern whether a request is made with the Storage account
key, with a Shared Access signature, or anonymously, and whether it succeeded or failed.
Enabling Browser-Based Clients using CORS
This section talks about how to allow cross-origin resource sharing (CORS). We'll talk about cross-domain
access, and how to handle it with the CORS capabilities built into Azure Storage.
How the Shared Access Signature is authenticated by the Azure Storage Service
When the storage service receives the request, it takes the input query parameters and creates a signature using
the same method as the calling program. It then compares the two signatures. If they agree, then the storage
service can check the storage service version to make sure it's valid, verify that the current date and time are within
the specified window, make sure the access requested corresponds to the request made, etc.
For example, with our URL above, if the URL was pointing to a file instead of a blob, this request would fail because
it specifies that the Shared Access Signature is for a blob. If the REST command being called was to update a blob,
it would fail because the Shared Access Signature specifies that only read access is permitted.
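The signature comparison described above can be sketched as follows. This is a simplified illustration using Python's standard library; the real string-to-sign format is defined by the Azure Storage REST reference and contains more fields than shown here:

```python
import base64
import hashlib
import hmac

def make_signature(string_to_sign: str, account_key_b64: str) -> str:
    """Compute an HMAC-SHA256 signature the way a SAS token does: the key is
    the Base64-decoded storage account key, and the string-to-sign is built
    from the token's own query parameters (permissions, expiry, and so on)."""
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

def service_accepts(request_string_to_sign: str, presented_sig: str,
                    account_key_b64: str) -> bool:
    """The storage service rebuilds the signature from the request's query
    parameters and compares it to the one presented in the URL. Any change
    to the parameters (e.g. asking for write when only read was granted)
    produces a different signature, so the comparison fails."""
    expected = make_signature(request_string_to_sign, account_key_b64)
    return hmac.compare_digest(expected, presented_sig)
```

Because the signature covers the query parameters themselves, a client cannot upgrade "read" to "write" without invalidating the token.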
Types of Shared Access Signatures
A service-level SAS can be used to access specific resources in a storage account. Some examples of this are
retrieving a list of blobs in a container, downloading a blob, updating an entity in a table, adding messages to a
queue or uploading a file to a file share.
An account-level SAS can be used to access anything that a service-level SAS can access. Additionally, it
can grant permissions that are not available with a service-level SAS, such as the ability to create
containers, tables, queues, and file shares. You can also specify access to multiple services at once. For example,
you might give someone access to both blobs and files in your storage account.
Creating an SAS URI
1. You can create an ad hoc URI on demand, defining all of the query parameters each time.
This is really flexible, but if you have a logical set of parameters that are similar each time, using a Stored
Access Policy is a better idea.
2. You can create a Stored Access Policy for an entire container, file share, table, or queue. Then you can use
this as the basis for the SAS URIs you create. Permissions based on Stored Access Policies can be easily
revoked. You can have up to 5 policies defined on each container, queue, table, or file share.
For example, if you were going to have many people read the blobs in a specific container, you could create
a Stored Access Policy that says "give read access" and any other settings that will be the same each time.
Then you can create an SAS URI using the settings of the Stored Access Policy and specifying the expiration
date/time. The advantage of this is that you don't have to specify all of the query parameters every time.
Revocation
Suppose your SAS has been compromised, or you want to change it because of corporate security or regulatory
compliance requirements. How do you revoke access to a resource using that SAS? It depends on how you created
the SAS URI.
If you are using ad hoc URIs, you have three options. You can issue SAS tokens with short expiration policies and
simply wait for the SAS to expire. You can rename or delete the resource (assuming the token was scoped to a
single object). You can change the storage account keys. This last option can have a big impact, depending on how
many services are using that storage account, and probably isn't something you want to do without some
planning.
If you are using a SAS derived from a Stored Access Policy, you can remove access by revoking the Stored Access
Policy – you can just change it so it has already expired, or you can remove it altogether. This takes effect
immediately, and invalidates every SAS created using that Stored Access Policy. Updating or removing the Stored
Access Policy may impact people accessing that specific container, file share, table, or queue via SAS, but if the
clients are written so they request a new SAS when the old one becomes invalid, this will work fine.
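That client pattern (request a new SAS when the old one stops working) can be sketched as follows. This is a minimal illustration with no real network calls; `fetch` and `get_fresh_sas` are stand-ins you would supply from your own transport and token-issuing service:

```python
from typing import Callable

class SasExpiredError(Exception):
    """Raised by the transport when the service rejects the SAS (HTTP 403)."""

def read_blob_with_refresh(
    fetch: Callable[[str], bytes],      # issues the request with a SAS URI
    get_fresh_sas: Callable[[], str],   # asks your service for a new SAS URI
    sas_uri: str,
) -> bytes:
    """Try the current SAS; if the service rejects it (for example, because
    the Stored Access Policy behind it was revoked), request one fresh SAS
    and retry once rather than failing outright."""
    try:
        return fetch(sas_uri)
    except SasExpiredError:
        return fetch(get_fresh_sas())
```

With clients written this way, revoking a Stored Access Policy causes at most one failed request per client before they pick up a new token.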
Because using a SAS derived from a Stored Access Policy gives you the ability to revoke that SAS immediately, it is
the recommended best practice to always use Stored Access Policies when possible.
Resources
For more detailed information on using Shared Access Signatures and Stored Access Policies, complete with
examples, please refer to the following articles:
These are the reference articles.
Service SAS
This article provides examples of using a service-level SAS with blobs, queue messages, table ranges,
and files.
Constructing a service SAS
Constructing an account SAS
These are tutorials for using the .NET client library to create Shared Access Signatures and Stored Access
Policies.
Using Shared Access Signatures (SAS)
Shared Access Signatures, Part 2: Create and Use a SAS with the Blob Service
This article includes an explanation of the SAS model, examples of Shared Access Signatures, and
recommendations for the best practice use of SAS. Also discussed is the revocation of the permission
granted.
Limiting access by IP Address (IP ACLs)
What is an endpoint Access Control List (ACLs)?
Constructing a Service SAS
This is the reference article for service-level SAS; it includes an example of IP ACLing.
Constructing an Account SAS
This is the reference article for account-level SAS; it includes an example of IP ACLing.
Authentication
Authentication for the Azure Storage Services
Shared Access Signatures Getting Started Tutorial
SAS Getting Started Tutorial
Encryption in Transit
Transport-Level Encryption – Using HTTPS
Another step you should take to ensure the security of your Azure Storage data is to encrypt the data between the
client and Azure Storage. The first recommendation is to always use the HTTPS protocol, which ensures secure
communication over the public Internet.
To have a secure communication channel, you should always use HTTPS when calling the REST APIs or accessing
objects in storage. Also, Shared Access Signatures, which can be used to delegate access to Azure Storage
objects, include an option to specify that only the HTTPS protocol can be used when using Shared Access
Signatures, ensuring that anybody sending out links with SAS tokens will use the proper protocol.
You can enforce the use of HTTPS when calling the REST APIs to access objects in storage accounts by enabling
Secure transfer required for the storage account. Connections using HTTP will be refused once this is enabled.
Using encryption during transit with Azure File shares
Azure File storage supports HTTPS when using the REST API, but is more commonly used as an SMB file share
attached to a VM. SMB 2.1 does not support encryption, so connections are only allowed within the same region in
Azure. However, SMB 3.0 supports encryption, and it's available in Windows Server 2012 R2, Windows 8, Windows
8.1, and Windows 10, allowing cross-region access and even access on the desktop.
Note that while Azure File shares can be used with Linux, the Linux SMB client does not yet support encryption, so
access is only allowed within an Azure region. Encryption support for Linux is on the roadmap of Linux developers
responsible for SMB functionality. When they add encryption, you will have the same ability for accessing an Azure
File share on Linux as you do for Windows.
You can enforce the use of encryption with the Azure Files service by enabling Secure transfer required for the
storage account. If using the REST APIs, HTTPS is required. For SMB, only SMB connections that support encryption
will connect successfully.
Resources
How to use Azure File storage with Linux
This article shows how to mount an Azure File share on a Linux system and upload/download files.
Get started with Azure File storage on Windows
This article gives an overview of Azure File shares and how to mount and use them using PowerShell and
.NET.
Inside Azure File storage
This article announces the general availability of Azure File storage and provides technical details about the
SMB 3.0 encryption.
Using Client-side encryption to secure data that you send to storage
Another option that helps you ensure that your data is secure while being transferred between a client application
and Storage is Client-side Encryption. The data is encrypted before being transferred into Azure Storage. When
retrieving the data from Azure Storage, the data is decrypted after it is received on the client side. Even though the
data is encrypted going across the wire, we recommend that you also use HTTPS, as it has data integrity checks
built in which help mitigate network errors affecting the integrity of the data.
Client-side encryption is also a method for encrypting your data at rest, as the data is stored in its encrypted form.
We'll talk about this in more detail in the section on Encryption at Rest.
Encryption at Rest
There are three Azure features that provide encryption at rest. Azure Disk Encryption is used to encrypt the OS and
data disks in IaaS Virtual Machines. The other two – Client-side Encryption and SSE – are both used to encrypt data
in Azure Storage. Let's look at each of these, and then do a comparison and see when each one can be used.
While you can use Client-side Encryption to encrypt the data in transit (which is also stored in its encrypted form in
Storage), you may prefer to simply use HTTPS during the transfer, and have some way for the data to be
automatically encrypted when it is stored. There are two ways to do this: Azure Disk Encryption and SSE. One is
used to directly encrypt the data on OS and data disks used by VMs, and the other is used to encrypt data written
to Azure Blob Storage.
Storage Service Encryption (SSE)
SSE allows you to request that the storage service automatically encrypt the data when writing it to Azure Storage.
When you read the data from Azure Storage, it will be decrypted by the storage service before being returned. This
enables you to secure your data without having to modify code or add code to any applications.
This is a setting that applies to the whole storage account. You can enable and disable this feature by changing the
value of the setting. To do this, you can use the Azure portal, PowerShell, the Azure CLI, the Storage Resource
Provider REST API, or the .NET Storage Client Library. By default, SSE is turned off.
At this time, the keys used for the encryption are managed by Microsoft. Microsoft generates the keys originally, and
manages their secure storage as well as their regular rotation, as defined by internal Microsoft policy. In the
future, you will be able to manage your own encryption keys, with a migration path from Microsoft-managed
keys to customer-managed keys.
This feature is available for Standard and Premium Storage accounts created using the Resource Manager
deployment model. SSE applies only to block blobs, page blobs, and append blobs. The other types of data,
including tables, queues, and files, will not be encrypted.
Data is only encrypted when SSE is enabled and the data is written to Blob Storage. Enabling or disabling SSE does
not impact existing data. In other words, when you enable this encryption, it will not go back and encrypt data that
already exists; nor will it decrypt the data that already exists when you disable SSE.
If you want to use this feature with a Classic storage account, you can create a new Resource Manager storage
account and use AzCopy to copy the data to the new account.
Client-side Encryption
We mentioned client-side encryption when discussing the encryption of the data in transit. This feature allows you
to programmatically encrypt your data in a client application before sending it across the wire to be written to
Azure Storage, and to programmatically decrypt your data after retrieving it from Azure Storage.
This does provide encryption in transit, but it also provides the feature of Encryption at Rest. Note that although
the data is encrypted in transit, we still recommend using HTTPS to take advantage of the built-in data integrity
checks which help mitigate network errors affecting the integrity of the data.
An example of where you might use this is if you have a web application that stores blobs and retrieves blobs, and
you want the application and data to be as secure as possible. In that case, you would use client-side encryption.
The traffic between the client and the Azure Blob Service contains the encrypted resource, and nobody can
interpret the data in transit and reconstitute it into your private blobs.
Client-side encryption is built into the Java and the .NET storage client libraries, which in turn use the Azure Key
Vault APIs, making it pretty easy for you to implement. The process of encrypting and decrypting the data uses the
envelope technique, and stores metadata used by the encryption in each storage object. For example, for blobs, it
stores it in the blob metadata, while for queues, it adds it to each queue message.
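The envelope technique can be sketched as follows. This illustrates only the structure (a fresh content encryption key per object, wrapped by a key-encryption key, with the wrapped key carried in the object's metadata); the toy SHA-256 stream cipher here is a deliberately non-secure stand-in for the AES encryption the storage client libraries actually use:

```python
import hashlib
import secrets

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (SHA-256 in counter mode), symmetric under XOR.
    A stand-in for AES purely for illustration; never use for real data."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

def envelope_encrypt(plaintext: bytes, kek: bytes) -> dict:
    """Envelope technique: encrypt the data with a fresh content encryption
    key (CEK), wrap the CEK with the key-encryption key (KEK), and store the
    wrapped key alongside the object, as the client libraries do in blob
    metadata or queue messages."""
    cek = secrets.token_bytes(32)
    return {
        "ciphertext": _keystream_xor(cek, plaintext),
        "metadata": {"wrapped_cek": _keystream_xor(kek, cek)},
    }

def envelope_decrypt(obj: dict, kek: bytes) -> bytes:
    """Unwrap the CEK with the KEK, then decrypt the data with the CEK."""
    cek = _keystream_xor(kek, obj["metadata"]["wrapped_cek"])
    return _keystream_xor(cek, obj["ciphertext"])
```

The point of the envelope is that only the small wrapped key needs the KEK (which can live in Azure Key Vault), while the bulk data is protected by a per-object key that never leaves the client unwrapped.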
For the encryption itself, you can generate and manage your own encryption keys. You can also use keys
generated by the Azure Storage Client Library, or you can have the Azure Key Vault generate the keys. You can
store your encryption keys in your on-premises key storage, or you can store them in an Azure Key Vault. Azure
Key Vault allows you to grant access to the secrets in Azure Key Vault to specific users using Azure Active Directory.
This means that not just anybody can read the Azure Key Vault and retrieve the keys you're using for client-side
encryption.
Resources
Encrypt and decrypt blobs in Microsoft Azure Storage using Azure Key Vault
This article shows how to use client-side encryption with Azure Key Vault, including how to create the KEK
and store it in the vault using PowerShell.
Client-Side Encryption and Azure Key Vault for Microsoft Azure Storage
This article gives an explanation of client-side encryption, and provides examples of using the storage client
library to encrypt and decrypt resources from the four storage services. It also talks about Azure Key Vault.
Using Azure Disk Encryption to encrypt disks used by your virtual machines
Azure Disk Encryption is a new feature. This feature allows you to encrypt the OS disks and Data disks used by an
IaaS Virtual Machine. For Windows, the drives are encrypted using industry-standard BitLocker encryption
technology. For Linux, the disks are encrypted using the DM-Crypt technology. This is integrated with Azure Key
Vault to allow you to control and manage the disk encryption keys.
The solution supports the following scenarios for IaaS VMs when they are enabled in Microsoft Azure:
Integration with Azure Key Vault
Standard tier VMs: A, D, DS, G, GS, and similar series of IaaS VMs
Enabling encryption on Windows and Linux IaaS VMs
Disabling encryption on OS and data drives for Windows IaaS VMs
Disabling encryption on data drives for Linux IaaS VMs
Enabling encryption on IaaS VMs that are running Windows client OS
Enabling encryption on volumes with mount paths
Enabling encryption on Linux VMs that are configured with disk striping (RAID) by using mdadm
Enabling encryption on Linux VMs by using LVM for data disks
Enabling encryption on Windows VMs that are configured by using storage spaces
All Azure public regions are supported
The solution does not support the following scenarios, features, and technology in the release:
Basic tier IaaS VMs
Disabling encryption on an OS drive for Linux IaaS VMs
IaaS VMs that are created by using the classic VM creation method
Integration with your on-premises Key Management Service
Azure File storage (shared file system), Network File System (NFS), dynamic volumes, and Windows VMs that
are configured with software-based RAID systems
NOTE
Linux OS disk encryption is currently supported on the following Linux distributions: RHEL 7.2, CentOS 7.2, and Ubuntu
16.04.
This feature ensures that all data on your virtual machine disks is encrypted at rest in Azure Storage.
Resources
Azure Disk Encryption for Windows and Linux IaaS VMs
Comparison of Azure Disk Encryption, SSE, and Client-Side Encryption
IaaS VMs and their VHD files
For disks used by IaaS VMs, we recommend using Azure Disk Encryption. You can turn on SSE to encrypt the VHD
files that are used to back those disks in Azure Storage, but it only encrypts newly written data. This means if you
create a VM and then enable SSE on the storage account that holds the VHD file, only the changes will be
encrypted, not the original VHD file.
If you create a VM using an image from the Azure Marketplace, Azure performs a shallow copy of the image to
your storage account in Azure Storage, and it is not encrypted even if you have SSE enabled. After it creates the VM
and starts updating the image, SSE will start encrypting the data. For this reason, it's best to use Azure Disk
Encryption on VMs created from images in the Azure Marketplace if you want them fully encrypted.
If you bring a pre-encrypted VM into Azure from on-premises, you will be able to upload the encryption keys to
Azure Key Vault, and continue using the encryption for that VM that you were using on-premises. Azure Disk
Encryption is enabled to handle this scenario.
If you have a non-encrypted VHD from on-premises, you can upload it into the gallery as a custom image and
provision a VM from it. If you do this using the Resource Manager templates, you can ask it to turn on Azure Disk
Encryption when it boots up the VM.
When you add a data disk and mount it on the VM, you can turn on Azure Disk Encryption on that data disk. It will
encrypt that data disk locally first, and then the service management layer will do a lazy write against storage so
the storage content is encrypted.
Client-side encryption
Client-side encryption is the most secure method of encrypting your data, because it encrypts it before transit, and
encrypts the data at rest. However, it does require that you add code to your applications using storage, which you
may not want to do. In those cases, you can use HTTPS for your data in transit, and SSE to encrypt the data at rest.
With client-side encryption, you can encrypt table entities, queue messages, and blobs. With SSE, you can only
encrypt blobs. If you need table and queue data to be encrypted, you should use client-side encryption.
Client-side encryption is managed entirely by the application. This is the most secure approach, but does require
you to make programmatic changes to your application and put key management processes in place. You would
use this when you want the extra security during transit, and you want your stored data to be encrypted.
Client-side encryption puts more load on the client, and you have to account for this in your scalability plans,
especially if you are encrypting and transferring a lot of data.
Storage Service Encryption (SSE)
SSE is managed by Azure Storage. Using SSE does not provide for the security of the data in transit, but it does
encrypt the data as it is written to Azure Storage. There is no impact on the performance when using this feature.
You can only encrypt block blobs, append blobs, and page blobs using SSE. If you need to encrypt table data or
queue data, you should consider using client-side encryption.
If you have an archive or library of VHD files that you use as a basis for creating new virtual machines, you can
create a new storage account, enable SSE, and then upload the VHD files to that account. Those VHD files will be
encrypted by Azure Storage.
If you have Azure Disk Encryption enabled for the disks in a VM and SSE enabled on the storage account holding
the VHD files, it will work fine; it will result in any newly-written data being encrypted twice.
Storage Analytics
Using Storage Analytics to monitor authorization type
For each storage account, you can enable Azure Storage Analytics to perform logging and store metrics data. This
is a great tool to use when you want to check the performance metrics of a storage account, or need to
troubleshoot a storage account because you are having performance problems.
Another piece of data you can see in the storage analytics logs is the authentication method used by someone
when they access storage. For example, with Blob Storage, you can see if they used a Shared Access Signature or
the storage account keys, or if the blob accessed was public.
This can be really helpful if you are tightly guarding access to storage. For example, in Blob Storage you can set all
of the containers to private and require the use of SAS throughout your applications. Then you can
check the logs regularly to see if your blobs are accessed using the storage account keys, which may indicate a
breach of security, or if the blobs are public but they shouldn't be.
What do the logs look like?
After you enable the storage account metrics and logging through the Azure portal, analytics data will start to
accumulate quickly. The logging and metrics for each service is separate; the logging is only written when there is
activity in that storage account, while the metrics will be logged every minute, every hour, or every day, depending
on how you configure it.
The logs are stored in block blobs in a container named $logs in the storage account. This container is
automatically created when Storage Analytics is enabled. Once this container is created, you can't delete it,
although you can delete its contents.
Under the $logs container, there is a folder for each service, and then there are subfolders for the
year/month/day/hour. Under hour, the logs are simply numbered, so a path looks like
$logs/blob/2015/11/17/0200/000000.log, for example.
Every request to Azure Storage is logged as a single semicolon-delimited line, so you can use the logs to
track any kind of call to a storage account.
What are all of those fields for?
There is an article listed in the resources below that provides the list of the many fields in the logs and what they
are used for.
We're interested in the entries for GetBlob, and how they are authenticated, so we need to look for entries with
operation-type "GetBlob", and check the request-status (4th column) and the authorization-type (8th column).
For example, in the first few rows in the listing above, the request-status is "Success" and the authorization-type is
"authenticated". This means the request was validated using the storage account key.
How are my blobs being authenticated?
We have three cases that we are interested in.
1. The blob is public and it is accessed using a URL without a Shared Access Signature. In this case, the
request-status is "AnonymousSuccess" and the authorization-type is "anonymous".
1.0;2015-11-17T02:01:29.0488963Z;GetBlob;AnonymousSuccess;200;124;37;anonymous;;mystorage…
2. The blob is private and was used with a Shared Access Signature. In this case, the request-status is
"SASSuccess" and the authorization-type is "sas".
1.0;2015-11-16T18:30:05.6556115Z;GetBlob;SASSuccess;200;416;64;sas;;mystorage…
3. The blob is private and the storage key was used to access it. In this case, the request-status is "Success"
and the authorization-type is "authenticated".
1.0;2015-11-16T18:32:24.3174537Z;GetBlob;Success;206;59;22;authenticated;mystorage…
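The three cases above can be checked mechanically. A small sketch that splits the semicolon-delimited entries and flags key-based reads (the sample lines are the truncated ones shown above; field positions follow the column numbers mentioned earlier):

```python
def classify_getblob(log_line: str):
    """Return (request-status, authorization-type) for a GetBlob entry, or
    None for other operations. Storage Analytics log fields are
    semicolon-delimited: operation-type is the 3rd field, request-status
    the 4th, and authorization-type the 8th."""
    fields = log_line.split(";")
    if len(fields) < 8 or fields[2] != "GetBlob":
        return None
    return fields[3], fields[7]

def key_based_reads(log_lines):
    """Flag GetBlob requests authenticated with the storage account key,
    which may indicate a leaked key if all access is supposed to use SAS."""
    flagged = []
    for line in log_lines:
        result = classify_getblob(line)
        if result and result[1] == "authenticated":
            flagged.append(line)
    return flagged
```

Running a filter like this regularly over the $logs container is one way to automate the "check the logs regularly" advice above.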
You can use the Microsoft Message Analyzer to view and analyze these logs. It includes search and filter
capabilities. For example, you might want to search for instances of GetBlob to see if the usage is what you expect,
i.e. to make sure someone is not accessing your storage account inappropriately.
Resources
Storage Analytics
This article is an overview of storage analytics and how to enable them.
Storage Analytics Log Format
This article illustrates the Storage Analytics Log Format, and details the fields available therein, including
authentication-type, which indicates the type of authentication used for the request.
Monitor a Storage Account in the Azure portal
This article shows how to configure monitoring of metrics and logging for a storage account.
End-to-End Troubleshooting using Azure Storage Metrics and Logging, AzCopy, and Message Analyzer
This article talks about troubleshooting using the Storage Analytics and shows how to use the Microsoft
Message Analyzer.
Microsoft Message Analyzer Operating Guide
This article is the reference for the Microsoft Message Analyzer and includes links to a tutorial, quick start,
and feature summary.
<Cors>
<CorsRule>
<AllowedOrigins>http://www.contoso.com, http://www.fabrikam.com</AllowedOrigins>
<AllowedMethods>PUT,GET</AllowedMethods>
<AllowedHeaders>x-ms-meta-data*,x-ms-meta-target*,x-ms-meta-abc</AllowedHeaders>
<ExposedHeaders>x-ms-meta-*</ExposedHeaders>
<MaxAgeInSeconds>200</MaxAgeInSeconds>
</CorsRule>
</Cors>
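How the service applies such a rule can be sketched as a simple check of the request's Origin and method against the rule's allow-lists. This sketch simplifies matching (exact origins or `*` only) relative to the full CORS evaluation the service performs:

```python
def cors_allows(rule: dict, origin: str, method: str) -> bool:
    """Evaluate a CORS rule like the XML above: the request's Origin must
    appear in AllowedOrigins (or AllowedOrigins must be '*'), and the HTTP
    method must be listed in AllowedMethods."""
    origins = [o.strip() for o in rule["AllowedOrigins"].split(",")]
    methods = [m.strip().upper() for m in rule["AllowedMethods"].split(",")]
    origin_ok = "*" in origins or origin in origins
    return origin_ok and method.upper() in methods
```

With the rule shown above, a GET from www.contoso.com succeeds, while any request from an unlisted origin, or a DELETE from a listed one, is refused.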
In most infrastructure as a service (IaaS) scenarios, Azure virtual machines (VMs) are the main workload for organizations that
use cloud computing. This fact is especially evident in hybrid scenarios where organizations want to slowly migrate workloads to
the cloud. In such scenarios, follow the general security considerations for IaaS, and apply security best practices to all your VMs.
This article discusses various VM security best practices, each derived from our customers' and our own direct experiences with
VMs.
The best practices are based on a consensus of opinion, and they work with current Azure platform capabilities and feature sets.
Because opinions and technologies can change over time, we plan to update this article regularly to reflect those changes.
For each best practice, the article explains:
What the best practice is.
Why it's a good idea to enable it.
How you can learn to enable it.
What might happen if you fail to enable it.
Possible alternatives to the best practice.
The article examines the following VM security best practices:
VM authentication and access control
VM availability and network access
Protect data at rest in VMs by enforcing encryption
Manage your VM updates
Manage your VM security posture
Monitor VM performance
Organizations that don't enforce a strong security posture for their VMs remain unaware of potential attempts by unauthorized
users to circumvent established security controls.
Monitor VM performance
Resource abuse can be a problem when VM processes consume more resources than they should. Performance issues with a VM
can lead to service disruption, which violates the security principle of availability. For this reason, it is imperative to monitor VM
access not only reactively, while an issue is occurring, but also proactively, against baseline performance as measured during
normal operation.
By analyzing Azure diagnostic log files, you can monitor your VM resources and identify potential issues that might compromise
performance and availability. The Azure Diagnostics Extension provides monitoring and diagnostics capabilities on Windows-
based VMs. You can enable these capabilities by including the extension as part of the Azure Resource Manager template.
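As a sketch, the diagnostics extension resource has roughly this shape inside a Resource Manager template. The publisher/type names, API version, and parameter names here are assumptions to verify against the current schema before use:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'), '/AzureDiagnostics')]",
  "apiVersion": "2017-03-30",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Azure.Diagnostics",
    "type": "IaaSDiagnostics",
    "typeHandlerVersion": "1.9",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "StorageAccount": "[parameters('diagStorageAccount')]",
      "WadCfg": {
        "DiagnosticMonitorConfiguration": { "overallQuotaInMB": 4096 }
      }
    }
  }
}
```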
You can also use Azure Monitor to gain visibility into your resource’s health.
Organizations that don't monitor VM performance are unable to determine whether certain changes in performance patterns are
normal or abnormal. If the VM is consuming more resources than normal, such an anomaly could indicate a potential attack
from an external resource or a compromised process running in the VM.
Security best practices for IaaS workloads in Azure
8/30/2017 • 14 min to read
As you started thinking about moving workloads to Azure infrastructure as a service (IaaS), you probably realized
that some considerations are familiar. You might already have experience securing virtual environments. When you
move to Azure IaaS, you can apply your expertise in securing virtual environments and use a new set of options to
help secure your assets.
Let's start by saying that we should not expect to bring on-premises resources to Azure as a one-to-one mapping. The new
challenges and the new options bring an opportunity to reevaluate existing designs, tools, and processes.
Your responsibility for security is based on the type of cloud service. The following chart summarizes the balance of
responsibility for both Microsoft and you:
We'll discuss some of the options available in Azure that can help you meet your organization’s security
requirements. Keep in mind that security requirements can vary for different types of workloads. Not one of these
best practices can by itself secure your systems. Like anything else in security, you have to choose the appropriate
options and see how the solutions can complement each other by filling gaps.
No additional cost is associated with the usage of DevTest Labs. The creation of labs, policies, templates, and
artifacts is free. You pay for only the Azure resources used in your labs, such as virtual machines, storage accounts,
and virtual networks.
NOTE
You can use either VPN option to reconfigure the ACLs on the NSGs to not allow access to management endpoints from the
Internet.
Another option worth considering is a Remote Desktop Gateway deployment. You can use this deployment to
securely connect to Remote Desktop servers over HTTPS, while applying more detailed controls to those
connections.
Features that you would have access to include:
Administrator options to limit connections to requests from specific systems.
Smart-card authentication or Azure Multi-Factor Authentication.
Control over which systems someone can connect to via the gateway.
Control over device and disk redirection.
Monitor
Security Center provides ongoing evaluation of the security state of your Azure resources to identify potential
security vulnerabilities. A list of recommendations guides you through the process of configuring needed controls.
Examples include:
Provisioning antimalware to help identify and remove malicious software.
Configuring network security groups and rules to control traffic to virtual machines.
Provisioning web application firewalls to help defend against attacks that target your web applications.
Deploying missing system updates.
Addressing OS configurations that do not match the recommended baselines.
The following image shows some of the options that you can enable in Security Center.
Operations Management Suite is a Microsoft cloud-based IT management solution that helps you manage and
protect your on-premises and cloud infrastructure. Because Operations Management Suite is implemented as a
cloud-based service, it can be deployed quickly and with minimal investment in infrastructure resources.
New features are delivered automatically, saving you from ongoing maintenance and upgrade costs. Operations
Management Suite also integrates with System Center Operations Manager. It has different components to help
you better manage your Azure workloads, including a Security and Compliance module.
You can use the security and compliance features in Operations Management Suite to view information about your
resources. The information is organized into four major categories:
Security domains: Further explore security records over time. Access malware assessment, update assessment,
network security information, identity and access information, and computers with security events. Take
advantage of quick access to the Azure Security Center dashboard.
Notable issues: Quickly identify the number of active issues and the severity of these issues.
Detections (preview): Identify attack patterns by visualizing security alerts as they happen against your
resources.
Threat intelligence: Identify attack patterns by visualizing the total number of servers with outbound
malicious IP traffic, the malicious threat type, and a map that shows where these IPs are coming from.
Common security queries: See a list of the most common security queries that you can use to monitor your
environment. When you click one of those queries, the Search blade opens and shows the results for that
query.
The following screenshot shows an example of the information that Operations Management Suite can display.
Next steps
Azure Security Team Blog
Microsoft Security Response Center
Azure security best practices and patterns
Microsoft Antimalware for Azure Cloud Services and
Virtual Machines
6/27/2017 • 10 min to read
The modern threat landscape for cloud environments is extremely dynamic, increasing the pressure on business IT
cloud subscribers to maintain effective protection in order to meet compliance and security requirements.
Microsoft Antimalware for Azure is a free real-time protection capability that helps identify and remove viruses,
spyware, and other malicious software, with configurable alerts when known malicious or unwanted software
attempts to install itself or run on your Azure systems.
The solution is built on the same antimalware platform as Microsoft Security Essentials (MSE), Microsoft Forefront
Endpoint Protection, Microsoft System Center Endpoint Protection, Windows Intune, and Windows Defender for
Windows 8.0 and higher. Microsoft Antimalware for Azure is a single-agent solution for applications and tenant
environments, designed to run in the background without human intervention. You can deploy protection based
on the needs of your application workloads, with either basic secure-by-default or advanced custom configuration,
including antimalware monitoring.
When you deploy and enable Microsoft Antimalware for Azure for your applications, the following core features
are available:
Real-time protection - monitors activity in Cloud Services and on Virtual Machines to detect and block
malware execution.
Scheduled scanning - periodically performs targeted scanning to detect malware, including actively running
programs.
Malware remediation - automatically takes action on detected malware, such as deleting or quarantining
malicious files and cleaning up malicious registry entries.
Signature updates - automatically installs the latest protection signatures (virus definitions) to ensure
protection is up-to-date on a pre-determined frequency.
Antimalware Engine updates – automatically updates the Microsoft Antimalware engine.
Antimalware Platform updates – automatically updates the Microsoft Antimalware platform.
Active protection - reports telemetry metadata about detected threats and suspicious resources to Microsoft
Azure to ensure rapid response to the evolving threat landscape, as well as enabling real-time synchronous
signature delivery through the Microsoft Active Protection System (MAPS).
Samples reporting - provides and reports samples to the Microsoft Antimalware service to help refine the
service and enable troubleshooting.
Exclusions – allows application and service administrators to configure certain files, processes, and drives to
exclude them from protection and scanning for performance and/or other reasons.
Antimalware event collection - records the antimalware service health, suspicious activities, and
remediation actions taken in the operating system event log and collects them into the customer’s Azure
Storage account.
NOTE
Microsoft Antimalware can also be deployed using Azure Security Center. Read Install Endpoint Protection in Azure Security
Center for more information.
Architecture
The Microsoft Antimalware for Azure solution includes the Microsoft Antimalware Client and Service, Antimalware
classic deployment model, Antimalware PowerShell cmdlets and Azure Diagnostics Extension. The Microsoft
Antimalware solution is supported on Windows Server 2008 R2, Windows Server 2012, and Windows Server
2012 R2 operating system families. It is not supported on the Windows Server 2008 operating system. Support
for Windows Server 2016 with Windows Defender has been released; you can read more about this update here.
The Microsoft Antimalware Client and Service is installed by default in a disabled state in all supported Azure
guest operating system families in the Cloud Services platform. The Microsoft Antimalware Client and Service is
not installed by default in the Virtual Machines platform and is available as an optional feature through the Azure
portal and Visual Studio Virtual Machine configuration under Security Extensions.
When using Azure Websites, the underlying service that hosts the web app has Microsoft Antimalware enabled on
it. This is used to protect Azure Websites infrastructure and does not run on customer content.
Microsoft antimalware workflow
The Azure service administrator can enable Antimalware for Azure with a default or custom configuration for your
Virtual Machines and Cloud Services using the following options:
Virtual Machines – In the Azure portal, under Security Extensions
Virtual Machines – Using the Visual Studio virtual machines configuration in Server Explorer
Virtual Machines and Cloud Services – Using the Antimalware classic deployment model
Virtual Machines and Cloud Services – Using Antimalware PowerShell cmdlets
The Azure portal or PowerShell cmdlets push the Antimalware extension package file to the Azure system at a pre-
determined fixed location. The Azure Guest Agent (or the Fabric Agent) launches the Antimalware Extension,
applying the Antimalware configuration settings supplied as input. This step enables the Antimalware service with
either default or custom configuration settings. If no custom configuration is provided, then the antimalware
service is enabled with the default configuration settings. Refer to the Antimalware configuration section in the
Microsoft Antimalware for Azure – Code Samples for more details.
Once running, the Microsoft Antimalware client downloads the latest protection engine and signature definitions
from the Internet and loads them on the Azure system. The Microsoft Antimalware service writes service-related
events to the system OS events log under the “Microsoft Antimalware” event source. Events include the
Antimalware client health state, protection and remediation status, new and old configuration settings, engine
updates and signature definitions, and others.
You can enable Antimalware monitoring for your Cloud Service or Virtual Machine to have the Antimalware event
log events written as they are produced to your Azure storage account. The Antimalware Service uses the Azure
Diagnostics extension to collect Antimalware events from the Azure system into tables in the customer’s Azure
Storage account.
The deployment workflow, including the configuration steps and options supported for the above scenarios, is
documented in the Antimalware deployment scenarios section of this document.
NOTE
You can, however, use PowerShell/APIs and Azure Resource Manager templates to deploy Virtual Machine Scale Sets with
the Microsoft Antimalware extension. To install an extension on an already running Virtual Machine, you can use the
sample Python script vmssextn.py located here. This script gets the existing extension configuration on the Scale Set
and adds an extension to the list of existing extensions on the VM Scale Sets.
NOTE
By default, the Microsoft Antimalware user interface on Azure Resource Manager VMs is disabled; using a
cleanuppolicy.xml file to bypass this behavior is not supported. For information on how to enable the user interface,
read Enabling Microsoft Antimalware User Interface on Azure Resource Manager VMs Post Deployment.
The following table summarizes the configuration settings available for the Antimalware service. The default
configuration settings are marked under the column labeled “Default” below.
Antimalware Deployment Scenarios
The scenarios to enable and configure antimalware, including monitoring for Azure Cloud Services and Virtual
Machines, are discussed in this section.
Virtual machines - enable and configure antimalware
Deployment using Azure Portal
To enable the Antimalware service, click Add on the Extensions blade, select Microsoft Antimalware on the New
resource blade, and then click Create on the Microsoft Antimalware blade. Click Create without entering any
configuration values to enable Antimalware with the default settings, or enter the Antimalware configuration settings
for the Virtual Machine as shown in Figure 2 below. Refer to the tooltips provided with each configuration setting
on the Add Extension blade to see the supported configuration values.
Deployment using the Azure classic portal
To enable and configure Microsoft Antimalware for Azure Virtual Machines using the Azure portal while
provisioning a Virtual Machine, follow the steps below:
1. Log on to the Azure portal at https://portal.azure.com.
2. To create a new virtual machine, click New, Compute, Virtual Machine, From Gallery (do not use Quick
Create) as shown below:
3. Select the Microsoft Windows Server image on the Choose an Image page.
4. Click the right arrow and input the Virtual Machine configuration.
5. Check the Microsoft Antimalware checkbox under Security Extensions on the Virtual Machine configuration
page.
6. Click the Submit button to enable and configure Microsoft Antimalware for Azure Virtual Machines with the
default configuration settings.
Deployment Using the Visual Studio virtual machine configuration
To enable and configure the Microsoft Antimalware service using Visual Studio:
1. Connect to Microsoft Azure in Visual Studio.
2. Choose your Virtual Machine in the Virtual Machines node in Server Explorer.
5. To customize the default Antimalware configuration, select (highlight) the Antimalware extension in the installed
extensions list and click Configure.
6. Replace the default Antimalware configuration with your custom configuration in supported JSON format in the
public configuration textbox and click OK.
7. Click the Update button to push the configuration updates to your Virtual Machine.
Note: The Visual Studio Virtual Machines configuration for Antimalware supports only JSON format
configuration. The Antimalware JSON configuration settings template is included in the Microsoft Antimalware For
Azure - Code Samples, showing the supported Antimalware configuration settings.
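For illustration only, a minimal public configuration in the supported JSON format might look like the following sketch. The key names shown here should be verified against the template in the Code Samples before use:

```json
{
  "AntimalwareEnabled": true,
  "RealtimeProtectionEnabled": true,
  "ScheduledScanSettings": {
    "isEnabled": true,
    "day": 1,
    "time": 120,
    "scanType": "Full"
  },
  "Exclusions": {
    "Extensions": ".log;.ldf",
    "Paths": "D:\\IISlogs;D:\\DatabaseLogs",
    "Processes": "mssence.svc"
  }
}
```

Omitting the optional sections (for example, Exclusions) leaves the corresponding defaults in effect.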
Deployment Using PowerShell cmdlets
An Azure application or service can enable and configure Microsoft Antimalware for Azure Virtual Machines using
PowerShell cmdlets.
To enable and configure Microsoft antimalware using antimalware PowerShell cmdlets:
1. Set up your PowerShell environment - Refer to the documentation at https://github.com/Azure/azure-
powershell
2. Use the Set-AzureVMMicrosoftAntimalwareExtension Antimalware cmdlet to enable and configure Microsoft
Antimalware for your Virtual Machine as documented at
http://msdn.microsoft.com/library/azure/dn771718.aspx
Note: The Azure Virtual Machines configuration for Antimalware supports only JSON format configuration. The
Antimalware JSON configuration settings template is included in the Microsoft Antimalware For Azure - Code
Samples, showing the supported Antimalware configuration settings.
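The enable step above can be sketched as follows for a classic Virtual Machine. The service and VM names are hypothetical, and the configuration-file parameter name should be confirmed against the cmdlet documentation referenced above:

```powershell
# Hypothetical names; the JSON file follows the Antimalware configuration template
Get-AzureVM -ServiceName "MyCloudService" -Name "MyVM" |
    Set-AzureVMMicrosoftAntimalwareExtension -AntimalwareConfigFile "C:\config\antimalware.json" |
    Update-AzureVM
```

Piping the result to Update-AzureVM is what actually pushes the extension change to the running VM.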
Enable and Configure Antimalware Using PowerShell cmdlets
An Azure application or service can enable and configure Microsoft Antimalware for Azure Cloud Services using
PowerShell cmdlets. Note that Microsoft Antimalware is installed in a disabled state in the Cloud Services platform
and requires an action by an Azure application to enable it.
To enable and configure Microsoft Antimalware using PowerShell cmdlets:
1. Set up your PowerShell environment - Refer to the documentation at https://github.com/Azure/azure-sdk-
tools#get-started
2. Use the Set-AzureServiceAntimalwareExtension Antimalware cmdlet to enable and configure Microsoft
Antimalware for your Cloud Service as documented at
http://msdn.microsoft.com/library/azure/dn771718.aspx
The Antimalware XML configuration settings template is included in the Microsoft Antimalware For Azure - Code
Samples, showing the supported Antimalware configuration settings.
Cloud Services and Virtual Machines - Configuration Using PowerShell cmdlets
An Azure application or service can retrieve the Microsoft Antimalware configuration for Cloud Services and
Virtual Machines using PowerShell cmdlets.
To retrieve the Microsoft Antimalware configuration using PowerShell cmdlets:
1. Set up your PowerShell environment - Refer to the documentation at https://github.com/Azure/azure-sdk-
tools#get-started
2. For Virtual Machines: Use the Get-AzureVMMicrosoftAntimalwareExtension Antimalware cmdlet to get the
antimalware configuration as documented at http://msdn.microsoft.com/library/azure/dn771719.aspx
3. For Cloud Services: Use the Get-AzureServiceAntimalwareConfig Antimalware cmdlet to get the Antimalware
configuration as documented at http://msdn.microsoft.com/library/azure/dn771722.aspx
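As a sketch of the retrieval steps above (service and VM names are hypothetical; confirm parameters against the linked cmdlet documentation):

```powershell
# Classic Virtual Machine: retrieve the current Antimalware extension configuration
Get-AzureVM -ServiceName "MyCloudService" -Name "MyVM" |
    Get-AzureVMMicrosoftAntimalwareExtension

# Classic Cloud Service: retrieve the Antimalware configuration
Get-AzureServiceAntimalwareConfig -ServiceName "MyCloudService"
```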
Remove Antimalware Configuration Using PowerShell cmdlets
An Azure application or service can remove the Antimalware configuration and any associated Antimalware
monitoring configuration from the relevant Azure Antimalware and diagnostics service extensions associated with
the Cloud Service or Virtual Machine.
To remove Microsoft Antimalware using PowerShell cmdlets:
1. Set up your PowerShell environment - Refer to the documentation at https://github.com/Azure/azure-sdk-
tools#get-started
2. For Virtual Machines: Use the Remove-AzureVMMicrosoftAntimalwareExtension Antimalware cmdlet as
documented at http://msdn.microsoft.com/library/azure/dn771720.aspx
3. For Cloud Services: Use the Remove-AzureServiceAntimalwareExtension Antimalware cmdlet as documented
at http://msdn.microsoft.com/library/azure/dn771717.aspx
To enable antimalware event collection for a virtual machine using the Azure Preview Portal:
1. Click any part of the Monitoring lens in the Virtual Machine blade.
2. Click the Diagnostics command on the Metric blade.
3. Select Status ON and check the option for Windows event system.
4. You can choose to uncheck all other options in the list, or leave them enabled per your application service
needs.
5. The Antimalware event categories "Error", "Warning", "Informational", etc., are captured in your Azure Storage
account.
Antimalware events are collected from the Windows event system logs to your Azure Storage account. You can
configure the Storage Account for your Virtual Machine to collect Antimalware events by selecting the appropriate
storage account.
NOTE
For more information on how to enable diagnostics logging for Azure Antimalware, read Enabling Diagnostics Logging
for Azure Antimalware.
Microsoft Azure is strongly committed to ensuring your data privacy and data sovereignty, and enables you to control
your Azure-hosted data through a range of advanced technologies to encrypt data, control and manage encryption
keys, and control and audit access to data. This gives Azure customers the flexibility to choose the solution that
best meets their business needs. In this paper, we introduce you to a technology solution, "Azure Disk
Encryption for Windows and Linux IaaS VMs," to help protect and safeguard your data to meet your
organizational security and compliance commitments. The paper provides detailed guidance on how to use the
Azure Disk Encryption features, including the supported scenarios and the user experiences.
NOTE
Certain recommendations might increase data, network, or compute resource usage, resulting in additional license or
subscription costs.
Overview
Azure Disk Encryption is a new capability that helps you encrypt your Windows and Linux IaaS virtual machine
disks. Azure Disk Encryption leverages the industry standard BitLocker feature of Windows and the DM-Crypt
feature of Linux to provide volume encryption for the OS and the data disks. The solution is integrated with Azure
Key Vault to help you control and manage the disk-encryption keys and secrets in your key vault subscription.
The solution also ensures that all data on the virtual machine disks is encrypted at rest in your Azure storage.
Azure disk encryption for Windows and Linux IaaS VMs is now in General Availability in all Azure public
regions and AzureGov regions for Standard VMs and VMs with premium storage.
Encryption scenarios
The Azure Disk Encryption solution supports the following customer scenarios:
Enable encryption on new IaaS VMs created from pre-encrypted VHD and encryption keys
Enable encryption on new IaaS VMs created from the supported Azure Gallery images
Enable encryption on existing IaaS VMs running in Azure
Disable encryption on Windows IaaS VMs
Disable encryption on data drives for Linux IaaS VMs
Enable encryption of managed disk VMs
Update encryption settings of an existing encrypted non-premium storage VM
Backup and restore of encrypted VMs, encrypted with key encryption key
The solution supports the following scenarios for IaaS VMs when they are enabled in Microsoft Azure:
Integration with Azure Key Vault
Standard tier VMs: A, D, DS, G, GS, F, and so forth series IaaS VMs
Enable encryption on Windows and Linux IaaS VMs and managed disk VMs from the supported Azure Gallery
images
Disable encryption on OS and data drives for Windows IaaS VMs and managed disk VMs
Disable encryption on data drives for Linux IaaS VMs and managed disk VMs
Enable encryption on IaaS VMs running Windows Client OS
Enable encryption on volumes with mount paths
Enable encryption on Linux VMs configured with disk striping (RAID) using mdadm
Enable encryption on Linux VMs using LVM for data disks
Enable encryption on Windows VMs configured with Storage Spaces
Update encryption settings of an existing encrypted non-premium storage VM
All Azure Public and AzureGov regions are supported
The solution does not support the following scenarios, features, and technology:
Basic tier IaaS VMs
Disabling encryption on an OS drive for Linux IaaS VMs
Disabling encryption on a data drive if the OS drive is encrypted for Linux IaaS VMs
IaaS VMs that are created by using the classic VM creation method
Enabling encryption on Windows and Linux IaaS VMs created from customer custom images is not supported.
Enabling encryption on a Linux LVM OS disk is not currently supported; this support is coming soon.
Integration with your on-premises Key Management Service
Azure Files (shared file system), Network File System (NFS), dynamic volumes, and Windows VMs that are
configured with software-based RAID systems
Backup and restore of encrypted VMs, encrypted without key encryption key.
Update encryption settings of an existing encrypted premium storage VM.
NOTE
Backup and restore of encrypted VMs is supported only for VMs that are encrypted with the KEK configuration. It is not
supported on VMs that are encrypted without KEK. KEK is an optional parameter that enables VM encryption. This support
is coming soon. Update encryption settings of an existing encrypted premium storage VM are not supported. This support
is coming soon.
Encryption features
When you enable and deploy Azure Disk Encryption for Azure IaaS VMs, the following capabilities are enabled,
depending on the configuration provided:
Encryption of the OS volume to protect the boot volume at rest in your storage
Encryption of data volumes to protect the data volumes at rest in your storage
Disabling encryption on the OS and data drives for Windows IaaS VMs
Disabling encryption on the data drives for Linux IaaS VMs (only if OS drive IS NOT encrypted)
Safeguarding the encryption keys and secrets in your key vault subscription
Reporting the encryption status of the encrypted IaaS VM
Removal of disk-encryption configuration settings from the IaaS virtual machine
Backup and restore of encrypted VMs by using the Azure Backup service
NOTE
Backup and restore of encrypted VMs is supported only for VMs that are encrypted with the KEK configuration. It is not
supported on VMs that are encrypted without KEK. KEK is an optional parameter that enables VM encryption.
The Azure Disk Encryption for Windows and Linux IaaS VMs solution includes:
The disk-encryption extension for Windows.
The disk-encryption extension for Linux.
The disk-encryption PowerShell cmdlets.
The disk-encryption Azure command-line interface (CLI) cmdlets.
The disk-encryption Azure Resource Manager templates.
The Azure Disk Encryption solution is supported on IaaS VMs that are running Windows or Linux OS. For more
information about the supported operating systems, see the "Prerequisites" section.
NOTE
There is no additional charge for encrypting VM disks with Azure Disk Encryption.
Value proposition
When you apply the Azure Disk Encryption-management solution, you can satisfy the following business needs:
IaaS VMs are secured at rest, because you can use industry-standard encryption technology to address
organizational security and compliance requirements.
IaaS VMs boot under customer-controlled keys and policies, and you can audit their usage in your key vault.
Encryption workflow
To enable disk encryption for Windows and Linux VMs, do the following:
1. Choose an encryption scenario from among the preceding encryption scenarios.
2. Opt in to enabling disk encryption via the Azure Disk Encryption Resource Manager template, PowerShell
cmdlets, or CLI command, and specify the encryption configuration.
For the customer-encrypted VHD scenario, upload the encrypted VHD to your storage account and the
encryption key material to your key vault. Then, provide the encryption configuration to enable
encryption on a new IaaS VM.
For new VMs that are created from the Marketplace and existing VMs that are already running in Azure,
provide the encryption configuration to enable encryption on the IaaS VM.
3. Grant access to the Azure platform to read the encryption-key material (BitLocker encryption keys for
Windows systems and Passphrase for Linux) from your key vault to enable encryption on the IaaS VM.
4. Provide the Azure Active Directory (Azure AD) application identity to write the encryption key material to
your key vault. Doing so enables encryption on the IaaS VM for the scenarios mentioned in step 2.
5. Azure updates the VM service model with encryption and the key vault configuration, and sets up your
encrypted VM.
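For the Marketplace and running-VM scenarios, the workflow above can be sketched with the disk-encryption PowerShell cmdlets. All names below are hypothetical, and $aadClientID/$aadClientSecret are assumed to come from the Azure AD application setup described in the Prerequisites:

```powershell
# Hypothetical resource names; the key vault and VM must be in the same region and subscription
$rgName   = "MyResourceGroup"
$vmName   = "MyVM"
$keyVault = Get-AzureRmKeyVault -VaultName "contosovault" -ResourceGroupName $rgName

# Enable encryption on the OS and data volumes of a running IaaS VM
Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName $rgName -VMName $vmName `
    -AadClientID $aadClientID -AadClientSecret $aadClientSecret `
    -DiskEncryptionKeyVaultUrl $keyVault.VaultUri `
    -DiskEncryptionKeyVaultId $keyVault.ResourceId `
    -VolumeType "All"
```

The -VolumeType parameter selects OS, Data, or All volumes; an optional key encryption key (KEK) can additionally be supplied to wrap the disk-encryption secret.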
Decryption workflow
To disable disk encryption for IaaS VMs, complete the following high-level steps:
1. Choose to disable encryption (decryption) on a running IaaS VM in Azure via the Azure Disk Encryption
Resource Manager template or PowerShell cmdlets, and specify the decryption configuration.
This step disables encryption of the OS or the data volume or both on the running Windows IaaS VM.
However, as mentioned in the previous section, disabling OS disk encryption for Linux is not supported.
The decryption step is allowed only for data drives on Linux VMs as long as the OS disk is not encrypted.
2. Azure updates the VM service model, and the IaaS VM is marked decrypted. The contents of the VM are no
longer encrypted at rest.
NOTE
The disable-encryption operation does not delete your key vault and the encryption key material (BitLocker encryption
keys for Windows systems or Passphrase for Linux). Disabling OS disk encryption for Linux is not supported. The decryption
step is allowed only for data drives on Linux VMs. Disabling data disk encryption for Linux is not supported if the OS drive
is encrypted.
Prerequisites
Before you enable Azure Disk Encryption on Azure IaaS VMs for the supported scenarios that were discussed in
the "Overview" section, see the following prerequisites:
You must have a valid active Azure subscription to create resources in Azure in the supported regions.
Azure Disk Encryption is supported on the following Windows Server versions: Windows Server 2008 R2,
Windows Server 2012, Windows Server 2012 R2, and Windows Server 2016.
Azure Disk Encryption is supported on the following Windows client versions: Windows 8 client and Windows
10 client.
NOTE
For Windows Server 2008 R2, you must have .NET Framework 4.5 installed before you enable encryption in Azure. You can
install it from Windows Update by installing the optional update Microsoft .NET Framework 4.5.2 for Windows Server 2008
R2 x64-based systems (KB2901983).
Azure Disk Encryption is supported on the following Azure Gallery based Linux server distributions and
versions:
Azure Disk Encryption requires that your key vault and VMs reside in the same Azure region and subscription.
NOTE
Configuring the resources in separate regions causes a failure in enabling the Azure Disk Encryption feature.
To set up and configure your key vault for Azure Disk Encryption, see section Set up and configure your key
vault for Azure Disk Encryption in the Prerequisites section of this article.
To set up and configure Azure AD application in Azure Active directory for Azure Disk Encryption, see section
Set up the Azure AD application in Azure Active Directory in the Prerequisites section of this article.
To set up and configure the key vault access policy for the Azure AD application, see section Set up the key
vault access policy for the Azure AD application in the Prerequisites section of this article.
To prepare a pre-encrypted Windows VHD, see section Prepare a pre-encrypted Windows VHD in the
Appendix.
To prepare a pre-encrypted Linux VHD, see section Prepare a pre-encrypted Linux VHD in the Appendix.
The Azure platform needs access to the encryption keys or secrets in your key vault to make them available to
the virtual machine when it boots and decrypts the virtual machine OS volume. To grant permissions to the Azure
platform, set the EnabledForDiskEncryption property in the key vault. For more information, see Set up
and configure your key vault for Azure Disk Encryption in the Appendix.
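Granting this access is a single cmdlet call; a sketch with a hypothetical vault and resource group name:

```powershell
# Allow the Azure platform to retrieve disk-encryption secrets from this vault
Set-AzureRmKeyVaultAccessPolicy -VaultName "contosovault" `
    -ResourceGroupName "MyResourceGroup" -EnabledForDiskEncryption
```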
Your key vault secret and KEK URLs must be versioned. Azure enforces this restriction of versioning. For
valid secret and KEK URLs, see the following examples:
Example of a valid secret URL:
https://contosovault.vault.azure.net/secrets/BitLockerEncryptionSecretWithKek/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Example of a valid KEK URL:
https://contosovault.vault.azure.net/keys/diskencryptionkek/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Azure Disk Encryption does not support specifying port numbers as part of key vault secrets and KEK
URLs. For examples of non-supported and supported key vault URLs, see the following:
Unacceptable key vault URL
https://contosovault.vault.azure.net:443/secrets/contososecret/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Acceptable key vault URL:
https://contosovault.vault.azure.net/secrets/contososecret/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
To enable the Azure Disk Encryption feature, the IaaS VMs must meet the following network endpoint
configuration requirements:
To get a token to connect to your key vault, the IaaS VM must be able to connect to an Azure Active
Directory endpoint, [login.microsoftonline.com].
To write the encryption keys to your key vault, the IaaS VM must be able to connect to the key vault
endpoint.
The IaaS VM must be able to connect to an Azure storage endpoint that hosts the Azure extension
repository and an Azure storage account that hosts the VHD files.
NOTE
If your security policy limits access from Azure VMs to the Internet, you can resolve the preceding URI and
configure a specific rule to allow outbound connectivity to the IPs.
To configure and access Azure Key Vault behind a firewall, see https://docs.microsoft.com/en-us/azure/key-
vault/key-vault-access-behind-firewall.
Use the latest version of the Azure PowerShell SDK to configure Azure Disk Encryption. Download the
latest version of the Azure PowerShell release.
NOTE
Azure Disk Encryption is not supported on Azure PowerShell SDK version 1.1.0. If you are receiving an error related
to using Azure PowerShell 1.1.0, see Azure Disk Encryption Error Related to Azure PowerShell 1.1.0.
To run any Azure CLI command and associate it with your Azure subscription, you must first install Azure
CLI:
To install Azure CLI and associate it with your Azure subscription, see How to install and configure
Azure CLI.
To use Azure CLI for Mac, Linux, and Windows with Azure Resource Manager, see Azure CLI commands
in Resource Manager mode.
When encrypting a managed disk, it is a mandatory prerequisite to take a snapshot of the managed disk or a
backup of the disk outside of Azure Disk Encryption prior to enabling encryption. Without a backup in
place, any unexpected failure during encryption may render the disk and VM inaccessible without a
recovery option. Set-AzureRmVMDiskEncryptionExtension does not currently back up managed disks and
will error if used against a managed disk unless the -skipVmBackup parameter has been specified. When
this parameter is specified, the cmdlet does not make a backup of the managed disk prior to encryption,
so it is unsafe to use unless a backup has already been made outside of Azure Disk Encryption.
NOTE
The -skipVmBackup parameter should never be used unless a snapshot or backup has already been made outside
of Azure Disk Encryption.
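Taking that prerequisite snapshot can be sketched as follows (disk, VM, and resource group names are hypothetical):

```powershell
# Snapshot the managed OS disk before enabling Azure Disk Encryption
$disk = Get-AzureRmDisk -ResourceGroupName "MyResourceGroup" -DiskName "MyVM_OsDisk"
$snapshotConfig = New-AzureRmSnapshotConfig -SourceUri $disk.Id `
    -CreateOption Copy -Location $disk.Location
New-AzureRmSnapshot -ResourceGroupName "MyResourceGroup" `
    -SnapshotName "MyVM-OsDisk-backup" -Snapshot $snapshotConfig
```

With the snapshot in place, a failed encryption attempt can be recovered by re-creating the disk from the snapshot.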
The Azure Disk Encryption solution uses the BitLocker external key protector for Windows IaaS VMs. For
domain joined VMs, DO NOT push any group policies that enforce TPM protectors. For information about
the group policy for “Allow BitLocker without a compatible TPM,” see BitLocker Group Policy Reference.
BitLocker policy on domain-joined virtual machines with custom group policy must include the following
setting: Configure user storage of BitLocker recovery information -> Allow 256-bit recovery key. Azure Disk
Encryption will fail when custom group policy settings for BitLocker are incompatible. On machines that did
not have the correct policy setting, applying the new policy, forcing the new policy to update (gpupdate.exe
/force), and then restarting may be required.
To create an Azure AD application, create a key vault, or set up an existing key vault and enable encryption, see
the Azure Disk Encryption prerequisite PowerShell script.
To configure disk-encryption prerequisites using the Azure CLI, see this Bash script.
To use the Azure Backup service to back up and restore encrypted VMs, when encryption is enabled with
Azure Disk Encryption, encrypt your VMs by using the Azure Disk Encryption key configuration. The
Backup service supports VMs that are encrypted using KEK configuration only. See How to back up and
restore encrypted virtual machines with Azure Backup encryption.
When encrypting a Linux OS volume, note that a VM restart is currently required at the end of the process.
This can be done via the portal, PowerShell, or CLI. To track the progress of encryption, periodically poll the
status message returned by Get-AzureRmVMDiskEncryptionStatus (https://docs.microsoft.com/en-
us/powershell/module/azurerm.compute/get-azurermvmdiskencryptionstatus). Once encryption is
complete, the status message returned by this command will indicate this, for example:
"ProgressMessage: OS disk successfully encrypted, please reboot the VM". At this point the VM can be
restarted and used.
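A simple polling loop for this, with hypothetical resource group and VM names:

```powershell
# Poll encryption progress on a Linux OS volume, then restart when complete
do {
    Start-Sleep -Seconds 60
    $status = Get-AzureRmVMDiskEncryptionStatus `
        -ResourceGroupName "MyResourceGroup" -VMName "MyLinuxVM"
    Write-Output $status.ProgressMessage
} until ($status.ProgressMessage -match "successfully encrypted")

Restart-AzureRmVM -ResourceGroupName "MyResourceGroup" -Name "MyLinuxVM"
```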
Azure Disk Encryption for Linux requires data disks to have a mounted file system in Linux prior to
encryption
Recursively mounted data disks are not supported by the Azure Disk Encryption for Linux. For example, if
the target system has mounted a disk on /foo/bar and then another on /foo/bar/baz, the encryption of
/foo/bar/baz will succeed, but encryption of /foo/bar will fail.
Azure Disk Encryption is only supported on Azure gallery supported images that meet the aforementioned
prerequisites. Customer custom images are not supported due to custom partition schemes and process
behaviors that may exist on these images. Further, even gallery-image-based VMs that initially met
prerequisites but have been modified after creation may be incompatible. For that reason, the suggested
procedure for encrypting a Linux VM is to start from a clean gallery image, encrypt the VM, and then add
custom software or data to the VM as needed.
NOTE
Backup and restore of encrypted VMs is supported only for VMs that are encrypted with the KEK configuration. It is not
supported on VMs that are encrypted without KEK. KEK is an optional parameter that enables VM encryption.
$aadClientSecret = "yourSecret"
$azureAdApplication = New-AzureRmADApplication -DisplayName "<Your Application Display Name>" -HomePage "
<https://YourApplicationHomePage>" -IdentifierUris "<https://YouApplicationUri>" -Password $aadClientSecret
$servicePrincipal = New-AzureRmADServicePrincipal -ApplicationId $azureAdApplication.ApplicationId
NOTE
$azureAdApplication.ApplicationId is the Azure AD ClientID and $aadClientSecret is the client secret that you should use
later to enable Azure Disk Encryption. Safeguard the Azure AD client secret appropriately.
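Once the client ID and secret exist, they can be supplied to the disk-encryption extension. A hedged sketch follows; the vault name, resource group, and VM name are placeholders for your own resources:

```powershell
# Enable Azure Disk Encryption on an existing VM with the Azure AD credentials created above.
$keyVault = Get-AzureRmKeyVault -VaultName '<yourKeyVaultName>' -ResourceGroupName '<yourResourceGroup>'
Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName '<yourResourceGroup>' -VMName '<yourVMName>' `
    -AadClientID $azureAdApplication.ApplicationId `
    -AadClientSecret $aadClientSecret `
    -DiskEncryptionKeyVaultUrl $keyVault.VaultUri `
    -DiskEncryptionKeyVaultId $keyVault.ResourceId
```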
Setting up the Azure AD client ID and secret from the Azure classic portal
You can also set up your Azure AD client ID and secret by using the Azure classic portal. To perform this task, do
the following:
1. Click the Active Directory tab.
2. Click Add Application, and then type the application name.
3. Click the arrow button, and then configure the application properties.
4. Click the check mark in the lower left corner to finish. The application configuration page appears, and the
Azure AD client ID is displayed at the bottom of the page.
5. Save the Azure AD client secret by clicking the Save button. Note the Azure AD client secret in the keys text
box. Safeguard it appropriately.
NOTE
The preceding flow is not supported on the Azure classic portal.
Use an existing application
To execute the following commands, obtain and use the Azure AD PowerShell module.
NOTE
The following commands must be executed from a new PowerShell window. Do not use Azure PowerShell or the Azure
Resource Manager window to execute the commands. We recommend this approach because these cmdlets are in the
MSOnline module or Azure AD PowerShell.
$clientSecret = '<yourAadClientSecret>'
$aadClientID = '<Client ID of your Azure AD application>'
connect-msolservice
New-MsolServicePrincipalCredential -AppPrincipalId $aadClientID -Type password -Value $clientSecret
NOTE
Azure AD certificate-based authentication is currently not supported on Linux VMs.
The sections that follow show how to configure a certificate-based authentication for Azure AD.
Create an Azure AD application
$cert = New-Object
System.Security.Cryptography.X509Certificates.X509Certificate("C:\certificates\examplecert.pfx",
"yourpassword")
$keyValue = [System.Convert]::ToBase64String($cert.GetRawCertData())
$azureAdApplication = New-AzureRmADApplication -DisplayName "<Your Application Display Name>" -HomePage "
<https://YourApplicationHomePage>" -IdentifierUris "<https://YouApplicationUri>" -KeyValue $keyValue -KeyType
AsymmetricX509Cert
$servicePrincipal = New-AzureRmADServicePrincipal -ApplicationId $azureAdApplication.ApplicationId
After you finish this step, upload a PFX file to your key vault and enable the access policy needed to deploy that
certificate to a VM.
Use an existing Azure AD application
If you are configuring certificate-based authentication for an existing application, use the PowerShell cmdlets
shown here. Be sure to execute them from a new PowerShell window.
$certLocalPath = 'C:\certs\myaadapp.cer'
$aadClientID = '<Client ID of your Azure AD application>'
connect-msolservice
$cer = New-Object System.Security.Cryptography.X509Certificates.X509Certificate
$cer.Import($certLocalPath)
$binCert = $cer.GetRawCertData()
$credValue = [System.Convert]::ToBase64String($binCert);
New-MsolServicePrincipalCredential -AppPrincipalId $aadClientID -Type asymmetric -Value $credValue -Usage
verify
After you finish this step, upload a PFX file to your key vault and enable the access policy that's needed to deploy
the certificate to a VM.
Upload a PFX file to your key vault
For a detailed explanation of this process, see The Official Azure Key Vault Team Blog. However, the following
PowerShell cmdlets are all you need for the task. Be sure to execute them from the Azure PowerShell console.
NOTE
Replace the following yourpassword string with your secure password, and safeguard the password.
$certLocalPath = 'C:\certs\myaadapp.pfx'
$certPassword = "yourpassword"
$resourceGroupName = 'yourResourceGroup'
$keyVaultName = 'yourKeyVaultName'
$keyVaultSecretName = 'yourAadCertSecretName'
# Read the PFX and base64-encode it; $filecontentencoded is referenced in the JSON object below.
$fileContentBytes = Get-Content $certLocalPath -Encoding Byte
$filecontentencoded = [System.Convert]::ToBase64String($fileContentBytes)
$jsonObject = @"
{
"data": "$filecontentencoded",
"dataType" :"pfx",
"password": "$certPassword"
}
"@
$jsonObjectBytes = [System.Text.Encoding]::UTF8.GetBytes($jsonObject)
$jsonEncoded = [System.Convert]::ToBase64String($jsonObjectBytes)
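The encoded JSON can then be stored as a secret in the vault. A minimal sketch, assuming the variables defined above and an existing key vault:

```powershell
# Upload the base64-encoded JSON as a key vault secret; the VM agent later decodes and installs the PFX.
$secretValue = ConvertTo-SecureString -String $jsonEncoded -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName $keyVaultName -Name $keyVaultSecretName -SecretValue $secretValue
```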
Deploy a certificate in your key vault to an existing VM
After you finish uploading the PFX, deploy a certificate in the key vault to an existing VM with the following:
$resourceGroupName = 'yourResourceGroup'
$keyVaultName = 'yourKeyVaultName'
$keyVaultSecretName = 'yourAadCertSecretName'
$vmName = 'yourVMName'
$certUrl = (Get-AzureKeyVaultSecret -VaultName $keyVaultName -Name $keyVaultSecretName).Id
$sourceVaultId = (Get-AzureRmKeyVault -VaultName $keyVaultName -ResourceGroupName
$resourceGroupName).ResourceId
$vm = Get-AzureRmVM -ResourceGroupName $resourceGroupName -Name $vmName
$vm = Add-AzureRmVMSecret -VM $vm -SourceVaultId $sourceVaultId -CertificateStore "My" -CertificateUrl
$certUrl
Update-AzureRmVM -VM $vm -ResourceGroupName $resourceGroupName
Set up the key vault access policy for the Azure AD application
Your Azure AD application needs rights to access the keys or secrets in the vault. Use the
Set-AzureKeyVaultAccessPolicy cmdlet to grant permissions to the application, using the client ID (which was
generated when the application was registered) as the -ServicePrincipalName parameter value. To learn more,
see the blog post Azure Key Vault - Step by Step. Here is an example of how to perform this task via PowerShell:
$keyVaultName = '<yourKeyVaultName>'
$aadClientID = '<yourAadAppClientID>'
$rgname = '<yourResourceGroup>'
Set-AzureRmKeyVaultAccessPolicy -VaultName $keyVaultName -ServicePrincipalName $aadClientID -
PermissionsToKeys 'WrapKey' -PermissionsToSecrets 'Set' -ResourceGroupName $rgname
NOTE
Azure Disk Encryption requires you to configure the following access policies to your Azure AD client application: WrapKey
and Set permissions.
Terminology
To understand some of the common terms used by this technology, use the following terminology table:
TERMINOLOGY DEFINITION
Azure Key Vault: Key Vault is a cryptographic key-management service based on Federal Information Processing Standards (FIPS)-validated hardware security modules, which help safeguard your cryptographic keys and sensitive secrets. For more information, see the Key Vault documentation.
KEK: The key encryption key is the asymmetric key (RSA 2048) that you can use to protect or wrap the secret. You can provide a hardware security module (HSM)-protected key or a software-protected key. For more details, see the Azure Key Vault documentation.
Set up and configure your key vault for Azure Disk Encryption
Azure Disk Encryption helps safeguard the disk-encryption keys and secrets in your key vault. To set up your key
vault for Azure Disk Encryption, complete the steps in each of the following sections.
Create a key vault
To create a key vault, use one of the following options:
"101-Key-Vault-Create" Resource Manager template
Azure PowerShell key vault cmdlets
Azure Resource Manager
How to Secure your key vault
NOTE
If you have already set up a key vault for your subscription, skip to the next section.
Set up a key encryption key (optional)
If you want to use a KEK for an additional layer of security for the BitLocker encryption keys, add a KEK to your
key vault. Use the Add-AzureKeyVaultKey cmdlet to create a key encryption key in the key vault. You can also
import a KEK from your on-premises key management HSM. For more details, see Key Vault Documentation.
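Creating a KEK with the cmdlet mentioned above can be sketched as follows; the vault and key names are placeholders:

```powershell
# Create a key encryption key in the vault; use -Destination 'Software' for a software-protected key.
Add-AzureKeyVaultKey -VaultName '<yourKeyVaultName>' -Name '<yourKekName>' -Destination 'HSM'
```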
You can add the KEK by going to Azure Resource Manager or by using your key vault interface.
You can also set the EnabledForDiskEncryption property by visiting the Azure Resource Explorer.
As mentioned earlier, you must set the EnabledForDiskEncryption property on your key vault. Otherwise, the
deployment will fail.
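Besides Azure Resource Explorer, the EnabledForDiskEncryption property can also be set from PowerShell. A minimal sketch with placeholder names:

```powershell
# Mark the vault as usable by Azure Disk Encryption; required before deployment.
Set-AzureRmKeyVaultAccessPolicy -VaultName '<yourKeyVaultName>' `
    -ResourceGroupName '<yourResourceGroup>' -EnabledForDiskEncryption
```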
You can set up access policies for your Azure AD application from the key vault interface, as shown here:
On the Advanced access policies tab, make sure that your key vault is enabled for Azure Disk Encryption:
Disk-encryption deployment scenarios and user experiences
You can enable many disk-encryption scenarios, and the steps may vary according to the scenario. The following
sections cover the scenarios in greater detail.
Enable encryption on new IaaS VMs that are created from the Marketplace
You can enable disk encryption on new IaaS Windows VM from the Marketplace in Azure by using the Resource
Manager template.
1. On the Azure quick-start template, click Deploy to Azure, enter the encryption configuration on the
Parameters blade, and then click OK.
2. Select the subscription, resource group, resource group location, legal terms, and agreement, and then
click Create to enable encryption on a new IaaS VM.
NOTE
This template creates a new encrypted Windows VM that uses the Windows Server 2012 gallery image.
You can enable disk encryption on a new IaaS RedHat Linux 7.2 VM with a 200-GB RAID-0 array by using this
Resource Manager template. After you deploy the template, verify the VM encryption status by using the
Get-AzureRmVmDiskEncryptionStatus cmdlet, as described in Encrypting OS drive on a running Linux VM. When the
machine returns a status of VMRestartPending, restart the VM.
The following table lists the Resource Manager template parameters for new VMs from the Marketplace scenario
using Azure AD client ID:
PARAMETER DESCRIPTION
virtualNetworkName: Name of the VNet that the VM NIC should belong to.
subnetName: Name of the subnet in the VNet that the VM NIC should belong to.
keyVaultURL: URL of the key vault that the BitLocker key should be uploaded to. You can get it by using the cmdlet (Get-AzureRmKeyVault -VaultName <yourKeyVaultName> -ResourceGroupName <yourResourceGroupName>).VaultUri.
keyEncryptionKeyURL: URL of the key encryption key that's used to encrypt the generated BitLocker key (optional).
NOTE
KeyEncryptionKeyURL is an optional parameter. You can bring your own KEK to further safeguard the data encryption key
(Passphrase secret) in your key vault.
Enable encryption on new IaaS VMs that are created from customer-encrypted VHD and encryption keys
In this scenario, you can enable encrypting by using the Resource Manager template, PowerShell cmdlets, or CLI
commands. The following sections explain in greater detail the Resource Manager template and CLI commands.
Follow the instructions from one of these sections for preparing pre-encrypted images that can be used in Azure.
After the image is created, you can use the steps in the next section to create an encrypted Azure VM.
Prepare a pre-encrypted Windows VHD
Prepare a pre-encrypted Linux VHD
Using the Resource Manager template
You can enable disk encryption on your encrypted VHD by using the Resource Manager template.
1. On the Azure quick-start template, click Deploy to Azure, enter the encryption configuration on the
Parameters blade, and then click OK.
2. Select the subscription, resource group, resource group location, legal terms, and agreement, and then
click Create to enable encryption on the new IaaS VM.
The following table lists the Resource Manager template parameters for your encrypted VHD:
PARAMETER DESCRIPTION
virtualNetworkName: Name of the VNet that the VM NIC should belong to. The name should already have been created in the same resource group and same location as the VM.
subnetName: Name of the subnet on the VNet that the VM NIC should belong to.
keyVaultResourceID: The ResourceID that identifies the key vault resource in Azure Resource Manager. You can get it by using the PowerShell cmdlet (Get-AzureRmKeyVault -VaultName <yourKeyVaultName> -ResourceGroupName <yourResourceGroupName>).ResourceId.
keyVaultSecretUrl: URL of the disk-encryption key that's set up in the key vault.
keyVaultKekUrl: URL of the key encryption key for encrypting the generated disk-encryption key.
4. To enable encryption on a new VM from your encrypted VHD, use the following parameters with the
azure vm create command:
* disk-encryption-key-vault-id <disk-encryption-key-vault-id>
* disk-encryption-key-url <disk-encryption-key-url>
* key-encryption-key-vault-id <key-encryption-key-vault-id>
* key-encryption-key-url <key-encryption-key-url>
PARAMETER DESCRIPTION
keyVaultName: Name of the key vault that the BitLocker key should be uploaded to. You can get it by using the cmdlet (Get-AzureRmKeyVault -ResourceGroupName <yourResourceGroupName>).VaultName.
keyEncryptionKeyURL: URL of the key encryption key that's used to encrypt the generated BitLocker key. This parameter is optional if you select nokek in the UseExistingKek drop-down list. If you select kek in the UseExistingKek drop-down list, you must enter the keyEncryptionKeyURL value.
NOTE
KeyEncryptionKeyURL is an optional parameter. You can bring your own KEK to further safeguard the data encryption key
(passphrase secret) in your key vault.
CLI commands
You can enable disk encryption on your encrypted VHD by installing and using the CLI command. To enable
encryption on existing or running IaaS Linux VMs in Azure by using CLI commands, do the following:
1. Set access policies in the key vault:
Set the EnabledForDiskEncryption flag:
azure keyvault set-policy --vault-name <keyVaultName> --enabled-for-disk-encryption true
* disk-encryption-key-vault-id <disk-encryption-key-vault-id>
* disk-encryption-key-url <disk-encryption-key-url>
* key-encryption-key-vault-id <key-encryption-key-vault-id>
* key-encryption-key-url <key-encryption-key-url>
Get the encryption status of an encrypted (Windows/Linux) IaaS VM by using the disk-encryption PowerShell cmdlet
You can get the encryption status of the IaaS VM from the disk-encryption PowerShell cmdlet
Get-AzureRmVMDiskEncryptionStatus . Run the cmdlet with your resource group, VM, and extension names; output like the following is returned:
OsVolumeEncrypted : NotEncrypted
DataVolumesEncrypted : Encrypted
OsVolumeEncryptionSettings : Microsoft.Azure.Management.Compute.Models.DiskEncryptionSettings
ProgressMessage : https://rheltest1keyvault.vault.azure.net/secrets/bdb6bfb1-5431-4c28-af46-
b18d0025ef2a/abebacb83d864a5fa729508315020f8a
You can inspect the output of Get-AzureRmVMDiskEncryptionStatus for encryption key URLs.
C:\> $status = Get-AzureRmVmDiskEncryptionStatus -ResourceGroupName $ResourceGroupName -VMName $VMName -ExtensionName $ExtensionName
C:\> $status.OsVolumeEncryptionSettings

DiskEncryptionKey                                                 KeyEncryptionKey                                               Enabled
-----------------                                                 ----------------                                               -------
Microsoft.Azure.Management.Compute.Models.KeyVaultSecretReference Microsoft.Azure.Management.Compute.Models.KeyVaultKeyReference True

C:\> $status.OsVolumeEncryptionSettings.DiskEncryptionKey.SecretUrl
https://rheltest1keyvault.vault.azure.net/secrets/bdb6bfb1-5431-4c28-af46-b18d0025ef2a/abebacb83d864a5fa729508315020f8a
C:\> $status.OsVolumeEncryptionSettings.DiskEncryptionKey

SecretUrl                                                                                                               SourceVault
---------                                                                                                               -----------
https://rheltest1keyvault.vault.azure.net/secrets/bdb6bfb1-5431-4c28-af46-b18d0025ef2a/abebacb83d864a5fa729508315020f8a Microsoft.Azure.Management....
The OSVolumeEncrypted and DataVolumesEncrypted settings values are set to Encrypted, which shows that both
volumes are encrypted using Azure Disk Encryption. For information about enabling encryption with Azure Disk
Encryption by using PowerShell cmdlets, see the blog posts Explore Azure Disk Encryption with Azure PowerShell
- Part 1 and Explore Azure Disk Encryption with Azure PowerShell - Part 2.
NOTE
On Linux VMs, it takes three to four minutes for the Get-AzureRmVMDiskEncryptionStatus cmdlet to report the
encryption status.
Get the encryption status of the IaaS VM from the disk-encryption CLI command
You can get the encryption status of the IaaS VM by using the disk-encryption CLI command
azure vm show-disk-encryption-status . To get the encryption settings for your VM, run this command from your Azure CLI session.
Windows VM
The disable-encryption step disables encryption of the OS, the data volume, or both on the running Windows
IaaS VM. You cannot disable the OS volume and leave the data volume encrypted. When the disable-encryption
step is performed, the Azure classic deployment model updates the VM service model, and the Windows IaaS VM
is marked decrypted. The contents of the VM are no longer encrypted at rest. The decryption does not delete your
key vault or the encryption key material (BitLocker encryption keys for Windows and the passphrase for Linux).
Linux VM
The disable-encryption step disables encryption of the data volume on the running Linux IaaS VM. This step only
works if the OS disk is not encrypted.
NOTE
Disabling encryption on the OS disk is not allowed on Linux VMs.
Disable encryption on an existing or running IaaS VM
You can disable disk encryption on running Windows IaaS VMs by using the Resource Manager template.
1. On the Azure quick-start template, click Deploy to Azure, enter the decryption configuration on the
Parameters blade, and then click OK.
2. Select the subscription, resource group, resource group location, legal terms, and agreement, and then
click Create to disable encryption on the running IaaS VM.
For Linux VMs, you can disable encryption by using the Disable encryption on a running Linux VM template.
The following table lists Resource Manager template parameters for disabling encryption on a running IaaS VM:
PARAMETER DESCRIPTION
Disable encryption on an existing or running IaaS VM
To disable encryption on an existing or running IaaS VM by using the PowerShell cmdlet, see
Disable-AzureRmVMDiskEncryption . This cmdlet supports both Windows and Linux VMs. To disable encryption, it
installs an extension on the virtual machine. If the Name parameter is not specified, an extension with the default
name AzureDiskEncryption for Windows VMs is created.
On Linux VMs, the AzureDiskEncryptionForLinux extension is used.
NOTE
This cmdlet reboots the virtual machine.
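A minimal invocation sketch for the cmdlet described above; the resource group and VM names are placeholders:

```powershell
# Disable encryption on a running VM; note that this cmdlet reboots the VM.
Disable-AzureRmVMDiskEncryption -ResourceGroupName '<yourResourceGroup>' -VMName '<yourVMName>'
```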
NOTE
It is mandatory to snapshot and/or back up a managed-disk-based VM instance outside of, and prior to, enabling Azure Disk
Encryption. A snapshot of the managed disk can be taken from the portal, or Azure Backup can be used. Backups ensure
that a recovery option is possible in the case of any unexpected failure during encryption. Once a backup is made, the Set-
AzureRmVMDiskEncryptionExtension cmdlet can be used to encrypt managed disks by specifying the -skipVmBackup
parameter. This command will fail against managed-disk-based VMs until a backup has been made and this parameter has
been specified.
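After the snapshot or backup exists, the managed-disk encryption call described in the note can be sketched as follows; every name below is a placeholder for your own resources:

```powershell
# Encrypt a managed-disk VM only after a backup exists; -SkipVmBackup asserts that a backup was made.
$keyVault = Get-AzureRmKeyVault -VaultName '<yourKeyVaultName>' -ResourceGroupName '<yourResourceGroup>'
Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName '<yourResourceGroup>' -VMName '<yourVMName>' `
    -AadClientID '<yourAadClientId>' -AadClientSecret '<yourAadClientSecret>' `
    -DiskEncryptionKeyVaultUrl $keyVault.VaultUri -DiskEncryptionKeyVaultId $keyVault.ResourceId `
    -SkipVmBackup
```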
Appendix
Connect to your subscription
Before you proceed, review the Prerequisites section in this article. After you ensure that all prerequisites have
been met, connect to your subscription by doing the following:
1. Start an Azure PowerShell session, and sign in to your Azure account with the following command:
Login-AzureRmAccount
2. If you have multiple subscriptions and want to specify one to use, type the following to see the
subscriptions for your account:
Get-AzureRmSubscription
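To then make one of the listed subscriptions active, a one-line sketch (the subscription ID is a placeholder):

```powershell
# Set the chosen subscription as the context for subsequent cmdlets.
Select-AzureRmSubscription -SubscriptionId '<yourSubscriptionId>'
```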
6. The following output confirms the Azure Disk Encryption PowerShell installation:
To compress the OS partition and prepare the machine for BitLocker, execute the following command:
NOTE
Prepare the VM with a separate data/resource VHD for getting the external key by using BitLocker.
The VM must be created from the Marketplace image in Azure Resource Manager.
Azure VM with at least 4 GB of RAM (recommended size is 7 GB).
(For RHEL and CentOS) Disable SELinux. To disable SELinux, see "4.4.2. Disabling SELinux" in the SELinux
User's and Administrator's Guide on the VM.
After you disable SELinux, reboot the VM at least once.
Steps
2. Configure the VM according to your needs. If you are going to encrypt all the (OS + data) drives, the data
drives need to be specified and mountable from /etc/fstab.
NOTE
Use UUID=... to specify data drives in /etc/fstab instead of specifying the block device name (for example,
/dev/sdb1). During encryption, the order of drives changes on the VM. If your VM relies on a specific order of block
devices, it will fail to mount them after encryption.
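For illustration, an /etc/fstab entry of the form the note recommends; the UUID and mount point here are hypothetical:

```
# /etc/fstab -- identify the data disk by UUID, not by block device name
UUID=0e952c9c-8f28-4924-9b26-a6a63a7b768e  /datadrive  ext4  defaults,nofail  0  2
```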
NOTE
All user-space processes that are not running as systemd services should be killed with a SIGKILL . Reboot the
VM. When you enable OS disk encryption on a running VM, plan on VM downtime.
5. Periodically monitor the progress of encryption by using the instructions in the next section.
6. After Get-AzureRmVmDiskEncryptionStatus shows "VMRestartPending," restart your VM either by signing
in to it or by using the portal, PowerShell, or CLI.
C:\> Get-AzureRmVmDiskEncryptionStatus -ResourceGroupName $ResourceGroupName -VMName $VMName
-ExtensionName $ExtensionName
OsVolumeEncrypted : VMRestartPending
DataVolumesEncrypted : NotMounted
OsVolumeEncryptionSettings : Microsoft.Azure.Management.Compute.Models.DiskEncryptionSettings
ProgressMessage : OS disk successfully encrypted, reboot the VM
Before you reboot, we recommend that you save boot diagnostics of the VM.
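The restart in step 6 can be issued from PowerShell; a minimal sketch using the same placeholder variables as the status check above:

```powershell
# Restart the VM to complete OS disk encryption once the status is VMRestartPending.
Restart-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VMName
```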
Monitoring OS encryption progress
You can monitor OS encryption progress in three ways:
Use the Get-AzureRmVmDiskEncryptionStatus cmdlet and inspect the ProgressMessage field:
OsVolumeEncrypted : EncryptionInProgress
DataVolumesEncrypted : NotMounted
OsVolumeEncryptionSettings : Microsoft.Azure.Management.Compute.Models.DiskEncryptionSettings
ProgressMessage : OS disk encryption started
After the VM reaches "OS disk encryption started," it takes about 40 to 50 minutes on a Premium-storage
backed VM.
Because of issue #388 in WALinuxAgent, OsVolumeEncrypted and DataVolumesEncrypted show up as
Unknown in some distributions. With WALinuxAgent version 2.1.5 and later, this issue is fixed
automatically. If you see Unknown in the output, you can verify disk-encryption status by using the Azure
Resource Explorer.
Go to Azure Resource Explorer, and then expand this hierarchy in the selection panel on left:
|-- subscriptions
|-- [Your subscription]
|-- resourceGroups
|-- [Your resource group]
|-- providers
|-- Microsoft.Compute
|-- virtualMachines
|-- [Your virtual machine]
|-- InstanceView
In the InstanceView, scroll down to see the encryption status of your drives.
Look at boot diagnostics. Messages from the ADE extension should be prefixed with
[AzureDiskEncryption] .
Sign in to the VM via SSH, and get the extension log from:
/var/log/azure/Microsoft.Azure.Security.AzureDiskEncryptionForLinux
We recommend that you do not sign in to the VM while OS encryption is in progress. Copy the logs only
when the other two methods have failed.
Prepare a pre-encrypted Linux VHD
Ubuntu 16
2. Create a separate boot drive, which must not be encrypted. Encrypt your root drive.
3. Provide a passphrase. This is the passphrase that you upload to the key vault.
4. Finish partitioning.
5. When you boot the VM and are asked for a passphrase, use the passphrase you provided in step 3.
6. Prepare the VM for uploading into Azure using these instructions. Do not run the last step (deprovisioning
the VM) yet.
Configure encryption to work with Azure by doing the following:
1. Create a file under /usr/local/sbin/azure_crypt_key.sh, with the content in the following script. Pay
attention to the KeyFileName, because it is the passphrase file name used by Azure.
#!/bin/sh
MountPoint=/tmp-keydisk-mount
KeyFileName=LinuxPassPhraseFileName
echo "Trying to get the key from disks ..." >&2
mkdir -p $MountPoint
modprobe vfat >/dev/null 2>&1
modprobe ntfs >/dev/null 2>&1
sleep 2
OPENED=0
cd /sys/block
for DEV in sd*; do
# Loop body below is reconstructed by analogy with the SLES snippet later in this article.
echo "> Trying device: $DEV ..." >&2
mount -t vfat -r /dev/${DEV}1 $MountPoint 2>/dev/null || mount -t ntfs -r /dev/${DEV}1 $MountPoint 2>/dev/null
if [ -f $MountPoint/$KeyFileName ]; then
cat $MountPoint/$KeyFileName
OPENED=1
fi
umount $MountPoint 2>/dev/null
[ "$OPENED" = "1" ] && break
done
3. If you are editing azure_crypt_key.sh in Windows and you copied it to Linux, run
dos2unix /usr/local/sbin/azure_crypt_key.sh .
chmod +x /usr/local/sbin/azure_crypt_key.sh
3. Prepare the VM for uploading to Azure by following the instructions in Prepare a SLES or openSUSE
virtual machine for Azure. Do not run the last step (deprovisioning the VM) yet.
To configure encryption to work with Azure, do the following:
1. Edit the /etc/dracut.conf, and add the following line: add_drivers+=" vfat ntfs nls_cp437 nls_iso8859-1"
2. Comment out these lines toward the end of the file /usr/lib/dracut/modules.d/90crypt/module-setup.sh:
# inst_multiple -o \
# $systemdutildir/system-generators/systemd-cryptsetup-generator \
# $systemdutildir/systemd-cryptsetup \
# $systemdsystemunitdir/systemd-ask-password-console.path \
# $systemdsystemunitdir/systemd-ask-password-console.service \
# $systemdsystemunitdir/cryptsetup.target \
# $systemdsystemunitdir/sysinit.target.wants/cryptsetup.target \
# systemd-ask-password systemd-tty-ask-password-agent
# inst_script "$moddir"/crypt-run-generator.sh /sbin/crypt-run-generator
DRACUT_SYSTEMD=0
if [ -z "$DRACUT_SYSTEMD" ]; then
to:
if [ 1 ]; then
MountPoint=/tmp-keydisk-mount
KeyFileName=LinuxPassPhraseFileName
echo "Trying to get the key from disks ..." >&2
mkdir -p $MountPoint >&2
modprobe vfat >/dev/null >&2
modprobe ntfs >/dev/null >&2
for SFS in /dev/sd*; do
echo "> Trying device:$SFS..." >&2
mount ${SFS}1 $MountPoint -t vfat -r >&2 ||
mount ${SFS}1 $MountPoint -t ntfs -r >&2
if [ -f $MountPoint/$KeyFileName ]; then
echo "> keyfile got..." >&2
cp $MountPoint/$KeyFileName /tmp-keyfile >&2
luksfile=/tmp-keyfile
umount $MountPoint >&2
break
fi
done
4. When you boot the VM and are asked for a passphrase, use the passphrase you provided in step 3.
5. Prepare the VM for uploading into Azure by using the "CentOS 7.0+" instructions in Prepare a CentOS-
based virtual machine for Azure. Do not run the last step (deprovisioning the VM) yet.
6. Now you can deprovision the VM and upload your VHD into Azure.
To configure encryption to work with Azure, do the following:
1. Edit the /etc/dracut.conf, and add the following line: add_drivers+=" vfat ntfs nls_cp437 nls_iso8859-1"
2. Comment out these lines toward the end of the file /usr/lib/dracut/modules.d/90crypt/module-setup.sh:
# inst_multiple -o \
# $systemdutildir/system-generators/systemd-cryptsetup-generator \
# $systemdutildir/systemd-cryptsetup \
# $systemdsystemunitdir/systemd-ask-password-console.path \
# $systemdsystemunitdir/systemd-ask-password-console.service \
# $systemdsystemunitdir/cryptsetup.target \
# $systemdsystemunitdir/sysinit.target.wants/cryptsetup.target \
# systemd-ask-password systemd-tty-ask-password-agent
# inst_script "$moddir"/crypt-run-generator.sh /sbin/crypt-run-generator
DRACUT_SYSTEMD=0
if [ -z "$DRACUT_SYSTEMD" ]; then
to
if [ 1 ]; then
4. Edit /usr/lib/dracut/modules.d/90crypt/cryptroot-ask.sh and append the following after the "# Open LUKS device" comment:
MountPoint=/tmp-keydisk-mount
KeyFileName=LinuxPassPhraseFileName
echo "Trying to get the key from disks ..." >&2
mkdir -p $MountPoint >&2
modprobe vfat >/dev/null >&2
modprobe ntfs >/dev/null >&2
for SFS in /dev/sd*; do
echo "> Trying device:$SFS..." >&2
mount ${SFS}1 $MountPoint -t vfat -r >&2 || mount ${SFS}1 $MountPoint -t ntfs -r >&2
if [ -f $MountPoint/$KeyFileName ]; then
echo "> keyfile got..." >&2
cp $MountPoint/$KeyFileName /tmp-keyfile >&2
luksfile=/tmp-keyfile
umount $MountPoint >&2
break
fi
done
5. Run "/usr/sbin/dracut -f -v" to update the initrd.
Upload the disk-encryption secret for the pre-encrypted VM to your key vault
The disk-encryption secret that you obtained previously must be uploaded as a secret in your key vault. The key
vault needs to have disk encryption and permissions enabled for your Azure AD client.
$AadClientId = "YourAADClientId"
$AadClientSecret = "YourAADClientSecret"
# This is the passphrase that was provided for encryption during the distribution installation
$passphrase = "contoso-password"
Use the $secretUrl in the next step for attaching the OS disk without using KEK.
Disk encryption secret encrypted with a KEK
Before you upload the secret to the key vault, you can optionally encrypt it by using a key encryption key. Use the
wrap API to first encrypt the secret using the key encryption key. The output of this wrap operation is a base64
URL encoded string, which you can then upload as a secret by using the Set-AzureKeyVaultSecret cmdlet.
# This is the passphrase that was provided for encryption during the distribution installation
$passphrase = "contoso-password"
$apiversion = "2015-06-01"
##############################
# Get Auth URI
##############################
$response = try { Invoke-RestMethod -Method GET -Uri $uri -Headers $headers } catch { $_.Exception.Response }
$authHeader = $response.Headers["www-authenticate"]
$authUri = [regex]::match($authHeader, 'authorization="(.*?)"').Groups[1].Value
##############################
# Get Auth Token
##############################
$uri = $authUri + "/oauth2/token"
$body = "grant_type=client_credentials"
$body += "&client_id=" + $AadClientId
$body += "&client_secret=" + [Uri]::EscapeDataString($AadClientSecret)
$body += "&resource=" + [Uri]::EscapeDataString("https://vault.azure.net")
$headers = @{}
$response = Invoke-RestMethod -Method POST -Uri $uri -Headers $headers -Body $body
$access_token = $response.access_token
##############################
# Get KEK info
##############################
$keyid = $response.key.kid
##############################
# Encrypt passphrase using KEK
##############################
$passphraseB64 = [Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($Passphrase))
$uri = $keyid + "/encrypt?api-version=" + $apiversion
$headers = @{"Authorization" = "Bearer " + $access_token; "Content-Type" = "application/json"}
$bodyObj = @{"alg" = "RSA-OAEP"; "value" = $passphraseB64}
$body = $bodyObj | ConvertTo-Json
$response = Invoke-RestMethod -Method POST -Uri $uri -Headers $headers -Body $body
$wrappedSecret = $response.value
##############################
# Store secret
##############################
$secretName = [guid]::NewGuid().ToString()
$uri = $KeyVault.VaultUri + "/secrets/" + $secretName + "?api-version=" + $apiversion
$secretAttributes = @{"enabled" = $true}
$secretTags = @{"DiskEncryptionKeyEncryptionAlgorithm" = "RSA-OAEP"; "DiskEncryptionKeyFileName" =
"LinuxPassPhraseFileName"}
$headers = @{"Authorization" = "Bearer " + $access_token; "Content-Type" = "application/json"}
$bodyObj = @{"value" = $wrappedSecret; "attributes" = $secretAttributes; "tags" = $secretTags}
$body = $bodyObj | ConvertTo-Json
$response = Invoke-RestMethod -Method PUT -Uri $uri -Headers $headers -Body $body
$secretUrl = $response.id
Use $KeyEncryptionKey and $secretUrl in the next step for attaching the OS disk using KEK.
Specify a secret URL when you attach an OS disk
Without using a KEK
While you are attaching the OS disk, you need to pass $secretUrl . The URL was generated in the "Disk-
encryption secret not encrypted with a KEK" section.
Set-AzureRmVMOSDisk `
-VM $VirtualMachine `
-Name $OSDiskName `
-SourceImageUri $VhdUri `
-VhdUri $OSDiskUri `
-Linux `
-CreateOption FromImage `
-DiskEncryptionKeyVaultId $KeyVault.ResourceId `
-DiskEncryptionKeyUrl $SecretUrl
Using a KEK
When you attach the OS disk, pass $KeyEncryptionKey and $secretUrl . The URL was generated in the "Disk
encryption secret encrypted with a KEK" section.
Set-AzureRmVMOSDisk `
-VM $VirtualMachine `
-Name $OSDiskName `
-SourceImageUri $CopiedTemplateBlobUri `
-VhdUri $OSDiskUri `
-Linux `
-CreateOption FromImage `
-DiskEncryptionKeyVaultId $KeyVault.ResourceId `
-DiskEncryptionKeyUrl $SecretUrl `
-KeyEncryptionKeyVaultId $KeyVault.ResourceId `
-KeyEncryptionKeyURL $KeyEncryptionKey.Id
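After the disk settings are applied, the VM itself still has to be created from the configuration object. A minimal sketch (assuming $ResourceGroupName and $Location were defined in earlier steps of your session) might look like:

```powershell
# Create the VM from the configured object. $ResourceGroupName and
# $Location are assumed to have been defined in earlier steps.
New-AzureRmVM -ResourceGroupName $ResourceGroupName -Location $Location -VM $VirtualMachine
```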
Azure Disk Encryption FAQ
This article provides answers to frequently asked questions (FAQ) about Azure Disk Encryption for Windows and Linux IaaS VMs. For more information about this service, see Azure Disk Encryption for Windows and Linux IaaS VMs.
General questions
Q: Where is Azure Disk Encryption in general availability (GA)?
A: Azure Disk Encryption for Windows and Linux IaaS VMs is in general availability in all Azure public regions.
Q: What user experiences are available with Azure Disk Encryption?
A: Azure Disk Encryption GA supports Azure Resource Manager templates, Azure PowerShell, and the Azure CLI, giving you three different options for enabling disk encryption for your IaaS VMs. For more information on the user experience and step-by-step guidance available in Azure Disk Encryption, see Azure Disk Encryption deployment scenarios and experiences.
Q: How much does Azure Disk Encryption cost?
A: There is no charge for encrypting VM disks with Azure Disk Encryption.
Q: Which virtual machine tiers does Azure Disk Encryption support?
A: Azure Disk Encryption is available on standard tier VMs including A, D, DS, G, GS, and F series IaaS VMs. It is also
available for VMs with premium storage. It is not available on basic tier VMs.
Q: What Linux distributions does Azure Disk Encryption support?
A: Azure Disk Encryption is supported on the following Linux server distributions and versions:
NOTE
The Linux Azure disk encryption preview extension is deprecated. For details, see Deprecating Azure disk encryption preview
extension for Linux IaaS VMs.
Next steps
In this document, you learned more about the most frequent questions related to Azure Disk Encryption. For more
information about this service and its capabilities, see the following articles:
Apply disk encryption in Azure Security Center
Encrypt an Azure virtual machine
Azure data encryption at rest
Azure Disk Encryption troubleshooting guide
8/31/2017 • 5 min to read
This guide is for IT professionals, information security analysts, and cloud administrators whose organizations use
Azure Disk Encryption and need guidance to troubleshoot disk-encryption-related problems.
Unable to encrypt
In some cases, the Linux disk encryption appears to be stuck at "OS disk encryption started" and SSH is disabled.
The encryption process can take between 3-16 hours to finish on a stock gallery image. If multi-terabyte-sized data
disks are added, the process might take days.
The Linux OS disk encryption sequence unmounts the OS drive temporarily. It then performs block-by-block
encryption of the entire OS disk, before it remounts it in its encrypted state. Unlike Azure Disk Encryption on
Windows, Linux Disk Encryption does not allow for concurrent use of the VM while the encryption is in progress.
The performance characteristics of the VM can make a significant difference in the time required to complete
encryption. These characteristics include the size of the disk and whether the storage account is standard or
premium (SSD) storage.
To check the encryption status, poll the ProgressMessage field returned from the Get-AzureRmVmDiskEncryptionStatus command. While the OS drive is being encrypted, the VM enters a servicing state and disables SSH to prevent any disruption to the ongoing process. The EncryptionInProgress message reports for the majority of the time while the encryption is in progress. Several hours later, a VMRestartPending message prompts you to restart the VM. For example:
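A simple polling loop along these lines can watch for the restart prompt (the resource group and VM names below are illustrative placeholders):

```powershell
# Check the encryption progress every 15 minutes until the VM reports
# that a restart is pending. Names are illustrative placeholders.
do {
    Start-Sleep -Seconds 900
    $status = Get-AzureRmVMDiskEncryptionStatus -ResourceGroupName 'MyResourceGroup' -VMName 'MyLinuxVM'
    Write-Output $status.ProgressMessage
} while ($status.ProgressMessage -notmatch 'VMRestartPending')
```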
After you are prompted to reboot the VM, and after the VM restarts, you must wait 2-3 minutes for the reboot and
for the final steps to be performed on the target. The status message changes when the encryption is finally
complete. After this message is available, the encrypted OS drive is expected to be ready for use and the VM is
ready to be used again.
In the following cases, we recommend that you restore the VM back to the snapshot or backup taken immediately
before encryption:
If the reboot sequence described previously does not happen.
If the boot information, progress message, or other error indicators report that OS encryption has failed in the
middle of this process. An example of a message is the "failed to unmount" error that is described in this guide.
Prior to the next attempt, reevaluate the characteristics of the VM and ensure that all of the prerequisites are
satisfied.
\windows\system32\bdehdcfg.exe
\windows\system32\bdehdcfglib.dll
\windows\system32\en-US\bdehdcfglib.dll.mui
\windows\system32\en-US\bdehdcfg.exe.mui
Next steps
In this document, you learned more about some common problems in Azure Disk Encryption and how to
troubleshoot those problems. For more information about this service and its capabilities, see the following articles:
Apply disk encryption in Azure Security Center
Encrypt an Azure virtual machine
Azure data encryption at rest
Encrypt an Azure Virtual Machine
6/27/2017 • 10 min to read
Azure Security Center will alert you if you have virtual machines that are not encrypted. These alerts will show as
High Severity and the recommendation is to encrypt these virtual machines.
NOTE
The information in this document applies to encrypting virtual machines without using a Key Encryption Key (which is
required for backing up virtual machines using Azure Backup). Please see the article Azure Disk Encryption for Windows and
Linux Azure Virtual Machines for information on how to use a Key Encryption Key to support Azure Backup for encrypted
Azure Virtual Machines.
To encrypt Azure Virtual Machines that have been identified by Azure Security Center as needing encryption, we
recommend the following steps:
Install and configure Azure PowerShell. This enables you to run the PowerShell commands required to set up the prerequisites for encrypting Azure Virtual Machines.
Obtain and run the Azure Disk Encryption Prerequisites Azure PowerShell script
Encrypt your virtual machines
The goal of this document is to enable you to encrypt your virtual machines, even if you have little or no
background in Azure PowerShell. This document assumes you are using Windows 10 as the client machine from
which you will configure Azure Disk Encryption.
There are many approaches that can be used to set up the prerequisites and to configure encryption for Azure Virtual Machines. If you are already well-versed in Azure PowerShell or Azure CLI, you may prefer to use alternate approaches.
NOTE
To learn more about alternate approaches to configuring encryption for Azure virtual machines, please see Azure Disk
Encryption for Windows and Linux Azure Virtual Machines.
NOTE
If you’re curious as to why you need to create an Azure Active Directory application, please see Register an application with
Azure Active Directory section in the article Getting Started with Azure Key Vault.
Perform the following steps to encrypt an Azure Virtual Machine:
1. If you closed the PowerShell ISE, open an elevated instance of it and follow the instructions earlier in this article. If you closed the script, open ADEPrereqScript.ps1 by clicking File, then Open, and selecting the script from the c:\AzureADEScript folder. If you have followed this article from the start, just move on to the next step.
2. In the console of the PowerShell ISE (the bottom pane), change to the location of the script by typing cd c:\AzureADEScript and pressing ENTER.
3. Set the execution policy on your machine so that you can run the script. Type Set-ExecutionPolicy Unrestricted in the console and then press ENTER. If you see a dialog box describing the effects of the change to the execution policy, click Yes to all if that option is shown; otherwise, click Yes.
4. Log into your Azure account. In the console, type Login-AzureRmAccount and press ENTER. A dialog box appears in which you enter your credentials. Make sure you have rights to change the virtual machines; if you do not, you will not be able to encrypt them (if you are not sure, ask your subscription owner or administrator). You should see information about your Environment, Account, TenantId, SubscriptionId, and CurrentStorageAccount. Copy the SubscriptionId to Notepad. You will need it in step #6.
5. Find what subscription your virtual machine belongs to and its location. Go to https://portal.azure.com and
log in. On the left side of the page, click Virtual Machines. You will see a list of your virtual machines and
the subscriptions they belong to.
6. Return to the PowerShell ISE. Set the subscription context in which the script will be run. In the console, type Select-AzureRmSubscription –SubscriptionId <your_subscription_Id> (replace <your_subscription_Id> with your actual subscription ID) and press ENTER. You will see information about the Environment, Account, TenantId, SubscriptionId, and CurrentStorageAccount.
7. You are now ready to run the script. Click the Run Script button or press F5 on the keyboard.
8. The script asks for resourceGroupName: - enter the name of Resource Group you want to use, then press
ENTER. If you don’t have one, enter a name you want to use for a new one. If you already have a Resource
Group that you want to use (such as the one that your virtual machine is in), enter the name of the existing
Resource Group.
9. The script asks for keyVaultName: - enter the name of the Key Vault you want to use, then press ENTER. If you
don’t have one, enter a name you want to use for a new one. If you already have a Key Vault that you want to
use, enter the name of the existing Key Vault.
10. The script asks for location: - enter the name of the location in which the VM you want to encrypt is located,
then press ENTER. If you don’t remember the location, go back to step #5.
11. The script asks for aadAppName: - enter the name of the Azure Active Directory application you want to use,
then press ENTER. If you don’t have one, enter a name you want to use for a new one. If you already have an
Azure Active Directory application that you want to use, enter the name of the existing Azure Active Directory
application.
12. A log in dialog box will appear. Provide your credentials (yes, you have logged in once, but now you need to do
it again).
13. The script runs and when complete it will ask you to copy the values of the aadClientID, aadClientSecret,
diskEncryptionKeyVaultUrl, and keyVaultResourceId. Copy each of these values to the clipboard and paste
them into Notepad.
14. Return to the PowerShell ISE and place the cursor at the end of the last line, and press ENTER.
The output of the script should look something like the screen below:
Encryption steps
First, you need to tell PowerShell the name of the virtual machine you want to encrypt. In the console, type:
$vmName = '<your_vm_name>'
Replace <your_vm_name> with the name of your VM (make sure the name is surrounded by single quotes) and then press ENTER.
To confirm that the correct VM name was entered, type:
$vmName
Press ENTER. You should see the name of the virtual machine you want to encrypt. For example:
There are two methods to run the encryption command to encrypt all drives on the virtual machine. The first
method is to type the following command in the PowerShell ISE console:
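The encryption cmdlet of that era is Set-AzureRmVMDiskEncryptionExtension. A sketch of one possible invocation, assuming the variables hold the values copied from the prerequisites script output, is:

```powershell
# Enable encryption on all volumes of the VM. The AAD and Key Vault
# values are the ones copied to Notepad in step 13; the variable
# names here are assumptions, not part of the original walkthrough.
Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName $resourceGroupName `
    -VMName $vmName `
    -AadClientID $aadClientID `
    -AadClientSecret $aadClientSecret `
    -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl `
    -DiskEncryptionKeyVaultId $keyVaultResourceId `
    -VolumeType All
```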
Regardless of the method you use, a dialog box will appear informing you that it will take 10-15 minutes for the
operation to complete. Click Yes.
While the encryption process is taking place, you can return to the Azure Portal and see the status of the virtual
machine. On the left side of the page, click Virtual Machines, then in the Virtual Machines blade, click the name
of the virtual machine you’re encrypting. In the blade that appears, you’ll notice that the Status says that it’s
Updating. This demonstrates that encryption is in process.
Return to the PowerShell ISE. When the script completes, you’ll see what appears in the figure below.
To demonstrate that the virtual machine is now encrypted, return to the Azure Portal and click Virtual Machines
on the left side of the page. Click the name of the virtual machine you encrypted. In the Settings blade, click Disks.
Azure Operational Security best practices
Azure Operational Security refers to the services, controls, and features available to users for protecting their data,
applications, and other assets in Microsoft Azure. Azure Operational Security is built on a framework that
incorporates the knowledge gained through various capabilities that are unique to Microsoft, including the
Microsoft Security Development Lifecycle (SDL), the Microsoft Security Response Center program, and deep
awareness of the cybersecurity threat landscape.
In this article, we discuss a collection of Azure operational security best practices. These best practices are derived from our experience with Azure operational security and the experiences of customers like yourself.
For each best practice, we explain:
What the best practice is
Why you want to enable that best practice
What might be the result if you fail to enable the best practice
How you can learn to enable the best practice
This Azure Operational Security Best Practices article is based on a consensus opinion, and Azure platform
capabilities and feature sets, as they exist at the time this article was written. Opinions and technologies change
over time and this article will be updated on a regular basis to reflect those changes.
Azure Operational Security best practices discussed in this article include:
Monitor, manage, and protect cloud infrastructure
Manage identity and implement single sign-on (SSO)
Trace requests, analyze usage trends, and diagnose issues
Monitoring services with a centralized monitoring solution
Prevent, detect, and respond to threats
End-to-end scenario-based network monitoring
Secure deployment using proven DevOps tools
Monitoring services
Cloud applications are complex with many moving parts. Monitoring provides data to ensure that your application
stays up and running in a healthy state. It also helps you to stave off potential problems or troubleshoot past ones.
In addition, you can use monitoring data to gain deep insights about your application. That knowledge can help you
to improve application performance or maintainability, or automate actions that would otherwise require manual
intervention.
Monitor Azure resources
Azure Monitor is the platform service that provides a single source for monitoring Azure resources. With Azure
Monitor, you can visualize, query, route, archive, and take action on the metrics and logs coming from resources in
Azure. You can work with this data using the Monitor portal blade, Monitor PowerShell Cmdlets, Cross-Platform
CLI, or Azure Monitor REST APIs.
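As a small illustration, activity-log entries can be pulled with the AzureRM.Insights PowerShell cmdlets of that era (the resource group name below is a placeholder):

```powershell
# Retrieve the last 24 hours of activity-log events for a resource
# group and show who did what, and when.
Get-AzureRmLog -ResourceGroup 'MyResourceGroup' -StartTime (Get-Date).AddDays(-1) |
    Select-Object EventTimestamp, Caller, OperationName
```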
Enable autoscale with Azure Monitor
Azure Monitor autoscale applies only to virtual machine scale sets (VMSS), cloud services, app service plans, and app service environments.
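A sketch of an autoscale setting for a VM scale set, using the AzureRM.Insights cmdlets of that era (the resource IDs, names, and variables are illustrative assumptions):

```powershell
# Add one instance when average CPU over 10 minutes exceeds 60%.
$rule = New-AzureRmAutoscaleRule -MetricName 'Percentage CPU' `
    -MetricResourceId $vmssResourceId -Operator GreaterThan -Threshold 60 `
    -MetricStatistic Average -TimeGrain 00:01:00 -TimeWindow 00:10:00 `
    -ScaleActionDirection Increase -ScaleActionScaleType ChangeCount `
    -ScaleActionValue 1 -ScaleActionCooldown 00:05:00

# Wrap the rule in a profile and apply it to the scale set.
$profile = New-AzureRmAutoscaleProfile -Name 'cpu-scale-out' `
    -DefaultCapacity 2 -MinimumCapacity 2 -MaximumCapacity 10 -Rules $rule
Add-AzureRmAutoscaleSetting -Name 'vmss-autoscale' -Location $location `
    -ResourceGroup $rgName -TargetResourceId $vmssResourceId -AutoscaleProfiles $profile
```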
Manage roles, permissions, and security
Many teams need to strictly regulate access to monitoring data and settings. For example, if you have team
members who work exclusively on monitoring (support engineers, devops engineers) or if you use a managed
service provider, you may want to grant them access to only monitoring data while restricting their ability to create,
modify, or delete resources.
This section shows how to quickly apply a built-in monitoring RBAC role to a user in Azure, or build your own custom role for a user who needs limited monitoring permissions. It then discusses security considerations for your Azure Monitor-related resources and how you can limit access to the data they contain.
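For example, a built-in role such as Monitoring Reader can be granted with a single role assignment (the sign-in name and subscription ID below are placeholders):

```powershell
# Give a support engineer read-only access to monitoring data across
# the subscription, without rights to modify resources.
New-AzureRmRoleAssignment -SignInName '[email protected]' `
    -RoleDefinitionName 'Monitoring Reader' `
    -Scope '/subscriptions/00000000-0000-0000-0000-000000000000'
```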
Next steps
Learn more about Azure Operational Security.
Learn more about Operations Management Suite Security & Compliance.
Get started with the Operations Management Suite Security and Audit solution.
Security management in Azure
6/27/2017 • 20 min to read
Azure subscribers may manage their cloud environments from multiple devices, including management
workstations, developer PCs, and even privileged end-user devices that have task-specific permissions. In some
cases, administrative functions are performed through web-based consoles such as the Azure portal. In other cases,
there may be direct connections to Azure from on-premises systems over Virtual Private Networks (VPNs),
Terminal Services, client application protocols, or (programmatically) the Azure Service Management API (SMAPI).
Additionally, client endpoints can be either domain joined or isolated and unmanaged, such as tablets or
smartphones.
Although multiple access and management capabilities provide a rich set of options, this variability can add
significant risk to a cloud deployment. It can be difficult to manage, track, and audit administrative actions. This
variability may also introduce security threats through unregulated access to client endpoints that are used for
managing cloud services. Using general or personal workstations for developing and managing infrastructure
opens unpredictable threat vectors such as web browsing (for example, watering hole attacks) or email (for
example, social engineering and phishing).
The potential for attacks increases in this type of environment because it is challenging to construct security
policies and mechanisms to appropriately manage access to Azure interfaces (such as SMAPI) from widely varied
endpoints.
Remote management threats
Attackers often attempt to gain privileged access by compromising account credentials (for example, through
password brute forcing, phishing, and credential harvesting), or by tricking users into running harmful code (for
example, from harmful websites with drive-by downloads or from harmful email attachments). In a remotely
managed cloud environment, account breaches can lead to an increased risk due to anywhere, anytime access.
Even with tight controls on primary administrator accounts, lower-level user accounts can be used to exploit
weaknesses in one’s security strategy. Lack of appropriate security training can also lead to breaches through
accidental disclosure or exposure of account information.
When a user workstation is also used for administrative tasks, it can be compromised at many different points, whether the user is browsing the web, using third-party and open-source tools, or opening a harmful document file that contains a trojan.
In general, most targeted attacks that result in data breaches can be traced to browser exploits, plug-ins (such as
Flash, PDF, Java), and spear phishing (email) on desktop machines. These machines may have administrative-level
or service-level permissions to access live servers or network devices for operations when used for development or
management of other assets.
Operational security fundamentals
For more secure management and operations, you can minimize a client’s attack surface by reducing the number
of possible entry points. This can be done through security principles: “separation of duties” and “segregation of
environments.”
Isolate sensitive functions from one another to decrease the likelihood that a mistake at one level leads to a breach
in another. Examples:
Administrative tasks should not be combined with activities that might lead to a compromise (for example,
malware in an administrator’s email that then infects an infrastructure server).
A workstation used for high-sensitivity operations should not be the same system used for high-risk purposes
such as browsing the Internet.
Reduce the system’s attack surface by removing unnecessary software. Example:
A standard administrative, support, or development workstation should not require installation of an email client or other productivity applications if the device's main purpose is to manage cloud services.
Client systems that have administrator access to infrastructure components should be subjected to the strictest
possible policy to reduce security risks. Examples:
Security policies can include Group Policy settings that deny open Internet access from the device and use of a
restrictive firewall configuration.
Use Internet Protocol security (IPsec) VPNs if direct access is needed.
Configure separate management and development Active Directory domains.
Isolate and filter management workstation network traffic.
Use antimalware software.
Implement multi-factor authentication to reduce the risk of stolen credentials.
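As one illustration of a restrictive firewall configuration, outbound traffic from a management workstation can be denied by default and then selectively re-opened. This is a sketch, not a complete policy; run it from an elevated prompt and adjust the allowed endpoints to your environment:

```powershell
# Block all outbound traffic by default on every firewall profile...
Set-NetFirewallProfile -Profile Domain,Private,Public -DefaultOutboundAction Block

# ...then allow only HTTPS, which Azure management endpoints require.
New-NetFirewallRule -DisplayName 'Allow outbound HTTPS' `
    -Direction Outbound -Protocol TCP -RemotePort 443 -Action Allow
```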
Consolidating access resources and eliminating unmanaged endpoints also simplifies management tasks.
Providing security for Azure remote management
Azure provides security mechanisms to aid administrators who manage Azure cloud services and virtual machines.
These mechanisms include:
Authentication and role-based access control.
Monitoring, logging, and auditing.
Certificates and encrypted communications.
A web management portal.
Network packet filtering.
With client-side security configuration and datacenter deployment of a management gateway, it is possible to
restrict and monitor administrator access to cloud applications and data.
NOTE
Certain recommendations in this article may result in increased data, network, or compute resource usage, and may increase
your license or subscription costs.
Security guidelines
In general, helping to secure administrator workstations for use with the cloud is similar to the practices used for
any workstation on-premises—for example, minimized build and restrictive permissions. Some unique aspects of
cloud management are more akin to remote or out-of-band enterprise management. These include the use and
auditing of credentials, security-enhanced remote access, and threat detection and response.
Authentication
You can use Azure logon restrictions to constrain source IP addresses for accessing administrative tools and audit
access requests. To help Azure identify management clients (workstations and/or applications), you can configure
both SMAPI (via customer-developed tools such as Windows PowerShell cmdlets) and the Azure portal to require
client-side management certificates to be installed, in addition to SSL certificates. We also recommend that
administrator access require multi-factor authentication.
Some applications or services that you deploy into Azure may have their own authentication mechanisms for both
end-user and administrator access, whereas others take full advantage of Azure AD. Depending on whether you are
federating credentials via Active Directory Federation Services (AD FS), using directory synchronization or
maintaining user accounts solely in the cloud, using Microsoft Identity Manager (part of Azure AD Premium) helps
you manage identity lifecycles between the resources.
Connectivity
Several mechanisms are available to help secure client connections to your Azure virtual networks. Two of these
mechanisms, site-to-site VPN (S2S) and point-to-site VPN (P2S), enable the use of industry standard IPsec (S2S) or
the Secure Socket Tunneling Protocol (SSTP) (P2S) for encryption and tunneling. When you connect to public-facing Azure service management endpoints, such as the Azure portal, Azure requires Hypertext Transfer Protocol Secure (HTTPS).
A stand-alone hardened workstation that does not connect to Azure through an RD Gateway should use the SSTP-based point-to-site VPN to create the initial connection to the Azure Virtual Network, and then establish an RDP connection to individual virtual machines from within the VPN tunnel.
Management auditing vs. policy enforcement
Typically, there are two approaches for helping to secure management processes: auditing and policy enforcement.
Doing both provides comprehensive controls, but may not be possible in all situations. In addition, each approach
has different levels of risk, cost, and effort associated with managing security, particularly as it relates to the level of
trust placed in both individuals and system architectures.
Monitoring, logging, and auditing provide a basis for tracking and understanding administrative activities, but it
may not always be feasible to audit all actions in complete detail due to the amount of data generated. Auditing the
effectiveness of the management policies is a best practice, however.
Policy enforcement that includes strict access controls puts programmatic mechanisms in place that can govern
administrator actions, and it helps ensure that all possible protection measures are being used. Logging provides
proof of enforcement, in addition to a record of who did what, from where, and when. Logging also enables you to
audit and crosscheck information about how administrators follow policies, and it provides evidence of activities.
Client configuration
We recommend three primary configurations for a hardened workstation. The biggest differentiators between
them are cost, usability, and accessibility, while maintaining a similar security profile across all options. The
following table provides a short analysis of the benefits and risks to each. (Note that “corporate PC” refers to a
standard desktop PC configuration that would be deployed for all domain users, regardless of roles.)
Stand-alone hardened workstation: tightly controlled workstation; higher cost for dedicated desktops.
Windows To Go with BitLocker drive encryption: compatibility with most PCs; asset tracking.
It is important that the hardened workstation is the host and not the guest, with nothing between the host
operating system and the hardware. Following the “clean source principle” (also known as “secure origin”) means
that the host should be the most hardened. Otherwise, the hardened workstation (guest) is subject to attacks on the
system on which it is hosted.
You can further segregate administrative functions through dedicated system images for each hardened
workstation that have only the tools and permissions needed for managing select Azure and cloud applications,
with specific local AD DS GPOs for the necessary tasks.
For IT environments that have no on-premises infrastructure (for example, no access to a local AD DS instance for
GPOs because all servers are in the cloud), a service such as Microsoft Intune can simplify deploying and
maintaining workstation configurations.
Stand-alone hardened workstation for management
With a stand-alone hardened workstation, administrators have a PC or laptop that they use for administrative tasks
and another, separate PC or laptop for non-administrative tasks. A workstation dedicated to managing your Azure
services does not need other applications installed. Additionally, using workstations that support a Trusted
Platform Module (TPM) or similar hardware-level cryptography technology aids in device authentication and
prevention of certain attacks. TPM can also support full volume protection of the system drive by using BitLocker
Drive Encryption.
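On a workstation with a TPM, enabling that protection is a one-line operation. This is a sketch; it requires an elevated prompt on Windows 8/Server 2012 or later:

```powershell
# Encrypt the system drive, sealing the volume key to the TPM so the
# disk only unlocks on this machine.
Enable-BitLocker -MountPoint 'C:' -EncryptionMethod Aes256 -TpmProtector
```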
In the stand-alone hardened workstation scenario (shown below), the local instance of Windows Firewall (or a non-Microsoft client firewall) is configured to block inbound connections, such as RDP. The administrator can log on to the hardened workstation and start an RDP session that connects to Azure after establishing a VPN connection with an Azure Virtual Network, but cannot log on to a corporate PC and use RDP to connect to the hardened workstation itself.
Best practices
Consider the following additional guidelines when you are managing applications and data in Azure.
Dos and don'ts
Don't assume that because a workstation has been locked down, other common security requirements do not need to be met. The potential risk is higher because of the elevated access levels that administrator accounts generally possess. Examples of risks and their alternate safe practices are shown in the table below.
Don't: Email credentials for administrator access or other secrets (for example, SSL or management certificates).
Do: Maintain confidentiality by delivering account names and passwords by voice (but not storing them in voice mail), perform a remote installation of client/server certificates (via an encrypted session), download from a protected network share, or distribute by hand via removable media.
Don't: Store account passwords unencrypted or un-hashed in application storage (such as in spreadsheets, SharePoint sites, or file shares).
Do: Establish security management principles and system hardening policies, and apply them to your development environment.
Don't: Share accounts and passwords between administrators, or reuse passwords across multiple user accounts or services, particularly those for social media or other nonadministrative activities.
Do: Create a dedicated Microsoft account to manage your Azure subscription, an account that is not used for personal email.
Don't: Email configuration files.
Do: Install configuration files and profiles from a trusted source (for example, an encrypted USB flash drive), not from a mechanism that can be easily compromised, such as email.
Don't: Use weak or simple logon passwords.
Do: Enforce strong password policies, expiration cycles (change-on-first-use), console timeouts, and automatic account lockouts. Use a client password management system with multi-factor authentication for password vault access.
Don't: Expose management ports to the Internet.
Do: Lock down Azure ports and IP addresses to restrict management access. For more information, see the Azure Network Security white paper.
Azure operations
Within Microsoft’s operation of Azure, operations engineers and support personnel who access Azure’s production
systems use hardened workstation PCs with VMs provisioned on them for internal corporate network access and
applications (such as e-mail, intranet, etc.). All management workstation computers have TPMs, the host boot drive
is encrypted with BitLocker, and they are joined to a special organizational unit (OU) in Microsoft’s primary
corporate domain.
System hardening is enforced through Group Policy, with centralized software updating. For auditing and analysis,
event logs (such as security and AppLocker) are collected from management workstations and saved to a central
location.
In addition, dedicated jump-boxes on Microsoft’s network that require two-factor authentication are used to
connect to Azure’s production network.
Summary
Using a hardened workstation configuration for administering your Azure cloud services, Virtual Machines, and
applications can help you avoid numerous risks and threats that can come from remotely managing critical IT
infrastructure. Both Azure and Windows provide mechanisms that you can employ to help protect and control
communications, authentication, and client behavior.
Next steps
The following resources are available to provide more general information about Azure and related Microsoft
services, in addition to specific items referenced in this paper:
Securing Privileged Access – get the technical details for designing and building a secure administrative
workstation for Azure management
Microsoft Trust Center - learn about Azure platform capabilities that protect the Azure fabric and the workloads
that run on Azure
Microsoft Security Response Center – where Microsoft security vulnerabilities, including issues with Azure, can be reported, or via email to [email protected]
Azure Security Blog – keep up to date on the latest in Azure Security
Introduction to Azure Security Center
9/13/2017 • 3 min to read
Learn about Azure Security Center, its key capabilities, and how to get started.
Get started
To get started with Security Center, you need a subscription to Microsoft Azure. Security Center is enabled with
your Azure subscription. If you do not have a subscription, you can sign up for a free trial.
You access Security Center from the Azure portal. See the portal documentation to learn more.
Getting started with Azure Security Center quickly guides you through the security-monitoring and policy-
management components of Security Center.
Next steps
In this document, you were introduced to Security Center, its key capabilities, and how to get started. To learn more,
see the following resources:
Planning and operations guide - Learn how to optimize your use of Security Center based on your
organization's security requirements and cloud management model.
Setting security policies — Learn how to configure security policies for your Azure subscriptions and resource
groups.
Managing security recommendations — Learn how recommendations help you protect your Azure resources.
Security health monitoring — Learn how to monitor the health of your Azure resources.
Managing and responding to security alerts — Learn how to manage and respond to security alerts.
Monitoring and processing security events - Learn how to monitor and process security events collected over
time.
Monitoring partner solutions — Learn how to monitor the health status of your partner solutions.
Azure Security Center FAQ — Find frequently asked questions about using the service.
Azure Security blog — Get the latest Azure security news and information.
Introduction to Microsoft Azure log integration
8/10/2017 • 3 min to read
Learn about Azure log integration, its key capabilities, and how it works.
Overview
Azure log integration is a free solution that enables you to integrate raw logs from your Azure resources into your
on-premises Security Information and Event Management (SIEM) systems.
Azure log integration collects Windows events from Windows Event Viewer logs, Azure Activity Logs, Azure
Security Center alerts, and Azure Diagnostic logs from Azure resources. This integration helps your SIEM solution
provide a unified dashboard for all your assets, on-premises or in the cloud, so that you can aggregate, correlate,
analyze, and alert for security events.
NOTE
At this time, the only supported clouds are Azure commercial and Azure Government. Other clouds are not supported.
Community assistance is available through the Azure Log Integration MSDN Forum. The forum provides the AzLog
community the ability to support each other with questions, answers, tips, and tricks on how to get the most out of
Azure Log Integration. In addition, the Azure Log Integration team monitors this forum and will help whenever we
can.
You can also open a support request. To do this, select Log Integration as the service for which you are
requesting support.
Next steps
In this document, you were introduced to Azure log integration. To learn more about Azure log integration and the
types of logs supported, see the following:
Microsoft Azure Log Integration – Download Center for details, system requirements, and install instructions on
Azure log integration.
Get started with Azure log integration - This tutorial walks you through installation of Azure log integration and
integrating logs from Azure WAD storage, Azure Activity Logs, Azure Security Center alerts and Azure Active
Directory audit logs.
Partner configuration steps – This blog post shows you how to configure Azure log integration to work with
partner solutions Splunk, HP ArcSight, and IBM QRadar. This blog represents our current position on
configuring the partner solutions. In all cases, please refer to partner solution documentation first.
Activity and ASC alerts over syslog to QRadar – This blog post provides the steps to send Activity and Azure
Security Center alerts over syslog to QRadar
Azure log Integration frequently asked questions (FAQ) - This FAQ answers questions about Azure log
integration.
Integrating Security Center alerts with Azure log Integration – This document shows you how to sync Azure
Security Center alerts with Azure Log Integration.
Azure log integration with Azure Diagnostics
Logging and Windows Event Forwarding
7/31/2017 • 10 min to read
Azure Log Integration (AzLog) enables you to integrate raw logs from your Azure resources into your on-premises
Security Information and Event Management (SIEM) systems. This integration makes it possible to have a unified
security dashboard for all your assets, on-premises or in the cloud, so that you can aggregate, correlate, analyze,
and alert for security events associated with your applications.
NOTE
For more information on Azure Log Integration, you can review the Azure Log Integration overview.
This article will help you get started with Azure Log Integration by focusing on the installation of the Azlog service
and integrating the service with Azure Diagnostics. The Azure Log Integration service will then be able to collect
Windows Event Log information from the Windows Security Event Channel from virtual machines deployed in
Azure IaaS. This is very similar to “Event Forwarding” that you may have used on-premises.
NOTE
The ability to bring the output of Azure log integration in to the SIEM is provided by the SIEM itself. Please see the article
Integrating Azure Log Integration with your On-premises SIEM for more information.
To be very clear - the Azure Log Integration service runs on a physical or virtual computer that is using the
Windows Server 2008 R2 or above operating system (Windows Server 2012 R2 or Windows Server 2016 are
preferred).
The physical computer can run on-premises (or on a hoster site). If you choose to run the Azure Log Integration
service on a virtual machine, that virtual machine can be located on-premises or in a public cloud, such as
Microsoft Azure.
The physical or virtual machine running the Azure Log Integration service requires network connectivity to the
Azure public cloud. Steps in this article provide details on the configuration.
Prerequisites
At a minimum, the installation of AzLog requires the following items:
An Azure subscription. If you do not have one, you can sign up for a free account.
A storage account that can be used for Windows Azure diagnostic logging (you can use a pre-configured
storage account, or create a new one - we will demonstrate how to configure the storage account later in this
article)
NOTE
Depending on your scenario, a storage account may not be required. For the Azure diagnostics scenario covered in this
article, one is needed.
Two systems: a machine that will run the Azure Log Integration service, and a machine that will be monitored
and have its logging information sent to the Azlog service machine.
A machine you want to monitor – this is a VM running as an Azure Virtual Machine
A machine that will run the Azure log integration service; this machine will collect all the log information
that will later be imported into your SIEM.
This system can be on-premises or in Microsoft Azure.
It needs to be running an x64 version of Windows Server 2008 R2 SP1 or higher and have .NET
4.5.1 installed. You can determine the .NET version installed by following the article titled How to:
Determine Which .NET Framework Versions Are Installed
It must have connectivity to the Azure storage account used for Azure diagnostic logging. We will
provide instructions later in this article on how you can confirm this connectivity
For a quick demonstration of the process of creating a virtual machine using the Azure portal, take a look at the
video below.
Deployment considerations
While you are testing Azure Log Integration, you can use any system that meets the minimum operating system
requirements. However, for a production environment the load may require you to plan for scaling up or out.
You can run multiple instances of the Azure Log Integration service (one instance per physical or virtual machine) if
event volume is high. In addition, you can balance the load by distributing Azure Diagnostics storage accounts for
Windows (WAD) and subscriptions across the instances, based on your capacity.
NOTE
At this time we do not have specific recommendations for when to scale out instances of Azure log integration machines (i.e.,
machines that are running the Azure log integration service), or for storage accounts or subscriptions. Scaling decisions
should be based on your performance observations in each of these areas.
You also have the option to scale up the Azure Log Integration service to help improve performance. The following
performance metrics can help you in sizing the machines that you choose to run the Azure log integration service:
On an 8-processor (core) machine, a single instance of Azlog Integrator can process about 24 million events
per day (~1M/hour).
On a 4-processor (core) machine, a single instance of Azlog Integrator can process about 1.5 million events
per day (~62.5K/hour).
NOTE
The Azure Log Integration service collects telemetry data from the machine on which it is installed. We recommend
that you allow Microsoft to collect this telemetry data; you can turn off collection by unchecking the option during
setup.
The telemetry data collected includes:
Exceptions that occur during execution of Azure log integration
Metrics about the number of queries and events processed
Statistics about which Azlog.exe command-line options are being used
The installation process is covered in the video below.
NOTE
When the command succeeds, you will not receive any feedback. If you want to use the US Government Azure cloud,
specify AzureUSGovernment as the value of the -Name parameter. Other Azure clouds are not supported at this time.
4. Before you can monitor a system you will need the name of the storage account in use for Azure
Diagnostics. In the Azure portal navigate to Virtual machines and look for the virtual machine that you will
monitor. In the Properties section, choose Diagnostic Settings. Click on Agent and make note of the
storage account name specified. You will need this account name for a later step.
NOTE
If Monitoring was not enabled during virtual machine creation you will be given the option to enable it as shown
above.
5. Now we’ll switch our attention back to the Azure log integration machine. We need to verify that you have
connectivity to the Storage Account from the system where you installed Azure Log Integration. The physical
computer or virtual machine running the Azure Log Integration service needs access to the storage account to
retrieve information logged by Azure Diagnostics as configured on each of the monitored systems.
a. You can download Azure Storage Explorer here.
b. Run through the setup routine
c. Once the installation completes click Next and leave the check box Launch Microsoft Azure Storage
Explorer checked.
d. Log in to Azure.
e. Verify that you can see the storage account that you configured for Azure Diagnostics.
f. Notice that there are a few options under storage accounts. One of them is Tables. Under Tables you
should see one called WADWindowsEventLogsTable.
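If you prefer to verify connectivity from PowerShell rather than Storage Explorer, the same check can be sketched with the classic Azure storage cmdlets. This is a sketch only; the account name and key are placeholders for your own values.

```powershell
# Build a storage context from the diagnostics storage account
# (substitute your own account name and key).
$ctx = New-AzureStorageContext -StorageAccountName '<StorageAccountName>' `
                               -StorageAccountKey '<StorageKey>'

# List the tables in the account; WADWindowsEventLogsTable should
# appear once Azure Diagnostics has written Windows event log data.
Get-AzureStorageTable -Context $ctx | Select-Object Name
```

If the table is missing, confirm that diagnostics is enabled on the monitored virtual machine and that events have had time to flush to storage.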
Integrate Azure Diagnostic logging
In this step, you will configure the machine running the Azure Log Integration service to connect to the storage
account that contains the log files. To complete this step we will need a few things up front.
FriendlyNameForSource: This is a friendly name that you can apply to the storage account that you've
configured the virtual machine to store information from Azure Diagnostics
StorageAccountName: This is the name of the storage account that you specified when you configured Azure
diagnostics.
StorageKey: This is the storage key for the storage account where the Azure Diagnostics information is stored
for this virtual machine.
Perform the following steps to obtain the storage key:
1. Browse to the Azure portal.
2. In the navigation pane of the Azure console, scroll to the bottom and click More services.
3. Enter Storage in the Filter text box. Click Storage accounts (this will appear after you enter Storage)
4. A list of storage accounts will appear. Double-click the account that you assigned to log storage.
5. Click on Access keys in the Settings section.
6. Copy key1 and put it in a secure location that you can access for the next step.
7. On the server that you installed Azure Log Integration, open an elevated Command Prompt (note that we’re
using an elevated Command Prompt window here, not an elevated PowerShell console).
8. Navigate to c:\Program Files\Microsoft Azure Log Integration
9. Run Azlog source add <FriendlyNameForTheSource> WAD <StorageAccountName> <StorageKey>
For example:
Azlog source add Azlogtest WAD Azlog9414 fxxxFxxxxxxxxywoEJK2xxxxxxxxxixxxJ+xVJx6m/X5SQDYc4Wpjpli9S9Mm+vXS2RVYtp1mes0t9H5cuqXEw==
If you would like the subscription ID to show up in the event XML, append the subscription ID to the friendly
name: Azlog source add <FriendlyNameForTheSource>.<SubscriptionID> WAD <StorageAccountName> <StorageKey> or
for example:
Azlog source add Azlogtest.YourSubscriptionID WAD Azlog9414 fxxxFxxxxxxxxywoEJK2xxxxxxxxxixxxJ+xVJx6m/X5SQDYc4Wpjpli9S9Mm+vXS2RVYtp1mes0t9H5cuqXEw==
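As an alternative to copying the key from the portal, the Get-AzureRmStorageAccountKey cmdlet (used later in this documentation) can also return the storage key. A sketch, with placeholder resource group and account names:

```powershell
# Retrieve the storage account keys; the first entry corresponds to
# key1 in the portal (the names below are placeholders).
$keys = Get-AzureRmStorageAccountKey -ResourceGroupName '<ResourceGroupName>' `
                                     -Name '<StorageAccountName>'
$keys[0].Value
```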
NOTE
Wait up to 60 minutes, then view the events that are pulled from the storage account. To view, open Event Viewer >
Windows Logs > Forwarded Events on the Azlog Integrator.
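After adding the source, you can confirm the registration by listing the configured sources (the azlog source list command is described in the Azure Log Integration FAQ):

```
Azlog source list
```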
Here you can see a video going over the steps covered above.
Next steps
To learn more about Azure Log Integration, see the following documents:
Microsoft Azure Log Integration for Azure logs – Download Center for details, system requirements, and install
instructions on Azure log integration.
Introduction to Azure log integration – This document introduces you to Azure log integration, its key
capabilities, and how it works.
Partner configuration steps – This blog post shows you how to configure Azure log integration to work with
partner solutions Splunk, HP ArcSight, and IBM QRadar. This is our current guidance on how to configure the
SIEM components. Please check with your SIEM vendor first for additional details.
Azure log Integration frequently asked questions (FAQ) - This FAQ answers questions about Azure log
integration.
Integrating Security Center alerts with Azure log Integration – This document shows you how to sync Security
Center alerts, along with virtual machine security events collected by Azure Diagnostics and Azure Activity Logs,
with your log analytics or SIEM solution.
New features for Azure diagnostics and Azure Audit Logs – This blog post introduces you to Azure Audit Logs
and other features that help you gain insights into the operations of your Azure resources.
Integrate Azure Active Directory audit logs
8/22/2017 • 2 min to read
Azure Active Directory (Azure AD) audit events help you identify privileged actions that occurred in Azure Active
Directory. You can see the types of events that you can track by reviewing Azure Active Directory audit report
events.
NOTE
Before you attempt the steps in this article, you must review the Get started article and complete the steps there.
This command prompts you for your Azure login. The command then creates an Azure Active Directory
service principal in the Azure AD tenants that host the Azure subscriptions in which the logged-in user is an
administrator, a co-administrator, or an owner. The command will fail if the logged-in user is only a guest
user in the Azure AD tenant. Authentication to Azure is done through Azure AD. Creating a service principal
for Azure Log Integration creates the Azure AD identity that is given access to read from Azure subscriptions.
3. Run the following command to provide your tenant ID. You need to be member of the tenant admin role to
run the command.
Azlog.exe authorizedirectoryreader tenantId
Example:
AZLOG.exe authorizedirectoryreader ba2c0000-d24b-4f4e-92b1-48c4469999
4. Check the following folders to confirm that the Azure Active Directory audit log JSON files are created in
them:
C:\Users\azlog\AzureActiveDirectoryJson
C:\Users\azlog\AzureActiveDirectoryJsonLD
The following video demonstrates the steps covered in this article:
NOTE
For specific instructions on bringing the information in the JSON files into your security information and event management
(SIEM) system, contact your SIEM vendor.
Community assistance is available through the Azure Log Integration MSDN Forum. This forum enables people in
the Azure Log Integration community to support each other with questions, answers, tips, and tricks. In addition, the
Azure Log Integration team monitors this forum and helps whenever it can.
You can also open a support request. Select Log Integration as the service for which you are requesting support.
Next steps
To learn more about Azure Log Integration, see:
Microsoft Azure Log Integration for Azure logs: This Download Center page gives details, system requirements,
and installation instructions for Azure Log Integration.
Introduction to Azure Log Integration: This article introduces you to Azure Log Integration, its key capabilities,
and how it works.
Partner configuration steps: This blog post shows you how to configure Azure Log Integration to work with
partner solutions Splunk, HP ArcSight, and IBM QRadar.
Azure Log Integration FAQ: This article answers questions about Azure Log Integration.
Integrating Security Center alerts with Azure Log Integration: This article shows you how to sync Security Center
alerts, along with virtual machine security events collected by Azure Diagnostics and Azure audit logs, with your
log analytics or SIEM solution.
New features for Azure Diagnostics and Azure audit logs: This blog post introduces you to Azure audit logs and
other features that help you gain insights into the operations of your Azure resources.
How to get your Security Center alerts in Azure log
integration
8/30/2017 • 2 min to read
This article provides the steps required to enable the Azure Log Integration service to pull Security Alert information
generated by Azure Security Center. You must have successfully completed the steps in the Get started article
before performing the steps in this article.
Detailed steps
The following steps create the required Azure Active Directory service principal and assign the service principal
read permissions to the subscription:
1. Open the command prompt and navigate to c:\Program Files\Microsoft Azure Log Integration
2. Run the command azlog createazureid
This command prompts you for your Azure login. The command then creates an Azure Active Directory
Service Principal in the Azure AD Tenants that host the Azure subscriptions in which the logged in user is an
Administrator, a Co-Administrator, or an Owner. The command will fail if the logged in user is only a Guest
user in the Azure AD Tenant. Authentication to Azure is done through Azure Active Directory (AD). Creating a
service principal for Azlog Integration creates the Azure AD identity that will be given access to read from
Azure subscriptions.
3. Next you will run a command that assigns reader access on the subscription to the service principal you just
created. If you don’t specify a SubscriptionID, then the command will attempt to assign the service principal
reader role to all subscriptions to which you have any access.
azlog authorize <SubscriptionID>
for example
azlog authorize 0ee55555-0bc4-4a32-a4e8-c29980000000
NOTE
You may see warnings if you run the authorize command immediately after the createazureid command. There is
some latency between when the Azure AD account is created and when the account is available for use. If you wait
about 60 seconds after running the createazureid command to run the authorize command, then you should not
see these warnings.
4. Check the following folders to confirm that the Audit log JSON files are there:
c:\Users\azlog\AzureResourceManagerJson
c:\Users\azlog\AzureResourceManagerJsonLD
5. Check the following folders to confirm that Security Center alerts exist in them:
c:\Users\azlog\AzureSecurityCenterJson
c:\Users\azlog\AzureSecurityCenterJsonLD
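Putting steps 1 through 3 together, a minimal sketch that builds in the propagation delay described in the note above (the subscription ID is a placeholder):

```powershell
# Run from an elevated prompt in the Azure Log Integration folder.
cd 'C:\Program Files\Microsoft Azure Log Integration'

# Create the Azure AD service principal, then wait for the new
# identity to propagate before assigning the reader role.
.\azlog createazureid
Start-Sleep -Seconds 60
.\azlog authorize '<SubscriptionID>'
```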
If you run into any issues during the installation and configuration, please open a support request. Select Log
Integration as the service for which you are requesting support.
Next steps
To learn more about Azure Log Integration, see the following documents:
Microsoft Azure Log Integration for Azure logs – Download Center for details, system requirements, and install
instructions on Azure log integration.
Introduction to Azure log integration – This document introduces you to Azure log integration, its key
capabilities, and how it works.
Partner configuration steps – This blog post shows you how to configure Azure log integration to work with
partner solutions Splunk, HP ArcSight, and IBM QRadar.
Azure log Integration frequently asked questions (FAQ) - This FAQ answers questions about Azure log
integration.
New features for Azure diagnostics and Azure Audit Logs – This blog post introduces you to Azure Audit Logs
and other features that help you gain insights into the operations of your Azure resources.
Azure Log Integration tutorial: Process Azure Key
Vault events by using Event Hubs
8/22/2017 • 6 min to read
You can use Azure Log Integration to retrieve logged events and make them available to your security information
and event management (SIEM) system. This tutorial shows an example of how Azure Log Integration can be used to
process logs that are acquired through Azure Event Hubs.
Use this tutorial to get acquainted with how Azure Log Integration and Event Hubs work together by following the
example steps and understanding how each step supports the solution. Then you can take what you’ve learned here
to create your own steps to support your company’s unique requirements.
WARNING
The steps and commands in this tutorial are not intended to be copied and pasted. They're examples only. Do not use the
PowerShell commands “as is” in your live environment. You must customize them based on your unique environment.
This tutorial walks you through the process of taking Azure Key Vault activity logged to an event hub and making it
available as JSON files to your SIEM system. You can then configure your SIEM system to process the JSON files.
NOTE
Most of the steps in this tutorial involve configuring key vaults, storage accounts, and event hubs. The specific Azure Log
Integration steps are at the end of this tutorial. Do not perform these steps in a production environment. They are intended
for a lab environment only. You must customize the steps before using them in production.
Information provided along the way helps you understand the reasons behind each step. Links to other articles give
you more detail on certain topics.
For more information about the services that this tutorial mentions, see:
Azure Key Vault
Azure Event Hubs
Azure Log Integration
Initial setup
Before you can complete the steps in this article, you need the following:
1. An Azure subscription and account on that subscription with administrator rights. If you don't have a
subscription, you can create a free account.
2. A system with access to the internet that meets the requirements for installing Azure Log Integration. The
system can be on a cloud service or hosted on-premises.
3. Azure Log Integration installed. To install it:
a. Use Remote Desktop to connect to the system mentioned in step 2.
b. Copy the Azure Log Integration installer to the system. You can download the installation files.
c. Start the installer and accept the Microsoft Software License Terms.
d. If you will provide telemetry information, leave the check box selected. If you'd rather not send usage
information to Microsoft, clear the check box.
For more information about Azure Log Integration and how to install it, see Azure Log Integration with Azure
Diagnostics logging and Windows Event Forwarding.
4. The latest PowerShell version.
If you have Windows Server 2016 installed, then you have at least PowerShell 5.0. If you're using any other
version of Windows Server, you might have an earlier version of PowerShell installed. You can check the
version by entering get-host in a PowerShell window. If you don't have PowerShell 5.0 installed, you can
download it.
After you have at least PowerShell 5.0, you can proceed to install the latest version:
a. In a PowerShell window, enter the Install-Module Azure command. Complete the installation steps.
b. Enter the Install-Module AzureRM command. Complete the installation steps.
For more information, see Install Azure PowerShell.
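As a quick sketch of the version check described above, the built-in $PSVersionTable variable reports the running PowerShell version:

```powershell
# Major should be 5 or higher before installing the Azure modules.
$PSVersionTable.PSVersion
if ($PSVersionTable.PSVersion.Major -lt 5) {
    Write-Warning 'PowerShell 5.0 or later is required.'
}
```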
3. Enter the Login-AzureRmAccount command. In the login window, enter the credential information for the
subscription that you will use for this tutorial.
NOTE
If this is the first time that you're logging in to Azure from this machine, you will see a message about allowing
Microsoft to collect PowerShell usage data. We recommend that you enable this data collection because it will be used
to improve Azure PowerShell.
4. After successful authentication, you're logged in and you see the information in the following screenshot.
Take note of the subscription ID and subscription name, because you'll need them to complete later steps.
5. Create variables to store values that will be used later. Enter each of the following PowerShell lines. You might
need to adjust the values to match your environment.
$subscriptionName = 'Visual Studio Ultimate with MSDN' (Your subscription name might be different. You
can see it as part of the output of the previous command.)
$location = 'West US' (This variable will be used to pass the location where resources should be created.
You can change this variable to be any location of your choosing.)
$random = Get-Random
$name = 'azlogtest' + $random (The name can be anything, but it should include only lowercase letters
and numbers.)
$storageName = $name (This variable will be used for the storage account name.)
$rgname = $name (This variable will be used for the resource group name.)
$eventHubNameSpaceName = $name (This is the name of the event hub namespace.)
6. Specify the subscription that you will be working with:
Select-AzureRmSubscription -SubscriptionName $subscriptionName
7. Create a resource group to hold the resources for this tutorial:
$rg = New-AzureRmResourceGroup -Name $rgname -Location $location
If you enter $rg at this point, you should see output similar to this screenshot:
8. Create a storage account that will be used to keep track of state information:
$storage = New-AzureRmStorageAccount -ResourceGroupName $rgname -Name $storagename -Location $location -SkuName Standard_LRS
9. Create the event hub namespace. This is required to create an event hub.
$eventHubNameSpace = New-AzureRmEventHubNamespace -ResourceGroupName $rgname -NamespaceName $eventHubNameSpaceName -Location $location
10. Get the rule ID that will be used with the insights provider:
$sbruleid = $eventHubNameSpace.Id +'/authorizationrules/RootManageSharedAccessKey'
11. Get all possible Azure locations and add the names to a variable that can be used in a later step:
a. $locationObjects = Get-AzureRMLocation
b. $locations = @('global') + $locationobjects.location
If you enter $locations at this point, you see the location names without the additional information returned
by Get-AzureRmLocation.
12. Create an Azure Resource Manager log profile:
Add-AzureRmLogProfile -Name $name -ServiceBusRuleId $sbruleid -Locations $locations
For more information about the Azure log profile, see Overview of the Azure Activity Log.
NOTE
You might get an error message when you try to create a log profile. You can then review the documentation for
Get-AzureRmLogProfile and Remove-AzureRmLogProfile. If you run Get-AzureRmLogProfile, you see information about the
log profile. You can delete the existing log profile by entering the Remove-AzureRmLogProfile -Name 'Log Profile Name'
command.
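Combining the cmdlets mentioned in the note, a sketch for recovering from an existing-profile error (the profile name is a placeholder; confirm it from the Get-AzureRmLogProfile output):

```powershell
# Inspect any existing log profile, remove it, then retry the creation.
Get-AzureRmLogProfile
Remove-AzureRmLogProfile -Name '<LogProfileName>'
Add-AzureRmLogProfile -Name $name -ServiceBusRuleId $sbruleid -Locations $locations
```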
3. Display the keys again and see that key2 holds a different value:
Get-AzureRmStorageAccountKey -Name $storagename -ResourceGroupName $rgname | ft -a
4. Set and read a secret to generate additional log entries:
a. Set-AzureKeyVaultSecret -VaultName $name -Name TestSecret -SecretValue (ConvertTo-SecureString -String 'Hi There!' -AsPlainText -Force)
b. (Get-AzureKeyVaultSecret -VaultName $name -Name TestSecret).SecretValueText
After a minute or so of running the last two commands, you should see JSON files being generated. You can
confirm that by monitoring the directory C:\users\AzLog\EventHubJson.
Next steps
Azure Log Integration FAQ
Get started with Azure Log Integration
Integrate logs from Azure resources into your SIEM systems
Azure Log Integration FAQ
8/22/2017 • 4 min to read
This article answers frequently asked questions (FAQ) about Azure Log Integration.
Azure Log Integration is a Windows operating system service that you can use to integrate raw logs from your
Azure resources into your on-premises security information and event management (SIEM) systems. This
integration provides a unified dashboard for all your assets, on-premises or in the cloud. You can then aggregate,
correlate, analyze, and alert for security events associated with your applications.
How can I see the storage accounts from which Azure Log Integration
is pulling Azure VM logs?
Run the command azlog source list.
How can I tell which subscription the Azure Log Integration logs are
from?
In the case of audit logs that are placed in the AzureResourceManagerJson directories, the subscription ID is in
the log file name. This is also true for logs in the AzureSecurityCenterJson folder. For example:
20170407T070805_2768037.0000000023.1111e5ee-1111-111b-a11e-1e111e1111dc.json
Azure Active Directory audit logs include the tenant ID as part of the name.
Diagnostic logs that are read from an event hub do not include the subscription ID as part of the name. Instead,
they include the friendly name specified as part of the creation of the event hub source.
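For example, the subscription ID can be pulled out of a log file name like the sample above by splitting on the dot separators (a sketch; the file name is the example from this FAQ):

```powershell
# The subscription ID is the third dot-delimited field of the file name.
$file = '20170407T070805_2768037.0000000023.1111e5ee-1111-111b-a11e-1e111e1111dc.json'
($file -split '\.')[2]   # -> 1111e5ee-1111-111b-a11e-1e111e1111dc
```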
The event XML includes metadata such as the subscription ID.
Error messages
When I run the command azlog createazureid, why do I get the following error?
Error:
Failed to create AAD Application - Tenant 72f988bf-86f1-41af-91ab-2d7cd011db37 - Reason = 'Forbidden' -
Message = 'Insufficient privileges to complete the operation.'
The azlog createazureid command attempts to create a service principal in all the Azure AD tenants for the
subscriptions that the Azure login has access to. If your Azure login is only a guest user in that Azure AD tenant,
the command fails with "Insufficient privileges to complete the operation." Ask the tenant admin to add your
account as a user in the tenant.
When I run the command azlog authorize , why do I get the following error?
Error:
Warning creating Role Assignment - AuthorizationFailed: The client [email protected]' with object id
'fe9e03e4-4dad-4328-910f-fd24a9660bd2' does not have authorization to perform action
'Microsoft.Authorization/roleAssignments/write' over scope '/subscriptions/70d95299-d689-4c97-b971-
0d8ff0000000'.
The azlog authorize command assigns the role of reader to the Azure AD service principal (created with azlog
createazureid) to the provided subscriptions. If the Azure login is not a co-administrator or an owner of the
subscription, it fails with an "Authorization Failed" error message. Azure Role-Based Access Control (RBAC) of co-
administrator or owner is needed to complete this action.
Where can I find the definition of the properties in the audit log?
See:
Audit operations with Azure Resource Manager
List the management events in a subscription in the Azure Monitor REST API
The following example modifies the Azure Diagnostics configuration. In this configuration, only event ID 4624 and
event ID 4625 are collected from the security event log. Microsoft Antimalware for Azure events are collected from
the system event log. For details on the use of XPath expressions, see Consuming Events.
<WindowsEventLog scheduledTransferPeriod="PT1M">
<DataSource name="Security!*[System[(EventID=4624 or EventID=4625)]]" />
<DataSource name="System!*[System[Provider[@Name='Microsoft Antimalware']]]"/>
</WindowsEventLog>
$diagnosticsconfig_path = "d:\WADConfig.xml"
Set-AzureRmVMDiagnosticsExtension -ResourceGroupName AzLog-Integration -VMName AzlogClient -DiagnosticsConfigurationPath $diagnosticsconfig_path -StorageAccountName log3121 -StorageAccountKey <storage key>
After you make changes, check the storage account to ensure that the correct events are collected.
If you have any issues during the installation and configuration, please open a support request. Select Log
Integration as the service for which you are requesting support.
Deploying an application on Azure is fast, easy, and cost-effective. Before you deploy your cloud application into
production, review our list of essential and recommended best practices for implementing secure clusters in your
application.
Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable
and reliable microservices. Service Fabric also addresses the significant challenges in developing and managing
cloud applications. Developers and administrators can avoid complex infrastructure problems and focus on
implementing mission-critical, demanding workloads that are scalable, reliable, and manageable.
For each best practice, we explain:
What the best practice is.
Why you should implement the best practice.
What might happen if you don't implement the best practice.
How you can learn to implement the best practice.
We recommend the following Azure Service Fabric security best practices:
Use Azure Resource Manager templates and the Service Fabric PowerShell module to create secure clusters.
Use X.509 certificates.
Configure security policies.
Implement the Reliable Actors security configuration.
Configure SSL for Azure Service Fabric.
Use network isolation and security with Azure Service Fabric.
Configure Azure Key Vault for security.
Assign users to roles.
NOTE
Security recommendation for Azure clusters: Use Azure AD security to authenticate clients and certificates for node-to-
node security.
To configure a standalone Windows cluster, see Configure settings for a standalone Windows cluster.
Use Azure Resource Manager templates and the Service Fabric PowerShell module to create a secure cluster. For
step-by-step instructions to create a secure Service Fabric cluster by using Azure Resource Manager templates, see
Creating a Service Fabric cluster.
Use the Azure Resource Manager template:
Customize your cluster by using the template to configure managed storage for VM virtual hard disks (VHDs).
Drive changes to your resource group by using the template for easy configuration management and auditing.
Treat your cluster configuration as code:
Be thorough when checking your deployment configurations.
Avoid using implicit commands to directly modify your resources.
Many aspects of the Service Fabric application lifecycle can be automated. The Service Fabric PowerShell module
automates common tasks for deploying, upgrading, removing, and testing Azure Service Fabric applications.
Managed APIs and HTTP APIs for application management are also available.
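As one illustration of treating your cluster configuration as code, the deployment itself can be scripted with the Azure Resource Manager cmdlets. The resource group, location, and file names below are placeholders, not values from this article; substitute your own secure cluster template and parameter file:

```powershell
# Create a resource group and deploy a secure Service Fabric cluster from a
# Resource Manager template. All names and paths are placeholders.
New-AzureRmResourceGroup -Name "sf-secure-rg" -Location "West US"
New-AzureRmResourceGroupDeployment -ResourceGroupName "sf-secure-rg" `
    -TemplateFile ".\cluster.json" `
    -TemplateParameterFile ".\cluster.parameters.json"
```

Because the template and parameter files can be checked in alongside application code, every change to the cluster configuration can be reviewed and audited like any other change.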
NOTE
You cannot obtain an SSL certificate from a CA for the cloudapp.net domain.
NOTE
For more information about using roles in Service Fabric, see Role-Based Access Control for Service Fabric clients.
Azure Service Fabric supports two access control types for clients that are connected to a Service Fabric cluster:
administrator and user. The cluster administrator can use access control to limit access to certain cluster operations
for different groups of users. Access control makes the cluster more secure.
Next steps
Set up your Service Fabric development environment.
Learn about Service Fabric support options.
Azure Service Fabric security checklist
8/9/2017 • 2 min to read
This article provides an easy-to-use checklist that will help you secure your Azure Service Fabric environment.
Introduction
Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable
and reliable microservices. Service Fabric also addresses the significant challenges in developing and managing
cloud applications. Developers and administrators can avoid complex infrastructure problems and focus on
implementing mission-critical, demanding workloads that are scalable, reliable, and manageable.
Checklist
Use the following checklist to help you make sure that you haven’t overlooked any important issues in
management and configuration of a secure Azure Service Fabric solution.
Role-based access control (RBAC): Access control allows the cluster administrator to limit access to certain
cluster operations for different groups of users, making the cluster more secure. Administrators have full access
to management capabilities (including read/write capabilities). Users, by default, have only read access to
management capabilities (for example, query capabilities), and the ability to resolve applications and services.
X.509 certificates and Service Fabric: Certificates used in clusters running production workloads should be
created by using a correctly configured Windows Server certificate service or obtained from an approved
certificate authority (CA). Never use any temporary or test certificates in production that are created with tools
such as MakeCert.exe. You can use a self-signed certificate, but should only do so for test clusters and not in
production.
ClientCertificateCommonNames: Set the common name of the first client certificate for the
CertificateCommonName. The CertificateIssuerThumbprint is the thumbprint for the issuer of this certificate.
Next steps
Service Fabric Cluster upgrade process and expectations from you
Managing your Service Fabric applications in Visual Studio.
Service Fabric Health model introduction.
Azure Identity Management and access control
security best practices
6/27/2017 • 10 min to read
Many consider identity to be the new boundary layer for security, taking over that role from the traditional
network-centric perspective. This evolution of the primary pivot for security attention and investment comes from
the fact that network perimeters have become increasingly porous, and that perimeter defenses cannot be as
effective as they were before the explosion of BYOD devices and cloud applications.
In this article we will discuss a collection of Azure identity management and access control security best practices.
These best practices are derived from our experience with Azure AD and the experiences of customers like yourself.
For each best practice, we’ll explain:
What the best practice is
Why you want to enable that best practice
What might be the result if you fail to enable the best practice
Possible alternatives to the best practice
How you can learn to enable the best practice
This Azure identity management and access control security best practices article is based on a consensus opinion
and Azure platform capabilities and feature sets, as they exist at the time this article was written. Opinions and
technologies change over time and this article will be updated on a regular basis to reflect those changes.
Azure identity management and access control security best practices discussed in this article include:
Centralize your identity management
Enable Single Sign-On (SSO)
Deploy password management
Enforce multi-factor authentication (MFA) for users
Use role based access control (RBAC)
Control locations where resources are created using resource manager
Guide developers to leverage identity capabilities for SaaS apps
Actively monitor for suspicious activities
NOTE
The decision to use SSO impacts how you integrate your on-premises directory with your cloud directory. If you want SSO,
you need to use federation, because directory synchronization alone provides only a same sign-on experience.
Organizations that do not enforce SSO for their users and applications are more exposed to scenarios where users
have multiple passwords, which directly increases the likelihood of users reusing passwords or choosing weak
passwords.
You can learn more about Azure AD SSO by reading the article AD FS management and customization with Azure
AD Connect.
NOTE
This is not the same as RBAC; rather, it leverages RBAC to verify that users have the privilege to create those
resources.
You can also leverage Azure Resource Manager custom policies for scenarios where the organization wants to
allow operations only when the appropriate cost center is associated; otherwise, the request is denied.
Organizations that are not controlling how resources are created are more susceptible to users that may abuse the
service by creating more resources than they need. Hardening the resource creation process is an important step to
secure a multi-tenant scenario.
You can learn more about creating policies with Azure Resource Manager by reading the article Use Policy to
manage resources and control access.
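A cost-center policy of the kind described above can be expressed as a Resource Manager policy definition. The following is a minimal sketch; the tag name costCenter is an assumption, so adapt it to your own tagging convention:

```json
{
  "if": {
    "not": {
      "field": "tags.costCenter",
      "exists": "true"
    }
  },
  "then": {
    "effect": "deny"
  }
}
```

Assigned at a subscription or resource group scope, this rule denies any resource creation request that does not carry a costCenter tag.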
Organizations are able to improve their threat detection and response times by using a provider’s cloud-based
security capabilities and cloud intelligence. By shifting responsibilities to the cloud provider, organizations can get
more security coverage, which enables them to reallocate security resources and budget to other business
priorities.
Division of responsibility
It’s important to understand the division of responsibility between you and Microsoft. On-premises, you own the
whole stack but as you move to the cloud some responsibilities transfer to Microsoft. The following responsibility
matrix shows the areas of the stack in a SaaS, PaaS, and IaaS deployment that you are responsible for and
Microsoft is responsible for.
For all cloud deployment types, you own your data and identities. You are responsible for protecting the security of
your data and identities, on-premises resources, and the cloud components you control (which varies by service
type).
Responsibilities that are always retained by you, regardless of the type of deployment, are:
Data
Endpoints
Account
Access management
Starting at the bottom of the stack with the physical infrastructure, Microsoft mitigates common risks and takes
on those responsibilities. Because the Microsoft cloud is continually monitored by Microsoft, it is hard to attack. It doesn't
make sense for an attacker to pursue the Microsoft cloud as a target. Unless the attacker has lots of money and
resources, the attacker is likely to move on to another target.
In the middle of the stack, there is no difference between a PaaS deployment and on-premises. At the application
layer and the account and access management layer, you have similar risks. In the next steps section of this article,
we will guide you to best practices for eliminating or minimizing these risks.
At the top of the stack, data governance and rights management, you take on one risk that can be mitigated by key
management. (Key management is covered in best practices.) While key management is an additional
responsibility, you have areas in a PaaS deployment that you no longer have to manage so you can shift resources
to key management.
The Azure platform also provides you strong DDoS protection by using various network-based technologies.
However, all types of network-based DDoS protection methods have their limits on a per-link and per-datacenter
basis. To help avoid the impact of large DDoS attacks, you can take advantage of Azure’s core cloud capability of
enabling you to quickly and automatically scale out to defend against DDoS attacks. We'll go into more detail on
how you can do this in the recommended practices articles.
Next steps
In this article, we focused on security advantages of an Azure PaaS deployment. Next, learn recommended
practices for securing your PaaS web and mobile solutions. We’ll start with Azure App Service, Azure SQL
Database, and Azure SQL Data Warehouse. As articles on recommended practices for other Azure services become
available, links will be provided in the following list:
Azure App Service
Azure SQL Database and Azure SQL Data Warehouse
Azure Storage
Azure Redis Cache
Azure Service Bus
Web Application Firewalls
Securing PaaS web and mobile applications using
Azure App Services
7/18/2017 • 2 min to read
In this article, we discuss a collection of Azure App Services security best practices for securing your PaaS web and
mobile applications. These best practices are derived from our experience with Azure and the experiences of
customers like yourself.
Best practices
When using App Services, follow these best practices:
Authenticate through Azure Active Directory (AD). App Services provides an OAuth 2.0 service for your identity
provider. OAuth 2.0 focuses on client developer simplicity while providing specific authorization flows for Web
applications, desktop applications, and mobile phones. Azure AD uses OAuth 2.0 to enable you to authorize
access to mobile and web applications.
Restrict access based on the need to know and least privilege security principles. Restricting access is imperative
for organizations that want to enforce security policies for data access. Role-Based Access Control (RBAC) can
be used to assign permissions to users, groups, and applications at a certain scope. To learn more about
granting users access to applications, see get started with access management.
Protect your keys. It doesn't matter how good your security is if you lose your subscription keys. Azure Key
Vault helps safeguard cryptographic keys and secrets used by cloud applications and services. By using Key
Vault, you can encrypt keys and secrets (such as authentication keys, storage account keys, data encryption keys,
.PFX files, and passwords) by using keys that are protected by hardware security modules (HSMs). For added
assurance, you can import or generate keys in HSMs. See Azure Key Vault to learn more. You can also use Key
Vault to manage your TLS certificates with auto-renewal.
Restrict incoming source IP addresses. App Services Environment has a virtual network integration feature that
helps you restrict incoming source IP addresses through network security groups (NSGs). If you are unfamiliar
with Azure Virtual Networks (VNETs), this is a capability that allows you to place many of your Azure resources
in a non-internet-routable network that you control access to. To learn more, see Integrate your app with an
Azure Virtual Network.
Next steps
This article introduced you to a collection of App Services security best practices for securing your PaaS web and
mobile applications. To learn more about securing your PaaS deployments, see:
Securing PaaS deployments
Securing PaaS web and mobile applications using Azure SQL Database and SQL Data Warehouse
Securing PaaS databases in Azure
7/18/2017 • 5 min to read
In this article, we discuss a collection of Azure SQL Database and SQL Data Warehouse security best practices for
securing your PaaS web and mobile applications. These best practices are derived from our experience with Azure
and the experiences of customers like yourself.
Best practices
Use a centralized identity repository for authentication and authorization
Azure SQL databases can be configured to use one of two types of authentication:
SQL Authentication uses a username and password. When you created the logical server for your
database, you specified a "server admin" login with a username and password. Using these credentials, you
can authenticate to any database on that server as the database owner.
Azure Active Directory Authentication uses identities managed by Azure Active Directory and is
supported for managed and integrated domains. To use Azure Active Directory Authentication, you must
create another server admin called the "Azure AD admin," which is allowed to administer Azure AD users
and groups. This admin can also perform all operations that a regular server admin can.
Azure Active Directory authentication is a mechanism of connecting to Azure SQL Database and SQL Data
Warehouse by using identities in Azure Active Directory (AD). Azure AD provides an alternative to SQL Server
authentication so you can stop the proliferation of user identities across database servers. Azure AD authentication
enables you to centrally manage the identities of database users and other Microsoft services in one central
location. Central ID management provides a single place to manage database users and simplifies permission
management.
Benefits of using Azure AD authentication instead of SQL authentication include:
Allows password rotation in a single place.
Manages database permissions using external Azure AD groups.
Eliminates storing passwords by enabling integrated Windows authentication and other forms of authentication
supported by Azure AD.
Uses contained database users to authenticate identities at the database level.
Supports token-based authentication for applications connecting to SQL Database.
Supports ADFS (domain federation) or native user/password authentication for a local Azure AD without
domain synchronization.
Supports connections from SQL Server Management Studio that use Active Directory Universal Authentication,
which includes Multi-Factor Authentication (MFA). MFA includes strong authentication with a range of easy
verification options — phone call, text message, smart cards with pin, or mobile app notification. For more
information, see SSMS support for Azure AD MFA with SQL Database and SQL Data Warehouse.
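Once the Azure AD admin is configured for the server, individual Azure AD identities can be given access as contained database users. A minimal sketch in T-SQL; the user name and role are placeholders:

```sql
-- Run in the target database while connected as the Azure AD admin.
-- Creates a contained database user mapped to an Azure AD identity.
CREATE USER [appuser@contoso.com] FROM EXTERNAL PROVIDER;

-- Grant only the access the identity needs.
ALTER ROLE db_datareader ADD MEMBER [appuser@contoso.com];
```

Because the user is contained in the database, no corresponding SQL login is created on the server, and the identity (including group membership and MFA policy) remains managed centrally in Azure AD.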
To learn more about Azure AD authentication, see:
Connecting to SQL Database or SQL Data Warehouse By Using Azure Active Directory Authentication
Authentication to Azure SQL Data Warehouse
Token-based authentication support for Azure SQL DB using Azure AD authentication
NOTE
To ensure that Azure Active Directory is a good fit for your environment, see Azure AD features and limitations, specifically
the additional considerations.
Security is a top concern when managing databases, and it has always been a priority for Azure SQL Database. Your
databases can be tightly secured to help satisfy most regulatory or security requirements, including HIPAA, ISO
27001/27002, and PCI DSS Level 1, among others. A current list of security compliance certifications is available at
the Microsoft Trust Center site. You also can choose to place your databases in specific Azure datacenters based on
regulatory requirements.
In this article, we will discuss a collection of Azure database security best practices. These best practices are derived
from our experience with Azure database security and the experiences of customers like yourself.
For each best practice, we explain:
What the best practice is
Why you want to enable that best practice
What might be the result if you fail to enable the best practice
How you can learn to enable the best practice
This Azure Database Security Best Practices article is based on a consensus opinion and Azure platform capabilities
and feature sets as they exist at the time this article was written. Opinions and technologies change over time and
this article will be updated on a regular basis to reflect those changes.
Azure database security best practices discussed in this article include:
Use firewall rules to restrict database access
Enable database authentication
Protect your data using encryption
Protect data in transit
Enable database auditing
Enable database threat detection
NOTE
For more information about firewall rules in SQL Database, see SQL Database firewall rules.
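Firewall rules can be managed in T-SQL as well as from the portal. A minimal sketch; the rule names and IP ranges below are placeholders:

```sql
-- Server-level rule: run in the master database.
EXECUTE sp_set_firewall_rule
    @name = N'AllowOfficeRange',
    @start_ip_address = '203.0.113.1',
    @end_ip_address = '203.0.113.20';

-- Database-level rule: run in the target database.
EXECUTE sp_set_database_firewall_rule
    @name = N'AllowAppServer',
    @start_ip_address = '203.0.113.50',
    @end_ip_address = '203.0.113.50';
```

Database-level rules travel with the database, which makes them a better fit than server-level rules when databases move between servers.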
NOTE
However, SQL Server Authentication cannot use the Kerberos security protocol.
Conclusion
Azure Database is a robust database platform, with a full range of security features that meet many organizational
and regulatory compliance requirements. You can help protect data by controlling the physical access to your data,
and using a variety of options for data security at the file-, column-, or row level with Transparent Data Encryption,
Cell-Level Encryption, or Row-Level Security. Always Encrypted also enables operations against encrypted data,
simplifying the process of application updates. In turn, access to auditing logs of SQL Database activity provides you
with the information you need, allowing you to know how and when data is accessed.
Next steps
To learn more about firewall rules, see Firewall rules.
To learn about users and logins, see Manage logins.
For a tutorial, see Secure your Azure SQL Database.
Azure database security checklist
8/1/2017 • 2 min to read
To help improve security, Azure Database includes a number of built-in security controls that you can use to limit
and control access.
These include:
A firewall that enables you to create firewall rules limiting connectivity by IP address:
Server-level firewall rules accessible from the Azure portal
Database-level firewall rules accessible from SSMS
Secure connectivity to your database using secure connection strings
Access management
Data encryption
SQL Database auditing
SQL Database threat detection
Introduction
Cloud computing requires new security paradigms that are unfamiliar to many application users, database
administrators, and programmers. As a result, some organizations are hesitant to implement a cloud infrastructure
for data management due to perceived security risks. However, much of this concern can be alleviated through a
better understanding of the security features built into Microsoft Azure and Microsoft Azure SQL Database.
Checklist
We recommend that you read the Azure Database Security Best Practices article prior to reviewing this checklist.
You will be able to get the most out of this checklist after you understand the best practices. You can then use this
checklist to make sure that you’ve addressed the important issues in Azure database security.
The checklist covers three categories: Protect Data, Control Access, and Proactive Monitoring.
Conclusion
Azure Database is a robust database platform, with a full range of security features that meet many organizational
and regulatory compliance requirements. You can easily protect data by controlling the physical access to your data,
and using a variety of options for data security at the file-, column-, or row-level with Transparent Data Encryption,
Cell-Level Encryption, or Row-Level Security. Always Encrypted also enables operations against encrypted data,
simplifying the process of application updates. In turn, access to auditing logs of SQL Database activity provides you
with the information you need, allowing you to know how and when data is accessed.
Next steps
You can improve the protection of your database against malicious users or unauthorized access with just a few
simple steps. In this tutorial you learn to:
Set up firewall rules for your server and/or database.
Protect your data with encryption.
Enable SQL Database auditing.
Azure operational security checklist
8/1/2017 • 4 min to read
Deploying an application on Azure is fast, easy, and cost-effective. Before deploying a cloud application in
production, it is useful to have a checklist to assist you in evaluating your application against a list of essential
and recommended operational security actions.
Introduction
Azure provides a suite of infrastructure services that you can use to deploy your applications. Azure Operational
Security refers to the services, controls, and features available to users for protecting their data, applications, and
other assets in Microsoft Azure.
To get the maximum benefit out of the cloud platform, we recommend that you leverage Azure services and
follow the checklist.
Organizations that invest time and resources assessing the operational readiness of their applications before
launch have a much higher rate of satisfaction than those who don’t. When performing this work, checklists can
be an invaluable mechanism to ensure that applications are evaluated consistently and holistically.
The level of operational assessment varies depending on the organization's cloud maturity level and the
application's development phase, availability needs, and data sensitivity requirements.
Checklist
This checklist is intended to help enterprises think through various operational security considerations as they
deploy sophisticated enterprise applications on Azure. It can also be used to help you build a secure cloud migration
and operation strategy for your organization.
Conclusion
Many organizations have successfully deployed and operated their cloud applications on Azure. The checklist
provided highlights several considerations that are essential and helps you increase the likelihood of successful
deployments and frustration-free operations. We highly recommend these operational and strategic considerations
for your existing and new application deployments on Azure.
Next steps
In this document, you were introduced to the OMS Security and Audit solution. To learn more about OMS Security,
see the following articles:
Operations Management Suite (OMS) overview.
Design and operational security.
Azure Security Center planning and operations.
Securing PaaS web and mobile applications using
Azure Storage
8/23/2017 • 7 min to read
In this article, we discuss a collection of Azure Storage security best practices for securing your PaaS web and
mobile applications. These best practices are derived from our experience with Azure and the experiences of
customers like yourself.
The Azure Storage security guide is a great source for detailed information about Azure Storage and security. This
article addresses at a high level some of the concepts found in the security guide and links to the security guide, as
well as other sources, for more information.
Azure Storage
Azure makes it possible to deploy and use storage in ways not easily achievable on-premises. With Azure storage,
you can reach high levels of scalability and availability with relatively little effort. Not only is Azure storage the
foundation for Windows and Linux Azure Virtual Machines, it can also support large distributed applications.
Azure storage provides the following four services: Blob storage, Table storage, Queue storage, and File storage. To
learn more, see Introduction to Microsoft Azure Storage.
Best practices
This article addresses the following best practices:
Access protection:
Shared Access Signatures (SAS)
Managed disk
Role-Based Access Control (RBAC)
Storage encryption:
Client side encryption for high value data
Azure Disk Encryption for virtual machines (VMs)
Storage Service Encryption
Access protection
Use Shared Access Signature instead of a storage account key
In an IaaS solution, usually running Windows Server or Linux virtual machines, files are protected from disclosure
and tampering threats using access control mechanisms. On Windows you’d use access control lists (ACL) and on
Linux you’d probably use chmod. Essentially, this is exactly what you would do if you were protecting files on a
server in your own data center today.
PaaS is different. One of the most common ways to store files in Microsoft Azure is to use Azure Blob storage. A
difference between Blob storage and other file storage is the file I/O, and the protection methods that come with file
I/O.
Access control is critical. To help you control access to Azure storage, the system generates two 512-bit storage
account keys (SAKs) when you create a storage account. This key redundancy makes it possible for you to avoid
service interruption during routine key rotation.
Storage access keys are high priority secrets and should only be accessible to those responsible for storage access
control. If the wrong people get access to these keys, they will have complete control of storage and could replace,
delete or add files to storage. This includes malware and other types of content that can potentially compromise
your organization or your customers.
You still need a way to provide access to objects in storage. To provide more granular access you can take
advantage of Shared Access Signature (SAS). The SAS makes it possible for you to share specific objects in storage
for a pre-defined time-interval and with specific permissions. A Shared Access Signature allows you to define:
The interval over which the SAS is valid, including the start time and the expiry time.
The permissions granted by the SAS. For example, a SAS on a blob might grant a user read and write
permissions to that blob, but not delete permissions.
An optional IP address or range of IP addresses from which Azure Storage accepts the SAS. For example, you
might specify a range of IP addresses belonging to your organization. This provides another measure of security
for your SAS.
The protocol over which Azure Storage accepts the SAS. You can use this optional parameter to restrict access to
clients using HTTPS.
SAS allows you to share content the way you want to share it without giving away your Storage Account Keys.
Always using SAS in your application is a secure way to share your storage resources without compromising your
storage account keys.
To learn more, see Using Shared Access Signatures (SAS). To learn more about potential risks and
recommendations to mitigate those risks, see Best practices when using SAS.
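At its core, a SAS token is a set of query parameters plus an HMAC-SHA256 signature computed with the storage account key, so the key itself never leaves the token issuer; clients receive only the signed token. The sketch below is a simplified illustration of that construction. The real Blob service string-to-sign contains additional fields, so treat this as a model of the mechanism rather than a drop-in implementation:

```python
import base64
import hashlib
import hmac

def make_sas_token(account_key_b64: str, permissions: str,
                   expiry: str, resource: str) -> str:
    """Sign a simplified string-to-sign with the account key (illustrative).

    The real Azure Storage string-to-sign has more fields; this sketch only
    shows the HMAC-SHA256 construction that underlies a SAS.
    """
    string_to_sign = "\n".join([permissions, expiry, resource])
    key = base64.b64decode(account_key_b64)
    signature = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode("ascii")
    return "sp={}&se={}&sig={}".format(permissions, expiry, signature)

# Example: a read-only token valid until the given expiry time.
key = base64.b64encode(b"0" * 64).decode("ascii")
token = make_sas_token(key, "r", "2017-12-31T00:00:00Z",
                       "/blob/myaccount/container/report.pdf")
```

Note that changing any signed field (permissions, expiry, or resource) invalidates the signature, which is how the service enforces the constraints you encoded in the token.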
Use managed disks for VMs
When you choose Azure Managed Disks, Azure manages the storage accounts that you use for your VM disks. All
you need to do is choose the type of disk (Premium or Standard) and the disk size; Azure storage will do the rest.
You don’t have to worry about scalability limits that might otherwise have required you to use multiple storage
accounts.
To learn more, see Frequently Asked Questions about managed and unmanaged premium disks.
Use Role-Based Access Control
Earlier we discussed using Shared Access Signature (SAS) to grant limited access to objects in your storage account
to other clients without exposing your storage account key. Sometimes the risks associated with a particular
operation against your storage account outweigh the benefits of SAS. Sometimes it's simpler to manage access in
other ways.
Another way to manage access is to use Azure Role-Based Access Control (RBAC). With RBAC, you focus on giving
employees the exact permissions they need, based on the need to know and least privilege security principles. Too
many permissions can expose an account to attackers. Too few permissions means that employees can't get their
work done efficiently. RBAC helps address this problem by offering fine-grained access management for Azure. This
is imperative for organizations that want to enforce security policies for data access.
You can leverage built-in RBAC roles in Azure to assign privileges to users. Consider using Storage Account
Contributor for cloud operators that need to manage storage accounts and Classic Storage Account Contributor
role to manage classic storage accounts. For cloud operators that need to manage VMs but not the virtual network
or storage account to which they are connected, consider adding them to the Virtual Machine Contributor role.
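A built-in role can be granted at the narrowest scope that covers the work. The sign-in name, subscription ID, and resource names below are placeholders:

```powershell
# Grant the built-in Storage Account Contributor role on a single storage
# account, rather than on the whole subscription. All names are placeholders.
New-AzureRmRoleAssignment -SignInName "operator@contoso.com" `
    -RoleDefinitionName "Storage Account Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Storage/storageAccounts/<account-name>"
```

Scoping the assignment to the storage account keeps the operator out of unrelated resources in the same subscription.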
Organizations that do not enforce data access control by leveraging capabilities such as RBAC may be giving more
privileges than necessary for their users. This can lead to data compromise by allowing some users access to data
they shouldn’t have in the first place.
To learn more about RBAC see:
Azure Role-Based Access Control
Built-in roles for Azure role-based access control
Azure Storage Security Guide for detail on how to secure your storage account with RBAC
Storage encryption
Use client-side encryption for high value data
Client-side encryption enables you to programmatically encrypt data in transit before uploading to Azure Storage
and programmatically decrypt data when retrieving it from storage. This provides encryption of data in transit but it
also provides encryption of data at rest. Client-side encryption is the most secure method of encrypting your data
but it does require you to make programmatic changes to your application and put key management processes in
place.
Client-side encryption also enables you to have sole control over your encryption keys. You can generate and
manage your own encryption keys. Client-side encryption uses an envelope technique where the Azure storage
client library generates a content encryption key (CEK) that is then wrapped (encrypted) using the key encryption
key (KEK). The KEK is identified by a key identifier and can be an asymmetric key pair or a symmetric key and can be
managed locally or stored in Azure Key Vault.
Client-side encryption is built into the Java and the .NET storage client libraries. See Client-Side Encryption and
Azure Key Vault for Microsoft Azure Storage for information on encrypting data within client applications and
generating and managing your own encryption keys.
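The envelope technique described above can be sketched in a few lines: a fresh content encryption key (CEK) encrypts the data, the key encryption key (KEK) wraps the CEK, and the wrapped CEK plus the KEK's key identifier travel with the ciphertext. This is a flow sketch only; the real storage client libraries use AES and RSA, while here a SHA-256-based XOR keystream stands in for the cipher, and the vault URL is a made-up example. Do not use this toy cipher for real data.

```python
# Sketch of envelope encryption: CEK encrypts data, KEK wraps CEK.
# The "_keystream_xor" toy cipher stands in for AES purely to show the flow.
import hashlib
import secrets

def _keystream_xor(key, data):
    """Toy reversible cipher: XOR with a SHA-256 counter keystream. NOT real crypto."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def encrypt_envelope(plaintext, kek, kek_id):
    cek = secrets.token_bytes(32)                 # per-blob content encryption key
    ciphertext = _keystream_xor(cek, plaintext)   # encrypt the data with the CEK
    wrapped_cek = _keystream_xor(kek, cek)        # wrap (encrypt) the CEK with the KEK
    # The wrapped CEK and the KEK's identifier are stored alongside the blob.
    return {"ciphertext": ciphertext, "wrapped_cek": wrapped_cek, "kek_id": kek_id}

def decrypt_envelope(blob, kek_store):
    kek = kek_store[blob["kek_id"]]               # resolve the KEK by its identifier
    cek = _keystream_xor(kek, blob["wrapped_cek"])
    return _keystream_xor(cek, blob["ciphertext"])

# Hypothetical key identifier; in practice this could point at Azure Key Vault.
kek_id = "https://myvault.vault.azure.net/keys/mykek/1"
kek_store = {kek_id: secrets.token_bytes(32)}
blob = encrypt_envelope(b"secret report", kek_store[kek_id], kek_id)
print(decrypt_envelope(blob, kek_store))  # b'secret report'
```

Because only the key identifier is stored with the data, the KEK itself can live in Key Vault or a local key store and be rotated independently of the blobs it protects.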
Azure Disk Encryption for VMs
Azure Disk Encryption is a capability that helps you encrypt your Windows and Linux IaaS virtual machine disks.
Azure Disk Encryption leverages the industry standard BitLocker feature of Windows and the DM-Crypt feature of
Linux to provide volume encryption for the OS and the data disks. The solution is integrated with Azure Key Vault to
help you control and manage the disk-encryption keys and secrets in your key vault subscription. The solution also
ensures that all data on the virtual machine disks is encrypted at rest in your Azure storage.
See Azure Disk Encryption for Windows and Linux IaaS VMs.
Storage Service Encryption
When Storage Service Encryption for File storage is enabled, the data is encrypted automatically using AES-256
encryption. Microsoft handles all the encryption, decryption, and key management. This feature is available for LRS
and GRS redundancy types.
Next steps
This article introduced you to a collection of Azure Storage security best practices for securing your PaaS web and
mobile applications. To learn more about securing your PaaS deployments, see:
Securing PaaS deployments
Securing PaaS web and mobile applications using Azure App Services
Securing PaaS databases in Azure
Secure your IoT deployment
8/24/2017 • 7 min to read • Edit Online
This article provides the next level of detail for securing the Azure IoT-based Internet of Things (IoT) infrastructure.
It links to implementation level details for configuring and deploying each component. It also provides
comparisons and choices between various competing methods.
Securing the Azure IoT deployment can be divided into the following three security areas:
Device Security: Securing the IoT device while it is deployed in the wild.
Connection Security: Ensuring all data transmitted between the IoT device and IoT Hub is confidential and
tamper-proof.
Cloud Security: Providing a means to secure data while it moves through, and is stored in the cloud.
Conclusion
This article provides an overview of implementation-level details for designing and deploying an IoT infrastructure
using Azure IoT. Configuring each component to be secure is key in securing the overall IoT infrastructure. The
design choices available in Azure IoT provide some level of flexibility and choice; however, each choice may have
security implications. It is recommended that each of these choices be evaluated through a risk/cost assessment.
See also
You can also explore some of the other features and capabilities of the IoT Suite preconfigured solutions:
Predictive maintenance preconfigured solution overview
Frequently asked questions for IoT Suite
You can read about IoT Hub security in Control access to IoT Hub in the IoT Hub developer guide.
Internet of Things security best practices
7/3/2017 • 6 min to read • Edit Online
To secure an Internet of Things (IoT) infrastructure requires a rigorous security-in-depth strategy. This strategy
requires you to secure data in the cloud, protect data integrity while in transit over the public internet, and securely
provision devices. Each layer builds greater security assurance in the overall infrastructure.
See also
To learn more about securing your IoT solution, see:
IoT security architecture
Secure your IoT deployment
You can also explore some of the other features and capabilities of the IoT Suite preconfigured solutions:
Predictive maintenance preconfigured solution overview
Frequently asked questions for Azure IoT Suite
You can read about IoT Hub security in Control access to IoT Hub in the IoT Hub developer guide.
Microsoft Trust Center
6/27/2017 • 1 min to read • Edit Online
The Azure Security Information site on Azure.com gives you the information you need to plan, design, deploy,
configure, and manage your cloud solutions securely. With the Microsoft Trust center, you also have the
information you need to be confident that the Azure platform on which you run your services is secure.
We know that when you entrust your applications and data to Azure, you’re going to have questions. Where is it?
Who can access it? What is Microsoft doing to protect it? How can you verify that Microsoft is doing what it says?
And we have answers. Because it’s your data, you decide who has access, and you work with us to decide where it is
located. To safeguard your data, we use state-of-the-art security technology and world-class cryptography. Our
compliance is independently audited, and we’re transparent on many levels—from how we handle legal demands
for your customer data to the security of our code.
Here's what you find at the Microsoft Trust Center:
Security – Learn how all the Microsoft Cloud services are secured.
Privacy – Understand how Microsoft ensures the privacy of your data in the Microsoft cloud.
Compliance – Discover how Microsoft helps organizations comply with national, regional, and industry-specific
requirements governing the collection and use of individuals’ data.
Transparency – View how Microsoft believes that you control your data in the cloud and how Microsoft helps
you know as much as possible about how that data is handled.
Products and Services – See all the Microsoft Cloud products and services in one place.
Service Trust Portal – Obtain copies of independent audit reports of Microsoft cloud services, risk assessments,
security best practices, and related materials.
What’s New – Find out what’s new in Microsoft Cloud Trust
Resources – Investigate white papers, videos, and case studies on Microsoft Trusted Cloud
The Microsoft Trust Center has what you need to understand what we do to secure the Microsoft Cloud.
Microsoft Security Response Center
6/27/2017 • 1 min to read • Edit Online
The Microsoft Security Response Center (MSRC) is led by some of the world’s most experienced security experts.
These experts identify, monitor, respond to, and resolve security incidents and vulnerabilities, both on-premises and
in the cloud, around the clock, every day of the year.
In addition to the continuous work the MSRC does in the background, the MSRC team has a number of resources
available to you so that you can understand how to secure your Azure assets and deployments more effectively.
White Papers
The MSRC has published a number of white papers that will help you understand what they do and how they do it.
Some provide insights into how we secure the Microsoft cloud and include useful information on how you can
employ the same security configurations.
Pen testing
One of the great things about using Microsoft Azure for application testing and deployment is that you don’t need
to put together an on-premises infrastructure to develop, test and deploy your applications. All the infrastructure is
taken care of by the Microsoft Azure platform services. You don’t have to worry about requisitioning, acquiring, and
“racking and stacking” your own on-premises hardware.
This is great – but you still need to make sure you perform your normal security due diligence. One of the things
you need to do is penetration test the applications you deploy in Azure.
You might already know that Microsoft performs penetration testing of our Azure environment. This helps us
improve our platform and guides our actions in terms of improving security controls, introducing new security
controls, and improving our security processes.
We don’t pen test your application for you, but we do understand that you will want and need to perform pen
testing on your own applications. That’s a good thing, because when you enhance the security of your applications,
you help make the entire Azure ecosystem more secure.
When you pen test your applications, it might look like an attack to us. We continuously monitor for attack patterns
and will initiate an incident response process if we need to. It doesn’t help you and it doesn’t help us if we trigger
an incident response due to your own due diligence pen testing.
What to do?
When you’re ready to pen test your Azure-hosted applications, you have an option to let us know. Once we know
that you’re going to be performing specific tests, we won’t inadvertently shut you down (such as blocking the IP
address that you’re testing from), as long as your tests conform to the Azure pen testing terms and conditions
described in Microsoft Cloud Unified Penetration Testing Rules of Engagement. Standard tests you can perform
include:
Tests on your endpoints to uncover the Open Web Application Security Project (OWASP) top 10 vulnerabilities
Fuzz testing of your endpoints
Port scanning of your endpoints
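Port scanning of your own endpoints, the last item above, reduces to checking whether a TCP connection succeeds. The sketch below shows that check in stdlib Python; the host and ports are illustrative, and such probes should only ever be run against endpoints you own and after following the Rules of Engagement described above.

```python
# Minimal TCP port check of the kind a pen test might run against your OWN
# endpoints. Hostname/ports are illustrative.
import socket

def port_is_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a listener we control on localhost:
listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # port 0: the OS picks a free port
listener.listen(1)
open_port = listener.getsockname()[1]
print(port_is_open("127.0.0.1", open_port))  # True
listener.close()
```

A real scan loops this check over a port range with a short timeout; the single-port function is the whole mechanism.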
One type of test that you can’t perform is any kind of Denial of Service (DoS) attack. This includes initiating a DoS
attack itself, or performing related tests that might determine, demonstrate or simulate any type of DoS attack.
Are you ready to get started with pen testing your applications hosted in Microsoft Azure? If so, then head over
to the Penetration Test Overview page and click the Create a Testing Request button at the bottom of the page.
You’ll also find more information on the pen testing terms and conditions and helpful links on how you can report
security flaws related to Azure or any other Microsoft service.
Introduction to Azure Security Center
9/13/2017 • 3 min to read • Edit Online
Learn about Azure Security Center, its key capabilities, and how to get started.
Get started
To get started with Security Center, you need a subscription to Microsoft Azure. Security Center is enabled with
your Azure subscription. If you do not have a subscription, you can sign up for a free trial.
You access Security Center from the Azure portal. See the portal documentation to learn more.
Getting started with Azure Security Center quickly guides you through the security-monitoring and policy-
management components of Security Center.
Next steps
In this document, you were introduced to Security Center, its key capabilities, and how to get started. To learn
more, see the following resources:
Planning and operations guide - Learn how to optimize your use of Security Center based on your
organization's security requirements and cloud management model.
Setting security policies — Learn how to configure security policies for your Azure subscriptions and resource
groups.
Managing security recommendations — Learn how recommendations help you protect your Azure resources.
Security health monitoring — Learn how to monitor the health of your Azure resources.
Managing and responding to security alerts — Learn how to manage and respond to security alerts.
Monitoring and processing security events - Learn how to monitor and process security events collected over
time.
Monitoring partner solutions — Learn how to monitor the health status of your partner solutions.
Azure Security Center FAQ — Find frequently asked questions about using the service.
Azure Security blog — Get the latest Azure security news and information.
What is Azure Key Vault?
7/21/2017 • 3 min to read • Edit Online
Azure Key Vault is available in most regions. For more information, see the Key Vault pricing page.
Introduction
Azure Key Vault helps safeguard cryptographic keys and secrets used by cloud applications and services. By
using Key Vault, you can encrypt keys and secrets (such as authentication keys, storage account keys, data
encryption keys, .PFX files, and passwords) by using keys that are protected by hardware security modules
(HSMs). For added assurance, you can import or generate keys in HSMs. If you choose to do this, Microsoft
processes your keys in FIPS 140-2 Level 2 validated HSMs (hardware and firmware).
Key Vault streamlines the key management process and enables you to maintain control of keys that access and
encrypt your data. Developers can create keys for development and testing in minutes, and then seamlessly
migrate them to production keys. Security administrators can grant (and revoke) permission to keys, as needed.
Use the following table to better understand how Key Vault can help to meet the needs of developers and
security administrators.
ROLE: Developer for an Azure application
PROBLEM STATEMENT: “I want to write an application for Azure that uses keys for signing and encryption, but I
want these keys to be external from my application so that the solution is suitable for an application that is
geographically distributed. I also want these keys and secrets to be protected, without having to write the code
myself. I also want these keys and secrets to be easy for me to use from my applications, with optimal
performance.”
SOLVED BY AZURE KEY VAULT:
√ Keys are stored in a vault and invoked by URI when needed.
√ Keys are safeguarded by Azure, using industry-standard algorithms, key lengths, and hardware security
modules (HSMs).
√ Keys are processed in HSMs that reside in the same Azure datacenters as the applications. This provides better
reliability and reduced latency than if the keys reside in a separate location, such as on-premises.

ROLE: Developer for Software as a Service (SaaS)
PROBLEM STATEMENT: “I don’t want the responsibility or potential liability for my customers’ tenant keys and
secrets. I want the customers to own and manage their keys so that I can concentrate on doing what I do best,
which is providing the core software features.”
SOLVED BY AZURE KEY VAULT:
√ Customers can import their own keys into Azure, and manage them. When a SaaS application needs to
perform cryptographic operations by using their customers’ keys, Key Vault does these operations on behalf of
the application. The application does not see the customers’ keys.

ROLE: Chief security officer (CSO)
PROBLEM STATEMENT: “I want to know that our applications comply with FIPS 140-2 Level 2 HSMs for secure
key management. I want to make sure that my organization is in control of the key life cycle and can monitor key
usage. And although we use multiple Azure services and resources, I want to manage the keys from a single
location in Azure.”
SOLVED BY AZURE KEY VAULT:
√ HSMs are FIPS 140-2 Level 2 validated.
√ Key Vault is designed so that Microsoft does not see or extract your keys.
√ Near real-time logging of key usage.
√ The vault provides a single interface, regardless of how many vaults you have in Azure, which regions they
support, and which applications use them.
Anybody with an Azure subscription can create and use key vaults. Although Key Vault benefits developers and
security administrators, it could be implemented and managed by an organization’s administrator who manages
other Azure services for an organization. For example, this administrator would sign in with an Azure
subscription, create a vault for the organization in which to store keys, and then be responsible for operational
tasks, such as:
Create or import a key or secret
Revoke or delete a key or secret
Authorize users or applications to access the key vault, so they can then manage or use its keys and secrets
Configure key usage (for example, sign or encrypt)
Monitor key usage
This administrator would then provide developers with URIs to call from their applications, and provide their
security administrator with key usage logging information.
Developers can also manage the keys directly, by using APIs. For more information, see the Key Vault developer's
guide.
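The URIs the administrator hands to developers follow Key Vault's object-identifier format, `https://{vault}.vault.azure.net/{collection}/{name}/{version}`. As a sketch of working with these identifiers in code, the function below parses one into its parts; the vault and key names are made up for illustration.

```python
# Sketch: parse a Key Vault object identifier into vault, collection, name,
# and (optional) version. Vault/key names below are hypothetical.
from urllib.parse import urlparse

def parse_key_vault_id(uri):
    parsed = urlparse(uri)
    vault = parsed.netloc.split(".")[0]            # vault name precedes .vault.azure.net
    parts = parsed.path.strip("/").split("/")      # e.g. "keys/signing-key/1"
    if len(parts) not in (2, 3) or parts[0] not in ("keys", "secrets", "certificates"):
        raise ValueError("not a Key Vault object identifier: %s" % uri)
    return {
        "vault": vault,
        "collection": parts[0],
        "name": parts[1],
        "version": parts[2] if len(parts) == 3 else None,  # version may be omitted
    }

print(parse_key_vault_id("https://contoso-vault.vault.azure.net/keys/signing-key/1"))
```

Omitting the version in a request addresses the current version of the key or secret, which is why the parser treats it as optional.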
Next Steps
For a getting started tutorial for an administrator, see Get Started with Azure Key Vault.
For more information about usage logging for Key Vault, see Azure Key Vault Logging.
For more information about using keys and secrets with Azure Key Vault, see About Keys, Secrets, and
Certificates.
What is Log Analytics?
7/10/2017 • 4 min to read • Edit Online
Log Analytics is a service in Operations Management Suite (OMS) that monitors your cloud and on-premises
environments to maintain their availability and performance. It collects data generated by resources in your cloud
and on-premises environments and from other monitoring tools to provide analysis across multiple sources. This
article provides a brief discussion of the value that Log Analytics provides, an overview of how it operates, and links
to more detailed content so you can dig further.
To get a quick graphical view of the health of your overall environment, you can add visualizations for saved log
searches to your dashboard.
In order to analyze data outside of Log Analytics, you can export the data from the OMS repository into tools such
as Power BI or Excel. You can also leverage the Log Search API to build custom solutions that leverage Log
Analytics data or to integrate with other systems.
Solutions are available for a variety of functions, and additional solutions are consistently being added. You can
easily browse available solutions and add them to your OMS workspace from the Solutions Gallery or Azure
Marketplace. Many will be automatically deployed and start working immediately while others may require
moderate configuration.
What is Azure Multi-Factor Authentication?
Two-step verification is a method of authentication that requires more than one verification method and adds a
critical second layer of security to user sign-ins and transactions. It works by requiring any two or more of the
following verification methods:
Something you know (typically a password)
Something you have (a trusted device that is not easily duplicated, like a phone)
Something you are (biometrics)
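The "any two or more" rule above means verification must draw on at least two distinct factor categories, not merely two factors. A minimal sketch of that check (the factor and category names are illustrative):

```python
# Sketch of the two-step rule: the presented factors must span at least two
# of the three categories (know / have / are). Factor names are examples.
FACTOR_CATEGORY = {
    "password": "know",
    "pin": "know",
    "phone_app": "have",
    "text_message": "have",
    "fingerprint": "are",
}

def satisfies_two_step(presented_factors):
    """True only if the factors cover two or more distinct categories."""
    categories = {FACTOR_CATEGORY[f] for f in presented_factors if f in FACTOR_CATEGORY}
    return len(categories) >= 2

print(satisfies_two_step(["password", "phone_app"]))  # True: know + have
print(satisfies_two_step(["password", "pin"]))        # False: both are "know"
```

Note the second case: two passwords are still a single category, so they do not constitute two-step verification.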
Azure Multi-Factor Authentication (MFA) is Microsoft's two-step verification solution. Azure MFA helps
safeguard access to data and applications while meeting user demand for a simple sign-in process. It delivers
strong authentication via a range of verification methods, including phone call, text message, or mobile app
verification.
Easy to Use - Azure Multi-Factor Authentication is simple to set up and use. The extra protection that comes
with Azure Multi-Factor Authentication allows users to manage their own devices. Best of all, in many
instances it can be set up with just a few simple clicks.
Scalable - Azure Multi-Factor Authentication uses the power of the cloud and integrates with your on-
premises AD and custom apps. This protection is even extended to your high-volume, mission-critical
scenarios.
Always Protected - Azure Multi-Factor Authentication provides strong authentication using the highest
industry standards.
Reliable - We guarantee 99.9% availability of Azure Multi-Factor Authentication. The service is considered
unavailable when it is unable to receive or process verification requests for two-step verification.
Next steps
Learn about how Azure Multi-Factor Authentication works
Read about the different versions and consumption methods for Azure Multi-Factor Authentication
What is Azure Active Directory?
7/26/2017 • 5 min to read • Edit Online
Azure Active Directory (Azure AD) is Microsoft’s multi-tenant, cloud based directory and identity management
service. Azure AD combines core directory services, advanced identity governance, and application access
management. Azure AD also offers a rich, standards-based platform that enables developers to deliver access
control to their applications, based on centralized policy and rules.
For IT Admins, Azure AD provides an affordable, easy to use solution to give employees and business partners
single sign-on (SSO) access to thousands of cloud SaaS Applications like Office365, Salesforce.com, DropBox, and
Concur.
For application developers, Azure AD lets you focus on building your application by making it fast and simple to
integrate with a world class identity management solution used by millions of organizations around the world.
Azure AD also includes a full suite of identity management capabilities, including multi-factor authentication,
device registration, self-service password management, self-service group management, privileged account
management, role-based access control, application usage monitoring, and rich auditing and security monitoring
and alerting. These capabilities can help secure cloud-based applications, streamline IT processes, cut costs, and
help ensure that corporate compliance goals are met.
Additionally, with just four clicks, Azure AD can be integrated with an existing Windows Server Active Directory,
giving organizations the ability to leverage their existing on-premises identity investments to manage access to
cloud based SaaS applications.
If you are an Office 365, Azure or Dynamics CRM Online customer, you might not realize that you are already
using Azure AD. Every Office 365, Azure and Dynamics CRM tenant is actually already an Azure AD tenant.
Whenever you want you can start using that tenant to manage access to thousands of other cloud applications
Azure AD integrates with!
NOTE
For the pricing options of these editions, see Azure Active Directory Pricing. Azure Active Directory Premium P1, Premium
P2, and Azure Active Directory Basic are not currently supported in China. Please contact us at the Azure Active Directory
Forum for more information.
Azure Active Directory Basic - Designed for task workers with cloud-first needs, this edition provides cloud
centric application access and self-service identity management solutions. With the Basic edition of Azure
Active Directory, you get productivity enhancing and cost reducing features like group-based access
management, self-service password reset for cloud applications, and Azure Active Directory Application Proxy
(to publish on-premises web applications using Azure Active Directory), all backed by an enterprise-level SLA of
99.9 percent uptime.
Azure Active Directory Premium P1 - Designed to empower organizations with more demanding identity
and access management needs, Azure Active Directory Premium edition adds feature-rich enterprise-level
identity management capabilities and enables hybrid users to seamlessly access on-premises and cloud
capabilities. This edition includes everything you need for information worker and identity administrators in
hybrid environments across application access, self-service identity and access management (IAM), identity
protection and security in the cloud. It supports advanced administration and delegation resources like dynamic
groups and self-service group management. It includes Microsoft Identity Manager (an on-premises identity
and access management suite) and provides cloud write-back capabilities enabling solutions like self-service
password reset for your on-premises users.
Azure Active Directory Premium P2 - Designed with advanced protection for all your users and
administrators, this new offering includes all the capabilities in Azure AD Premium P1 as well as our new
Identity Protection and Privileged Identity Management. Azure Active Directory Identity Protection leverages
billions of signals to provide risk-based conditional access to your applications and critical company data. We
also help you manage and protect privileged accounts with Azure Active Directory Privileged Identity
Management so you can discover, restrict and monitor administrators and their access to resources and
provide just-in-time access when needed.
NOTE
A number of Azure Active Directory capabilities are available through "pay as you go" editions:
Active Directory B2C is the identity and access management solution for your consumer-facing applications. For more
details, see Azure Active Directory B2C
Azure Multi-Factor Authentication can be used through per user or per authentication providers. For more details, see
What is Azure Multi-Factor Authentication?
How can I get started?
If you are an IT admin:
Try it out! - you can sign up for a free 30 day trial today and deploy your first cloud solution in under 5
minutes using this link
Read Getting started with Azure AD for tips and tricks on getting an Azure AD tenant up and running fast
If you are a developer:
Check out our Developers Guide to Azure Active Directory
Start a trial – sign up for a free 30 day trial today and start integrating your apps with Azure AD
Next steps
Learn more about the fundamentals of Azure identity and access management
Getting started with Operations Management Suite
Security and Audit Solution
7/28/2017 • 11 min to read • Edit Online
This document helps you get started quickly with Operations Management Suite (OMS) Security and Audit solution
capabilities by guiding you through each option.
What is OMS?
Microsoft Operations Management Suite (OMS) is Microsoft's cloud-based IT management solution that helps you
manage and protect your on-premises and cloud infrastructure. For more information about OMS, read the article
Operations Management Suite.
If you are accessing this dashboard for the first time and you don’t have devices monitored by OMS, the tiles are
not populated with data from the agent. After you install the agent, it can take some time for data to upload to the
cloud, so what you see initially may be incomplete. In this case, it is normal to see some tiles without tangible
information. Read Connect Windows computers directly to OMS for more information on how to install the OMS
agent on a Windows system, and Connect Linux computers to OMS for more information on how to perform this
task on a Linux system.
NOTE
The agent collects information based on the currently enabled events, for instance computer name, IP address, and
user name. However, no documents or files, database names, or private data are collected.
Solutions are a collection of logic, visualization, and data acquisition rules that address key customer challenges.
Security and Audit is one solution; others can be added separately. Read the article Add solutions for more
information on how to add a new solution.
The OMS Security and Audit dashboard is organized into the following major categories:
Security Domains: explore security records over time; access malware assessment, update assessment, network
security, identity and access information, and computers with security events; and quickly access the Azure
Security Center dashboard.
Notable Issues: quickly identify the number of active issues and the severity of these issues.
Detections (Preview): identify attack patterns by visualizing security alerts as they take place against your
resources.
Threat Intelligence: identify attack patterns by visualizing the total number of servers with outbound malicious
IP traffic, the malicious threat type, and a map that shows where these IPs are coming from.
Common security queries: a list of the most common security queries that you can use to monitor your
environment. When you click one of these queries, the Search blade opens with the results for that query.
NOTE
For more information on how OMS keeps your data secure, read How OMS secures your data.
Security domains
When monitoring resources, it is important to be able to quickly access the current state of your environment.
However, it is also important to be able to look back at events that occurred in the past, which can lead to a better
understanding of what’s happening in your environment at a certain point in time.
NOTE
Data retention is according to the OMS pricing plan. For more information, visit the Microsoft Operations
Management Suite pricing page.
Incident response and forensics investigation scenarios will directly benefit from the results available in the
Security Records over Time tile.
When you click this tile, the Search blade opens, showing a query result for security events
(Type=SecurityEvent) with data from the last seven days.
NOTE
If your workspace has been upgraded to the new Log Analytics query language, then the following queries need to be
converted. You can use the language converter to perform this translation.
The search result is divided into two panes: the left pane gives you a breakdown of the number of security events
that were found, the computers on which these events were found, the number of accounts that were discovered
on these computers, and the types of activities. The right pane shows the total results and a chronological view of
the security events with the computer’s name and event activity. You can also click Show More to view more
details about an event, such as the event data, the event ID, and the event source.
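The left-pane breakdown is essentially an aggregation over time-filtered records. As a sketch, the function below keeps only the last seven days of security events and summarizes them per computer and per account; the field names mimic the SecurityEvent schema, but the records themselves are made up.

```python
# Sketch of the left-pane breakdown: filter events to a seven-day window,
# then count per computer and collect the accounts seen. Records are invented.
from datetime import datetime, timedelta

events = [
    {"TimeGenerated": datetime(2017, 9, 10, 8, 0),  "Computer": "web01", "Account": "admin"},
    {"TimeGenerated": datetime(2017, 9, 12, 9, 30), "Computer": "web01", "Account": "svc"},
    {"TimeGenerated": datetime(2017, 8, 1, 0, 0),   "Computer": "db01",  "Account": "admin"},
]

def breakdown(events, now):
    cutoff = now - timedelta(days=7)
    recent = [e for e in events if e["TimeGenerated"] >= cutoff]
    per_computer = {}
    for e in recent:
        per_computer[e["Computer"]] = per_computer.get(e["Computer"], 0) + 1
    accounts = {e["Account"] for e in recent}
    return {"total": len(recent), "per_computer": per_computer, "accounts": sorted(accounts)}

print(breakdown(events, now=datetime(2017, 9, 13)))
```

The August event falls outside the window, so only the two September events on web01 are counted, mirroring how the search blade scopes its counts to the query's time range.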
NOTE
For more information about OMS search queries, read the OMS search reference.
Antimalware assessment
This option enables you to quickly identify computers with insufficient protection and computers that are
compromised by a piece of malware. Malware assessment status and detected threats on the monitored servers
are read, and then the data is sent to the OMS service in the cloud for processing. Servers with detected threats and
servers with insufficient protection are shown in the malware assessment dashboard, which is accessible after you
click the Antimalware Assessment tile.
Just like any other live tile available in the OMS dashboard, when you click it, the Search blade opens with the
query result. For this option, if you click the Not Reporting option under Protection Status, the query result
shows a single entry that contains the computer’s name and its rank.
NOTE
Rank is a grade given to reflect the status of the protection (on, off, updated, and so on) and the threats that are
found. Having this as a number helps to make aggregations.
If you click the computer’s name, you see a chronological view of the protection status for that computer. This is
very useful for scenarios in which you need to understand whether antimalware was once installed and at some
point was removed.
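The value of a numeric rank becomes clear when you aggregate across a fleet: with status encoded as a number, the worst-protected machine and the fleet average fall out of min/avg. The status-to-rank mapping below is an invented example, not OMS's actual scale.

```python
# Sketch of aggregating numeric protection ranks across a fleet.
# The mapping and the reports are illustrative, not the real OMS values.
RANK = {"protected_and_updated": 3, "protected": 2, "signatures_out_of_date": 1, "not_reporting": 0}

reports = [
    ("web01", "protected_and_updated"),
    ("web02", "protected"),
    ("db01", "not_reporting"),
]

def fleet_summary(reports):
    ranks = [RANK[status] for _, status in reports]
    # Pair each rank with its computer so min() finds the worst-protected host.
    worst_rank, worst_computer = min(zip(ranks, (name for name, _ in reports)))
    return {"average_rank": sum(ranks) / len(ranks), "worst_computer": worst_computer}

print(fleet_summary(reports))  # db01 has the lowest rank
```

With statuses as free-form strings, neither the average nor the "worst host" question has a direct answer; the numeric encoding is what makes such aggregations one-liners.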
Update assessment
This option enables you to quickly determine your overall exposure to potential security problems, and whether
or how critical these updates are for your environment. The OMS Security and Audit solution only provides the
visualization of these updates; the real data comes from the Update Management solution, which is a different
module within OMS.
NOTE
For more information about Update Management solution, read Update Management solution in OMS.
Identity and access
NOTE
Currently, the data is based only on security event logon data (event ID 4624). In the future, Office 365 logins and
Azure AD data will also be included.
By monitoring your identity activities, you can take proactive actions before an incident takes place, or reactive
actions to stop an attack attempt. The Identity and Access dashboard provides an overview of your identity state,
including the number of failed attempts to log on, the user accounts that were used during those attempts,
accounts that were locked out, accounts with changed or reset passwords, and the number of accounts that are
currently logged in.
When you click the Identity and Access tile, you see the Identity and Access dashboard.
The information available in this dashboard can immediately assist you in identifying potentially suspicious
activity. For example, suppose there are 338 attempts to log on as Administrator and 100% of these attempts
failed. This could be caused by a brute-force attack against this account. If you click this account, you obtain more
information that can help you determine the target resource of this potential attack.
The detailed report provides important information about this event, including the target computer, the type of
logon (in this case, network logon), the activity (in this case, event 4625), and a comprehensive timeline of each
attempt.
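The brute-force pattern described above (hundreds of attempts, all failures) can be sketched as a simple scan over logon events: flag any account whose attempts exceed a volume threshold with an overwhelmingly high failure ratio. The thresholds and records below are illustrative; only the event IDs (4624 success, 4625 failure) come from the text.

```python
# Sketch of flagging likely brute-force targets from logon events.
# 4625 = failed logon, 4624 = successful logon; thresholds are illustrative.
def suspicious_accounts(logon_events, min_attempts=50, failure_ratio=0.9):
    stats = {}
    for e in logon_events:
        acct = stats.setdefault(e["Account"], {"attempts": 0, "failures": 0})
        acct["attempts"] += 1
        if e["EventID"] == 4625:
            acct["failures"] += 1
    return [a for a, s in stats.items()
            if s["attempts"] >= min_attempts
            and s["failures"] / s["attempts"] >= failure_ratio]

# Mirror the example above: 338 failed Administrator logons, plus normal traffic.
events = ([{"Account": "Administrator", "EventID": 4625}] * 338 +
          [{"Account": "alice", "EventID": 4624}] * 10)
print(suspicious_accounts(events))  # ['Administrator']
```

The volume threshold keeps ordinary mistyped passwords from being flagged; only a sustained, almost-all-failures pattern crosses both bars.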
Computers
This tile can be used to access all computers that have active security events. When you click this tile, you see the list of computers with security events and the number of events on each computer:
You can continue your investigation by clicking on each computer and review the security events that were flagged.
Threat Intelligence
By using the Threat Intelligence option available in OMS Security and Audit, IT administrators can identify security threats against the environment, for example, whether a particular computer is part of a botnet. Computers become nodes in a botnet when attackers illicitly install malware that secretly connects the computer to a command and control server. Threat Intelligence can also identify potential threats coming from underground communication channels, such as darknet. Learn more about Threat Intelligence by reading the Monitoring and responding to security alerts in Operations Management Suite Security and Audit Solution article.
In some scenarios, you may notice a potentially malicious IP address that was accessed from a monitored computer:
This alert, and others within the same category, are generated by OMS Security by leveraging Microsoft Threat Intelligence. The Threat Intelligence data is collected by Microsoft as well as purchased from leading threat intelligence providers. This data is updated frequently and adapted to fast-moving threats. Due to its nature, it should be combined with other sources of security information while investigating a security alert.
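Conceptually, correlating connection logs with a threat intelligence feed is a set lookup. The following hypothetical sketch illustrates the idea; the feed and log record shapes are invented for illustration, and OMS performs this matching server-side against Microsoft's feed:

```python
def match_threat_intel(connections, malicious_ips):
    """Return connections whose remote IP appears in a threat intelligence feed."""
    feed = set(malicious_ips)
    return [c for c in connections if c["remote_ip"] in feed]

connections = [
    {"computer": "web01", "remote_ip": "203.0.113.7"},
    {"computer": "web01", "remote_ip": "198.51.100.2"},
]
hits = match_threat_intel(connections, {"203.0.113.7"})
print(hits)  # the web01 connection to the flagged address
```

As the text notes, a hit like this is a starting point for investigation, not proof of compromise on its own.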
Baseline Assessment
Microsoft, together with industry and government organizations worldwide, defines a Windows configuration that
represents highly secure server deployments. This configuration is a set of registry keys, audit policy settings, and
security policy settings, along with Microsoft’s recommended values for these settings. This set of rules is known as a security baseline. Read Baseline Assessment in Operations Management Suite Security and Audit Solution for more information about this option.
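A baseline assessment is, at its core, a comparison of each setting's actual value against its recommended value. The sketch below shows that comparison in miniature; the setting names and values are made up, while the real baseline covers registry keys, audit policy settings, and security policy settings:

```python
def assess_baseline(actual, baseline):
    """Return settings that deviate from the recommended baseline values."""
    return {name: {"expected": expected, "actual": actual.get(name)}
            for name, expected in baseline.items()
            if actual.get(name) != expected}

baseline = {"PasswordComplexity": 1, "AuditLogonEvents": "SuccessAndFailure"}
actual = {"PasswordComplexity": 0, "AuditLogonEvents": "SuccessAndFailure"}
print(assess_baseline(actual, baseline))
# {'PasswordComplexity': {'expected': 1, 'actual': 0}}
```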
Azure Security Center
This tile is basically a shortcut to the Azure Security Center dashboard. Read Getting started with Azure Security Center for more information about this solution.
Notable issues
The main intent of this group of options is to provide a quick view of the issues in your environment, categorized as Critical, Warning, and Informational. The Active issue type tile is a visualization of these issues, but it doesn’t allow you to explore details about them. For that, you need to use the lower part of the tile, which shows the name of the issue (NAME), how many objects are affected (COUNT), and how critical it is (SEVERITY).
You can see that these issues were already covered in different areas of the Security Domains group, which
reinforces the intent of this view: visualize the most important issues in your environment from a single place.
Detections (Preview)
The main intent of this option is to allow IT to quickly identify potential threats to their environment and the severity of those threats.
This option can also be used during an incident response investigation to perform the assessment and obtain more
information about the attack.
NOTE
For more information on how to use OMS for Incident Response, watch this video: How to Leverage the Azure Security
Center & Microsoft Operations Management Suite for an Incident Response.
Threat Intelligence
The new threat intelligence section of the Security and Audit solution visualizes possible attack patterns in several ways: the total number of servers with outbound malicious IP traffic, the malicious threat type, and a map that shows where these IPs are coming from. You can interact with the map and click the IPs for more information.
Yellow pushpins on the map indicate incoming traffic from malicious IPs. It is not uncommon for servers that are
exposed to the internet to see incoming malicious traffic, but we recommend reviewing these attempts to make
sure none of them was successful. These indicators are based on IIS logs, WireData and Windows Firewall logs.
See also
In this document, you were introduced to the OMS Security and Audit solution. To learn more about OMS Security, see
the following articles:
Operations Management Suite (OMS) overview
Monitoring and Responding to Security Alerts in Operations Management Suite Security and Audit Solution
Monitoring Resources in Operations Management Suite Security and Audit Solution
Azure Security MVP Program
6/27/2017 • 1 min to read
Microsoft Most Valuable Professionals (MVPs) are community leaders who’ve demonstrated an exemplary
commitment to helping others get the most out of their experience with Microsoft technologies. They share their
exceptional passion, real-world knowledge, and technical expertise with the community and with Microsoft.
We are happy to announce that Microsoft Azure now recognizes community experts with special expertise in Azure
security. Microsoft MVPs can be awarded the MVP in Microsoft Azure in the Azure Security contribution area.
While there is no benchmark for becoming an MVP, in part because it varies by technology and its life-cycle, some
of the criteria we evaluate include the impact of a nominee’s contributions to online forums such as Microsoft
Answers, TechNet and MSDN; wikis and online content; conferences and user groups; podcasts, Web sites, blogs
and social media; and articles and books.
Are you an expert in Azure security? Do you know someone who is? Then nominate yourself or someone else to become an Azure security MVP today!
Microsoft Services in Cybersecurity
6/27/2017 • 1 min to read
Microsoft Services provides a comprehensive approach to security, identity, and cybersecurity. Microsoft Services offers an array of security and identity services across strategy, planning, implementation, and ongoing support, which can help our enterprise customers implement holistic security solutions that align with their strategic goals. With direct access to product development teams, we can create solutions that integrate with and enhance the latest security and identity capabilities of our products to help protect our customers’ businesses and drive innovation.
Entrusted with helping protect and enable the world’s largest organizations, our diverse group of technical
professionals consists of highly trained experts who offer a wealth of security and identity experience.
Learn more about services provided by Microsoft Services:
Security Risk Assessment
Dynamic Identity Framework Assessment
Offline Assessment for Active Directory Services
Enhanced Security Administration Environment
Azure AD Implementation Services
Securing Against Lateral Account Movement
Microsoft Threat Detection Services
Incident Response and Recovery
Learn more about Microsoft Services Security consulting services.
Manage personal data in Microsoft Azure
8/30/2017 • 4 min to read
This article provides guidance on how to correct, update, delete, and export personal data in Azure Active Directory
and Azure SQL Database.
Scenario
A Dublin-based company provides one-stop shopping for high end destination weddings in Ireland and around the
world for both a local and international customer base. They have offices, customers, employees, and vendors
located around the world to fully service the venues they offer.
Among many other items, the company keeps track of RSVPs that include food allergies and dietary preferences.
Wedding guests can register for various activities such as horseback riding, surfing, boat rides, etc., and even
interact with one another on a central web page during the months leading up to the event. The company collects
personal information from employees, vendors, customers, and wedding guests. Because of the international
nature of the business the company must comply with multiple levels of regulation.
Problem statement
Data admins must be able to correct inaccurate personal information and update incomplete or changing
personal information.
Data admins must be able to delete personal information upon the request of a data subject.
Data admins need to export data and provide it to a data subject in a common, structured format upon his or
her request.
Company goals
Inaccurate or incomplete customer, wedding guest, employee, and vendor personal information must be
corrected or updated in Azure Active Directory and Azure SQL Database.
Personal information must be deleted in Azure Active Directory and Azure SQL Database upon the request
of a data subject.
Personal data must be exported in a common, structured format upon the request of a data subject.
Solutions
Azure Active Directory: rectify/correct inaccurate or incomplete personal data and erase/delete personal
data/user profiles
Azure Active Directory is Microsoft’s cloud-based, multi-tenant directory and identity management service. You can
correct, update, or delete customer and employee user profiles and user work information that contain personal
data, such as a user’s name, work title, address, or phone number, in your Azure Active Directory (AAD)
environment by using the Azure portal.
You must sign in with an account that’s a global admin for the directory.
How do I correct or update user profile and work information in Azure Active Directory?
1. Sign in to the Azure portal with an account that's a global admin for the directory.
2. Select More services, enter Users and groups in the text box, and then select Enter.
3. On the Users and groups blade, select Users.
4. On the Users and groups - Users blade, select a user from the list, and then, on the blade for the selected
user, select Profile to view the user profile information that needs to be corrected or updated.
5. Correct or update the information, and then, in the command bar, select Save.
6. On the blade for the selected user, select Work Info to view user work information that needs to be
corrected or updated.
7. Correct or update the user work information, and then, in the command bar, select Save.
How do I delete a user profile in Azure Active Directory?
1. Sign in to the Azure portal with an account that's a global admin for the directory.
2. Select More services, enter Users and groups in the text box, and then select Enter.
3. On the Users and groups blade, select Users.
4. On the Users and groups - Users blade, select the user that you want to delete, and then, in the command bar, select Delete.
SQL Database: rectify/correct inaccurate or incomplete personal data; erase/delete personal data; export
personal data
Azure SQL Database is a cloud database that helps developers build and maintain applications.
Personal data can be updated or deleted in Azure SQL Database using standard SQL queries. Additionally, personal data can be exported from SQL Database using a variety of methods, including the SQL Server Import and Export Wizard, and in a variety of formats, including a BACPAC file.
How do I correct, update, or erase personal data in SQL Database?
To learn how to correct or update personal data in SQL Database, visit the Update (Transact-SQL), Update Text,
Update with Common Table Expression, or Update Write Text documentation.
To learn how to delete personal data in SQL Database, visit the Delete (Transact-SQL) documentation.
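The UPDATE and DELETE statements those documents describe apply directly to personal data. The following self-contained sketch uses SQLite as a stand-in for Azure SQL Database so it can run anywhere; against Azure you would run the equivalent T-SQL through your usual client, and the table and columns here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for an Azure SQL Database connection
conn.execute("CREATE TABLE Guests (Id INTEGER PRIMARY KEY, Name TEXT, Phone TEXT)")
conn.execute("INSERT INTO Guests VALUES (1, 'Aoife Byrne', '+353 1 555 0100')")
conn.execute("INSERT INTO Guests VALUES (2, 'Liam Walsh', '+353 1 555 0101')")

# Rectify: correct an inaccurate phone number for one data subject.
conn.execute("UPDATE Guests SET Phone = ? WHERE Id = ?", ("+353 1 555 0199", 1))

# Erase: delete a data subject's record on request.
conn.execute("DELETE FROM Guests WHERE Id = ?", (2,))

print(conn.execute("SELECT Id, Name, Phone FROM Guests").fetchall())
# [(1, 'Aoife Byrne', '+353 1 555 0199')]
```

Parameterized queries (the `?` placeholders) are used here deliberately, since personal data workflows often take the subject's identifier from user input.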
How do I export personal data to a BACPAC file in SQL Database?
A BACPAC file contains the SQL Database data and metadata, packaged as a zip file with a .bacpac extension. You can export a database to a BACPAC file using the Azure portal, the SqlPackage command-line utility, SQL Server Management Studio (SSMS), or PowerShell.
To learn how to export data to a BACPAC file, visit the Export an Azure SQL database to a BACPAC file page, which
includes detailed instructions for each method listed above.
How do I export personal data from SQL Database with the SQL Server Import and Export Wizard?
This wizard helps you copy data from a source to a destination. For an introduction to the wizard, including how to
get it, permissions information, and how to get help with the tool, visit the Import and Export Data with the SQL
Server Import and Export Wizard web page.
For an overview of steps for the wizard, visit the Steps in the SQL Server Import and Export Wizard web page.
Next Steps:
Azure SQL Database
Azure Active Directory
Discover, identify, and classify personal data in
Microsoft Azure
8/25/2017 • 9 min to read
This article provides guidance on how to discover, identify, and classify personal data in several Azure tools and
services, including using Azure Data Catalog, Azure Active Directory, SQL Database, Power Query for Hadoop
clusters in Azure HDInsight, Azure Information Protection, Azure Search, and SQL queries for Azure Cosmos DB.
NOTE
Azure CLI is commonly used by Linux admins and developers. Some users find it easier and more intuitive than PowerShell,
which is your third option.
Finally, you can create a SQL database using PowerShell, which is a command line/script tool used to create and
manage Azure and other resources. In this tutorial, you launch the tool, define script variables, create a resource
group and logical server, and configure a server firewall rule. Then you’ll create a database with sample data.
The tutorial requires the Azure PowerShell module version 4.0 or later. Run Get-Module -ListAvailable AzureRM to
find your version. If you need to install or upgrade, see Install Azure PowerShell module.
To learn how to create your database this way, visit the Create a single Azure SQL database using PowerShell tutorial.
NOTE
Windows admins tend to use PowerShell, but some of them prefer Azure CLI.
How do I search for personal data in SQL Database in the Azure portal?
You can use the built-in query editor tool inside the Azure portal to search for personal data. You’ll log in to the tool
using your SQL server admin login and password, and then enter a query.
Step 5 of the tutorial shows an example query in the query editor pane, but it doesn’t focus on personal or sensitive information (it also combines data from two tables and creates aliases for the source columns in the data set being returned). The following screenshot shows the query from Step 5 as well as the results pane that’s returned:
If your table was called MyTable, a sample query for personal information might include name, Social Security number, and ID number, and would look like this:
SELECT Name, SSN, [ID Number] FROM MyTable
You’d run the query and then see the results in the Results pane.
For more information on how to query a SQL database in the Azure portal, visit the Query the SQL database section
of the tutorial.
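To see what such a query returns, here is a runnable sketch with SQLite standing in for the portal's query editor. The table and data are invented for illustration, and the bracketed identifier is the T-SQL-style way to quote a column name containing a space:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (Name TEXT, SSN TEXT, [ID Number] TEXT)")
conn.execute("INSERT INTO MyTable VALUES ('Jane Doe', '000-00-0000', 'A12345')")

# The same shape of query as the sample above, selecting personal data columns.
rows = conn.execute("SELECT Name, SSN, [ID Number] FROM MyTable").fetchall()
print(rows)  # [('Jane Doe', '000-00-0000', 'A12345')]
```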
How do I search for data across multiple databases?
SQL elastic query (preview) enables you to perform cross-database queries spanning multiple databases and return a single result. The tutorial overview includes a detailed description of scenarios and explains the difference between vertical and horizontal database partitioning. Horizontal partitioning is called “sharding.”
To get started, visit the Azure SQL Database elastic query overview (preview) page.
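Conceptually, a horizontally partitioned (sharded) query fans the same statement out to every shard and unions the results. The toy illustration below uses two SQLite databases in the role of shards; elastic query does this transparently on the server side:

```python
import sqlite3

def query_all_shards(shards, sql):
    """Run the same query against every shard and combine the rows."""
    results = []
    for shard in shards:
        results.extend(shard.execute(sql).fetchall())
    return results

shard1 = sqlite3.connect(":memory:")
shard2 = sqlite3.connect(":memory:")
for shard, rows in ((shard1, [("Anna",)]), (shard2, [("Bram",)])):
    shard.execute("CREATE TABLE Customers (Name TEXT)")
    shard.executemany("INSERT INTO Customers VALUES (?)", rows)

print(query_all_shards([shard1, shard2], "SELECT Name FROM Customers"))
# [('Anna',), ('Bram',)]
```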
Power Query (for importing data from Azure HDInsight Hadoop clusters): data discovery for large data sets
Hadoop is an open-source Apache storage and processing service for large data sets, which are analyzed and stored in Hadoop clusters. Azure HDInsight allows users to work with Hadoop clusters in Azure. Power Query is an Excel add-in that, among other things, helps users discover data from different sources.
Personal data associated with Hadoop clusters in Azure HDInsight can be imported to Excel with Power Query.
Once the data is in Excel you can use a query to identify it.
How do I use Excel Power Query to import Hadoop clusters in Azure HDInsight into Excel?
An HDInsight tutorial will walk you through this entire process. It explains prerequisites, and includes a link to a Get
started with Azure HDInsight tutorial. Instructions cover Excel 2016 as well as 2013 and 2010 (steps are slightly
different for the older versions of Excel). If you don’t have the Excel Power Query add-in, the tutorial shows you
how to get it. You’ll start the tutorial in Excel and will need to have an Azure Blob storage account associated with
your cluster.
To learn how to do this, visit the Connect Excel to Hadoop by using Power Query tutorial.
Source: Connect Excel to Hadoop by using Power Query
Next steps
Azure SQL Database
What is SQL Database?
SQL Database Query Editor available in Azure portal
What is Azure Information Protection?
What is Azure Rights Management?
Azure Information Protection: Ready, set, protect!
Protect personal data in Microsoft Azure
8/30/2017 • 1 min to read
This article introduces a series of articles that help you use Azure security technologies and services to protect
personal data. This is a key requirement for many corporate and industry compliance and governance initiatives.
The scenario, problem statement and company goals are included here.
Company goals
Data sources that contain personal data are encrypted when residing in cloud storage.
Personal data that is transferred from one location to another is encrypted while in-transit. This is true if the
data is traveling across the virtual network or across the Internet between the corporate datacenter and the
Azure cloud.
Confidentiality and integrity of personal data is protected from unauthorized access by strong identity
management and access control technologies.
Personal data is protected from exposure via data breach by monitoring for vulnerabilities and threats.
The security state of Azure services that store or transmit personal data is assessed to identify opportunities
to better protect personal data.
This article will help you understand how to use Azure Security Center to protect personal data from breaches and
attacks.
Scenario
A large cruise company, headquartered in the United States, is expanding its operations to offer itineraries in the Mediterranean and Baltic seas, as well as the British Isles. To help in those efforts, it has acquired several smaller
cruise lines based in Italy, Germany, Denmark, and the U.K.
The company uses Microsoft Azure to store corporate data in the cloud. This includes personally identifiable
information such as names, addresses, phone numbers, and credit card information. It also includes Human
Resources information such as:
Addresses
Phone numbers
Tax identification numbers
Medical information
The cruise line also maintains a large database of reward and loyalty program members. Corporate employees
access the network from the company’s remote offices and travel agents located around the world have access to
some company resources. Personal data travels across the network between these locations and the Microsoft data
center.
Problem statement
The company is concerned about the threat of attacks on their Azure resources. They want to prevent exposure of
customers’ and employees’ personal data to unauthorized persons. They want guidance on both prevention and
response/remediation, as well as an effective way to monitor the ongoing security of their cloud resources. They
need a strong line of defense against today’s sophisticated and organized attackers.
Company goal
One of the company’s goals is to ensure the privacy of customers’ and employees’ personal data by protecting it from threats. Another goal is to respond immediately to signs of breach to mitigate the impact. The company also requires a way to assess the current state of security, identify vulnerable configurations, and remediate them.
Solutions
Microsoft Azure Security Center (ASC) provides an integrated security monitoring and policy management
solution. It delivers easy-to-use and effective threat prevention, detection, and response capabilities.
Prevention
ASC helps you prevent breaches by enabling you to set security policies, provide just-in-time access, and
implement security recommendations.
A security policy defines the set of controls recommended for resources within the specified subscription. Just in
time access can be used to lock down inbound traffic to your Azure VMs, reducing exposure to attacks. Security
recommendations are created by ASC after analyzing the security state of your Azure resources.
How do I set security policies in ASC?
You can configure security policies for each subscription. To modify a security policy, you must be an owner or
contributor of that subscription. In the Azure portal, do the following:
1. Select Policy in the ASC dashboard.
2. Select the subscription on which you want to enable the policy.
3. Choose Prevention policy to configure policies per subscription. Collect data from virtual machines
should be set to On.
4. In the Prevention policy options, select On to enable the security recommendations that are relevant for
the subscription.
For more detailed instructions and an explanation of each of the policy recommendations that can be enabled, see
Set security policies in Azure Security Center.
How do I configure Just in Time Access (JIT)?
When JIT is enabled, Security Center locks down inbound traffic to your Azure VMs by creating an NSG rule. You
select the ports on the VM to which inbound traffic will be locked down. To use JIT access, do the following:
1. Select the Just in time VM access tile on the ASC blade.
2. Select the Recommended tab.
3. Under VMs, select the VMs that you want to enable. This puts a checkmark next to a VM.
4. Select Enable JIT on VMs.
5. Select Save.
Then you can see the default ports that ASC recommends being enabled for JIT. You can also add and configure a
new port on which you want to enable the just in time solution. The Just in time VM access tile in the Security
Center shows the number of VMs configured for JIT access. It also shows the number of approved access requests
made in the last week.
For instructions on how to do this, and additional information about Just in Time access, see Manage virtual
machine access using just in time.
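Behind the portal steps, JIT amounts to a deny-by-default NSG rule plus a time-boxed allow rule created for each approved request. The following conceptual sketch models only that state machine; the class and expiry handling are invented for illustration, and the real work is done by Security Center against your network security groups:

```python
from datetime import datetime, timedelta

class JitPort:
    """Deny-by-default port that can be opened for a limited time window."""
    def __init__(self, port):
        self.port = port
        self.allow_until = None  # no standing allow rule

    def request_access(self, hours=3):
        """Approve a request: create a temporary allow rule with an expiry."""
        self.allow_until = datetime.utcnow() + timedelta(hours=hours)

    def is_open(self, now=None):
        now = now or datetime.utcnow()
        return self.allow_until is not None and now < self.allow_until

rdp = JitPort(3389)
print(rdp.is_open())  # False: inbound traffic is locked down by default
rdp.request_access(hours=3)
print(rdp.is_open())  # True: temporary allow rule is in effect
```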
How do I implement ASC security recommendations?
When Security Center identifies potential security vulnerabilities, it creates recommendations. The
recommendations guide you through the process of configuring the needed controls.
1. Select the Recommendations tile on the ASC dashboard.
2. View the recommendations, which are shown in a table format where each line represents one
recommendation.
3. To filter recommendations, select Filter and select the severity and state values you wish to see.
4. To dismiss a recommendation that is not applicable, you can right click and select Dismiss.
5. Evaluate which recommendation should be applied first.
6. Apply the recommendations in order of priority.
For a list of possible recommendations and walk-throughs on how to apply each, see Managing security
recommendations in Azure Security Center.
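The filtering and prioritization in those steps is essentially sorting the open recommendations by severity. A small sketch of that triage logic; the severity labels match what ASC shows, while the recommendation records themselves are illustrative:

```python
SEVERITY_ORDER = {"High": 0, "Medium": 1, "Low": 2}

def triage(recommendations):
    """Open recommendations first, ordered from most to least severe."""
    open_recs = [r for r in recommendations if r["state"] == "Open"]
    return sorted(open_recs, key=lambda r: SEVERITY_ORDER[r["severity"]])

recs = [
    {"name": "Apply disk encryption", "severity": "High", "state": "Open"},
    {"name": "Add an NSG", "severity": "Medium", "state": "Open"},
    {"name": "Enable auditing", "severity": "Low", "state": "Resolved"},
]
print([r["name"] for r in triage(recs)])
# ['Apply disk encryption', 'Add an NSG']
```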
Detection and Response
Detection and response go together, as you want to respond as quickly as possible after a threat is detected. ASC
threat detection works by automatically collecting security information from your Azure resources, the network,
and connected partner solutions. ASC can rapidly update its detection algorithms as attackers release new and
increasingly sophisticated exploits. For more detailed information on how ASC’s threat detection works, see Azure
Security Center detection capabilities.
How do I manage and respond to security alerts?
A list of prioritized security alerts is shown in Security Center along with the information you need to quickly
investigate the problem. Security Center also includes recommendations for how to remediate an attack. To
manage your security alerts, do the following:
1. Select the Security alerts tile in the ASC dashboard. This shows details for each alert.
2. To filter alerts based on date, state, and severity, select Filter and then select the values you want to see.
3. To respond to an alert, select it and review the information, then select the resource that was attacked.
4. In the Description field, you’ll see details, including recommended remediation.
For more detailed instructions on responding to security alerts, see Managing and responding to security alerts in
Azure Security Center.
For further help in investigating security alerts, the company can integrate ASC alerts with its own SIEM solution,
using Azure Log Integration.
How do I manage security incidents?
In ASC, a security incident is an aggregation of all alerts for a resource that align with kill chain patterns. An incident reveals the list of related alerts, which enables you to obtain more information about each occurrence. Incidents appear in the Security alerts tile and blade.
To review and manage security incidents, do the following:
1. Select the Security alerts tile. If a security incident is detected, it appears under the security alerts graph. It has an icon that’s different from other alerts.
2. Select the incident to see more details about this security incident. Additional details include its full
description, its severity, its current state, the attacked resource, the remediation steps for the incident, and
the alerts that were included in this incident.
You can filter to see incidents only, alerts only, or both.
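The aggregation of alerts into an incident can be pictured as a group-by on the attacked resource. The sketch below illustrates only that grouping; the alert shape is invented, and the real correlation also matches the alerts against kill chain patterns:

```python
from collections import defaultdict

def group_into_incidents(alerts):
    """Group alerts by attacked resource; multi-alert groups become incidents."""
    by_resource = defaultdict(list)
    for alert in alerts:
        by_resource[alert["resource"]].append(alert["name"])
    return {res: names for res, names in by_resource.items() if len(names) > 1}

alerts = [
    {"resource": "vm-web01", "name": "Suspicious process executed"},
    {"resource": "vm-web01", "name": "Outbound attack detected"},
    {"resource": "vm-db01", "name": "Failed RDP brute force attempt"},
]
print(group_into_incidents(alerts))
# {'vm-web01': ['Suspicious process executed', 'Outbound attack detected']}
```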
How do I access the Threat Intelligence Report?
ASC analyzes information from multiple sources to identify threats. To assist incident response teams in investigating and remediating threats, Security Center includes a threat intelligence report that contains information about the threat that was detected.
Security Center has three types of threat reports, which can vary per attack. The reports available are:
Activity Group Report: provides deep dives into attackers, their objectives and tactics.
Campaign Report: focuses on details of specific attack campaigns.
Threat Summary Report: covers all items in the previous two reports.
This type of information is very useful during the incident response process, where there is an ongoing
investigation to understand the source of the attack, the attacker’s motivations, and what to do to mitigate this
issue moving forward.
To access the threat intelligence report, do the following:
1. Select the Security alerts tile on the ASC dashboard.
2. Select the security alert for which you want to obtain more information.
3. In the Reports field, click the link to the threat intelligence report.
4. This opens the PDF file, which you can download.
For additional information about the ASC threat intelligence report, see Azure Security Center Threat Intelligence
Report.
Assessment
To help with testing, assessment and evaluation of your security posture, ASC provides for integrated vulnerability
assessment with Qualys cloud agents, as a part of its virtual machine recommendations component.
The Qualys agent reports vulnerability data to the Qualys management platform, which then sends vulnerability
and health monitoring data back to ASC. The recommendation to add a vulnerability assessment solution is
displayed in the Recommendations blade on the ASC dashboard.
After the vulnerability assessment solution is installed on the target VM, Security Center scans the VM to detect and
identify system and application vulnerabilities. Detected issues are shown under the Virtual Machines
Recommendations option.
How do I implement a vulnerability assessment solution?
If a Virtual Machine does not have an integrated vulnerability assessment solution already deployed, Security
Center recommends that it be installed.
1. In the ASC dashboard, on the Recommendations blade, select Add a vulnerability assessment solution.
2. Select the VMs where you want to install the vulnerability assessment solution.
3. Click on Install on [number of] VMs.
4. Select a partner solution in the Azure Marketplace, or under Use existing solution, select Qualys.
5. You can turn the auto update settings on or off in the Partner Solutions blade.
For further instructions on how to implement a vulnerability assessment solution, see Vulnerability Assessment in
Azure Security Center.
Next steps
Azure Security Center quick start guide
Introduction to Azure Security Center
Integrating Azure Security Center alerts with Azure log integration
Boost Azure Security Center with Integrated Vulnerability Assessment
Protect personal data with network security features:
Azure Application Gateway and Network Security
Groups
8/30/2017 • 6 min to read
This article provides information and procedures that will help you use Azure Application Gateway and Network
Security Groups to protect personal data.
An important element in a multi-layered security strategy to protect the privacy of personal data is a defense
against common vulnerability exploits such as SQL injection or cross-site scripting. Keeping unwanted network
traffic out of your Azure virtual network helps protect against potential compromise of sensitive data, and
Microsoft Azure gives you tools to help protect your data against attackers.
Scenario
A large cruise company, headquartered in the United States, is expanding its operations to offer itineraries in the
Mediterranean, Adriatic, and Baltic seas, as well as the British Isles. In furtherance of those efforts, it has acquired
several smaller cruise lines based in Italy, Germany, Denmark and the U.K.
The company uses Microsoft Azure to store corporate data in the cloud and run applications on virtual machines
that process and access this data. This data includes personally identifiable information such as names, addresses,
phone numbers, and credit card information of its global customer base. It also includes traditional Human
Resources information such as addresses, phone numbers, tax identification numbers and other information about
company employees in all locations. The cruise line also maintains a large database of reward and loyalty program
members that includes personal information to track relationships with current and past customers.
Corporate employees access the network from the company’s remote offices, and travel agents located around the world have access to some company resources and use web-based applications hosted in Azure VMs to interact with them.
Problem statement
The company must protect the privacy of customers’ and employees’ personal data from attackers who exploit
software vulnerabilities to run malicious code that could expose personal data stored or used by the company’s
cloud-based applications.
Company goal
The company’s goal is to ensure that unauthorized persons cannot access corporate Azure Virtual Networks and the applications and data that reside there by exploiting common vulnerabilities.
Solutions
Microsoft Azure provides security mechanisms to help prevent unwanted traffic from entering Azure Virtual
Networks. Control of inbound and outbound traffic is traditionally performed by firewalls. In Azure, you can use the
Application Gateway with the Web Application Firewall and Network Security Groups (NSG), which act as a simple
distributed firewall. These tools enable you to detect and block unwanted network traffic.
Application Gateway/Web Application Firewall
The Web Application Firewall (WAF) component of the Azure Application Gateway protects web applications, which
are increasingly targets of malicious attacks that exploit common known vulnerabilities. A centralized WAF both
protects against web attacks and simplifies security management without requiring any application changes.
Azure WAF addresses various attack categories including SQL injection, cross site scripting, HTTP protocol
violations and anomalies, bots, crawlers, scanners, common application misconfigurations, HTTP Denial of Service,
and other common attacks such as command injection, HTTP request smuggling, HTTP response splitting, and
remote file inclusion attacks.
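To make these attack categories concrete, here is a deliberately naive sketch of the kind of pattern matching a WAF rule performs on request input. This is an illustration only; Azure WAF is based on the OWASP core rule sets, not this toy check, and real detection is far more sophisticated:

```python
import re

# Toy signatures for two of the categories listed above (illustrative only).
SIGNATURES = {
    "sql_injection": re.compile(r"('|--|\bUNION\b|\bOR\b\s+1=1)", re.IGNORECASE),
    "cross_site_scripting": re.compile(r"<\s*script", re.IGNORECASE),
}

def inspect(query_string):
    """Return the names of the signatures the request input matches."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(query_string)]

print(inspect("id=1' OR 1=1 --"))             # ['sql_injection']
print(inspect("q=<script>alert(1)</script>"))  # ['cross_site_scripting']
print(inspect("q=wedding venues"))             # []
```

In Prevention mode a match would block the request; in Detection mode it would only be logged, mirroring the firewall modes described below.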
You can create an application gateway with WAF, or add WAF to an existing application gateway. In either case,
Azure Application Gateway requires its own subnet.
How do I create an application gateway with WAF?
To create a new application gateway with WAF enabled, do the following:
1. Log in to the Azure portal and, in the Favorites pane of the portal, click New.
2. In the New blade, click Networking.
3. Click Application Gateway.
4. In the Basics blade that appears, enter the values for the following fields: Name, Tier (Standard or WAF), SKU size (Small, Medium, or Large), Instance count (2 for high availability), Subscription, Resource group, and Location.
5. In the Settings blade that appears under Virtual network, click Choose a virtual network. This step opens the Choose virtual network blade.
6. Click Create new to open the Create virtual network blade.
7. Enter the following values: Name, Address space, Subnet name, Subnet address range. Click OK.
8. On the Settings blade under Frontend IP configuration, choose the IP address type.
9. Click Choose a public IP address, then Create new.
10. Accept the default value, and click OK.
11. On the Settings blade under Listener configuration, select whether to use HTTP or HTTPS under Protocol. To use HTTPS, a certificate is required.
12. Configure the WAF-specific settings: Firewall status (Enabled) and Firewall mode (Prevention). If you choose Detection as the mode, traffic is only logged.
13. Review the Summary page and click OK. The application gateway is now queued up and created.
After the application gateway has been created, you can navigate to it in the portal and continue configuration of
the application gateway.
NOTE
If the subscription you selected already has several resources in it, you can enter the name in the Filter by
name… box to easily find your application gateway.
To add WAF to an existing application gateway, click Web application firewall and update the application
gateway settings: Upgrade to WAF Tier (checked), Firewall status (Enabled), and Firewall mode (Prevention).
You also need to configure the rule set and configure any disabled rules.
For more detailed information on how to create a new application gateway with WAF and how to add WAF to an
existing application gateway, see Create an application gateway with web application firewall by using the portal.
Network Security Groups
A network security group (NSG) contains a list of security rules that allow or deny network traffic to resources
connected to Azure Virtual Networks (VNet). NSGs can be associated to subnets or individual VMs. When an NSG
is associated to a subnet, the rules apply to all resources connected to the subnet. Traffic can further be restricted
by also associating an NSG to a VM or NIC.
NSGs contain four properties: Name, Region, Resource group, and Rules.
NOTE
Although an NSG exists in a resource group, it can be associated to resources in any resource group, as long as the resource
is part of the same Azure region as the NSG.
NSG rules contain nine properties: Name, Protocol (TCP, UDP, or *, which includes ICMP as well as UDP and TCP),
Source port range, Destination port range, Source address prefix, Destination address prefix, Direction (inbound or
outbound), Priority (between 100 and 4096), and Access type (allow or deny). All NSGs contain a set of default rules
that cannot be deleted, but can be overridden by the higher-priority rules you create.
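As a sketch of how this evaluation works, the following Python model sorts rules by priority (lowest number wins) and returns the decision of the first rule that matches. It is an illustration only: real NSGs match CIDR address ranges and port ranges, while this toy compares prefixes and ports literally, and all rule names here are invented.

```python
from dataclasses import dataclass

@dataclass
class NsgRule:
    name: str
    priority: int        # 100-4096 for user-defined rules; defaults use higher numbers
    direction: str       # "inbound" or "outbound"
    protocol: str        # "TCP", "UDP", or "*"
    dest_port: str       # a single port such as "443", or "*"
    source_prefix: str   # simplified: an exact prefix string, or "*"
    access: str          # "allow" or "deny"

def evaluate(rules, direction, protocol, dest_port, source_prefix):
    """Return the decision of the first matching rule, in ascending priority order."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.direction != direction:
            continue
        if rule.protocol not in ("*", protocol):
            continue
        if rule.dest_port not in ("*", dest_port):
            continue
        if rule.source_prefix not in ("*", source_prefix):
            continue
        return rule.access
    return "deny"  # nothing matched: traffic is implicitly denied

rules = [
    NsgRule("allow-https", 100, "inbound", "TCP", "443", "*", "allow"),
    NsgRule("deny-all-inbound", 65500, "inbound", "*", "*", "*", "deny"),
]

print(evaluate(rules, "inbound", "TCP", "443", "10.0.0.4"))   # allow
print(evaluate(rules, "inbound", "TCP", "3389", "10.0.0.4"))  # deny
```

Because the first match wins, a specific allow at priority 100 takes effect even though a blanket deny exists at a higher number, which is the same reasoning you apply when overriding the default rules.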
How do I implement NSGs?
Implementing NSGs requires planning, and there are several design considerations you need to take into account.
These include limits on the number of NSGs per subscription and rules per NSG; VNet and subnet design, special
rules, ICMP traffic, isolation of tiers with subnets, load balancers, and more.
For more guidance in planning and implementing NSGs, and a sample deployment scenario, see Filter network
traffic with network security groups.
How do I create rules in an NSG?
To create inbound rules in an existing NSG, do the following:
1. Click Browse, and then Network security groups.
2. In the list of NSGs, click NSG-FrontEnd, and then Inbound security rules.
3. In the list of Inbound security rules, click Add.
4. Enter the values in the following fields: Name, Priority, Source, Protocol, Source range, Destination,
Destination port range, and Action.
The new rule will appear in the NSG after a few seconds.
For more instructions on how to create NSGs in subnets, create rules, and associate an NSG with a front-end and
back-end subnet, see Create network security groups using the Azure portal.
Next steps
Azure Network Security
Azure Network Security Best Practices
Get information about a network security group
Web application firewall (WAF)
Azure Active Directory and Multi-Factor
Authentication: Protect personal data with identity
and access controls
8/30/2017 • 6 min to read
This article provides information and procedures you can use to protect personal data using Azure Active Directory
and Multi-factor authentication security features and services.
Scenario
A large cruise company, headquartered in the United States, is expanding its operations to offer itineraries in the
Mediterranean, Adriatic, and Baltic seas, as well as the British Isles. To support those efforts, it has acquired several
smaller cruise lines based in Italy, Germany, Denmark and the U.K.
The company uses Microsoft Azure to store corporate data in the cloud. This includes personally identifiable
information such as names, addresses, phone numbers, and credit card information of its global customer base. It
also includes traditional Human Resources information such as addresses, phone numbers, tax identification
numbers and other information about company employees in all locations. The cruise line also maintains a large
database of reward and loyalty program members that includes personal information to track relationships with
current and past customers.
Corporate employees access the network from the company’s remote offices and travel agents located around the
world have access to some company resources.
Problem statement
The company must protect the privacy of customers’ and employees’ personal data from attackers seeking to use
compromised identities to gain access. They also must ensure that access to personal data by legitimate users is
restricted to only those who need it to do their jobs.
Company goal
The company’s goal is to ensure that access to personal data is strictly controlled. It is essential that identities of
users with access to personal data be protected by strong authentication. A policy of least privilege must be
enforced so that legitimate users have only the level of access they need, and no more.
Solutions
Microsoft Azure provides identity and access management tools to help companies control who has access to
resources that contain personal data.
Azure Active Directory
Azure Active Directory (AAD) manages identities and controls access to Azure as well as other on-premises and
other cloud resources, data, and applications. Azure Active Directory Privileged Identity Management helps Azure
administrators to minimize the number of people who have access to certain information such as personal data. It
enables them to discover, restrict, and monitor privileged identities and their access to resources, and to assign
temporary, Just-In-Time (JIT) administrative rights to eligible users. It also provides insight into those who have
AAD administrative privileges.
The activities involved in using AAD PIM include:
Enabling Privileged Identity Management for your directory
Using Privileged Identity Management admin dashboard to see important information at a glance
Managing the privileged identities (administrators) by adding or removing permanent or eligible
administrators to each role
Configuring the role activation settings
Activating roles
Reviewing role activity
How do I enable AAD PIM?
To start using PIM for your directory, do the following:
1. Sign in to the Azure portal as a global administrator of your directory.
2. If your organization has more than one directory, select your username in the upper right-hand corner of
the Azure portal. Select the directory where you will use Azure AD Privileged Identity Management.
3. Select More services and use the Filter textbox to search for Azure AD Privileged Identity Management.
4. Check Pin to dashboard and then click Create. The Privileged Identity Management application opens.
Once Azure AD Privileged Identity Management is set up, you see the navigation blade whenever you open the
application.
For more information and instructions on getting started with AAD PIM, see Start Using Azure AD Privileged
Identity Management.
Azure Role-Based Access Control
Azure Role-Based Access Control (RBAC) helps Azure administrators manage access to Azure resources by
enabling the granting of access based on the user’s assigned role. You can segregate duties within a team and
grant only the amount of access to users, groups and applications that they need to perform their jobs.
Role-based access can be granted to users using the Azure portal, Azure Command-Line tools or Azure
Management APIs.
For more information about Azure RBAC basics, see Get started with Role-Based Access Control in the Azure Portal.
How do I manage Azure RBAC with PowerShell?
You can use PowerShell cmdlets to manage Azure RBAC, including the following management tasks:
List roles
See who has access
Grant access
Remove access
Create a custom role
Get Actions for a Resource Provider
Modify a custom role
Delete a custom role
List custom roles
For instructions on how to manage Azure RBAC with PowerShell, see Manage Role-based Access with Azure
PowerShell.
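The effect of a role assignment can be sketched as wildcard matching of an operation string against a role's Actions and NotActions lists. The role definitions below are hypothetical and abbreviated; real definitions are returned by cmdlets such as Get-AzureRmRoleDefinition.

```python
from fnmatch import fnmatchcase

# Hypothetical, abbreviated role definitions for illustration only.
roles = {
    "Reader": {"actions": ["*/read"], "not_actions": []},
    "VM Operator": {
        "actions": ["Microsoft.Compute/virtualMachines/*"],
        "not_actions": ["Microsoft.Compute/virtualMachines/delete"],
    },
}

def is_permitted(role_name, operation):
    """An operation is permitted when it matches an Actions pattern
    and does not match any NotActions pattern."""
    role = roles[role_name]
    allowed = any(fnmatchcase(operation, p) for p in role["actions"])
    denied = any(fnmatchcase(operation, p) for p in role["not_actions"])
    return allowed and not denied

print(is_permitted("Reader", "Microsoft.Compute/virtualMachines/read"))              # True
print(is_permitted("VM Operator", "Microsoft.Compute/virtualMachines/start/action")) # True
print(is_permitted("VM Operator", "Microsoft.Compute/virtualMachines/delete"))       # False
```

This is how least privilege is expressed: the hypothetical "VM Operator" can manage virtual machines but is carved out of the delete operation it does not need.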
Azure Multi-Factor Authentication
Azure Multi-Factor Authentication (MFA) is a two-step verification solution that helps safeguard access to data and
applications, while meeting user demand for a simple sign-in process. It delivers strong authentication via a range
of verification methods, including phone call, text message, or mobile app verification.
To deploy MFA in the Azure cloud, you need to first enable it and then turn on two-step verification for users.
How do I enable Azure to use MFA?
If your users have licenses that include Azure Multi-Factor Authentication, there's nothing that you need to do to
turn on Azure MFA. If not, you need to create a Multi-Factor Auth provider in your directory. To do this, follow these
steps:
1. Select Active Directory in the Azure classic portal (logged on as an administrator).
2. Select Multi-Factor Authentication Providers.
3. Select New, and then under App Services, select Multi-Factor Auth Provider.
4. Select Quick Create.
5. Fill in the name field and select a usage model (per authentication or per enabled user).
6. Designate a directory with which the MFA Provider is associated.
7. Click the Create button.
For more instructions on how to manage your Multi-Factor Auth Provider, see Getting Started with an Azure Multi-
Factor Auth Provider.
How do I turn on two-step verification for users?
You can enforce two-step verification for all sign-ins, or you can create conditional access policies to require two-
step verification only when specific conditions apply.
Enabling Azure MFA by changing user states is the traditional approach for requiring two-step verification. All the
users that you enable will have the same requirement to perform two-step verification every time they sign in.
Enabling a user overrides any conditional access policies that may affect that user.
Enabling Azure MFA with a conditional access policy is a more flexible approach for requiring two-step verification.
You can create conditional access policies that apply to groups as well as individual users. High-risk groups can be
given more restrictions than low-risk groups, or two-step verification can be required only for high-risk cloud apps
and skipped for low-risk ones. However, conditional access is a paid feature of Azure Active Directory.
To enable MFA by changing user state, do the following:
1. Sign in to the Azure portal as an administrator.
2. Go to Azure Active Directory > Users and groups > All users.
3. Select Multi-Factor Authentication.
4. Find the user that you want to enable for Azure MFA. You may need to change the view at the top.
5. Check the box next to the user’s name.
6. On the right, under quick steps, choose Enable.
7. Confirm your selection in the pop-up window that opens. Users for whom MFA has been enabled will be
asked to register the next time they sign in.
To enable Azure MFA with a conditional access policy, do the following:
1. Sign in to the Azure portal as an administrator.
2. Go to Azure Active Directory > Conditional access.
3. Select New policy.
4. Under Assignments, select Users and groups. Use the Include and Exclude tabs to specify which users
and groups will be managed by the policy.
5. Under Assignments, select Cloud apps. Choose to include All cloud apps.
6. Under Access controls, select Grant. Choose Require multi-factor authentication.
7. Turn Enable policy to On and then select Save.
For information on how to configure Azure MFA settings to set up fraud alerts, create a one-time bypass, use
custom voice messages, configure caching, specify trusted IPs, create app passwords, enable remembering MFA for
devices that users trust, and select verification methods, see Configure Azure Multi-Factor Authentication Settings.
Next steps
Securing privileged access in Azure AD
Frequently asked questions about Azure Multi-Factor Authentication
Role-based Access Control troubleshooting
Azure Active Directory Identity Protection
Protect personal data at rest with encryption
9/6/2017 • 7 min to read
This article is part of a series that helps you use Azure to protect personal data.
Scenario
A large cruise company, headquartered in the United States, is expanding its operations to offer itineraries in the
Mediterranean, and Baltic seas, as well as the British Isles. To support those efforts, it has acquired several smaller
cruise lines based in Italy, Germany, Denmark, and the U.K.
The company uses Microsoft Azure to store corporate data in the cloud. This may include customer and/or
employee information such as:
addresses
phone numbers
tax identification numbers
credit card information
The company must protect the privacy of customer and employee data while making that data accessible to those
departments that need it (such as the payroll and reservations departments).
The cruise line also maintains a large database of reward and loyalty program members that includes personal
information to track relationships with current and past customers.
Problem statement
The company must protect the privacy of customers' and employees’ personal data while making data accessible
to those departments that need it (such as payroll and reservations departments). This personal data is stored
outside of the corporate-controlled data center and is not under the company’s physical control.
Company goal
As part of a multi-layered defense-in-depth security strategy, it is a company goal to ensure that all data sources
that contain personal data are encrypted, including those residing in cloud storage. If unauthorized persons gain
access to the personal data, it must be in a form that renders it unreadable. Applying encryption should be easy, or
transparent, for users and administrators.
Solutions
Azure services provide multiple tools and technologies to help you protect personal data at rest by encrypting it.
Azure Key Vault
Azure Key Vault provides secure storage for the keys used to encrypt data at rest in Azure services and is the
recommended key storage and management solution. Encryption key management is essential to securing stored
data.
How do I use Azure Key Vault to protect keys that encrypt personal data?
To use Azure Key Vault, you need an Azure subscription. You also need Azure PowerShell installed.
Steps include using PowerShell cmdlets to do the following:
1. Connect to your subscriptions
2. Create a key vault
3. Add a key or secret to the key vault
4. Register applications that will use the key vault with Azure Active Directory
5. Authorize the applications to use the key or secret
To create a key vault, use the New-AzureRmKeyVault PowerShell cmdlet. You will assign a vault name, resource
group name, and geographic location. You’ll use the vault name when managing keys via other cmdlets.
Applications that use the vault through the REST API will use the vault URI.
Azure Key Vault can provide a software-protected key for you, or you can import an existing key in a .PFX file. You
can also store secrets (passwords) in the vault.
You can also generate a key in your local HSM and transfer it to HSMs in the Key Vault service, without the key
leaving the HSM boundary.
For detailed instructions on using Azure Key Vault, follow the steps in Get Started with Azure Key Vault.
For a list of PowerShell cmdlets used with Azure Key Vault, see AzureRM.KeyVault.
Azure Disk Encryption for Windows
Azure Disk Encryption for Windows and Linux IaaS VMs protects personal data at rest on Azure virtual machines
and integrates with Azure Key Vault. Azure Disk Encryption uses BitLocker in Windows and DM-Crypt in Linux to
encrypt both the OS and the data disks. Azure Disk Encryption is supported on Windows Server 2008 R2, Windows
Server 2012, Windows Server 2012 R2, Windows Server 2016, and on Windows 8 and Windows 10 clients.
How do I use Azure Disk Encryption to protect personal data?
To use Azure Disk Encryption, you need an Azure subscription. To enable Azure Disk Encryption for
Windows and Linux VMs, do the following:
1. Use the Azure Disk Encryption Resource Manager template, PowerShell, or the command line interface (CLI)
to enable disk encryption and specify the encryption configuration.
2. Grant access to the Azure platform to read the encryption material from your key vault.
3. Provide an Azure Active Directory (AAD) application identity to write the encryption key material to your key
vault.
Azure will update the VM and the key vault configuration, and set up your encrypted VM.
When you set up your key vault to support Azure Disk Encryption, you can add a key encryption key (KEK) for
added security and to support backup of encrypted virtual machines.
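The KEK pattern is envelope encryption: a data encryption key (DEK) encrypts the disk, and the KEK held in the key vault wraps the DEK. The toy below shows only the wrap/unwrap flow; the XOR-with-HMAC keystream is NOT real cryptography (Azure uses proper key-wrapping algorithms) and must never be used to protect actual data.

```python
import hashlib
import hmac
import secrets

def _keystream(key, nonce, length):
    """Expand key + nonce into a pseudo-random byte stream (toy construction only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def wrap(kek, dek):
    """'Encrypt' the DEK under the KEK by XOR with a keystream (illustration only)."""
    nonce = secrets.token_bytes(16)
    wrapped = bytes(a ^ b for a, b in zip(dek, _keystream(kek, nonce, len(dek))))
    return nonce, wrapped

def unwrap(kek, nonce, wrapped):
    """XOR with the same keystream recovers the DEK."""
    return bytes(a ^ b for a, b in zip(wrapped, _keystream(kek, nonce, len(wrapped))))

kek = secrets.token_bytes(32)  # key encryption key, held in the key vault
dek = secrets.token_bytes(32)  # data encryption key, encrypts the disk itself
nonce, wrapped = wrap(kek, dek)
assert unwrap(kek, nonce, wrapped) == dek  # round trip succeeds
```

This is why a KEK helps with backup: only the wrapped DEK needs to travel with the backup, and restoring means unwrapping it with the KEK, which stays in the vault.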
Detailed instructions for specific deployment scenarios and user experiences are included in Azure Disk Encryption
for Windows and Linux IaaS VMs.
Azure Storage Service Encryption
Azure Storage Service Encryption (SSE) for Data at Rest helps you protect and safeguard your data to meet your
organizational security and compliance commitments. Azure Storage automatically encrypts your data using 256-
bit AES encryption prior to persisting to storage, and decrypts it prior to retrieval. This service is available for Azure
Blobs and Files.
How do I use Storage Service Encryption to protect personal data?
To enable Storage Service Encryption, do the following:
1. Log into the Azure portal.
2. Select a storage account.
3. In Settings, under the Blob Service section, select Encryption.
4. Under the File Service section, select Encryption.
After you click the Encryption setting, you can enable or disable Storage Service Encryption.
New data will be encrypted. Data in existing files in this storage account will remain unencrypted.
After enabling encryption, copy data to the storage account using one of the following methods:
1. Copy blobs or files with the AzCopy Command Line utility.
2. Mount a file share using SMB so you can use a utility such as Robocopy to copy files.
3. Copy blob or file data to and from blob storage or between storage accounts using Storage Client Libraries
such as .NET.
4. Use a Storage Explorer to upload blobs to your storage account with encryption enabled.
Transparent Data Encryption
Transparent Data Encryption (TDE) is a feature of Azure SQL Database that lets you encrypt data at both the
database and server levels. TDE is now enabled by default on all newly created databases. TDE performs real-time
I/O encryption and decryption of the data and log files.
How do I use TDE to protect personal data?
You can configure TDE through the Azure portal, by using the REST API, or by using PowerShell. To enable TDE on
an existing database using the Azure Portal, do the following:
1. Visit the Azure portal at https://portal.azure.com and sign-in with your Azure Administrator or Contributor
account.
2. On the left banner, click BROWSE, and then click SQL databases.
3. With SQL databases selected in the left pane, click your user database.
4. In the database blade, click All settings.
5. In the Settings blade, click the Transparent data encryption part to open the Transparent data encryption blade.
6. In the Data encryption blade, move the Data encryption button to On, and then click Save (at the top of the
page) to apply the setting. The Encryption status indicator shows the approximate progress of transparent data
encryption.
Instructions on how to enable TDE and information on decrypting TDE-protected databases and more can be found
in the article Transparent Data Encryption with Azure SQL Database.
Summary
The company can accomplish its goal of encrypting personal data stored in the Azure cloud. They can do this by
using Azure Disk Encryption to protect entire volumes. This may include both the operating system files and data
files that hold personally identifiable information and other sensitive data. Azure Storage Service Encryption can be
used to protect personal data that is stored in blobs and files. For data that is stored in Azure SQL databases,
Transparent Data Encryption provides protection from unauthorized exposure of personal information.
To protect the keys that are used to encrypt data in Azure, the company can use Azure Key Vault. This streamlines
the key management process and enables the company to maintain control of keys that access and encrypt
personal data.
Next steps
Azure Disk Encryption Troubleshooting Guide
Encrypt an Azure Virtual Machine
Encryption of data in Azure Data Lake Store
Azure Cosmos DB database encryption at rest
Azure encryption technologies: Protect personal data
in transit with encryption
8/30/2017 • 7 min to read
This article will help you understand and use Azure encryption technologies to secure data in transit.
Protecting the privacy of personal data as it travels across the network is an essential part of a multi-layered
defense-in-depth security strategy. Encryption in transit is designed to prevent an attacker who intercepts
transmissions from being able to view or use the data.
Scenario
A large cruise company, headquartered in the United States, is expanding its operations to offer itineraries in the
Mediterranean, Adriatic, and Baltic seas, as well as the British Isles. To support those efforts, it has acquired several
smaller cruise lines based in Italy, Germany, Denmark and the U.K.
The company uses Microsoft Azure to store corporate data in the cloud. This includes personally identifiable
information such as names, addresses, phone numbers, and credit card information of its global customer base. It
also includes traditional Human Resource information such as addresses, phone numbers, tax identification
numbers and other information about company employees in all locations. The cruise line also maintains a large
database of reward and loyalty program members that includes personal information to track relationships with
current and past customers.
Personal data of customers is entered in the database from the company’s remote offices and from travel agents
located around the world. Documents containing customer information are transferred across the network to
Azure storage.
Problem statement
The company must protect the privacy of customers’ and employees’ personal data while it is in transit to and from
Azure services.
Company goal
The company’s goal is to ensure that personal data is encrypted whenever it is off disk. If unauthorized persons
intercept the off-disk personal data, it must be in a form that renders it unreadable. Applying encryption should be
easy, or completely transparent, for users and administrators.
Solutions
Azure services provide multiple tools and technologies to help you protect personal data in transit.
Azure Storage
Data that is stored in the cloud must travel from the client, which can be physically located anywhere in the world,
to the Azure data center. When that data is retrieved by users, it travels again, in the opposite direction. Data that is
in transit over the public Internet is always at risk of interception by attackers. It is important to protect the privacy
of personal data by using transport-level encryption to secure it as it moves between locations.
The HTTPS protocol provides a secure, encrypted communications channel over the Internet. HTTPS should be used
to access objects in Azure Storage and when calling REST APIs. You can enforce use of the HTTPS protocol when using
Shared Access Signatures (SAS) to delegate access to Azure Storage objects. There are two types of SAS: Service
SAS and Account SAS.
How do I construct a Service SAS?
A Service SAS delegates access to a resource in just one of the storage services (blob, queue, table or file service).
To construct a Service SAS, do the following:
1. Specify the Signed Version Field
2. Specify the Signed Resource (Blob and File Service Only)
3. Specify Query Parameters to Override Response Headers (Blob Service and File Service Only)
4. Specify the Table Name (Table Service Only)
5. Specify the Access Policy
6. Specify the Signature Validity Interval
7. Specify Permissions
8. Specify IP Address or IP Range
9. Specify the HTTP Protocol
10. Specify Table Access Ranges
11. Specify the Signed Identifier
12. Specify the Signature
For more detailed instructions, see Constructing a Service SAS.
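Step 12, the signature, is an HMAC-SHA256 over a newline-delimited string-to-sign, computed with the storage account key and base64-encoded. The sketch below shows that computation; the field names and the exact line order of the string-to-sign here are illustrative only (the authoritative layout per signed version is in the Constructing a Service SAS reference), and the account name and key are dummy values.

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode

def sign_service_sas(account_key_b64, string_to_sign):
    """HMAC-SHA256 the UTF-8 string-to-sign with the decoded account key; base64 the digest."""
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

# Illustrative query parameters for a read-only blob SAS that requires HTTPS.
fields = {
    "sv": "2017-04-17",            # signed version
    "sr": "b",                     # signed resource: blob
    "sp": "r",                     # permissions: read
    "se": "2017-12-31T00:00:00Z",  # signature validity: expiry time
    "spr": "https",                # signed protocol: HTTPS only
}

# Simplified string-to-sign; see the REST reference for the real line order.
string_to_sign = "\n".join([
    fields["sp"], "", fields["se"],
    "/blob/myaccount/container/file.txt",  # canonicalized resource (hypothetical account)
    "", "", fields["spr"], fields["sv"], "", "", "", "",
])

dummy_key = base64.b64encode(b"not-a-real-storage-key").decode()
fields["sig"] = sign_service_sas(dummy_key, string_to_sign)
print(urlencode(fields))  # the SAS token appended to the resource URI
```

Because the signature covers the protocol field, a client cannot strip `spr=https` from the token to downgrade the connection without invalidating the SAS.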
How do I construct an Account SAS?
An Account SAS delegates access to resources in one or more of the storage services. You can also delegate access
to read, write, and delete operations on blob containers, tables, queues, and file shares that are not permitted with a
service SAS. Construction of an Account SAS is similar to that of a Service SAS. For detailed instructions, see
Constructing an Account SAS.
How do I enforce HTTPS when calling REST APIs?
To enforce the use of HTTPS when calling REST APIs to access objects in storage accounts, you can enable Secure
Transfer Required for the storage account.
1. In the Azure portal, select Create Storage Account, or for an existing storage account, select Settings and
then Configuration.
2. Under Secure Transfer Required, select Enabled.
For more detailed instructions, including how to enable Secure Transfer Required programmatically, see Require
Secure Transfer.
How do I encrypt data in Azure File Storage?
To encrypt data in transit with Azure File Storage, you can use SMB 3.x with Windows 8, 8.1, and 10 and with
Windows Server 2012 R2 and Windows Server 2016. When you are using the Azure Files service, any connection
without encryption fails when "Secure transfer required" is enabled. This includes scenarios using SMB 2.1, SMB 3.0
without encryption, and some flavors of the Linux SMB client.
Azure Client-Side Encryption
Another option for protecting personal data while it’s being transferred between a client application and Azure
Storage is Client-side Encryption. The data is encrypted before being transferred into Azure Storage and when you
retrieve the data from Azure Storage, the data is decrypted after it is received on the client side.
Azure Site-to-Site VPN
An effective way to protect personal data in transit between a corporate network or user and the Azure virtual
network is to use a site-to-site or point-to-site Virtual Private Network (VPN). A VPN connection creates a secure
encrypted tunnel across the Internet.
How do I create a site-to-site VPN connection?
A site-to-site VPN connects your on-premises network, and the users on it, to Azure. To create a site-to-site connection in
the Azure portal, do the following:
1. Create a virtual network.
2. Specify a DNS server.
3. Create the gateway subnet.
4. Create the VPN gateway.
Summary
The company can accomplish its goal of protecting personal data and the privacy of such data by enforcing HTTPS
connections to Azure Storage, using Shared Access Signatures and enabling Secure Transfer Required on the
storage accounts. They can also protect personal data by using SMB 3.0 connections and implementing client-side
encryption. Site-to-site VPN connections from the corporate network to the Azure virtual network and point-to-site
VPN connections from individual users will create a secure tunnel through which personal data can securely travel.
Microsoft’s default encryption practices will further protect the privacy of personal data.
Next steps
Azure Data Security and Encryption Best Practices
Planning and design for VPN Gateway
VPN Gateway FAQ
Buy and configure an SSL Certificate for your Azure App Service
Document protection of personal data with Azure
reporting tools
8/30/2017 • 12 min to read
This article will discuss how to use Azure reporting services and technologies to help protect privacy of personal
data.
Scenario
A large cruise company, headquartered in the United States, is expanding its operations to offer itineraries in the
Mediterranean, Adriatic, and Baltic seas, as well as the British Isles. To help these efforts, it has acquired several
smaller cruise lines based in Italy, Germany, Denmark and the U.K.
The company uses Microsoft Azure for processing and storage of corporate data. This includes personally identifiable
information such as names, addresses, phone numbers, and credit card information of its global customer base. It
also includes traditional Human Resources information such as addresses, phone numbers, tax identification
numbers and other information about company employees in all locations. The cruise line also maintains a large
database of reward and loyalty program members that includes personal information to track relationships with
current and past customers.
Corporate employees access the network from the company’s remote offices and travel agents located around the
world have access to some company resources.
Problem statement
The company must protect the privacy of customers' and employees’ personal data through a multi-layered
security strategy that uses Azure management and security features to impose strict controls on access to and
processing of personal data, and must be able to demonstrate its protective measures to internal and external
auditors.
Company goal
As part of its defense-in-depth security strategy, it is a company goal to track all access to and processing of
personal data, and ensure that documentation of adequate privacy protections for personal data are in place and
working.
Solutions
Microsoft Azure provides comprehensive monitoring, logging, and diagnostics tools to help track and record
activities and events associated with accessing and processing personal data, geographic flow of data, and third-
party access to personal data. Because security of personal data in the cloud is a shared responsibility, Microsoft
also provides customers with:
Detailed information about its own processing of customers’ data
Security measures administered by Microsoft
Where and how it sends customers’ data
Details of Microsoft’s own privacy reviews process
Azure Active Directory
Azure Active Directory is Microsoft’s cloud-based, multi-tenant directory and identity management service. The
service’s sign-in and audit reporting capabilities provide you with detailed sign-in and application usage activity
information to help you monitor and ensure proper access to customers’ and employees’ personal data.
There are two types of activity reports:
The audit activity reports/logs provide a detailed record of system activities/tasks
The sign-ins activity report/log shows you who has performed each activity listed in the audit report
Using the two together, you can track the history of every task performed and who performed each. Both types of
reports are customizable and can be filtered.
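The way the two reports combine can be sketched in a few lines of Python. The records and field names below are hypothetical simplifications; real entries come from the Azure Active Directory reporting API with a much richer schema:

```python
# Hypothetical audit and sign-in records; actual entries returned by the
# Azure AD reporting API carry many more fields.
audit_log = [
    {"activity": "Update user", "initiated_by": "admin@contoso.com",
     "time": "2017-06-01T10:02:00Z"},
    {"activity": "Add member to group", "initiated_by": "hr-app@contoso.com",
     "time": "2017-06-01T10:05:00Z"},
]
signin_log = [
    {"user": "admin@contoso.com", "app": "Azure portal",
     "time": "2017-06-01T10:01:30Z"},
]

def actor_signins(audit_entry, signins):
    """Find the sign-in events for the principal that performed an audited task."""
    return [s for s in signins if s["user"] == audit_entry["initiated_by"]]

for entry in audit_log:
    matches = actor_signins(entry, signin_log)
    print(entry["activity"], "->", [m["app"] for m in matches])
```

Joining the two logs on the acting principal is what lets you answer both halves of the audit question: which task was performed, and which authenticated session performed it.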
How do I access the audit and security logs?
The audit and security logs can be accessed from the Active Directory portal in three different ways: through the
Activity section (select either Audit logs or Sign-ins), or from Users and groups or Enterprise applications
under Manage in Active Directory. Reports can also be accessed through the Azure Active Directory reporting API.
1. In the Azure portal, select Azure Active Directory.
2. In the Activity section, select Audit logs.
Visit the Log Analytics documentation to learn more about the service.
Visit the Get started with a Log Analytics workspace tutorial to create an evaluation workspace and learn the basics
of how to use the service.
Visit the following web pages for more specific information on how to connect to use Log Analytics with the logs
described above:
Windows event logs data sources in Log Analytics
IIS logs in Log Analytics
Syslog data sources in Log Analytics
Azure Monitor/Azure Activity Log
Azure Monitor provides base level infrastructure metrics and logs for most services in Microsoft Azure. Monitoring
can help you to gain deep insights about your Azure applications. Azure Monitor relies on the Azure diagnostics
extension (Windows or Linux) to collect most application level metrics and logs. The Azure Activity Log is one of the
resources you can view with Azure Monitor. It tracks every API call, and provides a wealth of information about
activities that occur in Azure Resource Manager. You can search the Activity Log (previously called Operational or
Audit Logs) for information about your resource as seen by the Azure infrastructure.
Although much of the information recorded in the Activity log pertains to performance and service health, there is
also information that is related to protection of data. Using the Activity Log, you can determine the “what, who, and
when” for any write operations (PUT, POST, DELETE) taken on the resources in your Azure subscription.
For example, it provides a record when an administrator deletes a network security group, which could impact the
protection of personal data. Activity log entries are stored in Azure Monitor for 90 days.
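Determining the “what, who, and when” can be sketched as a simple filter over log entries. The entries and field names below are illustrative assumptions, not the actual Activity Log schema:

```python
# Hypothetical Activity Log entries; real entries are returned by the Azure
# Monitor APIs and carry many more fields.
activity_log = [
    {"operation": "Microsoft.Network/networkSecurityGroups/delete",
     "http_method": "DELETE", "caller": "admin@contoso.com",
     "timestamp": "2017-06-01T11:00:00Z"},
    {"operation": "Microsoft.Compute/virtualMachines/read",
     "http_method": "GET", "caller": "auditor@contoso.com",
     "timestamp": "2017-06-01T11:05:00Z"},
]

# Write operations are the ones that can change resource state.
WRITE_METHODS = {"PUT", "POST", "DELETE"}

def write_operations(entries):
    """Return the (what, who, when) triple for each write operation."""
    return [(e["operation"], e["caller"], e["timestamp"])
            for e in entries if e["http_method"] in WRITE_METHODS]
```

Here the network security group deletion would surface as a write operation, while the read-only VM query would not.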
How do I use the data collected by Azure Monitor?
There are a number of ways to use the data in the Activity log and other Azure Monitor resources.
You can stream the data to other locations in real time.
You can store the data for longer time periods than the defaults, using an Azure storage account and setting
a retention policy.
You can visualize the data in graphics and charts, using the Azure portal, Azure Application Insights, Microsoft
Power BI, or third-party visualization tools.
You can query the data using the Azure Monitor REST API, CLI commands, PowerShell cmdlets, or the .NET
SDK.
To get started with Azure Monitor:
1. In the Azure portal, select More Services, and scroll down to Monitor in the Monitoring and Managing
section.
2. You can then pin queries to a portal dashboard by clicking the Pin button. This helps you create a single
source of information for operational data on your services. The query name and number of results will be
displayed on the dashboard.
You can also use the Monitor to view metrics for all Azure resources, configure diagnostics settings and alerts, and
search the log. For more information on how to use the Azure Monitor and Activity Log, see Get Started with Azure
Monitor.
Azure Diagnostics
The diagnostics capability in Azure enables collection of data from several sources. The Windows Event logs, which
include the Security log, can be especially useful in tracking and documenting protection of personal data. The
security log tracks logon success and failure events, as well as permissions changes, detection of patterns indicating
certain types of attacks, changes to security-related policies, security group membership changes, and much more.
For example, Event ID 4695 alerts you to the attempted unprotection of auditable protected data. This pertains to
the Data Protection API (DPAPI), which helps to protect data such as private keys, stored credentials, and other
confidential information.
How do I enable the diagnostics extension for Windows VMs?
You can use PowerShell to enable the diagnostics extension for a Windows VM, so as to collect log data. The steps
for doing so depend on which deployment model you use (Resource Manager or Classic). To enable the diagnostics
extension on an existing VM that was created through the Resource Manager deployment model, you can use the
Set-AzureRMVMDiagnosticsExtension PowerShell cmdlet.
$diagnosticsconfig_path is the path to the file that contains the diagnostics configuration in XML. For more
detailed instructions on enabling Azure Diagnostics on a VM, see Use PowerShell to enable Azure Diagnostics in a
virtual machine running Windows.
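A minimal invocation might look like the following sketch. The resource group, VM, and storage account names are placeholders, and the exact parameters available depend on your AzureRM module version:

```powershell
# Sketch only: substitute your own resource group, VM, storage account,
# and diagnostics configuration file path.
$diagnosticsconfig_path = "C:\config\DiagnosticsPublicConfig.xml"

Set-AzureRmVMDiagnosticsExtension -ResourceGroupName "MyResourceGroup" `
    -VMName "MyWindowsVM" `
    -DiagnosticsConfigurationPath $diagnosticsconfig_path `
    -StorageAccountName "mydiagstorage"
```

The XML file referenced by $diagnosticsconfig_path defines which logs and performance counters the extension collects, including the Windows Security event log discussed above.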
The Azure diagnostics extension can transfer the collected data to an Azure storage account or send it to services
such as Application Insights. You can then use the data for auditing.
How do I store and view diagnostic data?
It’s important to remember that diagnostic data is not permanently stored unless you transfer it to the Microsoft
Azure storage emulator or to Azure storage. To store and view diagnostic data in Azure Storage, follow these steps:
1. Specify a storage account in the ServiceConfiguration.cscfg file. Azure Diagnostics can use either the Blob
service or the Table service, depending on the type of data. Windows Event logs are stored in Table format.
2. Transfer the data. You can request to transfer the diagnostic data through the configuration file. For SDK 2.4
and previous, you can also make the request programmatically.
3. View the data, using Azure Storage Explorer, Server Explorer in Visual Studio, or Azure Diagnostics Manager
in Azure Management Studio.
For more information on how to perform each of these steps, see Store and view diagnostic data in Azure Storage.
Azure Storage Analytics
Storage Analytics logs detailed information about successful and failed requests to a storage service. This
information can be used to monitor individual requests, which can help in documenting access to personal data
stored in the service. However, Storage Analytics logging is not enabled by default for your storage account. You
can enable it in the Azure portal.
How do I configure monitoring for a storage account?
To configure monitoring for a storage account, do the following:
1. Select Storage accounts in the Azure portal, then select the name of the account you want to monitor.
2. In the Monitoring section, select Diagnostics.
3. Select the type of metrics data you want to monitor for each service (Blob, Table, Queue, File). To instruct
Azure Storage to save diagnostics logs for read, write, and delete requests for the blob, table, and queue
services, select Blob logs, Table logs, and Queue logs.
4. Using the slider at the bottom, set the retention policy in days (value of 1 – 365). Seven days is the default.
5. Select Save to apply the configuration settings.
Storage Logging log entries contain the following information about individual requests:
Timing information such as start time, end-to-end latency, and server latency.
Details of the storage operation such as the operation type, the key of the storage object the client is
accessing, success or failure, and the HTTP status code returned to the client.
Authentication details such as the type of authentication the client used.
Concurrency information such as the ETag value and last modified timestamp.
The sizes of the request and response messages.
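As a rough illustration, a log entry carrying these fields could be parsed as follows. Storage Analytics log entries are semicolon-delimited; the field order and names used here are simplified assumptions for illustration, not the full documented schema:

```python
# Illustrative parser for a Storage Analytics log entry. The real format
# (version 1.0) defines many more fields; this sketch keeps only a few.
FIELDS = ["version", "start_time", "operation_type", "request_status",
          "http_status", "e2e_latency_ms", "server_latency_ms", "auth_type"]

def parse_entry(line):
    """Split a semicolon-delimited log line into a field dictionary."""
    values = line.split(";")
    return dict(zip(FIELDS, values))

# Hypothetical sample entry for a successful authenticated blob read.
sample = "1.0;2017-06-01T12:00:00Z;GetBlob;Success;200;35;28;authenticated"
entry = parse_entry(sample)
```

Once parsed, the operation type, status, and authentication type together tell you who touched a storage object and whether the request succeeded, which supports documenting access to personal data.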
For more detailed instructions on how to enable Storage Analytics logging, see Monitor a storage account in the
Azure portal.
Azure Security Center
Azure Security Center monitors the security state of your Azure resources in order to prevent and detect threats,
and provide recommendations for responding. It provides several ways to help document your security measures
that protect the privacy of personal data.
Security health monitoring helps you ensure compliance with your security policies. Security monitoring is a
proactive strategy that audits your resources to identify systems that do not meet organizational standards or best
practices. You can monitor the security state of the following resources:
Compute (virtual machines and cloud services)
Networking (virtual networks)
Storage and data (server and database auditing and threat detection, TDE, storage encryption)
Applications (potential security issues)
Security issues in any of these categories could pose a threat to the privacy of personal data.
How do I view the security state of my Azure resources?
Security Center periodically analyzes the security state of your Azure resources. You can view any potential security
vulnerabilities it identifies in the Prevention section of the dashboard.
1. In the Prevention section, select the Compute tile. Here you’ll see an Overview, along with the Virtual
machines listing of all VMs and their security states, and the Cloud services list of web and worker roles
monitored by Security Center.
2. On the Overview tab, select a recommendation to view more information.
3. On the Virtual machines tab, select a VM to view additional details.
When data collection is enabled in Azure Security Center, the Microsoft Monitoring Agent is automatically
provisioned on all existing and any new supported virtual machines that are deployed. Data collected from this
agent is stored in either an existing Log Analytics workspace associated with your subscription or a new workspace.
Threat Intelligence Reports are provided by Security Center. These give you useful information to help discern the
attacker’s identity, objectives, current and historical attack campaigns, and tactics, tools and procedures used.
Mitigation and remediation information is also included.
The primary purpose of these threat reports is to help you to respond effectively to the immediate threat and help
take measures afterward to mitigate the issue. The information in the reports can also be useful when you
document your incident response for reporting and auditing purposes.
The Threat Intelligence Reports are presented in PDF format, accessed via a link in the Reports field of the
Suspicious process executed blade for each security alert in Azure Security Center.
For more information on how to view and use the Threat Intelligence Report, see Azure Security Center Threat
Intelligence Report.
Next Steps:
Getting Started with the Azure Active Directory reporting API
What is Log Analytics?
Overview of Monitoring in Microsoft Azure
Introduction to the Azure Activity Log (video)
Azure security courses from Microsoft Virtual Academy
6/27/2017
Microsoft Virtual Academy provides free, online training to help Developers, IT and Data Professionals, and
students learn the latest technology, build their skills, and advance their careers.
On this page, you'll find a curated collection of Azure security-related courses. Visit the Microsoft Virtual Academy to
see all the courses they have available.
Dev/Test in the Cloud
Are you a developer who needs to deliver faster and better applications? Moving your development and testing
environments to the cloud can help you achieve exactly that! Learn how to get it done, and find out the benefits of
making the move. Plus, see demonstrations and presentations that show you how Microsoft Azure can support
your development and testing needs. Included are lessons on security development and deployment practices.
Common Tasks for Linux on Azure
If you have questions about using Linux on the Microsoft Azure platform, this detailed course has answers for you.
Explore some common tasks with the experts from Linux Academy. Learn about creating a Linux virtual machine
(VM) in Azure, accessing the Linux VM using remote desktop software, and running virtual hosts. Many security
technologies and configurations are addressed in this course.
Secure the Cloud
In this session, learn how Microsoft can help you meet global compliance requirements, such as ISO 27001 /
27018, FedRAMP, PCI, and HIPAA, with new security controls. These controls range from at-rest data encryption,
key management, VM protection, logging and monitoring, to anti-malware services, identity management, access
controls, and more.
Design and Implement Cloud Data Platform Solutions
Learn the features and capabilities of Microsoft cloud data platform solutions. Get a platform overview and hear
about security features, options for high availability, techniques for monitoring and managing cloud data, and
more. Plus, get guidance on how to identify tradeoffs and make decisions for designing public and hybrid cloud
solutions using Microsoft cloud data platform features.
Manage and Secure Identities in a Cloud and Mobile World
In this session, learn how Azure Active Directory and Microsoft Advanced Threat Analytics helps you secure and
manage user identity, identify security breaches before they cause damage, and provide your users a single identity
for accessing all corporate resources. Explore the technologies used to discover Shadow IT, manage application
access, and monitor suspicious activity through advanced security reporting, user behavioral analytics, auditing,
and alerting.
Security in a Cloud-Enabled World
Experts lead you through the customer responsibility roadmap in the Microsoft Cloud Security for Enterprise
Architects poster. The experts also provide recommendations for modernizing each part of your security posture,
including governance, containment strategies, security operations, high-value asset protection, information
protection, and user and device security, with a particular emphasis on protecting administrative control. Learn
from the same framework that the Microsoft cybersecurity team uses to assess customers' cloud security and to
build them a security roadmap.
Microsoft Azure IaaS Deep Dive
Learn how to use Microsoft Azure infrastructure capabilities. If you are an IT Pro, no need to have previous
experience with Azure. This course walks you through creating and configuring Azure VMs, Azure Virtual Networks,
and cross-premises connectivity to get things up and running on the cloud. Security features and considerations
are included throughout the course.
Getting Started with Azure Security for the IT Professional
In this demo-filled course, a team of security experts and Azure engineers takes you beyond the basic certifications
and explores what's possible inside Azure. See how to design and use various technologies to ensure that you have
the security and architecture you need to successfully launch your projects in the cloud. Dive into datacenter
operations, VM configuration, network architecture, and storage infrastructure.
Deep Dive into Azure Resource Manager Scenarios and Patterns
Explore Azure Resource Manager with a team of experts, who show you scripts and tools that make it easy to spin
up or spin down elements of your application infrastructure. Explore the use of role-based access control (RBAC) to
implement security with Azure Resource Manager.
Azure Rights Management Services Core Skills
Find out why information protection is a "must have" requirement in your organization and how rights
management protects your organization's intellectual property, wherever it travels across devices and the cloud.
Get hands-on experience and technical know-how from Microsoft experts.
Azure security videos on Channel 9
6/27/2017
Channel 9 is a community that brings forward the people behind our products and connects them with customers.
They think there is a great future in software and they’re excited about it. Channel 9 is a place to participate in
the ongoing conversation.
The following is a curated list of Azure security presentations on Channel 9. Make sure to check this page monthly
for new videos.
Accelerating Azure Consumption with Barracuda Security
See how you can use Barracuda security to secure your Azure deployments.
Azure Security Center - Threat Detection
With Azure Security Center, you get a central view of the security state of all your Azure resources. At a glance,
verify that the appropriate security controls are in place and configured correctly. Scott talks to Sarah Fender who
explains how Security Center integrates Threat Detection.
Azure Security Center Overview
With Azure Security Center, you get a central view of the security state of all your Azure resources. At a glance,
verify that the appropriate security controls are in place and configured correctly. Scott talks to Sarah Fender who
explains it all!
Live Demo: Protecting against, Detecting and Responding to Threats
Join this session to see the Microsoft security platform in action. General Manager for Cloud & Enterprise, Julia
White, demonstrates the security features of Windows 10, Azure, and Office 365 that can help you keep your
organization secure.
Encryption in SQL Server Virtual Machines in Azure for better security
Jack Richins teaches Scott how to easily encrypt his SQL Server databases on Azure Virtual Machine instances. It's
easier than you'd think!
Areas covered in this video:
Understanding encryption and SQL Server
Understanding the Data Protection API, master keys, and certificates
Using SQL commands to create the master key and certificates, and encrypt the database
How to set security in DevTest Labs
As an owner of your lab, you can secure lab access via two lab roles: Owner and DevTest Labs User. A person in
the Owner role has complete access in the lab whereas a person in the DevTest Labs User role has limited access. In
this video, we show you how to add a person in either of these roles to a lab.
Managing Secrets for Azure Apps
Every serious app you deploy on Azure has critical secrets – connection strings, certificates, keys. Silly mistakes in
managing these secrets lead to fatal consequences – leaks, outages, compliance violations. As multiple recent
surveys point out, silly mistakes cause four times more data breaches than adversaries. In this session, we go over
some best practices to manage your important app secrets. These best practices may seem like common sense, yet
many developers neglect them. We also go over how to use Azure Key Vault to implement those best practices. As
an added benefit, following these practices helps you demonstrate compliance with standards such as SOC. The
first 10 minutes of the session are level 100 and they apply to any cloud app you develop on any platform. The
remainder is level 200-300 and focuses on apps you build on the Azure platform.
Securing your Azure Virtual Network using Network Security Groups with Narayan Annamalai
Senior Program Manager Narayan Annamalai teaches Scott how to use Network Security Groups within an Azure
Virtual Network. You can control access to objects within Azure by subnet and network! You learn how to
control access and create groups within Azure using PowerShell.
Azure AD Privileged Identity Management: Security Wizard, Alerts, Reviews
Azure Active Directory (AD) Privileged Identity Management is a premium functionality that allows you to discover,
restrict, and monitor privileged identities and their access to resources. It also enforces on-demand, just in time
administrative access when needed. Learn about:
Managing protection for Office 365 workload-specific administrative roles
Configuring Azure Multi-Factor Authentication (MFA) for privileged role activations
Measuring and improving your tenant security posture
Monitoring and fixing security findings
Reviewing who needs to remain in privileged roles for periodic recertification workflows
Azure Key Vault with Amit Bapat
Amit Bapat introduces Scott to Azure Key Vault. With Azure Key Vault, you can encrypt keys and small secrets like
passwords using keys stored in hardware security modules (HSMs). It's cloud-based, hardware-based secret
management for Microsoft Azure!
Microsoft Threat Modeling Tool
8/25/2017
The Threat Modeling Tool is a core element of the Microsoft Security Development Lifecycle (SDL). It allows
software architects to identify and mitigate potential security issues early, when they are relatively easy and cost-
effective to resolve. As a result, it greatly reduces the total cost of development. Also, we designed the tool with
non-security experts in mind, making threat modeling easier for all developers by providing clear guidance on
creating and analyzing threat models.
The tool enables anyone to:
Communicate about the security design of their systems
Analyze those designs for potential security issues using a proven methodology
Suggest and manage mitigations for security issues
Here are some tooling capabilities and innovations, just to name a few:
Automation: Guidance and feedback in drawing a model
STRIDE per Element: Guided analysis of threats and mitigations
Reporting: Security activities and testing in the verification phase
Unique Methodology: Enables users to better visualize and understand threats
Designed for Developers and Centered on Software: many approaches are centered on assets or attackers.
We are centered on software. We build on activities that all software developers and architects are familiar with
-- such as drawing pictures for their software architecture
Focused on Design Analysis: The term "threat modeling" can refer to either a requirements or a design
analysis technique. Sometimes, it refers to a complex blend of the two. The Microsoft SDL approach to threat
modeling is a focused design analysis technique
Next steps
The table below contains important links to get you started with the Threat Modeling Tool:
STEP DESCRIPTION
Resources
Here are a few older articles still relevant to threat modeling today:
Article on the Importance of Threat Modeling
Training Published by Trustworthy Computing
Check out what a few Threat Modeling Tool experts have done:
Threats Manager
Simone Curzi Security Blog
Getting started with the Threat Modeling Tool
8/25/2017
The Cloud and Enterprise Security Tools team released the Threat Modeling Tool Preview earlier this year as a free
click-to-download. The change in delivery mechanism allows us to push the latest improvements and bug fixes
to customers each time they open the tool, making it easier to maintain and use. This article takes you through the
process of getting started with the Microsoft SDL threat modeling approach and shows you how to use the tool to
develop great threat models as a backbone of your security process.
This article builds on existing knowledge of the SDL threat modeling approach. For a quick review, refer to Threat
Modeling Web Applications and an archived version of Uncover Security Flaws Using the STRIDE Approach
MSDN article published in 2006.
To quickly summarize, the approach involves creating a diagram, identifying threats, mitigating them and
validating each mitigation. Here’s a diagram that highlights this process:
Feedback, Suggestions and Issues Button Takes you to the MSDN Forum for all things SDL. It gives you
an opportunity to read through what other users are doing,
along with workarounds and recommendations. If you still
can’t find what you’re looking for, email
[email protected] for our support team to help
you
Create a Model Opens a blank canvas for you to draw your diagram. Make
sure to select which template you’d like to use for your model
Template for New Models You must select which template to use before creating a
model. Our main template is the Azure Threat Model
Template, which contains Azure-specific stencils, threats and
mitigations. For generic models, select the SDL TM Knowledge
Base from the drop-down menu. Want to create your own
template or submit a new one for all users? Check out our
Template Repository GitHub Page to learn more
Getting Started Guide Opens the Microsoft Threat Modeling Tool main page
Template section
COMPONENT DETAILS
Create New Template Opens a blank template for you to build on. Unless you have
extensive knowledge in building templates from scratch, we
recommend building from existing ones
The Threat Modeling Tool team is constantly working to improve tool functionality and experience. A few minor
changes might take place over the course of the year, but all major changes require rewrites in the guide. Refer to
it often to ensure you get the latest announcements.
Building a model
In this section, we follow:
Cristina (a developer)
Ricardo (a program manager) and
Ashish (a tester)
They are going through the process of developing their first threat model.
Ricardo: Hi Cristina, I worked on the threat model diagram and wanted to make sure we got the details right.
Can you help me look it over? Cristina: Absolutely. Let’s take a look. Ricardo opens the tool and shares his
screen with Cristina.
Cristina: Ok, looks straightforward, but can you walk me through it? Ricardo: Sure! Here is the breakdown:
Our human user is drawn as an outside entity—a square
They’re sending commands to our Web server—the circle
The Web server is consulting a database (two parallel lines)
What Ricardo just showed Cristina is a DFD, short for Data Flow Diagram. The Threat Modeling Tool allows users
to specify trust boundaries, indicated by the red dotted lines, to show where different entities are in control. For
example, IT administrators require an Active Directory system for authentication purposes, so the Active Directory
is outside of their control.
Cristina: Looks right to me. What about the threats? Ricardo: Let me show you.
Analyzing threats
Once he clicks on the analysis view from the icon menu selection (file with magnifying glass), he is taken to a list of
generated threats the Threat Modeling Tool found based on the default template, which uses the SDL approach
called STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service and Elevation of
Privilege). The idea is that software comes under a predictable set of threats, which can be found using these 6
categories.
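As a quick reference, each STRIDE category maps to the security property it violates; the mapping below reflects the standard STRIDE definitions:

```python
# STRIDE categories and the security property each one violates.
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information Disclosure": "Confidentiality",
    "Denial of Service": "Availability",
    "Elevation of Privilege": "Authorization",
}

def property_at_risk(threat_category):
    """Map a generated threat's STRIDE category to the property to defend."""
    return STRIDE[threat_category]
```

When the tool flags a spoofing threat, for example, the table points you toward an authentication mechanism as the class of mitigation to consider.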
This approach is like securing your house by ensuring each door and window has a locking mechanism in place
before adding an alarm system or chasing after the thief.
Ricardo begins by selecting the first item on the list. Here’s what happens:
First, the interaction between the two stencils is enhanced
Second, additional information about the threat appears in the Threat Properties window
The generated threat helps him understand potential design flaws. The STRIDE categorization gives him an idea on
potential attack vectors, while the additional description tells him exactly what’s wrong, along with potential ways
to mitigate it. He can use editable fields to write notes in the justification details or change priority ratings
depending on his organization’s bug bar.
Azure templates have additional details to help users understand not only what’s wrong, but also how to fix it by
adding descriptions, examples and hyperlinks to Azure-specific documentation.
The description made him realize the importance of adding an authentication mechanism to prevent users from
being spoofed, revealing the first threat to be worked on. A few minutes into the discussion with Cristina, they
understood the importance of implementing access control and roles. Ricardo filled in some quick notes to make
sure these were implemented.
As Ricardo went into the threats under Information Disclosure, he realized the access control plan required some
read-only accounts for audit and report generation. He wondered whether this should be a new threat, but the
mitigations were the same, so he noted the threat accordingly. He also thought about information disclosure a bit
more and realized that the backup tapes were going to need encryption, a job for the operations team.
Threats not applicable to the design due to existing mitigations or security guarantees can be changed to “Not
Applicable” from the Status drop-down. There are three other choices: Not Started (the default selection), Needs
Investigation (used to follow up on items), and Mitigated (once it’s fully worked on).
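If you track generated threats outside the tool, the status workflow above can be modeled as a small sketch (the record layout here is an illustrative assumption, not a tool export format):

```python
# The four status values the tool offers for each generated threat.
STATUSES = {"Not Started", "Needs Investigation", "Mitigated", "Not Applicable"}

def open_threats(threats):
    """Threats still requiring work: anything not mitigated or out of scope."""
    return [t for t in threats
            if t["status"] in {"Not Started", "Needs Investigation"}]

threats = [
    {"title": "Spoofing of human user", "status": "Mitigated"},
    {"title": "Audit log tampering", "status": "Needs Investigation"},
]
```

A filter like this gives a quick view of which threats still need attention before a model can be considered fully validated.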
Next Steps
Send your questions, comments and concerns to [email protected]. Download the Threat Modeling
Tool to get started.
Threat Modeling Tool feature overview
9/7/2017
The Threat Modeling Tool can help you with your threat modeling needs. For a basic introduction to the tool, see
Get started with the Threat Modeling Tool.
NOTE
The Threat Modeling Tool is updated frequently, so check this guide often to see our latest features and improvements.
To see the features currently available in the tool, use the threat model created by our team in the Get started
example.
Navigation
Before we discuss the built-in features, let's review the main components found in the tool.
Menu items
The experience is similar to other Microsoft products. Let's review the top-level menu items.
LABEL DETAILS
Edit Undo and redo actions, as well as copy, paste, and delete.
Design Opens the Design view, where you can create models.
Zoom in/Zoom out Zooms in and out of the diagram for a better view.
Canvas
The canvas is the space where you drag and drop elements. Drag and drop is the quickest and most efficient way to
build models. You can also right-click and select items from the menu to add generic versions of elements, as
shown:
Drop the stencil on the canvas
Data flow Binary, ALPC, HTTP, HTTPS/TLS/SSL, IOCTL, IPSec, named pipe,
RPC/DCOM, SMB, UDP
Notes/messages
COMPONENT DETAILS
Messages Internal tool logic that alerts users whenever there's an error,
such as no data flows between elements.
Element properties
Element properties vary by the elements you select. Apart from trust boundaries, all other elements contain three
general selections:
ELEMENT PROPERTY DETAILS
Reason for out of scope Justification field to let users know why out of scope was
selected.
Properties are changed under each element category. Select each element to inspect the available options. Or you
can open the template to learn more. Let's review the features.
Welcome screen
When you open the app, you see the Welcome screen.
Open a model
Hover over Open A Model to reveal two options: Open From This Computer and Open From OneDrive. The
first option opens the File Open screen. The second option takes you through the sign-in process for OneDrive.
After successful authentication, you can select folders and files.
Design view
When you open or create a new model, the Design view opens.
Add elements
You can add elements on the grid in two ways:
Drag and drop: Drag the desired element to the grid. Then use the element properties to provide additional
information.
Right-click: Right-click anywhere on the grid, and select items from the drop-down menu. A generic
representation of the element you select appears on the screen.
Connect elements
You can connect elements in two ways:
Drag and drop: Drag the desired dataflow to the grid, and connect both ends to the appropriate elements.
Click + Shift: Click the first element (sending data), press and hold the Shift key, and then select the second
element (receiving data). Right-click, and select Connect. If you use a bi-directional data flow, the order is not as
important.
Properties
To see the properties that can be modified on the stencils, select the stencil and the information populates
accordingly. The following example shows before and after a Database stencil is dragged onto the diagram:
Before
After
Messages
If you create a threat model and forget to connect data flows to elements, you get a notification. You can ignore the
message, or you can follow the instructions to fix the issue.
Notes
To add notes to your diagram, switch from the Messages tab to the Notes tab.
Analysis view
After you build your diagram, select the Analysis symbol (the magnifying glass) on the shortcuts toolbar to switch
to the Analysis view.
Generated threat selection
When you select a threat, you can use three distinct functions:
FEATURE INFORMATION
Read indicator The threat is marked as read, which helps you keep track
of the items you reviewed.
Priority change
You can change the priority level of each generated threat. Different colors make it easy to identify high-, medium-,
and low-priority threats.
Reports
After you finish changing priorities and updating the status of each generated threat, you can save the file and/or
print out a report. Go to Report > Create Full Report. Name the report, and you should see something similar to
the following image:
Next steps
To contribute a template for the community, go to our GitHub page.
To get started with the tool, go to the Download page.
Microsoft Threat Modeling Tool threats
8/25/2017 • 2 min to read
The Threat Modeling Tool is a core element of the Microsoft Security Development Lifecycle (SDL). It allows
software architects to identify and mitigate potential security issues early, when they are relatively easy and cost-
effective to resolve. As a result, it greatly reduces the total cost of development. Also, we designed the tool with
non-security experts in mind, making threat modeling easier for all developers by providing clear guidance on
creating and analyzing threat models.
The Threat Modeling Tool helps you answer certain questions, such as the ones below:
How can an attacker change the authentication data?
What is the impact if an attacker can read the user profile data?
What happens if access is denied to the user profile database?
STRIDE model
To better help you formulate these kinds of pointed questions, Microsoft uses the STRIDE model, which categorizes
different types of threats and simplifies the overall security conversations.
CATEGORY DESCRIPTION
Denial of Service Denial of service (DoS) attacks deny service to valid users—for
example, by making a Web server temporarily unavailable or
unusable. You must protect against certain types of DoS
threats simply to improve system availability and reliability
Elevation of Privilege An unprivileged user gains privileged access and thereby has
sufficient access to compromise or destroy the entire system.
Elevation of privilege threats include those situations in which
an attacker has effectively penetrated all system defenses and
become part of the trusted system itself, a dangerous
situation indeed
Next steps
Proceed to Threat Modeling Tool Mitigations to learn the different ways you can mitigate these threats with
Azure.
Microsoft Threat Modeling Tool mitigations
8/25/2017 • 2 min to read
The Threat Modeling Tool is a core element of the Microsoft Security Development Lifecycle (SDL). It allows
software architects to identify and mitigate potential security issues early, when they are relatively easy and cost-
effective to resolve. As a result, it greatly reduces the total cost of development. Also, we designed the tool with
non-security experts in mind, making threat modeling easier for all developers by providing clear guidance on
creating and analyzing threat models.
Visit the Threat Modeling Tool to get started today!
Mitigation categories
The Threat Modeling Tool mitigations are categorized according to the Web Application Security Frame, which
consists of the following:
CATEGORY DESCRIPTION
Auditing and Logging Who did what and when? Auditing and logging refer to how
your application records security-related events
Communication Security Who are you talking to? Communication Security ensures all
communication done is as secure as possible
Configuration Management Who does your application run as? Which databases does it
connect to? How is your application administered? How are
these settings secured? Configuration management refers to
how your application handles these operational issues
Cryptography How are you keeping secrets (confidentiality)? How are you
tamper-proofing your data or libraries (integrity)? How are
you providing seeds for random values that must be
cryptographically strong? Cryptography refers to how your
application enforces confidentiality and integrity
Exception Management When a method call in your application fails, what does your
application do? How much do you reveal? Do you return
friendly error information to end users? Do you pass valuable
exception information back to the caller? Does your
application fail gracefully?
Input Validation How do you know that the input your application receives is
valid and safe? Input validation refers to how your application
filters, scrubs, or rejects input before additional processing.
Consider constraining input through entry points and
encoding output through exit points. Do you trust data from
sources such as databases and file shares?
Sensitive Data How does your application handle sensitive data? Sensitive
data refers to how your application handles any data that
must be protected either in memory, over the network, or in
persistent stores
Session Management How does your application handle and protect user sessions?
A session refers to a series of related interactions between a
user and your Web application
Next steps
Visit Threat Modeling Tool Threats to learn more about the threat categories the tool uses to generate possible
design threats.
Security Frame: Auditing and Logging | Mitigations
8/22/2017 • 8 min to read
PRODUCT/SERVICE ARTICLE
Attributes N/A
References N/A
Attributes N/A
References N/A
Attributes N/A
References N/A
TITLE DETAILS
Ensure that the application does not log sensitive user data
TITLE DETAILS
Attributes N/A
References N/A
Steps Check that you do not log any sensitive data that a user
submits to your site. Check for intentional logging as well
as side effects caused by design issues. Examples of
sensitive data include:
User Credentials
Social Security number or other identifying information
Credit card numbers or other financial information
Health information
Private keys or other data that could be used to
decrypt encrypted information
System or application information that can be used to
more effectively attack the application
Attributes N/A
References N/A
Attributes N/A
References N/A
Attributes N/A
References N/A
References N/A
Component Database
Attributes N/A
TITLE DETAILS
Component Database
Attributes N/A
Steps For each storage account, you can enable Azure Storage
Analytics to perform logging and store metrics data. The
storage analytics logs provide important information, such
as the authentication method used when someone
accesses storage.
This can be really helpful if you are tightly guarding access
to storage. For example, in Blob storage you can set all of
the containers to private and implement the use of an SAS
service throughout your applications. Then you can check
the logs regularly to see whether your blobs are accessed using
the storage account keys, which may indicate a breach of
security, or whether the blobs are public but shouldn't be.
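As a sketch of how logging can be turned on programmatically (using the classic WindowsAzure.Storage client library; the connection string and retention period are placeholders to adjust):

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Shared.Protocol;

// Enable Storage Analytics logging for the Blob service of a storage account.
CloudStorageAccount account = CloudStorageAccount.Parse("<storage-connection-string>");
var blobClient = account.CreateCloudBlobClient();

ServiceProperties properties = blobClient.GetServiceProperties();
properties.Logging.LoggingOperations = LoggingOperations.All; // log reads, writes, and deletes
properties.Logging.RetentionDays = 90;                        // keep logs for 90 days
properties.Logging.Version = "1.0";
blobClient.SetServiceProperties(properties);
```

The same pattern applies to the Queue and Table service clients; analytics can also be enabled per service in the Azure portal.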
Component WCF
Attributes N/A
Steps The lack of a proper audit trail after a security incident can
hamper forensic efforts. Windows Communication
Foundation (WCF) offers the ability to log successful
and/or failed authentication attempts.
Logging failed authentication attempts can warn
administrators of potential brute-force attacks. Similarly,
logging successful authentication events can provide a
useful audit trail when a legitimate account is
compromised. Enable WCF's service security audit feature
Example
The following is an example configuration with auditing enabled:
<system.serviceModel>
<behaviors>
<serviceBehaviors>
<behavior name="NewBehavior">
<serviceSecurityAudit auditLogLocation="Default"
suppressAuditFailure="false"
serviceAuthorizationAuditLevel="SuccessAndFailure"
messageAuthenticationAuditLevel="SuccessAndFailure" />
...
</behavior>
</serviceBehaviors>
</behaviors>
</system.serviceModel>
Implement sufficient Audit Failure Handling
TITLE DETAILS
Component WCF
Attributes N/A
Example
The <behavior/> element of the WCF configuration file below instructs WCF to not notify the application when
WCF fails to write to an audit log.
<behaviors>
<serviceBehaviors>
<behavior name="NewBehavior">
<serviceSecurityAudit auditLogLocation="Application"
suppressAuditFailure="true"
serviceAuthorizationAuditLevel="Success"
messageAuthenticationAuditLevel="Success" />
</behavior>
</serviceBehaviors>
</behaviors>
Configure WCF to notify the program whenever it is unable to write to an audit log. The program should have an
alternative notification scheme in place to alert the organization that audit trails are not being maintained.
Attributes N/A
References N/A
TITLE DETAILS
Steps Enable auditing and logging on Web APIs. Audit logs should
capture user context. Identify all important events and log
those events. Implement centralized logging
Attributes N/A
References N/A
Attributes N/A
PRODUCT/SERVICE ARTICLE
Azure Event Hub Use per-device authentication credentials using SAS
tokens
Service Fabric Trust Boundary Restrict anonymous access to Service Fabric Cluster
Ensure that Service Fabric client-to-node certificate is
different from node-to-node certificate
Use AAD to authenticate clients to service fabric
clusters
Ensure that service fabric certificates are obtained from
an approved Certificate Authority (CA)
Machine Trust Boundary Ensure that deployed application's binaries are digitally
signed
IoT Cloud Gateway Ensure that devices connecting to Cloud gateway are
authenticated
Use per-device authentication credentials
Azure Storage Ensure that only the required containers and blobs are
given anonymous read access
Grant limited access to objects in Azure storage using
SAS or SAP
Attributes N/A
References N/A
TITLE DETAILS
Attributes N/A
References N/A
Attributes N/A
References N/A
Attributes N/A
References N/A
Details The first solution is to grant access to the administrative
interface only from a certain source IP range. If that solution is
not possible, it is recommended to enforce
step-up or adaptive authentication for logging in to the
administrative interface
Attributes N/A
References N/A
Details First, verify that forgot-password and other
recovery paths send a link including a time-limited
activation token rather than the password itself. Additional
authentication based on soft tokens (e.g., an SMS token or a
native mobile application) can be required as well
before the link is sent. Second, do not lock
out a user's account while the process of getting a new
password is in progress; doing so could enable a denial-of-service
attack in which an attacker intentionally locks out users with an
automated attack. Third, whenever a new password
request is in progress, the message you display
should be generalized in order to prevent username
enumeration. Fourth, always disallow the use of old
passwords and implement a strong password policy.
Attributes N/A
References N/A
TITLE DETAILS
Attributes N/A
References N/A
Component Database
Component Database
Component Database
Attributes N/A
Component Database
Attributes N/A
TITLE DETAILS
Attributes N/A
Attributes N/A
Attributes N/A
Attributes N/A
References N/A
Component WCF
Attributes N/A
References MSDN
Example
The <netMsmqBinding/> element of the WCF configuration file below instructs WCF to disable authentication when
connecting to an MSMQ queue for message delivery.
<bindings>
<netMsmqBinding>
<binding>
<security>
<transport msmqAuthenticationMode="None" />
</security>
</binding>
</netMsmqBinding>
</bindings>
Configure MSMQ to require Windows Domain or Certificate authentication at all times for any incoming or
outgoing messages.
Example
The <netMsmqBinding/> element of the WCF configuration file below instructs WCF to enable certificate
authentication when connecting to an MSMQ queue. The client is authenticated using X.509 certificates. The client
certificate must be present in the certificate store of the server.
<bindings>
<netMsmqBinding>
<binding>
<security>
<transport msmqAuthenticationMode="Certificate" />
</security>
</binding>
</netMsmqBinding>
</bindings>
Component WCF
Example
<message clientCredentialType="Certificate"/>
Component WCF
Example
<transport clientCredentialType="Certificate"/>
Attributes N/A
Component Azure AD
Attributes N/A
Component Azure AD
Attributes N/A
Component Azure AD
Attributes N/A
Example
Here is a sample implementation of the ITokenReplayCache interface. (Please customize and implement your
project-specific caching framework)
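A minimal sketch of such an implementation, backed here by System.Runtime.Caching.MemoryCache (the backing store is illustrative; substitute your project's caching framework, and note the interface namespace varies by identity-library version):

```csharp
using System;
using System.Runtime.Caching;
using Microsoft.IdentityModel.Tokens;

public class TokenReplayCache : ITokenReplayCache
{
    private readonly MemoryCache _cache = MemoryCache.Default;

    public bool TryAdd(string securityToken, DateTime expiresOn)
    {
        // MemoryCache.Add returns false if the key already exists,
        // i.e., the token has been seen before (a replay).
        return _cache.Add(securityToken, securityToken, expiresOn);
    }

    public bool TryFind(string securityToken)
    {
        return _cache.Contains(securityToken);
    }
}
```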
The implemented cache has to be referenced in OIDC options via the "TokenValidationParameters" property as
follows.
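A hedged sketch of that wiring in an OWIN startup class (option and property names follow the OpenID Connect middleware; adapt them to your authentication stack):

```csharp
app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
{
    // ... ClientId, Authority, and other options elided ...
    TokenValidationParameters = new TokenValidationParameters
    {
        // Reference your ITokenReplayCache implementation here.
        TokenReplayCache = new TokenReplayCache()
    }
});
```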
Please note that to test the effectiveness of this configuration, log in to your local OIDC-protected application and
capture the request to the "/signin-oidc" endpoint in Fiddler. When the protection is not in place, replaying this
request in Fiddler will set a new session cookie. When the request is replayed after the TokenReplayCache
protection is added, the application will throw an exception as follows:
SecurityTokenReplayDetectedException: IDX10228: The securityToken has previously been validated, securityToken:
'eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik1uQ19WWmNBVGZNNXBPWWlKSE1iYTlnb0VLWSIsImtpZCI6Ik1uQ1......
Component Azure AD
Attributes N/A
References ADAL
Attributes N/A
References N/A
References N/A, Azure IoT Hub with .NET, Getting started with IoT Hub and
Node.js, Securing IoT with SAS and certificates, Git repository
Example
static DeviceClient deviceClient;
await deviceClient.SendEventAsync(message);
Example
Node.js: Authentication
Symmetric key
Create an IoT hub on Azure
Create an entry in the device identity registry:

```javascript
var device = new iothub.Device(null);
device.deviceId = '<deviceId>';
registry.create(device, function(err, deviceInfo, res) {});
```

Create a simulated device:

```javascript
var clientFromConnectionString = require('azure-iot-device-amqp').clientFromConnectionString;
var Message = require('azure-iot-device').Message;
var connectionString = 'HostName=<HostName>;DeviceId=<DeviceId>;SharedAccessKey=<SharedAccessKey>';
var client = clientFromConnectionString(connectionString);
```

SAS Token
A SAS token is generated internally when using a symmetric key, but it can also be generated and used explicitly.
Define a protocol:

```javascript
var Http = require('azure-iot-device-http').Http;
```

Create a SAS token:

```javascript
resourceUri = encodeURIComponent(resourceUri.toLowerCase()).toLowerCase();
var deviceName = "";
var expires = (Date.now() / 1000) + expiresInMins * 60;
var toSign = resourceUri + '\n' + expires;
// using crypto
var decodedPassword = new Buffer(signingKey, 'base64').toString('binary');
const hmac = crypto.createHmac('sha256', decodedPassword);
hmac.update(toSign);
var base64signature = hmac.digest('base64');
var base64UriEncoded = encodeURIComponent(base64signature);
// construct authorization string
var token = "SharedAccessSignature sr=" + resourceUri + "%2fdevices%2f" + deviceName +
    "&sig=" + base64UriEncoded + "&se=" + expires;
if (policyName) token += "&skn=" + policyName;
return token;
```

Connect using the SAS token:

```javascript
Client.fromSharedAccessSignature(sas, Http);
```

Certificates
Generate a self-signed X.509 certificate using any tool, such as OpenSSL, to generate .cert and .key files to store
the certificate and the key, respectively.
Provision a device that accepts a secured connection using certificates:

```javascript
var connectionString = '<connectionString>';
var registry = iothub.Registry.fromConnectionString(connectionString);
var deviceJSON = {
    deviceId: "<deviceId>",
    authentication: {
        x509Thumbprint: {
            primaryThumbprint: "<PrimaryThumbprint>",
            secondaryThumbprint: "<SecondaryThumbprint>"
        }
    }
};
var device = deviceJSON;
registry.create(device, function (err) {});
```

Connect a device using a certificate:

```javascript
var Protocol = require('azure-iot-device-http').Http;
var Client = require('azure-iot-device').Client;
var connectionString = 'HostName=<HostName>;DeviceId=<DeviceId>;x509=true';
var client = Client.fromConnectionString(connectionString, Protocol);
var options = {
    key: fs.readFileSync('./key.pem', 'utf8'),
    cert: fs.readFileSync('./server.crt', 'utf8')
};
// Calling setOptions with the x509 certificate and key (and optionally, passphrase)
// will configure the client transport to use x509 when connecting to IoT Hub
client.setOptions(options);
// call fn to execute after the connection is set up
client.open(fn);
```
Ensure that only the required containers and blobs are given
anonymous read access
TITLE DETAILS
Attributes N/A
PRODUCT/SERVICE ARTICLE
Machine Trust Boundary Ensure that proper ACLs are configured to restrict
unauthorized access to data on the device
Ensure that sensitive user-specific application content
is stored in user-profile directory
Ensure that the deployed applications are run with
least privileges
Azure Event Hub Use a send-only permissions SAS Key for generating
device tokens
Do not use access tokens that provide direct access to
the Event Hub
Connect to Event Hub using SAS keys that have the
minimum permissions required
Service Fabric Trust Boundary Restrict client's access to cluster operations using RBAC
Dynamics CRM Perform security modeling and use Field Level Security
where required
Attributes N/A
References N/A
Attributes N/A
References N/A
Ensure that the deployed applications are run with least privileges
TITLE DETAILS
Attributes N/A
References N/A
Attributes N/A
References N/A
TITLE DETAILS
Attributes N/A
References N/A
Attributes N/A
References N/A
TITLE DETAILS
Attributes N/A
References N/A
Example
SELECT data
FROM personaldata
WHERE userID = :id -- session variable
Now a potential attacker cannot tamper with the application's operation, since the identifier used to retrieve the
data is handled server-side.
Ensure that content and resources are not enumerable or accessible via
forceful browsing
TITLE DETAILS
Attributes N/A
References N/A
Component Database
Attributes N/A
Component Database
Please note that RLS as an out-of-the-box database feature is applicable only to SQL Server 2016 (and later) and
Azure SQL Database. If the out-of-the-box RLS feature is not used, it should be ensured that data access is
restricted using views and stored procedures
Component Database
Attributes N/A
Attributes N/A
Do not use access tokens that provide direct access to the Event Hub
TITLE DETAILS
Attributes N/A
Steps A token that grants direct access to the event hub should not
be given to the device. Using a least privileged token for the
device that gives access only to a publisher would help identify
and blacklist it if found to be a rogue or compromised device.
Connect to Event Hub using SAS keys that have the minimum
permissions required
TITLE DETAILS
Attributes N/A
Attributes N/A
References N/A
Attributes N/A
Perform security modeling and use Field Level Security where required
TITLE DETAILS
Attributes N/A
References N/A
Steps Perform security modeling and use Field Level Security where
required
Attributes N/A
References N/A
Attributes N/A
Attributes N/A
References N/A
TITLE DETAILS
Component WCF
Attributes N/A
Steps The system uses a weak class reference, which might allow
an attacker to execute unauthorized code. The program
references a user-defined class that is not uniquely
identified. When .NET loads this weakly identified class, the
CLR type loader searches for the class in the following
locations in the specified order:
1. If the assembly of the type is known, the loader
searches the configuration file's redirect locations, GAC,
the current assembly using configuration information,
and the application base directory
2. If the assembly is unknown, the loader searches the
current assembly, mscorlib, and the location returned
by the TypeResolve event handler
3. This CLR search order can be modified with hooks such
as the Type Forwarding mechanism and the
AppDomain.TypeResolve event
If an attacker exploits the CLR search order by creating an
alternative class with the same name and placing it in an
alternative location that the CLR will load first, the CLR will
unintentionally execute the attacker-supplied code
Example
The <behaviorExtensions/> element of the WCF configuration file below instructs WCF to add a custom behavior
class to a particular WCF extension.
<system.serviceModel>
<extensions>
<behaviorExtensions>
<add name="myBehavior" type="MyBehavior" />
</behaviorExtensions>
</extensions>
</system.serviceModel>
Using fully qualified (strong) names uniquely identifies a type and further increases security of your system. Use
fully qualified assembly names when registering types in the machine.config and app.config files.
Example
The <behaviorExtensions/> element of the WCF configuration file below instructs WCF to add strongly-referenced
custom behavior class to a particular WCF extension.
<system.serviceModel>
<extensions>
<behaviorExtensions>
<add name="myBehavior" type="Microsoft.ServiceModel.Samples.MyBehaviorSection, MyBehavior,
Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
</behaviorExtensions>
</extensions>
</system.serviceModel>
Component WCF
Attributes N/A
Example
The following configuration instructs WCF to not check the authorization level of the client when executing the
service:
<behaviors>
<serviceBehaviors>
<behavior>
...
<serviceAuthorization principalPermissionMode="None" />
</behavior>
</serviceBehaviors>
</behaviors>
Use a service authorization scheme to verify that the caller of the service method is authorized to do so. WCF
provides two modes and allows the definition of a custom authorization scheme. The UseWindowsGroups mode
uses Windows roles and users and the UseAspNetRoles mode uses an ASP.NET role provider, such as SQL Server,
to authenticate.
Example
The following configuration instructs WCF to make sure that the client is part of the Administrators group before
executing the Add service:
<behaviors>
<serviceBehaviors>
<behavior>
...
<serviceAuthorization principalPermissionMode="UseWindowsGroups" />
</behavior>
</serviceBehaviors>
</behaviors>
Example
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, Inherited = true, AllowMultiple = true)]
public class ApiAuthorizeAttribute : System.Web.Http.AuthorizeAttribute
{
    public override async Task OnAuthorizationAsync(HttpActionContext actionContext, CancellationToken cancellationToken)
    {
        if (actionContext == null)
        {
            throw new ArgumentNullException(nameof(actionContext));
        }
        if (!string.IsNullOrEmpty(base.Roles))
        {
            bool isAuthorized = ValidateRoles(actionContext);
            if (!isAuthorized)
            {
                HandleUnauthorizedRequest(actionContext);
            }
        }
        await base.OnAuthorizationAsync(actionContext, cancellationToken);
    }
}
All controllers and action methods that need to be protected should be decorated with the above attribute.
[ApiAuthorize]
public class CustomController : ApiController
{
//Application code goes here
}
Attributes N/A
References N/A
TITLE DETAILS
Steps The device should authorize the caller, checking whether the
caller has the required permissions to perform the
requested action. For example, say the device is a smart door
lock that can be monitored from the cloud, and it also
provides functionality such as remotely locking the door.
The smart door lock provides unlocking functionality only
when someone physically comes near the door with a
card. In this case, the implementation of the remote
command and control should be done in such a way that
it does not provide any functionality to unlock the door, as
the cloud gateway is not authorized to send a command
to unlock the door.
Attributes N/A
References N/A
Steps The field gateway should authorize the caller, checking whether the
caller has the required permissions to perform the requested
action. For example, there should be different permissions for
an admin user interface/API used to configure a field gateway
versus the devices that connect to it.
Security Frame: Communication Security | Mitigations
8/22/2017 • 14 min to read
PRODUCT/SERVICE ARTICLE
Dynamics CRM Check service account privileges and check that the
custom Services or ASP.NET Pages respect CRM's
security
Identity Server Ensure that all traffic to Identity Server is over HTTPS
connection
Web API Force all traffic to Web APIs over HTTPS connection
Attributes N/A
Check service account privileges and check that the custom Services or
ASP.NET Pages respect CRM's security
TITLE DETAILS
Attributes N/A
References N/A
Steps Check service account privileges and check that the custom
Services or ASP.NET Pages respect CRM's security
References Moving data between On Prem and Azure Data Factory, Data
management gateway
Attributes N/A
Attributes N/A
References N/A
Steps Applications that use SSL, TLS, or DTLS must fully verify
the X.509 certificates of the entities they connect to. This
includes verification of the certificates for:
Domain name
Validity dates (both beginning and expiration dates)
Revocation status
Usage (for example, Server Authentication for servers,
Client Authentication for clients)
Trust chain. Certificates must chain to a root
certification authority (CA) that is trusted by the
platform or explicitly configured by the administrator
Key length of certificate's public key must be >2048
bits
Hashing algorithm must be SHA256 and above
Steps By default, Azure already enables HTTPS for every app with a
wildcard certificate for the *.azurewebsites.net domain.
However, like all wildcard domains, it is not as secure as using
a custom domain with its own certificate. It is
recommended to enable SSL for the custom domain through which the
deployed app will be accessed
Example
The following example contains a basic URL Rewrite rule that forces all incoming traffic to use HTTPS
This rule works by returning an HTTP status code of 301 (permanent redirect) when the user requests a page using
HTTP. The 301 redirects the request to the same URL as the visitor requested, but replaces the HTTP portion of the
request with HTTPS. For example, HTTP://contoso.com would be redirected to HTTPS://contoso.com.
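A rule of this shape can be sketched in web.config as follows (assuming the IIS URL Rewrite module is installed; the rule name is illustrative):

```xml
<system.webServer>
  <rewrite>
    <rules>
      <rule name="Force HTTPS" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTPS}" pattern="off" ignoreCase="true" />
        </conditions>
        <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

The `Permanent` redirect type is what produces the HTTP 301 status code described above.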
Attributes N/A
Component Database
Component Database
Attributes N/A
References Azure File Storage, Azure File Storage SMB Support for
Windows Clients
Steps Azure File Storage supports HTTPS when using the REST API,
but is more commonly used as an SMB file share attached to a
VM. SMB 2.1 does not support encryption, so connections are
only allowed within the same region in Azure. However, SMB
3.0 supports encryption, and can be used with Windows
Server 2012 R2, Windows 8, Windows 8.1, and Windows 10,
allowing cross-region access and even access on the desktop.
Attributes N/A
Example
using System;
using System.Net;
using System.Net.Security;
using System.Security.Cryptography;
namespace CertificatePinningExample
{
    class CertificatePinningExample
    {
        static void Main(string[] args)
        {
            /* Note: In this example, we're hardcoding the certificate's public key and algorithm for
               demonstration purposes. In a real-world application, this should be stored in a secure
               configuration area that can be updated as needed. */
            // The request target below is illustrative; the pinned public key is checked via the
            // request's ServerCertificateValidationCallback before the response is returned.
            var request = (HttpWebRequest)WebRequest.Create("https://www.contoso.com");
            try
            {
                var response = (HttpWebResponse)request.GetResponse();
                Console.WriteLine($"Success, HTTP status code: {response.StatusCode}");
            }
            catch (Exception ex)
            {
                Console.WriteLine($"Failure, {ex.Message}");
            }
            Console.WriteLine("Press any key to end.");
            Console.ReadKey();
        }
    }
}
Component WCF
Attributes N/A
Component WCF
Attributes N/A
References MSDN
Example
Configuring the service and the operation to only sign the message is shown in the following examples. The
following is an example of using ProtectionLevel.Sign at the ServiceContract level:
[ServiceContract(ProtectionLevel = ProtectionLevel.Sign)]
public interface IService
{
string GetData(int value);
}
Example
Operation Contract Example of ProtectionLevel.Sign (for Granular Control): The following is an example of using
ProtectionLevel.Sign at the OperationContract level:
[OperationContract(ProtectionLevel = ProtectionLevel.Sign)]
string GetData(int value);
Component WCF
Attributes N/A
References MSDN
Attributes N/A
Example
The following code shows a Web API authentication filter that checks for SSL:
public class RequireHttpsAttribute : AuthorizationFilterAttribute
{
public override void OnAuthorization(HttpActionContext actionContext)
{
if (actionContext.Request.RequestUri.Scheme != Uri.UriSchemeHttps)
{
actionContext.Response = new HttpResponseMessage(System.Net.HttpStatusCode.Forbidden)
{
ReasonPhrase = "HTTPS Required"
};
}
else
{
base.OnAuthorization(actionContext);
}
}
}
Add this filter to any Web API actions that require SSL:
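For example (the controller and action names are illustrative):

```csharp
public class ValuesController : ApiController
{
    [RequireHttps] // applies the filter above to this action only
    public HttpResponseMessage Get()
    {
        return Request.CreateResponse(HttpStatusCode.OK, "secure data");
    }
}
```

The attribute can also be applied at the controller level, or registered globally in `HttpConfiguration.Filters` to enforce SSL across all actions.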
Attributes N/A
Steps Redis server does not support SSL out of the box, but Azure
Redis Cache does. If you are connecting to Azure Redis Cache
and your client supports SSL, like StackExchange.Redis, then
you should use SSL. By default non-SSL port is disabled for
new Azure Redis Cache instances. Ensure that the secure
defaults are not changed unless there is a dependency on SSL
support for redis clients.
Please note that Redis is designed to be accessed by trusted clients inside trusted environments. This means that
usually it is not a good idea to expose the Redis instance directly to the internet or, in general, to an environment
where untrusted clients can directly access the Redis TCP port or UNIX socket.
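As a sketch with StackExchange.Redis (the host name and access key below are placeholders), an SSL connection targets the SSL port 6380 with `ssl=True` in the configuration string:

```csharp
using StackExchange.Redis;

// Connect to Azure Redis Cache over SSL; the non-SSL port (6379) stays disabled.
var connection = ConnectionMultiplexer.Connect(
    "contoso.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");
IDatabase cache = connection.GetDatabase();
cache.StringSet("greeting", "hello over TLS");
```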
Attributes N/A
References N/A
Attributes N/A
PRODUCT/SERVICE ARTICLE
Web API Ensure that only trusted origins are allowed if CORS is
enabled on ASP.NET Web API
Encrypt sections of Web API's configuration files that
contain sensitive data
IoT Device Ensure that all admin interfaces are secured with
strong credentials
Ensure that unknown code cannot execute on devices
Encrypt OS and additional partitions of IoT Device with
bit-locker
Ensure that only the minimum services/features are
enabled on devices
IoT Cloud Gateway Ensure that the Cloud Gateway implements a process
to keep the connected devices firmware up to date
Machine Trust Boundary Ensure that devices have end-point security controls
configured as per organizational policies
Attributes N/A
Example
Example policy:
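One way such a policy can be expressed as a response header ('self' refers to the application's own origin; the analytics host is the standard Google Analytics script host, shown for illustration):

```
Content-Security-Policy: default-src 'self'; script-src 'self' https://www.google-analytics.com
```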
This policy allows scripts to load only from the web application's server and the Google Analytics server. Scripts loaded
from any other site will be rejected. When CSP is enabled on a website, the following features are automatically
disabled to mitigate XSS attacks.
Example
Inline scripts will not execute. The following are examples of inline scripts that are blocked:
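For instance (hypothetical markup), each of these inline forms is refused once CSP is active:

```html
<!-- inline script block -->
<script type="text/javascript">
    alert('this inline script will not run under CSP');
</script>

<!-- inline event handler -->
<button onclick="doSomething()">Click me</button>

<!-- javascript: URI -->
<a href="javascript:void(0)">Link</a>
```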
Attributes N/A
Attributes N/A
TITLE DETAILS
Attributes N/A
References N/A
Attributes N/A
Example
The X-FRAME-OPTIONS header can be set via IIS web.config. Web.config code snippet for sites that should never be
framed:
<system.webServer>
<httpProtocol>
<customHeaders>
<add name="X-FRAME-OPTIONS" value="DENY"/>
</customHeaders>
</httpProtocol>
</system.webServer>
Example
Web.config code for sites that should only be framed by pages in the same domain:
<system.webServer>
<httpProtocol>
<customHeaders>
<add name="X-FRAME-OPTIONS" value="SAMEORIGIN"/>
</customHeaders>
</httpProtocol>
</system.webServer>
Attributes N/A
References N/A
TITLE DETAILS
Example
If access to Web.config is available, then CORS can be added through the following code:
<system.webServer>
<httpProtocol>
<customHeaders>
<clear />
<add name="Access-Control-Allow-Origin" value="http://example.com" />
</customHeaders>
</httpProtocol>
Example
If access to web.config is not available, then CORS can be configured by adding the following CSharp code:
HttpContext.Response.AppendHeader("Access-Control-Allow-Origin", "http://example.com")
Please note that it is critical to ensure that the list of origins in the "Access-Control-Allow-Origin" header is set to a
finite and trusted set of origins. Configuring this inappropriately (e.g., setting the value to '*') allows
malicious sites to trigger cross-origin requests to the web application without any restrictions, thereby making the
application vulnerable to CSRF attacks.
Attributes N/A
Example
However, this feature can be disabled at page level:
<configuration>
<system.web>
<pages validateRequest="false" />
</system.web>
</configuration>
Please note that the Request Validation feature is not supported in, and is not part of, the MVC 6 pipeline.
Attributes N/A
References N/A
TITLE DETAILS
Attributes N/A
Example
Add the header in the web.config file if the application is hosted by Internet Information Services (IIS) 7 onwards.
<system.webServer>
<httpProtocol>
<customHeaders>
<add name="X-Content-Type-Options" value="nosniff"/>
</customHeaders>
</httpProtocol>
</system.webServer>
Example
Add the header through the global Application_BeginRequest
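A minimal sketch of this approach in Global.asax.cs (assuming an ASP.NET application) might look like:

```csharp
// Global.asax.cs - adds the header to every response; a sketch, not the official sample
protected void Application_BeginRequest(object sender, EventArgs e)
{
    Response.AddHeader("X-Content-Type-Options", "nosniff");
}
```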
Example
Implement custom HTTP module
Example
You can enable the required header only for specific pages by adding it to individual responses:
this.Response.Headers["X-Content-Type-Options"] = "nosniff";
Component Database
Attributes N/A
Example
In the App_Start/WebApiConfig.cs, add the following code to the WebApiConfig.Register method
using System.Web.Http;
namespace WebService
{
public static class WebApiConfig
{
public static void Register(HttpConfiguration config)
{
// New code
config.EnableCors();
config.Routes.MapHttpRoute(
name: "DefaultApi",
routeTemplate: "api/{controller}/{id}",
defaults: new { id = RouteParameter.Optional }
);
}
}
}
Example
The EnableCors attribute can be applied to action methods in a controller as follows:
public class ResourcesController : ApiController
{
[EnableCors("http://localhost:55912", // Origin
null, // Request headers
"GET", // HTTP methods
"bar", // Response headers
SupportsCredentials=true // Allow credentials
)]
public HttpResponseMessage Get(int id)
{
var resp = Request.CreateResponse(HttpStatusCode.NoContent);
resp.Headers.Add("bar", "a bar value");
return resp;
}
[EnableCors("http://localhost:55912", // Origin
"Accept, Origin, Content-Type", // Request headers
"PUT", // HTTP methods
PreflightMaxAge=600 // Preflight cache duration
)]
public HttpResponseMessage Put(Resource data)
{
return Request.CreateResponse(HttpStatusCode.OK, data);
}
[EnableCors("http://localhost:55912", // Origin
"Accept, Origin, Content-Type", // Request headers
"POST", // HTTP methods
PreflightMaxAge=600 // Preflight cache duration
)]
public HttpResponseMessage Post(Resource data)
{
return Request.CreateResponse(HttpStatusCode.OK, data);
}
}
Please note that it is critical to ensure that the list of origins in the EnableCors attribute is set to a finite and trusted set
of origins. Configuring this inappropriately (e.g., setting the value to '*') allows malicious sites to trigger
cross-origin requests to the API without any restrictions, thereby making the API vulnerable to CSRF attacks.
The EnableCors attribute can also be applied at the controller level.
Example
To disable CORS on a particular method in a class, the DisableCors attribute can be used as shown below:
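The snippet is elided in the source; a plausible sketch, with hypothetical controller and origin values, is:

```csharp
// CORS is enabled for the whole controller, then selectively disabled for one action
[EnableCors(origins: "http://example.com", headers: "*", methods: "*")]
public class ItemsController : ApiController
{
    public HttpResponseMessage GetAll()
    {
        return Request.CreateResponse(HttpStatusCode.OK);
    }

    // Cross-origin requests to this action are rejected
    [DisableCors]
    public HttpResponseMessage PutItem(int id)
    {
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}
```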
Attributes N/A
Approach-1 Enabling CORS with middleware: To enable CORS for the entire application add the CORS
middleware to the request pipeline using the UseCors extension method. A cross-origin policy can be specified
when adding the CORS middleware using the CorsPolicyBuilder class. There are two ways to do this:
Example
The first is to call UseCors with a lambda. The lambda takes a CorsPolicyBuilder object:
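A sketch of the lambda form (the origin is a placeholder):

```csharp
// Startup.Configure - allow cross-origin requests from a single trusted origin
app.UseCors(builder =>
    builder.WithOrigins("http://example.com"));
```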
Example
The second is to define one or more named CORS policies, and then select the policy by name at run time.
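A sketch of the named-policy form (policy name and origin are assumptions):

```csharp
// Startup.cs - register a named policy, then select it when adding the middleware
public void ConfigureServices(IServiceCollection services)
{
    services.AddCors(options =>
    {
        options.AddPolicy("AllowSpecificOrigin",
            builder => builder.WithOrigins("http://example.com"));
    });
}

public void Configure(IApplicationBuilder app)
{
    app.UseCors("AllowSpecificOrigin");
}
```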
Approach-2 Enabling CORS in MVC: Developers can alternatively use MVC to apply specific CORS per action, per
controller, or globally for all controllers.
Example
Per action: To specify a CORS policy for a specific action add the [EnableCors] attribute to the action. Specify the
policy name.
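A sketch, assuming a policy named AllowSpecificOrigin has already been registered:

```csharp
[EnableCors("AllowSpecificOrigin")]
public IActionResult Index()
{
    return View();
}
```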
Example
Per controller:
[EnableCors("AllowSpecificOrigin")]
public class HomeController : Controller
{
Example
Globally:
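The global snippet is absent from the source; in the ASP.NET Core MVC API of that era (treat this as a sketch), a registered policy can be applied to all controllers by adding a CORS authorization filter globally:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.Configure<MvcOptions>(options =>
    {
        // Applies the "AllowSpecificOrigin" policy to every controller action
        options.Filters.Add(new CorsAuthorizationFilterFactory("AllowSpecificOrigin"));
    });
}
```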
Please note that it is critical to ensure that the list of origins in the EnableCors attribute is set to a finite and trusted set
of origins. Configuring this inappropriately (e.g., setting the value to '*') allows malicious sites to trigger
cross-origin requests to the API without any restrictions, thereby making the API vulnerable to CSRF attacks.
Example
To disable CORS for a controller or action, use the [DisableCors] attribute.
[DisableCors]
public IActionResult About()
{
return View();
}
Attributes N/A
Ensure that all admin interfaces are secured with strong credentials
TITLE DETAILS
Attributes N/A
References N/A
Attributes N/A
Steps UEFI Secure Boot restricts the system to only allow execution
of binaries signed by a specified authority. This feature
prevents unknown code, which could weaken the platform's
security posture, from being executed. Enable
UEFI Secure Boot and restrict the list of certificate authorities
that are trusted for signing code. Sign all code that is deployed
on the device using one of the trusted authorities.
Attributes N/A
References N/A
Attributes N/A
References N/A
Attributes N/A
References N/A
Ensure that the default login credentials of the field gateway are
changed during installation
TITLE DETAILS
Attributes N/A
References N/A
Steps Ensure that the default login credentials of the field gateway
are changed during installation
Steps LWM2M is a protocol from the Open Mobile Alliance for IoT
device management. Azure IoT device management allows
interaction with physical devices using device jobs. Ensure that
the Cloud Gateway implements a process to routinely keep
the device and other configuration data up to date using
Azure IoT Hub Device Management.
Attributes N/A
References N/A
Attributes N/A
Attributes N/A
Component WCF
Attributes N/A
Example
The following is an example configuration with throttling enabled:
<system.serviceModel>
<behaviors>
<serviceBehaviors>
<behavior name="Throttled">
<serviceThrottling maxConcurrentCalls="[YOUR SERVICE VALUE]" maxConcurrentSessions="[YOUR SERVICE VALUE]"
maxConcurrentInstances="[YOUR SERVICE VALUE]" />
...
</system.serviceModel>
Component WCF
Attributes N/A
Steps Metadata can help attackers learn about the system and plan
a form of attack. WCF services can be configured to expose
metadata. Metadata gives detailed service description
information and should not be broadcast in production
environments. The HttpGetEnabled / HttpsGetEnabled
properties of the ServiceMetadataBehavior class define
whether a service will expose its metadata.
Example
The code below instructs WCF to broadcast a service's metadata
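The snippet is missing from the source; a representative configuration (the behavior name is a placeholder) that exposes metadata would be:

```xml
<behaviors>
  <serviceBehaviors>
    <behavior name="MetadataEnabled">
      <!-- Publishes service metadata over HTTP GET -->
      <serviceMetadata httpGetEnabled="true" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```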
Do not broadcast service metadata in a production environment. Set the HttpGetEnabled / HttpsGetEnabled
properties of the ServiceMetadataBehavior class to false.
Example
The code below instructs WCF to not broadcast a service's metadata.
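The corresponding snippet is also missing; a sketch of the hardened configuration (behavior name is a placeholder) is:

```xml
<behaviors>
  <serviceBehaviors>
    <behavior name="MetadataDisabled">
      <!-- Metadata is not published over HTTP or HTTPS GET -->
      <serviceMetadata httpGetEnabled="false" httpsGetEnabled="false" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```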
PRODUCT/SERVICE ARTICLE
Web Application Use only approved symmetric block ciphers and key
lengths
Use approved block cipher modes and initialization
vectors for symmetric ciphers
Use approved asymmetric algorithms, key lengths, and
padding
Use approved random number generators
Do not use symmetric stream ciphers
Use approved MAC/HMAC/keyed hash algorithms
Use only approved cryptographic hash functions
Dynamics CRM Mobile Client Ensure a device management policy is in place that
requires use of a PIN and allows remote wiping
Dynamics CRM Outlook Client Ensure a device management policy is in place that
requires a PIN/password/auto-lock and encrypts all
data (e.g., BitLocker)
Identity Server Ensure that signing keys are rolled over when using
Identity Server
Ensure that cryptographically strong client ID, client
secret are used in Identity Server
Attributes N/A
References N/A
Attributes N/A
References N/A
Attributes N/A
TITLE DETAILS
References N/A
Attributes N/A
TITLE DETAILS
References N/A
Attributes N/A
References N/A
Attributes N/A
References N/A
Attributes N/A
References N/A
TITLE DETAILS
Component Database
Attributes N/A
Component Database
Attributes N/A
Component Database
Attributes N/A
Component Database
Attributes N/A
Component Database
References TPM on Windows IoT Core, Set up TPM on Windows IoT Core,
Azure IoT Device SDK TPM
Example
As can be seen, the device primary key is not present in the code. Instead, it is stored in the TPM at slot 0. TPM
device generates a short-lived SAS token that is then used to connect to the IoT Hub.
References N/A
Attributes N/A
References N/A
Attributes N/A
References N/A
Ensure that signing keys are rolled over when using Identity Server
TITLE DETAILS
Attributes N/A
Steps Ensure that signing keys are rolled over when using Identity
Server. The link in the references section explains how this
should be planned without causing outages to applications
relying on Identity Server.
Ensure that cryptographically strong client ID, client secret are used in
Identity Server
TITLE DETAILS
Attributes N/A
References N/A
PRODUCT/SERVICE ARTICLE
Component WCF
Attributes N/A
Example
The following configuration file includes the <serviceDebug> tag:
<configuration>
<system.serviceModel>
<behaviors>
<serviceBehaviors>
<behavior name="MyServiceBehavior">
<serviceDebug includeExceptionDetailInFaults="true" httpHelpPageEnabled="true"/>
...
Disable debugging information in the service. This can be accomplished by removing the <serviceDebug> tag from
your application's configuration file.
Component WCF
Attributes N/A
Example
For further control on the exception response, the HttpResponseMessage class can be used as shown below:
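The snippet is absent from the source; a sketch in the style of the ASP.NET Web API documentation (repository and Product are placeholder names) is:

```csharp
public HttpResponseMessage GetProduct(int id)
{
    Product item = repository.Get(id);
    if (item == null)
    {
        // Full control over the status code, headers, and body of the error response
        return Request.CreateErrorResponse(HttpStatusCode.NotFound,
            string.Format("Product with id = {0} not found", id));
    }
    return Request.CreateResponse(HttpStatusCode.OK, item);
}
```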
To catch unhandled exceptions that are not of the type HttpResponseException , Exception Filters can be used.
Exception filters implement the System.Web.Http.Filters.IExceptionFilter interface. The simplest way to write an
exception filter is to derive from the System.Web.Http.Filters.ExceptionFilterAttribute class and override the
OnException method.
Example
Here is a filter that converts NotImplementedException exceptions into HTTP status code 501, Not Implemented :
namespace ProductStore.Filters
{
using System;
using System.Net;
using System.Net.Http;
using System.Web.Http.Filters;
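The class body is truncated in the source; the remainder, following the standard ASP.NET Web API exception-filter pattern, would be roughly:

```csharp
    public class NotImplExceptionFilterAttribute : ExceptionFilterAttribute
    {
        public override void OnException(HttpActionExecutedContext context)
        {
            // Translate NotImplementedException into a 501 response
            if (context.Exception is NotImplementedException)
            {
                context.Response = new HttpResponseMessage(HttpStatusCode.NotImplemented);
            }
        }
    }
}
```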
Example
To apply the filter to all of the actions on a controller , add the filter as an attribute to the controller class:
[NotImplExceptionFilter]
public class ProductsController : ApiController
{
// ...
}
Example
To apply the filter globally to all Web API controllers, add an instance of the filter to the
GlobalConfiguration.Configuration.Filters collection. Exception filters in this collection apply to any Web API
controller action.
GlobalConfiguration.Configuration.Filters.Add(
new ProductStore.NotImplExceptionFilterAttribute());
Example
For model validation, the model state can be passed to CreateErrorResponse method as shown below:
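The snippet is elided in the source; a sketch of the pattern (Product is a placeholder model) looks like:

```csharp
public HttpResponseMessage Post(Product product)
{
    if (!ModelState.IsValid)
    {
        // Returns 400 with the validation errors serialized in the response body
        return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ModelState);
    }
    return Request.CreateResponse(HttpStatusCode.OK, product);
}
```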
Check the links in the references section for additional details about exception handling and model validation in
ASP.NET Web API.
Attributes N/A
References N/A
Attributes N/A
Attributes N/A
Attributes N/A
Example
public static bool ValidateDomain(string pathToValidate, Uri currentUrl)
{
    try
    {
        if (!string.IsNullOrWhiteSpace(pathToValidate))
        {
            var domain = RetrieveDomain(currentUrl);
            var replyPath = new Uri(pathToValidate);
            var replyDomain = RetrieveDomain(replyPath);
            if (string.Compare(domain, replyDomain, StringComparison.OrdinalIgnoreCase) != 0)
            {
                return false;
            }
        }
        return true;
    }
    catch (UriFormatException ex)
    {
        LogHelper.LogException("Utilities:ValidateDomain", ex);
        return true;
    }
}
The above method always returns true if an exception occurs. If the end user provides a malformed URL that the
browser accepts but the Uri() constructor rejects, the constructor throws an exception and the victim is
taken to the valid-looking but malformed URL.
Security Frame: Input Validation | Mitigations
8/22/2017 • 31 min to read
PRODUCT/SERVICE ARTICLE
Disable XSLT scripting for all transforms using untrusted style sheets
TITLE DETAILS
Attributes N/A
Example
If you are using MSXML 6.0, XSLT scripting is disabled by default; however, you must ensure that it has not
been explicitly enabled through the XML DOM object property AllowXsltScript.
Example
If you are using MSXML 5 or below, XSLT scripting is enabled by default and you must explicitly disable it. Set the
XML DOM object property AllowXsltScript to false.
Ensure that each page that could contain user controllable content opts
out of automatic MIME sniffing
TITLE DETAILS
Attributes N/A
TITLE DETAILS
Example
To enable the required header globally for all pages in the application, you can do one of the following:
Add the header in the web.config file if the application is hosted by Internet Information Services (IIS) 7
<system.webServer>
<httpProtocol>
<customHeaders>
<add name="X-Content-Type-Options" value="nosniff"/>
</customHeaders>
</httpProtocol>
</system.webServer>
public void Init(HttpApplication context)
{
    context.PreSendRequestHeaders += new EventHandler(context_PreSendRequestHeaders);
}

void context_PreSendRequestHeaders(object sender, EventArgs e)
{
    HttpApplication application = sender as HttpApplication;
    if (application == null)
        return;
    if (application.Response.Headers["X-Content-Type-Options"] != null)
        return;
    application.Response.Headers.Add("X-Content-Type-Options", "nosniff");
}
You can enable the required header only for specific pages by adding it to individual responses:
this.Response.Headers["X-Content-Type-Options"] = "nosniff";
Attributes N/A
Example
For .NET Framework code, you can use the following approaches:
// for .NET 4
XmlReaderSettings settings = new XmlReaderSettings();
settings.DtdProcessing = DtdProcessing.Prohibit;
XmlReader reader = XmlReader.Create(stream, settings);
Note that the default value of ProhibitDtd in XmlReaderSettings is true, but in XmlTextReader it is false. If you are
using XmlReaderSettings, you do not need to set ProhibitDtd to true explicitly, but it is recommended for safety's
sake that you do. Also note that the XmlDocument class allows entity resolution by default.
Example
To disable entity resolution for XmlDocuments, use the XmlDocument.Load(XmlReader) overload of the Load method
and set the appropriate properties in the XmlReader argument to disable resolution, as illustrated in the following
code:
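The illustrating code is missing from the source; a sketch of the pattern (stream is a placeholder input) is:

```csharp
XmlReaderSettings settings = new XmlReaderSettings();
settings.DtdProcessing = DtdProcessing.Prohibit;  // .NET 4+; use ProhibitDtd = true on earlier versions
using (XmlReader reader = XmlReader.Create(stream, settings))
{
    XmlDocument doc = new XmlDocument();
    doc.Load(reader);  // the document inherits the reader's safe settings
}
```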
Example
If you need to resolve inline entities but do not need to resolve external entities, set the
XmlReaderSettings.XmlResolver property to null. For example:
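The example is absent from the source; a sketch is:

```csharp
XmlReaderSettings settings = new XmlReaderSettings();
settings.DtdProcessing = DtdProcessing.Parse; // inline entities are allowed
settings.XmlResolver = null;                  // external entities are never resolved
XmlReader reader = XmlReader.Create(stream, settings);
```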
Note that in MSXML6, ProhibitDTD is set to true (disabling DTD processing) by default. For Apple OSX/iOS code,
there are two XML parsers you can use: NSXMLParser and libXML2.
Attributes N/A
References N/A
TITLE DETAILS
Attributes N/A
Example
For the last point regarding file format signature validation, refer to the class below for details:
ext = ext.ToUpperInvariant();
return true;
}
if (!fileSignature.ContainsKey(ext))
{
return true;
}
return flag;
}
Ensure that type-safe parameters are used in Web Application for data
access
TITLE DETAILS
Attributes N/A
References N/A
Steps If you use the Parameters collection, SQL treats the input
as a literal value rather than as executable code. The
Parameters collection can be used to enforce type and
length constraints on input data. Values outside of the
range trigger an exception. If type-safe SQL parameters
are not used, attackers might be able to execute injection
attacks that are embedded in the unfiltered input.
Use type safe parameters when constructing SQL queries
to avoid possible SQL injection attacks that can occur with
unfiltered input. You can use type safe parameters with
stored procedures and with dynamic SQL statements.
Parameters are treated as literal values by the database
and not as executable code. Parameters are also checked
for type and length.
Example
The following code shows how to use type safe parameters with the SqlParameterCollection when calling a stored
procedure.
using System.Data;
using System.Data.SqlClient;
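The body of the example is missing from the source; the classic pattern it refers to (stored procedure and control names are placeholders) looks like:

```csharp
using (SqlConnection connection = new SqlConnection(connectionString))
{
    DataSet userDataset = new DataSet();
    SqlDataAdapter myCommand = new SqlDataAdapter("LoginStoredProcedure", connection);
    myCommand.SelectCommand.CommandType = CommandType.StoredProcedure;
    // Type and length are enforced: values longer than 11 characters throw an exception
    myCommand.SelectCommand.Parameters.Add("@au_id", SqlDbType.VarChar, 11);
    myCommand.SelectCommand.Parameters["@au_id"].Value = SSN.Text;
    myCommand.Fill(userDataset);
}
```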
In the preceding code example, the input value cannot be longer than 11 characters. If the data does not conform to
the type or length defined by the parameter, the SqlParameter class throws an exception.
Attributes N/A
Attributes N/A
Example
* Encoder.HtmlEncode
* Encoder.HtmlAttributeEncode
* Encoder.JavaScriptEncode
* Encoder.UrlEncode
* Encoder.VisualBasicScriptEncode
* Encoder.XmlEncode
* Encoder.XmlAttributeEncode
* Encoder.CssEncode
* Encoder.LdapEncode
Attributes N/A
TITLE DETAILS
Steps All the input parameters must be validated before they are
used in the application to ensure that the application is
safeguarded against malicious user inputs. Validate the
input values using regular expression validations on server
side with a whitelist validation strategy. Unsanitized user
inputs / parameters passed to the methods can cause
code injection vulnerabilities.
For web applications, entry points can also include form
fields, QueryStrings, cookies, HTTP headers, and web
service parameters.
The following input validation checks must be performed
upon model binding:
The model properties should be annotated with
RegularExpression annotation, for accepting allowed
characters and maximum permissible length
The controller methods should perform ModelState
validity
Attributes N/A
Steps Identify all static markup tags that you want to use. A
common practice is to restrict formatting to safe HTML
elements, such as <b> (bold) and <i> (italic).
Before writing the data, HTML-encode it. This makes any
malicious script safe by causing it to be handled as text,
not as executable code.
1. Disable ASP.NET request validation by adding the
ValidateRequest="false" attribute to the @ Page
directive
2. Encode the string input with the HtmlEncode method
3. Use a StringBuilder and call its Replace method to
selectively remove the encoding on the HTML
elements that you want to permit
The page in the references disables ASP.NET request
validation by setting ValidateRequest="false" . It
HTML-encodes the input and selectively allows the <b>
and <i> elements. Alternatively, a .NET library for HTML
sanitization may also be used.
HtmlSanitizer is a .NET library for cleaning HTML
fragments and documents from constructs that can lead
to XSS attacks. It uses AngleSharp to parse, manipulate,
and render HTML and CSS. HtmlSanitizer can be installed
as a NuGet package, and the user input can be passed
through relevant HTML or CSS sanitization methods, as
applicable, on the server side. Please note that Sanitization
as a security control should be considered only as a last
option.
Input validation and Output Encoding are considered
better security controls.
Attributes N/A
References N/A
Example
Following are insecure examples:
document.getElementById("div1").innerHTML = value;
$("#userName").html(res.Name);
return $('<div/>').html(value)
$('body').append(resHTML);
Don't use innerHTML; instead use innerText. Similarly, instead of $("#elm").html(), use $("#elm").text()
Validate all redirects within the application are closed or done safely
TITLE DETAILS
Attributes N/A
Attributes N/A
TITLE DETAILS
Steps For methods that accept only primitive data types, and not
models, as arguments, input validation using a regular
expression should be done. Here Regex.IsMatch should be
used with a valid regex pattern. If the input doesn't match the
specified regular expression, control should not proceed
further, and an adequate warning regarding the validation
failure should be displayed.
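The steps above can be sketched as follows (controller, parameter, and pattern are illustrative assumptions):

```csharp
public ActionResult GetUser(string userName)
{
    // Whitelist validation before the value is used anywhere
    if (string.IsNullOrEmpty(userName) ||
        !Regex.IsMatch(userName, @"^[a-zA-Z0-9]{1,20}$"))
    {
        return new HttpStatusCodeResult(HttpStatusCode.BadRequest, "Invalid user name");
    }
    // ... proceed with the validated value
    return View();
}
```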
Attributes N/A
Example
For example, the following configuration will throw a RegexMatchTimeoutException, if the processing takes more
than 5 seconds:
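The configuration itself is absent from the source; one way to get this behavior (per-call timeout shown alongside the app-domain default) is:

```csharp
// App-domain-wide default match timeout of 5 seconds
AppDomain.CurrentDomain.SetData("REGEX_DEFAULT_MATCH_TIMEOUT", TimeSpan.FromSeconds(5));

// Or a per-call timeout; throws RegexMatchTimeoutException when exceeded
bool ok = Regex.IsMatch(input, @"^[a-zA-Z0-9]*$", RegexOptions.None, TimeSpan.FromSeconds(5));
```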
Attributes N/A
TITLE DETAILS
References N/A
Example
Following is an insecure example:
<div class="form-group">
@Html.Raw(Model.AccountConfirmText)
</div>
<div class="form-group">
@Html.Raw(Model.PaymentConfirmText)
</div>
</div>
Do not use Html.Raw() unless you need to display markup. This method does not perform output encoding
implicitly. Use other ASP.NET helpers e.g., @Html.DisplayFor()
Component Database
Attributes N/A
References N/A
Example
Following is an example of insecure dynamic Stored Procedure:
CREATE PROCEDURE [dbo].[uspGetProductsByCriteria]
(
@productName nvarchar(200) = NULL,
@startPrice float = NULL,
@endPrice float = NULL
)
AS
BEGIN
DECLARE @sql nvarchar(max)
SELECT @sql = ' SELECT ProductID, ProductName, Description, UnitPrice, ImagePath' +
' FROM dbo.Products WHERE 1 = 1 '
PRINT @sql
IF @productName IS NOT NULL
SELECT @sql = @sql + ' AND ProductName LIKE ''%' + @productName + '%'''
IF @startPrice IS NOT NULL
SELECT @sql = @sql + ' AND UnitPrice > ''' + CONVERT(VARCHAR(10),@startPrice) + ''''
IF @endPrice IS NOT NULL
SELECT @sql = @sql + ' AND UnitPrice < ''' + CONVERT(VARCHAR(10),@endPrice) + ''''
PRINT @sql
EXEC(@sql)
END
Example
Following is the same stored procedure implemented securely:
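The secure version is missing from the source; a parameterized rewrite that avoids concatenating user input into SQL text could look like:

```sql
CREATE PROCEDURE [dbo].[uspGetProductsByCriteria]
(
    @productName nvarchar(200) = NULL,
    @startPrice float = NULL,
    @endPrice float = NULL
)
AS
BEGIN
    -- User input is only ever treated as data, never concatenated into SQL text
    SELECT ProductID, ProductName, Description, UnitPrice, ImagePath
    FROM dbo.Products
    WHERE (@productName IS NULL OR ProductName LIKE '%' + @productName + '%')
      AND (@startPrice IS NULL OR UnitPrice > @startPrice)
      AND (@endPrice IS NULL OR UnitPrice < @endPrice)
END
```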
Attributes N/A
Example
The following code demonstrates the same:
using System.ComponentModel.DataAnnotations;
namespace MyApi.Models
{
public class Product
{
public int Id { get; set; }
[Required]
[RegularExpression(@"^[a-zA-Z0-9]*$", ErrorMessage="Only alphanumeric characters are allowed.")]
public string Name { get; set; }
public decimal Price { get; set; }
[Range(0, 999)]
public double Weight { get; set; }
}
}
Example
In the action method of the API controllers, validity of the model has to be explicitly checked as shown below:
namespace MyApi.Controllers
{
    public class ProductsController : ApiController
    {
        public HttpResponseMessage Post(Product product)
        {
            if (ModelState.IsValid)
            {
                // Do something with the product (not shown).
                return Request.CreateResponse(HttpStatusCode.OK, product);
            }
            return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ModelState);
        }
    }
}
Attributes N/A
Steps For methods that accept only primitive data types, and not
models, as arguments, input validation using a regular
expression should be done. Here Regex.IsMatch should be
used with a valid regex pattern. If the input doesn't match the
specified regular expression, control should not proceed
further, and an adequate warning regarding the validation
failure should be displayed.
Ensure that type-safe parameters are used in Web API for data access
TITLE DETAILS
Attributes N/A
References N/A
Steps If you use the Parameters collection, SQL treats the input
as a literal value rather than as executable code. The
Parameters collection can be used to enforce type and
length constraints on input data. Values outside of the
range trigger an exception. If type-safe SQL parameters
are not used, attackers might be able to execute injection
attacks that are embedded in the unfiltered input.
Use type safe parameters when constructing SQL queries
to avoid possible SQL injection attacks that can occur with
unfiltered input. You can use type safe parameters with
stored procedures and with dynamic SQL statements.
Parameters are treated as literal values by the database
and not as executable code. Parameters are also checked
for type and length.
Example
The following code shows how to use type safe parameters with the SqlParameterCollection when calling a stored
procedure.
using System.Data;
using System.Data.SqlClient;
In the preceding code example, the input value cannot be longer than 11 characters. If the data does not conform to
the type or length defined by the parameter, the SqlParameter class throws an exception.
Attributes N/A
Component WCF
Attributes N/A
References MSDN
TITLE DETAILS
Component WCF
Attributes N/A
References MSDN
TITLE DETAILS
PRODUCT/SERVICE ARTICLE
Machine Trust Boundary Ensure that binaries are obfuscated if they contain
sensitive information
Consider using Encrypting File System (EFS) to
protect confidential user-specific data
Ensure that sensitive data stored by the application on
the file system is encrypted
Web API Ensure that sensitive data relevant to Web API is not
stored in browser's storage
Azure IaaS VM Trust Boundary Use Azure Disk Encryption to encrypt disks used by
Virtual Machines
Azure Storage Use Azure Storage Service Encryption (SSE) for Data at
Rest (Preview)
Use Client-Side Encryption to store sensitive data in
Azure Storage
Attributes N/A
References N/A
Attributes N/A
References N/A
TITLE DETAILS
Ensure that sensitive data stored by the application on the file system is
encrypted
TITLE DETAILS
Attributes N/A
References N/A
Steps Ensure that sensitive data stored by the application on the file
system is encrypted (e.g., using DPAPI), if EFS cannot be
enforced
Attributes N/A
References N/A
Example
<configuration>
<system.webServer>
<httpProtocol>
<customHeaders>
<add name="Cache-Control" value="no-cache" />
<add name="Pragma" value="no-cache" />
<add name="Expires" value="-1" />
</customHeaders>
</httpProtocol>
</system.webServer>
</configuration>
Example
This may be implemented through a filter. The following example may be used:
public class NoCacheAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var attributes =
            filterContext.ActionDescriptor.GetCustomAttributes(typeof(System.Web.Mvc.OutputCacheAttribute), false);
        if (attributes == null || attributes.Count() == 0)
        {
            filterContext.HttpContext.Response.Cache.SetNoStore();
            filterContext.HttpContext.Response.Cache.SetCacheability(HttpCacheability.NoCache);
            filterContext.HttpContext.Response.Cache.SetExpires(DateTime.UtcNow.AddHours(-1));
            if (!filterContext.IsChildAction)
            {
                filterContext.HttpContext.Response.AppendHeader("Pragma", "no-cache");
            }
        }
        base.OnActionExecuting(filterContext);
    }
}
Attributes N/A
Attributes N/A
Example
Attributes N/A
References N/A
Component Database
Component Database
TITLE DETAILS
Attributes N/A
Component Database
Component Database
Attributes N/A
Component Database
Steps SQL Server has the ability to encrypt the data while creating a
backup. By specifying the encryption algorithm and the
encryptor (a Certificate or Asymmetric Key) when creating a
backup, one can create an encrypted backup file.
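As a sketch of the corresponding T-SQL (database, path, and certificate names are placeholders):

```sql
BACKUP DATABASE [MyDatabase]
TO DISK = N'C:\Backups\MyDatabase.bak'
WITH ENCRYPTION
(
    ALGORITHM = AES_256,
    SERVER CERTIFICATE = MyBackupCertificate
);
```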
References N/A
TITLE DETAILS
Example
The below JavaScript snippet is from a custom authentication library which stores authentication artifacts in local
storage. Such implementations should be avoided.
ns.AuthHelper.Authenticate = function () {
window.config = {
instance: 'https://login.microsoftonline.com/',
tenant: ns.Configurations.Tenant,
clientId: ns.Configurations.AADApplicationClientID,
postLogoutRedirectUri: window.location.origin,
cacheLocation: 'localStorage', // enable this for IE, as sessionStorage does not work for localhost.
};
Attributes N/A
References N/A
Attributes N/A
Attributes N/A
References N/A
Attributes N/A
References N/A
Train users on the risks associated with the Dynamics CRM Share
feature and good security practices
TITLE DETAILS
Attributes N/A
References N/A
Steps Train users on the risks associated with the Dynamics CRM
Share feature and good security practices
Attributes N/A
References N/A
Use Azure Storage Service Encryption (SSE) for Data at Rest (Preview)
TITLE DETAILS
Attributes N/A
Steps The Azure Storage Client Library for .NET NuGet package
supports encrypting data within client applications before
uploading to Azure Storage, and decrypting data while
downloading to the client. The library also supports
integration with Azure Key Vault for storage account key
management. Here is a brief description of how client-side
encryption works:
The Azure Storage client SDK generates a content
encryption key (CEK), which is a one-time-use
symmetric key
Customer data is encrypted using this CEK
The CEK is then wrapped (encrypted) using the key
encryption key (KEK). The KEK is identified by a key
identifier and can be an asymmetric key pair or a
symmetric key and can be managed locally or stored in
Azure Key Vault. The Storage client itself never has
access to the KEK. It just invokes the key wrapping
algorithm that is provided by Key Vault. Customers can
choose to use custom providers for key
wrapping/unwrapping if they want
The encrypted data is then uploaded to the Azure
Storage service. Check the links in the references
section for low-level implementation details.
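A rough sketch of the flow with the classic storage client library (the key resolver, vault URL, and blob variable are assumptions; consult the referenced documentation for the exact APIs):

```csharp
// Resolve the key encryption key (KEK) from Key Vault (sketch)
var key = cloudResolver.ResolveKeyAsync(
    "https://myvault.vault.azure.net/keys/mykey", CancellationToken.None).Result;

// The SDK generates a one-time CEK, encrypts the data with it,
// wraps the CEK with the KEK, and uploads the encrypted blob
BlobEncryptionPolicy policy = new BlobEncryptionPolicy(key, null);
BlobRequestOptions options = new BlobRequestOptions { EncryptionPolicy = policy };
blob.UploadFromStream(stream, null, options, null);
```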
Attributes N/A
Example
Intune can be configured with following security policies to safeguard sensitive data:
Attributes N/A
Component WCF
TITLE DETAILS
Attributes N/A
References Fortify
Example
The following WCF service provider configuration uses the UsernameToken:
<security mode="Message">
<message clientCredentialType="UserName" />
Component WCF
Example
The following configuration sets the security mode to None.
<system.serviceModel>
<bindings>
<wsHttpBinding>
<binding name="MyBinding">
<security mode="None"/>
</binding>
</wsHttpBinding>
</bindings>
</system.serviceModel>
Example
Security Mode Across all service bindings there are five possible security modes:
None. Turns security off.
Transport. Uses transport security for mutual authentication and message protection.
Message. Uses message security for mutual authentication and message protection.
Both. Allows you to supply settings for transport and message-level security (only MSMQ supports this).
TransportWithMessageCredential. Credentials are passed with the message and message protection and server
authentication are provided by the transport layer.
TransportCredentialOnly. Client credentials are passed with the transport layer and no message protection is
applied.
Use transport and message security to protect the integrity and confidentiality of messages. The
configuration below tells the service to use transport security with message credentials.
<system.serviceModel>
<bindings>
<wsHttpBinding>
<binding name="MyBinding">
<security mode="TransportWithMessageCredential">
<message clientCredentialType="Windows"/>
</security>
</binding>
</wsHttpBinding>
</bindings>
</system.serviceModel>
Security Frame: Session Management | Articles
8/22/2017 • 14 min to read
PRODUCT/SERVICE ARTICLE
Component Azure AD
Attributes N/A
References N/A
TITLE DETAILS
Example
HttpContext.GetOwinContext().Authentication.SignOut(OpenIdConnectAuthenticationDefaults.AuthenticationType,
CookieAuthenticationDefaults.AuthenticationType)
Example
It should also destroy the user's session by calling the Session.Abandon() method. The following method shows a
secure implementation of user logout:
[HttpPost]
[ValidateAntiForgeryToken]
public void LogOff()
{
string userObjectID =
ClaimsPrincipal.Current.FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier").Value;
AuthenticationContext authContext = new AuthenticationContext(Authority + TenantId, new
NaiveSessionCache(userObjectID));
authContext.TokenCache.Clear();
Session.Clear();
Session.Abandon();
Response.SetCookie(new HttpCookie("ASP.NET_SessionId", string.Empty));
HttpContext.GetOwinContext().Authentication.SignOut(
OpenIdConnectAuthenticationDefaults.AuthenticationType,
CookieAuthenticationDefaults.AuthenticationType);
}
Attributes N/A
References N/A
Attributes N/A
References N/A
Component ADFS
Attributes N/A
References N/A
Example
[HttpPost, ValidateAntiForgeryToken]
[Authorize]
public ActionResult SignOut(string redirectUrl)
{
    if (!this.User.Identity.IsAuthenticated)
    {
        return this.View("LogOff", null);
    }

    // Signs out at the specified security token service (STS) by using the WS-Federation protocol.
    Uri signOutUrl = new Uri(FederatedAuthentication.WSFederationAuthenticationModule.Issuer);
    Uri replyUrl = new Uri(FederatedAuthentication.WSFederationAuthenticationModule.Realm);
    if (!string.IsNullOrEmpty(redirectUrl))
    {
        replyUrl = new Uri(FederatedAuthentication.WSFederationAuthenticationModule.Realm + redirectUrl);
    }

    // Signs out of the current session and raises the appropriate events.
    var authModule = FederatedAuthentication.WSFederationAuthenticationModule;
    authModule.SignOut(false);

    // Signs out at the specified security token service (STS) by using the WS-Federation protocol.
    WSFederationAuthenticationModule.FederatedSignOut(signOutUrl, replyUrl);
    return new RedirectResult(redirectUrl);
}
Attributes N/A
Steps Cookies are normally accessible only to the domain for which
they were scoped. Unfortunately, the definition of "domain"
does not include the protocol, so cookies that are created over
HTTPS are also accessible over HTTP. The "secure" attribute
indicates to the browser that the cookie should be made
available only over HTTPS. Ensure that all cookies set over
HTTPS use the secure attribute. The requirement can be
enforced in the web.config file by setting the requireSSL
attribute to true. This is the preferred approach, because it
enforces the secure attribute for all current and future cookies
without the need for any additional code changes.
Example
<configuration>
<system.web>
<httpCookies requireSSL="true"/>
</system.web>
</configuration>
Note that this setting is enforced even when the application is accessed over HTTP. In that case it breaks the
application, because the cookies are set with the secure attribute and the browser will not send them back to the
application over HTTP.
TITLE DETAILS
References N/A
Steps When the web application is the Relying Party and the IdP is
an ADFS server, the FedAuth token's secure attribute can be
configured by setting requireSsl to true in the
system.identityModel.services section of web.config:
Example
<system.identityModel.services>
<federationConfiguration>
<!-- Set requireSsl=true; domain=application domain name used by FedAuth cookies (Ex: .gdinfra.com); -->
<cookieHandler requireSsl="true" persistentSessionLifetime="0.0:20:0" />
....
</federationConfiguration>
</system.identityModel.services>
All HTTP-based applications should specify HttpOnly for cookie definitions
TITLE DETAILS
Attributes N/A
Example
All HTTP-based applications that use cookies should specify HttpOnly in the cookie definition, by implementing the
following configuration in web.config:
<system.web>
.
.
<httpCookies requireSSL="false" httpOnlyCookies="true"/>
.
.
</system.web>
TITLE DETAILS
Attributes N/A
TITLE DETAILS
Example
The following code example sets the requireSSL attribute in the Web.config file.
<authentication mode="Forms">
<forms loginUrl="member_login.aspx" cookieless="UseCookies" requireSSL="true"/>
</authentication>
TITLE DETAILS
Example
The following shows the correct configuration:
<federatedAuthentication>
    <cookieHandler mode="Custom"
        hideFromScript="true"
        name="FedAuth"
        path="/"
        requireSsl="true"
        persistentSessionLifetime="25">
    </cookieHandler>
</federatedAuthentication>
Attributes N/A
References N/A
TITLE DETAILS
Attributes N/A
Example
<form action="/UserProfile/SubmitUpdate" method="post">
<input name="__RequestVerificationToken" type="hidden"
value="saTFWpkKN0BYazFtN6c4YbZAmsEwG0srqlUqqloi/fVgeV2ciIFVmelvzwRZpArs" />
<!-- rest of form goes here -->
</form>
Example
At the same time, Html.AntiForgeryToken() gives the visitor a cookie called __RequestVerificationToken, with the
same value as the random hidden value shown above. Next, to validate an incoming form post, add the
[ValidateAntiForgeryToken] filter to the target action method. For example:
[ValidateAntiForgeryToken]
public ViewResult SubmitUpdate()
{
// ... etc.
}
<script>
@functions{
    public string TokenHeaderValue()
    {
        string cookieToken, formToken;
        AntiForgery.GetTokens(null, out cookieToken, out formToken);
        return cookieToken + ":" + formToken;
    }
}
$.ajax("api/values", {
    type: "post",
    contentType: "application/json",
    data: { }, // JSON data goes here
    dataType: "json",
    headers: {
        'RequestVerificationToken': '@TokenHeaderValue()'
    }
});
</script>
Example
When you process the request, extract the tokens from the request header. Then call the AntiForgery.Validate
method to validate the tokens. The Validate method throws an exception if the tokens are not valid.
void ValidateRequestHeader(HttpRequestMessage request)
{
    string cookieToken = "";
    string formToken = "";
    IEnumerable<string> tokenHeaders;
    if (request.Headers.TryGetValues("RequestVerificationToken", out tokenHeaders))
    {
        string[] tokens = tokenHeaders.First().Split(':');
        if (tokens.Length == 2)
        {
            cookieToken = tokens[0].Trim();
            formToken = tokens[1].Trim();
        }
    }
    AntiForgery.Validate(cookieToken, formToken);
}
TITLE DETAILS
Attributes N/A
Example
Here's the code you need to have in all of your pages:
Attributes N/A
Example
TITLE DETAILS
Attributes N/A
Example
| Title | Details |
| ----------------------- | ------------ |
| **Component** | Web Application |
| **SDL Phase** | Build |
| **Applicable Technologies** | Web Forms, MVC5 |
| **Attributes** | EnvironmentType - OnPrem |
| **References** | [asdeqa](https://skf.azurewebsites.net/Mitigations/Details/wefr) |
| **Steps** | When the web application is the Relying Party and ADFS is the STS, the lifetime of the authentication cookies - FedAuth tokens - can be set by the following configuration in web.config: |
### Example
```XML
<system.identityModel.services>
  <federationConfiguration>
    <!-- Set requireSsl=true; domain=application domain name used by FedAuth cookies (Ex: .gdinfra.com); -->
    <cookieHandler requireSsl="true" persistentSessionLifetime="0.0:15:0" />
    <!-- Set requireHttps=true; -->
    <wsFederation passiveRedirectEnabled="true" issuer="http://localhost:39529/"
        realm="https://localhost:44302/" reply="https://localhost:44302/" requireHttps="true"/>
    <!--
    Use the code below to enable encryption-decryption of claims received from ADFS. Thumbprint value varies
    based on the certificate being used.
    <serviceCertificate>
      <certificateReference findValue="4FBBBA33A1D11A9022A5BF3492FF83320007686A" storeLocation="LocalMachine"
          storeName="My" x509FindType="FindByThumbprint" />
    </serviceCertificate>
    -->
  </federationConfiguration>
</system.identityModel.services>
```
Example
Also, the ADFS-issued SAML claim token's lifetime should be set to 15 minutes by executing the following
PowerShell command on the ADFS server:
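A sketch of such a command follows; the relying party trust name is a placeholder, not a value from this document:

```PowerShell
# Sketch: set the SAML token lifetime to 15 minutes for a relying party trust.
# "<RelyingPartyName>" is a placeholder for the actual trust name in ADFS.
Set-AdfsRelyingPartyTrust -TargetName "<RelyingPartyName>" -TokenLifetime 15
```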
Attributes N/A
References N/A
TITLE DETAILS
Steps Perform a proper sign-out from the application when the user
presses the log out button. Upon logout, the application should
destroy the user's session, and also reset and nullify the session
cookie value, along with resetting and nullifying the
authentication cookie value. Also, when multiple sessions are
tied to a single user identity, they must be collectively
terminated on the server side at timeout or logout. Lastly,
ensure that logout functionality is available on every page.
Attributes N/A
References N/A
TITLE DETAILS
Attributes N/A
Steps Anti-CSRF and AJAX: The form token can be a problem for
AJAX requests, because an AJAX request might send JSON
data, not HTML form data. One solution is to send the tokens
in a custom HTTP header. The following code uses Razor
syntax to generate the tokens, and then adds the tokens to an
AJAX request.
Example
<script>
@functions{
    public string TokenHeaderValue()
    {
        string cookieToken, formToken;
        AntiForgery.GetTokens(null, out cookieToken, out formToken);
        return cookieToken + ":" + formToken;
    }
}
$.ajax("api/values", {
    type: "post",
    contentType: "application/json",
    data: { }, // JSON data goes here
    dataType: "json",
    headers: {
        'RequestVerificationToken': '@TokenHeaderValue()'
    }
});
</script>
Example
When you process the request, extract the tokens from the request header. Then call the AntiForgery.Validate
method to validate the tokens. The Validate method throws an exception if the tokens are not valid.
void ValidateRequestHeader(HttpRequestMessage request)
{
    string cookieToken = "";
    string formToken = "";
    IEnumerable<string> tokenHeaders;
    if (request.Headers.TryGetValues("RequestVerificationToken", out tokenHeaders))
    {
        string[] tokens = tokenHeaders.First().Split(':');
        if (tokens.Length == 2)
        {
            cookieToken = tokens[0].Trim();
            formToken = tokens[1].Trim();
        }
    }
    AntiForgery.Validate(cookieToken, formToken);
}
Example
Anti-CSRF and ASP.NET MVC forms - Use the AntiForgeryToken helper method on Views; put an
Html.AntiForgeryToken() into the form, for example,
@using (Html.BeginForm("SubmitUpdate", "UserProfile")) {
    @Html.ValidationSummary(true)
    @Html.AntiForgeryToken()
    <fieldset>
    </fieldset>
}
Example
The example above will output something like the following:
Example
At the same time, Html.AntiForgeryToken() gives the visitor a cookie called __RequestVerificationToken, with the
same value as the random hidden value shown above. Next, to validate an incoming form post, add the
[ValidateAntiForgeryToken] filter to the target action method. For example:
[ValidateAntiForgeryToken]
public ViewResult SubmitUpdate()
{
// ... etc.
}
TITLE DETAILS
References Secure a Web API with Individual Accounts and Local Login in
ASP.NET Web API 2.2
TITLE DETAILS
Steps If the Web API is secured using OAuth 2.0, it expects a
bearer token in the Authorization request header and grants
access to the request only if the token is valid. Unlike cookie-
based authentication, browsers do not attach bearer tokens to
requests; the requesting client needs to explicitly attach the
bearer token in the request header. Therefore, for ASP.NET
Web APIs protected using OAuth 2.0, bearer tokens are
considered a defense against CSRF attacks. Note that if the
MVC portion of the application uses forms authentication (i.e.,
uses cookies), anti-forgery tokens have to be used by the MVC
web app.
Example
The Web API has to be informed to rely ONLY on bearer tokens and not on cookies. This can be done by the
following configuration in the WebApiConfig.Register method:
config.SuppressDefaultHostAuthentication();
config.Filters.Add(new HostAuthenticationFilter(OAuthDefaults.AuthenticationType));
The SuppressDefaultHostAuthentication method tells Web API to ignore any authentication that happens before the
request reaches the Web API pipeline, either by IIS or by OWIN middleware. That way, we can restrict the Web API
to authenticate only using bearer tokens.
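As a sketch of the client side of this arrangement, the snippet below shows a client explicitly attaching a bearer token to a request, since the browser will not do so automatically. The API URL and the token value are illustrative assumptions, not values from this document.

```csharp
// Sketch: explicitly attaching a bearer token to a Web API request.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class BearerClientSketch
{
    static async Task Main()
    {
        // Hypothetical token, assumed to come from the OAuth 2.0 token endpoint.
        string accessToken = "<token from the OAuth 2.0 token endpoint>";
        using (var client = new HttpClient())
        {
            // Browsers will not attach this header automatically; the client must set it.
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);
            HttpResponseMessage response =
                await client.GetAsync("https://localhost:44302/api/values"); // illustrative URL
            Console.WriteLine(response.StatusCode);
        }
    }
}
```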