Dev Tips Curator

@VisualStudio @Code – Printable Keyboard Shortcut #Cheatsheets

Visual Studio Code is a source code editor developed by Microsoft for Windows, macOS, and Linux.  It includes support for debugging, embedded Git control, syntax highlighting, intelligent code completion, snippets, and code refactoring.

For advanced users, Visual Studio Code lets you perform most tasks directly from the keyboard.

Here are the printable keyboard shortcut cheatsheets for Windows, macOS, and Linux in PDF.

To learn more about keyboard bindings for Visual Studio Code, click here.


Global Azure Bootcamp on April 22, 2017


All around the world user groups and communities want to learn about Azure and Cloud Computing!

On April 22, 2017, all communities will come together once again in the fifth great Global Azure Bootcamp event! Each user group will organize its own one-day deep-dive class on Azure the way it sees fit and how it works for its members. The result is that thousands of people get to learn about Azure and join together online under the social hashtag #GlobalAzure! Join hundreds of other organizers to help out and be part of the experience!

Azure Bootcamp 2017 – Mississauga

The event that takes place at Microsoft Canada in Mississauga will be of interest to both Azure novices and seasoned professionals.

Topics include:

  • How to provision your Azure environment using Azure Resource Manager (ARM) and ARM templates
  • How to build an ASP.NET MVC application with Azure DocumentDB from scratch and deploy it on Azure PaaS (Platform as a Service)
  • How to monitor and analyze the behavior of your web application with Application Insights
  • Machine learning and Bot Services
  • Mobile development with Xamarin

Click here to register, as seating is limited.

Azure Solution Architecture Templates

These are useful architecture templates to help you design and implement secure, highly available, performant, and resilient solutions on Azure:

Dev-Test deployment for testing PaaS solutions

Dev-Test deployment for testing IaaS solutions

Dev-Test deployment for testing microservice solutions

Social mobile and web app with authentication

Custom mobile workforce app

Task-based consumer mobile app

Video-on-demand digital media

Live streaming digital media

Keyword search/speech-to-text/OCR digital media

Simple digital marketing website

Big compute solutions as a service

HPC cluster deployed in the cloud

On-premises HPC implementation bursting to Azure

Back up on-premises applications and data to cloud

Back up cloud applications and data to cloud

Archive on-premises data to cloud

SMB disaster recovery with Azure Site Recovery

SMB disaster recovery with Double-Take DR

Enterprise-scale disaster recovery

Scalable Episerver marketing website

Scalable Sitecore marketing website

Scalable Umbraco CMS web app

Umbraco CMS for light to medium traffic sites

Azure App Service Plans Demystified

App Service brings together everything you need to create web and mobile apps for any platform and any device.

  • Free and Shared plans allow you to host your apps in a shared environment
  • Basic, Standard, and Premium plans provide Virtual Machines dedicated to your plan.

You can host multiple apps and domains in each instance you deploy within your plan.

The following table describes capabilities and limits available within App Service Plans as of January 2017.

Capability                    Free           Shared         Basic          Standard       Premium
Web, mobile, or API apps      10             100            Unlimited      Unlimited      Unlimited
Disk space                    1 GB           1 GB           10 GB          50 GB          250 GB
Logic App Actions (per day)*  200            200            200            10,000         50,000
Maximum instances             -              -              Up to 3        Up to 10       Up to 50
SLA                           -              -              99.95%         99.95%         99.95%
Auto-Scale                    -              -              -              Supported      Supported
Geo-distributed deployment    -              -              -              Supported      Supported
VPN hybrid connectivity       -              -              -              Supported      Supported
Staging environments          -              -              -              5              20
Custom domain                 -              Supported      Supported     Supported      Supported
SSL certificates              -              -              Unlimited SNI SSL certs    Unlimited SNI SSL certs and 1 IP SSL included    Unlimited SNI SSL certs and 1 IP SSL included
Automated Backups (per day)   -              -              -              2              50
Active mobile devices         500 / day      500 / day      Unlimited      Unlimited      Unlimited
Offline Sync                  500 calls/day  1 K calls/day  1 K calls/day  Unlimited      Unlimited
Logic Apps Definitions        10             10             10             25             100
Logic App data storage cap    1 Day          1 Day          1 Day          7 Days         30 Days

* Included quantities of Azure Logic Apps are available only to EA customers

Additional Terms & Conditions:

165 MB of outbound network traffic is included; additional outbound network bandwidth is charged separately.

The Premium service plan allows up to 50 compute instances (subject to availability) and 500 GB of disk space when using App Service Environments (ASE), and 20 compute instances and 250 GB of storage when not using ASE.

Azure is an evolving technology.  For the latest information, please refer to the Azure documentation.

How to Set Default Parameter Values in #PowerShell

Most PowerShell Cmdlets have default parameter values.

There might be situations where you need to override a default parameter value with another value. What if that other value is one that you use regularly and repeatedly? Wouldn’t it be nice if you could replace the standard default value with your own?

This article will show you how to do exactly that with $PSDefaultParameterValues.

This feature is useful when you need to specify the same alternate parameter value nearly every time you use the Cmdlet or when a particular parameter value is difficult to remember, such as your Azure Subscription ID.

How to set $PSDefaultParameterValues

Method 1

Assign a hashtable of "CmdletName:ParameterName" keys to static default values:

$PSDefaultParameterValues = @{"<CmdletName>:<ParameterName>"="<DefaultValue>"}

Method 2

Assign a script block so the default value is computed at run time:

$PSDefaultParameterValues = @{"<CmdletName>:<ParameterName>"={<ScriptBlock>}}

Method 3

Temporarily disable (or re-enable) all of your default values with the special Disabled key:

$PSDefaultParameterValues["Disabled"] = $true | $false


Set Default Value for Send-MailMessage:SmtpServer

$PSDefaultParameterValues = @{"Send-MailMessage:SmtpServer"="mySmtpServer"}

Set Multiple Default Parameter Values

Use a semicolon (;) to separate each Name=Value pair:

$PSDefaultParameterValues = @{"Send-MailMessage:SmtpServer"="mySmtpServer"; "Get-EventLog:LogName"="System"}

Set Default Values for All Commands

Here is how to use the wildcard character (*) to set the Verbose common parameter to $true for all commands.

$PSDefaultParameterValues = @{"*:Verbose"=$true}

Set Multiple Values (an Array)

If a parameter takes multiple values (an array), you can set multiple values as the default value.

The following command sets the default value of the ComputerName parameter of the Invoke-Command cmdlet to “Server01” and “Server02”.

$PSDefaultParameterValues = @{"Invoke-Command:ComputerName"="Server01","Server02"}

Script Block

You can use a script block to specify different default values for a parameter under different conditions. Windows PowerShell evaluates the script block and uses the result as the default parameter value.

$PSDefaultParameterValues=@{"Format-Table:AutoSize"={if ($host.Name -eq "myHost"){$true}}}

If a parameter takes a script block value, enclose the script block in an extra set of braces. When Windows PowerShell evaluates the outer script block, the result is the inner script block, which is set as the default parameter value.

The following command sets the default value of the ScriptBlock parameter of Invoke-Command. Because the script block is enclosed in a second set of braces, the enclosed script block is passed to the Invoke-Command cmdlet.

$PSDefaultParameterValues=@{"Invoke-Command:ScriptBlock"={{Get-EventLog -Log System}}}

How to Add Default Parameter Values

$PSDefaultParameterValues.Add("<CmdletName>:<ParameterName>", "<ParameterValue>")

How to Get All Default Parameter Values

At the PowerShell command prompt, type:

$PSDefaultParameterValues
How to Get A Particular Default Parameter Value

To get the value of a particular parameter key, use the following command syntax:

$PSDefaultParameterValues["<CmdletName>:<ParameterName>"]
How to Change A Default Parameter Value

Assign a new value to an existing key:

$PSDefaultParameterValues["<CmdletName>:<ParameterName>"]="<NewValue>"
How to Remove A Default Parameter Value

$PSDefaultParameterValues.Remove("<CmdletName>:<ParameterName>")
How to Save $PSDefaultParameterValues

To save $PSDefaultParameterValues for future sessions, add a $PSDefaultParameterValues command to your Windows PowerShell profile.

Continuous Integration vs Continuous Delivery vs Continuous Deployment #CICD

Continuous Integration

  • A software development practice in which the Continuous Integration (CI) server automatically builds and tests software whenever a developer commits code changes to the application.
  • CI is an essential part of a Continuous Delivery workflow.
  • Aims to help software teams ensure code changes are built and tested with the latest version of the entire code base.
  • Reveals bugs promptly after the code change is committed to source control.
  • Leads to better quality since each bug can be easily isolated to a specific code change and fixed promptly.
  • CI tests your code against the current state of your code base, always in the same (production-like) environment, allowing you to spot any integration challenges right away.
  • Increased code coverage.  A CI server can check your code for test coverage.  If you commit something new without any tests, your coverage percentage will go down because of your changes.  Seeing code coverage increase over time is a motivator for the team to write tests.
  • Inspires transparency and accountability across the team.  Results of your tests should be displayed on your build pipeline.  If a build passes, that increases the confidence of the team.  If it fails, you can get help from team members to determine what may have gone wrong.  This is similar to the level of transparency that code reviews provide.
  • A CI server can have parallel build support so that you can split your tests and build processes over multiple VMs or containers.  As a result, the overall build time will be a lot shorter than if you build locally.  This will free up your local resources for your other work.
  • A CI server can be configured to send notifications to everyone on the team or to certain key people whenever there is a broken build.
  • With automated testing, your code is tested in the same way for every change so that you can trust that every change is tested before it goes to production.
  • CI does not include deploying to production.  However, a CI server can be configured to automatically deploy your code to staging, pre-production, or even production if all the tests within a specific branch are successful (green).  This is known as Continuous Delivery (see next section).
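The gating idea in the last bullet, promoting a build only when the branch is green, can be sketched in a few lines. This is an illustrative toy, not any specific CI product's API; `deploy` is a hypothetical callback standing in for your real deployment step:

```python
# Minimal sketch of a delivery gate: promote a build only when every test passed.
def promote_if_green(test_results, deploy):
    if test_results and all(test_results):
        deploy()
        return "deployed"
    return "blocked"

# A green build deploys; a red build is held back.
releases = []
print(promote_if_green([True, True], lambda: releases.append("v1")))   # deployed
print(promote_if_green([True, False], lambda: releases.append("v2")))  # blocked
```

Real CI servers add the surrounding machinery (triggers on commit, isolated build environments, notifications), but the go/no-go decision reduces to this check.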

Continuous Delivery

  • A software engineering approach in which Continuous Integration, automated testing, and automated deployment capabilities allow software to be developed and deployed rapidly, reliably, and repeatedly with minimal human intervention.
  • For Continuous Delivery, deployment to production is done strategically and triggered manually.

Continuous Deployment

  • A software development practice in which every committed code change goes through the entire pipeline and is put into production automatically, resulting in many production deployments every day.
  • Continuous Deployment does everything Continuous Delivery does but the process is fully automated with no human intervention at all.
  • Eliminate manual delivery and increase focus on developing the actual product.
  • Automate repetitive tasks and focus on actual testing.
  • Make deployments “frictionless” without compromising security.
  • Can scale from a single application to an Enterprise IT portfolio.
  • Connect your existing tools and technologies into a harmonious workflow by integrating teams and processes with a unified pipeline.
  • Create workflows across development, testing and production environments.
  • Provide a single view across all applications and environments.
  • Improve overall productivity.

Continuous Delivery vs Continuous Deployment

There is often confusion between Continuous Delivery and Continuous Deployment.  The differences can be summarized as follows:


  • Continuous Delivery means every code change is proven to be “deployable” at any time.  However, this does not mean every change needs to be deployed to production immediately.
  • Continuous Deployment is the next step of Continuous Delivery.  Every code change passes the entire pipeline and is put into production automatically, resulting in many production deployments every day.

While Continuous Deployment might not be suitable for every company, Continuous Delivery is an essential requirement for DevOps practices.  Only when you continuously deliver your code can you have true confidence that your changes will be serving value to your customers within minutes of pushing the “go live” button, any time the business is ready for it.


Quick Tip – How to Wait for User Keypress in #PowerShell

You’ve developed a PowerShell script that returns some useful information to the user.  At the end of the script execution, you want the user to “Press any key to continue…” before exiting.  How do you do it?

Solution 1: For PowerShell Console

Write-Host "Press any key to continue..."
$Host.UI.RawUI.ReadKey("NoEcho, IncludeKeyDown")

If you run the script above in the Windows PowerShell command-line console, you will get the following results when you press the Enter key:

VirtualKeyCode  Character  ControlKeyState  KeyDown
--------------  ---------  ---------------  -------
            13  ...        0                True

However, if you are running your script in PowerShell ISE, you will receive the following error:

Exception calling "ReadKey" with "1" argument(s): "The method or operation is not implemented."

To resolve this error in PowerShell ISE, see the next solution.

Solution 2: Works in PowerShell ISE

Here is a simple way to pause script execution and wait for the user to press the ENTER key to continue. This works in both the PowerShell command-line console and the PowerShell ISE.

Read-Host "Press ENTER to continue..."

Solution 3: MessageBox UI

Another way to pause script execution and wait for the user’s interaction is by showing a MessageBox UI. This works in both the PowerShell command-line console and the PowerShell ISE.

$Shell = New-Object -ComObject "WScript.Shell"
$Button = $Shell.Popup("Click OK to continue.", 0, "Hello", 0)

This will result in a MessageBox UI as follows:


Solution 4: Pause function

Here is an encompassing solution that works whether you are running your script in the PowerShell command-line console or in the PowerShell ISE:

Function Pause ($Message = "Press any key to continue...") {
   # Check if running in PowerShell ISE
   If ($psISE) {
      # "ReadKey" is not supported in PowerShell ISE.
      # Show a MessageBox UI instead.
      $Shell = New-Object -ComObject "WScript.Shell"
      $Button = $Shell.Popup("Click OK to continue.", 0, "Hello", 0)
      Return
   }

   # Ignore these virtual key codes so that modifier and media keys
   # alone do not end the pause.
   $Ignore =
      16,  # Shift (left or right)
      17,  # Ctrl (left or right)
      18,  # Alt (left or right)
      20,  # Caps lock
      91,  # Windows key (left)
      92,  # Windows key (right)
      93,  # Menu key
      144, # Num lock
      145, # Scroll lock
      166, # Back
      167, # Forward
      168, # Refresh
      169, # Stop
      170, # Search
      171, # Favorites
      172, # Start/Home
      173, # Mute
      174, # Volume Down
      175, # Volume Up
      176, # Next Track
      177, # Previous Track
      178, # Stop Media
      179, # Play
      180, # Mail
      181, # Select Media
      182, # Application 1
      183  # Application 2

   Write-Host -NoNewline $Message
   $KeyInfo = $Null
   While ($KeyInfo.VirtualKeyCode -Eq $Null -Or $Ignore -Contains $KeyInfo.VirtualKeyCode) {
      $KeyInfo = $Host.UI.RawUI.ReadKey("NoEcho, IncludeKeyDown")
   }
   Write-Host
}

You will see Press any key to continue... if you are running in the PowerShell command-line console, or a MessageBox UI if you are running in the PowerShell ISE.

Have fun with PowerShell!

#Azure #Storage Replication Demystified

Replication protects your data and preserves your application up-time in the event of transient hardware failures. To ensure durability and high availability, replication copies your data, either within the same data center, or to a second data center, depending on which replication option you choose.

Replication ensures that your storage account meets the Service-Level Agreement (SLA) for Storage even in the face of failures.  If your data is replicated to a second data center, that also protects your data against a catastrophic failure in the primary location.

Replication Options

On Azure, you can select one of the following replication options when creating a storage account:

  • Locally Redundant Storage (LRS)
  • Zone-Redundant Storage (ZRS)
  • Geo-Redundant Storage (GRS)
  • Read-Access Geo-Redundant Storage (RA-GRS)

Read-Access Geo-Redundant Storage (RA-GRS) is the default option when you create a new storage account.

Here is a quick overview of the differences between LRS, ZRS, GRS, and RA-GRS:

Replication strategy                                               LRS  ZRS  GRS  RA-GRS
Data is replicated across multiple data centers                    No   Yes  Yes  Yes
Data can be read from the secondary as well as the primary         No   No   No   Yes
Number of copies of data maintained on separate nodes              3    3    6    6

Locally Redundant Storage (LRS)

Locally Redundant Storage (LRS) replicates your data 3 times within a storage scale unit which is hosted in a data center in the same region in which you created your storage account.

A write request returns successfully only once it has been written to all 3 replicas. These 3 replicas each reside in separate Fault Domains (FD) and Upgrade Domains (UD) within one storage scale unit.  A storage scale unit is a collection of racks of storage nodes.
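As a rough illustration, a write that is acknowledged only once it reaches every replica can be modeled like this. It is a toy model for intuition only, not the actual storage protocol:

```python
# Toy model of an LRS-style write: the write returns successfully
# only once the data has been written to all 3 replicas.
def replicated_write(data, replicas):
    for replica in replicas:
        replica.append(data)
    # Success requires the data to be present on every replica.
    return all(data in replica for replica in replicas)

replicas = [[], [], []]  # 3 replicas in separate fault/upgrade domains
print(replicated_write("block-1", replicas))  # True
```

The real service additionally handles replica failures, placement across fault and upgrade domains, and consistency, which this sketch omits.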

Fault Domain (FD)

A fault domain is a group of nodes that represent a physical unit of failure and can be considered as nodes belonging to the same physical rack.

Upgrade Domain (UD)

An upgrade domain is a group of nodes that are upgraded together during the process of a service upgrade (rollout).

The 3 replicas are spread across UDs and FDs within one storage scale unit to ensure that data is available even if hardware failure impacts a single rack or when nodes are upgraded during a rollout.

LRS is the lowest-cost option and offers the least durability compared to the other options. In the event of a data-center-level disaster (fire, flooding, etc.), all 3 replicas might be lost or unrecoverable.

To mitigate this risk, Geo Redundant Storage (GRS) is recommended for most applications.

LRS may still be desirable in certain scenarios:

  • Provides the highest maximum bandwidth of the Azure Storage replication options.
  • If your application stores data that can be easily reconstructed, you may opt for LRS.
  • Some applications are restricted to replicating data only within a country due to data governance requirements. A paired region could be in another country; please see Azure regions for information on region pairs.

Zone-Redundant Storage (ZRS)

Zone-Redundant Storage (ZRS) replicates your data asynchronously across data centers within one or two regions in addition to storing 3 replicas similar to LRS, thus providing higher durability than LRS. Data stored in ZRS is durable even if the primary data center is unavailable or unrecoverable.


  • ZRS is only available for block blobs in general purpose storage accounts, and is supported only in storage service versions 2014-02-14 and later.
  • Since asynchronous replication involves a delay, in the event of a local disaster it is possible that changes that have not yet been replicated to the secondary will be lost if the data cannot be recovered from the primary.
  • The replica may not be available until Microsoft initiates fail-over to the secondary.
  • ZRS accounts cannot be converted later to LRS or GRS. Similarly, an existing LRS or GRS account cannot be converted to a ZRS account.
  • ZRS accounts do not have metrics or logging capability.

Geo-Redundant Storage (GRS)

Geo-Redundant Storage (GRS) replicates your data to a secondary region that is hundreds of miles away from the primary region. If your storage account has GRS enabled, then your data is durable even in the case of a complete regional outage or a disaster in which the primary region is not recoverable.

For a storage account with GRS enabled, an update is first committed to the primary region, where it is replicated 3 times. Then the update is replicated asynchronously to the secondary region, where it is also replicated 3 times.

With GRS both the primary and secondary regions manage replicas across separate Fault Domains and Upgrade Domains within a storage scale unit as described with LRS.


  • Since asynchronous replication involves a delay, in the event of a regional disaster it is possible that changes that have not yet been replicated to the secondary region will be lost if the data cannot be recovered from the primary region.
  • The replica is not available unless Microsoft initiates fail-over to the secondary region.
  • To allow an application to read from the secondary region, you must enable RA-GRS.

When you create a storage account, you select the primary region for the account. The secondary region is determined based on the primary region, and cannot be changed.

The following table shows the primary and secondary region pairings.

Primary Secondary
North Central US South Central US
South Central US North Central US
East US West US
West US East US
East US 2 Central US
Central US East US 2
North Europe West Europe
West Europe North Europe
South East Asia East Asia
East Asia South East Asia
East China North China
North China East China
Japan East Japan West
Japan West Japan East
Brazil South South Central US
Australia East Australia Southeast
Australia Southeast Australia East
India South India Central
India Central India South
US Gov Iowa US Gov Virginia
US Gov Virginia US Gov Iowa
Canada Central Canada East
Canada East Canada Central
UK West UK South
UK South UK West
Germany Central Germany Northeast
Germany Northeast Germany Central
West US 2 West Central US
West Central US West US 2

For up-to-date information about regions supported by Azure, see Azure Regions.
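In code, the fixed pairing amounts to a simple lookup table. The sketch below uses a small subset of the pairings from the table above, for illustration only:

```python
# Secondary region is fixed by the primary region and cannot be changed.
# A subset of the documented pairings, for illustration.
REGION_PAIRS = {
    "East US": "West US",
    "West US": "East US",
    "North Europe": "West Europe",
    "West Europe": "North Europe",
    "Japan East": "Japan West",
    "Brazil South": "South Central US",
}

def secondary_region(primary):
    # Raises KeyError for regions not in this illustrative subset.
    return REGION_PAIRS[primary]

print(secondary_region("North Europe"))  # West Europe
```

Note that the mapping is not always symmetric: Brazil South pairs to South Central US, while South Central US's own secondary is North Central US.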

Read-Access Geo-Redundant Storage (RA-GRS)

Read-Access Geo-Redundant Storage (RA-GRS) maximizes availability for your storage account by providing read-only access to the data in the secondary location, in addition to the replication across two regions provided by GRS.

When you enable read-only access to your data in the secondary region, your data is available on a secondary endpoint, in addition to the primary endpoint for your storage account. The secondary endpoint is similar to the primary endpoint, but appends the suffix -secondary to the account name.

For example, if your primary endpoint for the Blob service is myaccount.blob.core.windows.net, then your secondary endpoint is myaccount-secondary.blob.core.windows.net. The access keys for your storage account are the same for both the primary and secondary endpoints.
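The naming rule can be sketched as a small helper. The account name `myaccount` below is a hypothetical placeholder, not a real account:

```python
# RA-GRS endpoint naming: the secondary Blob endpoint appends "-secondary"
# to the storage account name.
def blob_endpoints(account):
    primary = f"https://{account}.blob.core.windows.net"
    secondary = f"https://{account}-secondary.blob.core.windows.net"
    return primary, secondary

primary, secondary = blob_endpoints("myaccount")
print(secondary)  # https://myaccount-secondary.blob.core.windows.net
```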


  • Your application has to manage which endpoint it is interacting with when using RA-GRS.
  • RA-GRS is intended for high-availability purposes. For scalability guidance, please review the performance checklist.

What is #Azure Batch?

Azure Batch is a platform service for running large-scale parallel and high-performance computing (HPC) applications efficiently in the cloud.

What does Azure Batch do?

  • Schedules compute-intensive work to run on a managed collection of virtual machines
  • Automatically scales compute resources to meet the needs of your jobs
  • Lets you easily define Azure compute resources to execute your applications in parallel, and at scale
  • Removes the need to manually create, configure, and manage an HPC cluster, individual virtual machines, virtual networks, or a complex job and task scheduling infrastructure

Batch computing is most commonly used by organizations that regularly process, transform, and analyze large volumes of data.

Intrinsically Parallel Workloads

Batch works well with intrinsically parallel (also known as “embarrassingly parallel”) applications and workloads. Intrinsically parallel workloads are those that are easily split into multiple tasks that perform work simultaneously on many computers.

Examples of workloads that are commonly processed using this technique are:

  • Financial risk modeling
  • Climate and hydrology data analysis
  • Image rendering, analysis, and processing
  • Media encoding and transcoding
  • Genetic sequence analysis
  • Engineering stress analysis
  • Software testing
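What makes these workloads intrinsically parallel is that each task is independent of the others, so the work can be split naively across workers. A minimal sketch using only Python's standard library (on Azure Batch the same split happens across pools of compute nodes rather than local workers):

```python
# Intrinsically parallel: independent tasks, no coordination between them.
from concurrent.futures import ThreadPoolExecutor

def process_item(item):
    # Stand-in for one independent unit of work (e.g. rendering one frame).
    return item * item

def run_batch(items, workers=4):
    # Each item is handed to any free worker; order of results is preserved.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_item, items))

print(run_batch(range(5)))  # [0, 1, 4, 9, 16]
```

Because no task depends on another's output, adding more workers (or more nodes) shortens the wall-clock time almost linearly, which is exactly the property Batch exploits.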


Azure Batch is a free service; you aren’t charged for the Batch account itself. You are charged for the underlying Azure compute resources that your Batch solutions consume, and for the resources consumed by other services when your workloads run. For example, you are charged for the compute nodes in your pools and for the data you store in Azure Storage as input or output for your tasks.

Developing with Batch

Processing parallel workloads with Azure Batch is typically done programmatically by using one of the Batch APIs.

Your client application or service can use the Batch APIs to:

  • communicate with the Batch service
  • create and manage pools of compute nodes, either virtual machines or cloud services
  • schedule jobs and tasks to run on those nodes

You can efficiently process large-scale workloads for your organization, or provide a service front end to your customers so that they can run jobs and tasks on demand or on a schedule, on one, hundreds, or even thousands of nodes. You can also use Azure Batch as part of a larger workflow, managed by tools such as Azure Data Factory.

Azure Accounts for Batch Development

When you develop Batch solutions, you will need the following accounts in your Microsoft Azure subscription:

  • Batch account – Azure Batch resources, including pools, compute nodes, jobs, and tasks, are associated with an Azure Batch account. When your application makes a request against the Batch service, it authenticates the request using the Azure Batch account name, the URL of the account, and an access key. You can create a Batch account in the Azure portal.
  • Storage account – Batch includes built-in support for working with files in Azure Storage. Nearly every Batch scenario uses Azure Blob storage for staging the programs that your tasks run and the data that they process, and for the storage of output data that they generate. To create a Storage account, see About Azure storage accounts.

Batch Development APIs

Your applications and services can issue direct REST API calls or use one or more of the following client libraries to run and manage your Azure Batch workloads.

API                   API reference  Download  Tutorial  Code samples
Batch .NET            -              NuGet     Tutorial  GitHub
Batch Python          -              PyPI      Tutorial  GitHub
Batch Node.js         -              npm       -         -
Batch Java (preview)  -              Maven     -         GitHub

Batch Command-line Tools

Functionality provided by the development APIs is also available using command-line tools:

  • Batch PowerShell cmdlets: The Azure Batch cmdlets in the Azure PowerShell module enable you to manage Batch resources with PowerShell.
  • Azure CLI: The Azure Command-Line Interface (Azure CLI) is a cross-platform toolset that provides shell commands for interacting with many Azure services, including Batch.

Batch Resource Management

The Azure Resource Manager APIs for Batch provide programmatic access to Batch accounts. Using these APIs, you can programmatically manage Batch accounts, quotas, and application packages.

API                          API reference  Download  Tutorial  Code samples
Batch Resource Manager REST  -              N/A       -         GitHub
Batch Resource Manager .NET  -              NuGet     Tutorial  GitHub

Batch Tools

While not required to build solutions using Batch, here are some valuable tools to use while building and debugging your Batch applications and services.

  • Azure portal: You can create, monitor, and delete Batch pools, jobs, and tasks in the Azure portal’s Batch blades. You can view the status information for these and other resources while you run your jobs, and even download files from the compute nodes in your pools (download a failed task’s stderr.txt while troubleshooting, for example). You can also download Remote Desktop (RDP) files that you can use to log in to compute nodes.
  • Azure Batch Explorer: Batch Explorer provides similar Batch resource management functionality as the Azure portal, but in a standalone Windows Presentation Foundation (WPF) client application. It is one of the Batch .NET sample applications available on GitHub; you can build it with Visual Studio 2015 or above and use it to browse and manage the resources in your Batch account while you develop and debug your Batch solutions. View job, pool, and task details, download files from compute nodes, and connect to nodes remotely by using Remote Desktop (RDP) files you can download with Batch Explorer.
  • Microsoft Azure Storage Explorer: While not strictly an Azure Batch tool, the Storage Explorer is another valuable tool to have while you are developing and debugging your Batch solutions.
