Major Incident Notification Template

Overview

This email template is intended to provide the business with transparency during Priority 1 and Priority 2 (critical) incidents.

The concept is to provide concise updates at regular intervals as the incident progresses towards resolution.

Use Case

The matrix below is a strong guideline on when to send these email comms. These intervals are industry practice and have been formulated to provide an effective balance between communication and technical resolution.

Ideally, it is the incident manager’s duty to share these comms across the business, allowing the assigned engineer to perform technical work in tandem without loss of focus on the P1/P2 incident.

Priority      Interval
Priority 1    1 hour
Priority 2    2 hours

Email Template

Subject: Priority {number} Incident Notification – {Ticket number} – {Short summary}

{INSERT_COMPANY_LOGO_HERE}

Hi all,

Please be advised that an issue has been identified that is currently under the control of the Major Incident Management team. The reported issue is affecting {Service name}.

Details of the outage can be found below:

APPLICATION/SERVICE AFFECTED: ____________________________________________________________
IMPACTED AREA: ____________________________________________________________
CURRENT STATUS / ACTIONS COMPLETED: ____________________________________________________________
NEXT ACTIONS: ____________________________________________________________
ETA NEXT UPDATE: ____________________________________________________________

How to monitor Ribbon SBC using Azure Log Analytics

In this tutorial, I will show you how to configure monitoring of your Ribbon SBC appliance using Microsoft Azure Log Analytics.

This is particularly useful as Ribbon does not currently provide a monitoring solution for Ribbon SBC on Azure.

Let’s face it, Azure monitoring is downright awesome. It’s super easy to get going, inexpensive (compared to other 3rd party products) and the potential is limitless!

Overview

We will be building an Ubuntu Server Linux virtual machine to act as an intermediary syslog gateway for Ribbon SBC SWeLite to forward logs into an Azure Log Analytics Workspace.

High Level Design

Prerequisites

  1. Azure subscription.
  2. Azure Log Analytics workspace.
  3. Ribbon SBC hosted in Azure (I am using SWeLite 9.0.1 in this guide).
  4. Network connectivity between Ribbon SBC and Linux VM.
    1. In this guide, there is a VNET peer between the Linux VM and Ribbon SBC network. They are both in Azure.

Note: There will be a small operational expenditure with this exercise as you will be creating a new Linux virtual machine in Azure.

If you have a pre-existing Linux VM in Azure you can use that without incurring additional costs.

Provision Linux VM

We are provisioning an Ubuntu Server 18.04 Linux VM for this exercise as it is cheap and secure.

  1. Within the Azure Portal, click Create a resource.
  2. Within the Search the Marketplace bar, enter Ubuntu and click Ubuntu Server 18.04 LTS.
  3. Click Create.
  4. Give your new Ubuntu Server Linux VM a name and customise it. In this guide we are using the B1ls size as it is the least expensive.
  5. Your configuration screen should look like the exhibit above.
  6. Create your virtual machine when ready.
  7. Finish.

Install Log Analytics Agent

This section covers the Log Analytics Agent (formerly known as the OMS agent) installation for the PROD-UBUNTU-01 virtual machine. This VM will act as our syslog gateway for the Ribbon/Sonus SBC.

Note: An Azure Log Analytics workspace is a prerequisite for this section.

  1. In the Azure Portal, search for “Log Analytics” in the top search bar and click to open it.
  2. Click to open your Log Analytics workspace.
  3. Within the newly opened blade on the right of your screen, click Virtual machines under Workspace Data Sources.
  4. Within the far right-hand blade, click to open our newly created Linux VM. It is PROD-UBUNTU-01 in my example below.
  5. Click Connect.
  6. The Log Analytics monitoring agent is now deploying to our Linux VM. This can take up to 5 minutes to complete.
  7. Once it is complete and the agent is connected to our workspace, click Advanced settings within the Log Analytics workspace blade.
  8. Click Data > Syslog, then within the facility search bar in the right pane type local0 and click +. Ribbon SBC will only utilise local0 per the instructions below.
  9. Ensure all severity options ranging from EMERGENCY to DEBUG are ticked.
  10. Click Save.
  11. Connect to the Linux VM using SSH. For instructions on how to SSH to an Azure hosted Linux VM, check this out.
  12. Once successfully logged on, execute the following command to enable Rsyslog remote log forwarding: sudo vi /etc/rsyslog.d/95-omsagent.conf.
  13. Append the following two lines at the end of the file, then save and close vi (see the example after this list).
  14. Restart the rsyslog service using the command sudo service rsyslog restart.
  15. Verify that our Linux VM is listening on port 514 using the command netstat -an | grep 514.
  16. Finish.
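
The two lines appended in step 13 are what allow the VM to receive remote syslog. A minimal sketch of that configuration, assuming the intent is simply to enable rsyslog's UDP listener on port 514 (which step 15 verifies):

# Load the rsyslog UDP input module and listen on port 514,
# allowing the Ribbon SBC to send remote syslog to this VM.
$ModLoad imudp
$UDPServerRun 514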

Our Ubuntu Server Linux virtual machine is now configured to act as a syslog gateway to forward logs to our Log Analytics workspace. Our next action is to configure Ribbon SBC to send remote syslogs to our Ubuntu Server Linux VM.

Enable Ribbon SBC Remote Syslog

  1. Navigate to your Ribbon SBC appliance’s web GUI. You can do this via its management IP.
  2. Click Settings > Remote Log Servers > +.
  3. A pop-up window will appear; enter the following settings.
    1. Global Log Level: Informational
    2. Log Destination: Ubuntu Server Linux VM IP
    3. Port: 514
    4. Protocol: UDP
    5. Log Facility: local0 (Local Use 0)
    6. Enabled: Yes
  4. Click OK.
  5. Finish.

Viewing Logs

We can now see logs begin to feed into our Log Analytics Workspace.

  1. In the Azure Portal, search for “Log Analytics” in the top search bar and click to open it.
  2. Click to open your Log Analytics workspace.
  3. Click Logs.
  4. Execute the following KQL query:

Syslog
| where TimeGenerated > ago(24h)
| where Computer contains "RIBBON SBC IP ADDRESS"

  5. Congratulations! We can now see syslogs from Ribbon SBC in Azure Log Analytics.

Next Actions

The next steps are to configure Azure alerts and actions based on event severity level.

For example, we can configure an action to send an email to a Microsoft Teams channel, or raise a ServiceNow ticket using an ITSM hook, when a “warning” or “critical” severity event occurs. The possibilities are endless.
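
As a teaser, the alert condition could be driven by a KQL query along these lines (a sketch, assuming the default Log Analytics Syslog table schema and standard syslog severity names):

Syslog
| where TimeGenerated > ago(1h)
| where Facility == "local0"
| where SeverityLevel in ("warning", "err", "crit", "alert", "emerg")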

Stay tuned!

 

Report all Azure AD user IDs last logon timestamp using Microsoft Graph API

It’s been a while since I’ve updated my blog so here we go!

This is a step-by-step guide on how to generate an Azure AD report listing all users’ last logon time. This is particularly handy as it is not possible to generate such a report using the AzureAD or AzureADPreview PowerShell modules.

Prerequisites

  • Azure Active Directory
  • Web browser
  • Microsoft Graph API delegate permissions
    • User.Read.All
    • Directory.Read.All
    • Directory.AccessAsUser.All

Instructions

  1. Launch your web browser and navigate to https://developer.microsoft.com/en-us/graph/graph-explorer.
  2. Click Sign in to Graph Explorer and log in using your Azure AD tenant credentials.
  3. Once you have successfully signed in, run the following query: https://graph.microsoft.com/beta/users?$top=999&$select=displayName,userPrincipalName,signInActivity.
  4. When successful, the output will be produced in JSON format within the bottom pane, on the response preview tab, per below.
  5. Select all (CTRL + A) within the bottom response preview tab and copy (CTRL + C).
  6. Go to https://json-csv.com/ and convert your JSON output to CSV.
  7. Open your CSV file using Microsoft Excel, then filter and sort the last login column from A to Z (which also sorts it in date order).
  8. Congratulations! You have successfully generated an Azure AD report detailing last logon timestamps for your organisation using the Microsoft Graph API.
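
If you would rather script this than use Graph Explorer and a JSON-to-CSV converter, a minimal PowerShell sketch along the following lines should also work. It assumes you have already acquired an access token (stored in $accessToken) with the delegated permissions listed above; token acquisition is not covered here.

# Query the Graph beta endpoint for all users and their sign-in activity,
# following @odata.nextLink to retrieve every page of results.
$uri = 'https://graph.microsoft.com/beta/users?$top=999&$select=displayName,userPrincipalName,signInActivity'
$headers = @{ Authorization = "Bearer $accessToken" }
$users = @()

while ($uri) {
    $response = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
    $users += $response.value
    $uri = $response.'@odata.nextLink'
}

# Flatten the nested signInActivity property and export to CSV.
$users |
    Select-Object displayName, userPrincipalName,
        @{ Name = 'lastSignIn'; Expression = { $_.signInActivity.lastSignInDateTime } } |
    Export-Csv -Path .\AzureAD-LastLogon.csv -NoTypeInformation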

Connect to AWS CLI via Powershell

In this tutorial, we will connect a PowerShell terminal to the AWS CLI using SAML2AWS.

This is especially handy for AWS tenancies that use MFA authentication.

Pre-requisites

Instructions

  1. Launch elevated Powershell.
  2. Execute CMDLET Set-ExecutionPolicy Bypass -Scope Process.
  3. Download and save Chocolatey installation script.
  4. Install Chocolatey by executing installation script using CMDLET .\install.ps1. Wait for it to complete.
  5. Install SAML2AWS using command choco install saml2aws.
  6. Configure SAML2AWS for your AWS tenancy using command saml2aws configure.
    Please choose a provider: ADFS
    (Optional) Please choose an MFA: {MFA Token Provider}
    AWS Profile: saml
    URL: https://<Server Name>/adfs/ls/idpinitiatedsignon.aspx
    Username: {Domain}\{Username}
    Password: {DomainPassword}
    Confirm: {DomainPassword}
  7. Once configured, you will receive the following message: Configuration saved for IDP account: default.
  8. Now execute the command saml2aws login. This will attempt to log in to your AWS tenancy using the above details. You may be prompted for your username, password and Security Token.
  9. (Optional) Enter your Symantec VIP token code for Security Token [000000] and hit Enter.
  10. At the Please choose the role prompt, select the AWS role for the account you wish to log in to.

Congratulations! You have successfully logged in to the AWS CLI.
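
As a quick sanity check, you can confirm the session is valid by querying the caller identity (assuming the AWS CLI is installed and on your PATH):

# Uses the "saml" profile configured during saml2aws configure
aws sts get-caller-identity --profile saml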

Azure ExpressRoute Utilisation – Automate EOM Reporting with Log Analytics & Logic Apps

We can generate ExpressRoute link utilisation reports within Azure by leveraging Log Analytics, Logic Apps and email!

This is very beneficial for automating end-of-month (EOM) reporting.

 

Pre-requisites

  • Azure subscription.
  • Log Analytics workspace (Per GB pricing tier if you require more than 7 days of data retention).
  • ExpressRoute link.
  • ExpressRoute network monitoring.
  • SPN (service principal).
  • SMTP relay.

 

KQL

KQL stands for Kusto Query Language and is used to query and manipulate data in Azure Log Analytics. I’ve written the following query, which we can execute within our Log Analytics workspace in Azure to generate a graphical timechart of ExpressRoute utilisation spanning 30 days, with data aggregated into 24-hour bins.

Code

NetworkMonitoring
| where TimeGenerated > ago(30d)
| where SubType == "ERCircuitTotalUtilization"
| summarize avg(BitsInPerSecond), avg(BitsOutPerSecond) by bin(TimeGenerated, 24h)
| render timechart
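
If you would rather view the chart in megabits per second instead of raw bits, a small variation of the query should do the trick (a sketch, relying on the default avg_* column names that Log Analytics generates):

NetworkMonitoring
| where TimeGenerated > ago(30d)
| where SubType == "ERCircuitTotalUtilization"
| summarize avg(BitsInPerSecond), avg(BitsOutPerSecond) by bin(TimeGenerated, 24h)
| extend MbpsIn = avg_BitsInPerSecond / 1000000, MbpsOut = avg_BitsOutPerSecond / 1000000
| project TimeGenerated, MbpsIn, MbpsOut
| render timechart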

 

Output

 

Automation with Logic Apps

We can now automate the above query to send email at the end of each month with our graphical timechart attached!

To do so, follow these instructions:

  1. Within the Azure search bar, enter Logic Apps and click to open Logic Apps. This will take us to the following screen.
  2. Click Add and call your new logic app “ExpressRoute-Report-30days”.
  3. Within our new logic app “ExpressRoute-Report-30days”, click Logic app designer.
  4. We must create the following four steps within the Logic app designer:
    1. Recurrence.
    2. Run query and visualize results (Preview).
    3. Compose.
    4. Send Email (V3) (Preview).

Recurrence

Here, we define the recurrence interval for our logic app. Since this is an EOM report, we set it to run once a month.

Run query and visualize results (Preview)

  1. Select your Azure subscription which contains your log analytics workspace and ExpressRoute resources.
  2. Select resource group which contains your log analytics workspace.
  3. Select workspace name of your log analytics workspace.
  4. Enter KQL query to generate timechart report, as specified in this article.
  5. Select Time Chart (or your desired chart type) within the Chart Type drop-down menu.

Compose

Within the Compose step > Inputs, select Attachment Name. This will ensure our rendered time chart is attached to the email.

Send Email (V3)

Our last step within the logic app is to send the email.

  1. Write a generic message within Body that is informative enough for recipients to understand the message intention, i.e. ExpressRoute report for <Customer Name>.
  2. Within Attachments Content data – 1, select Attachment Content 1.
  3. Attachments File name – 1, select Attachment Name.
  4. Click Add/Change connection.
    1. Specify SMTP relay server.
    2. Configure authentication Username/Password.
    3. Finish.

SPN

You will require an SPN with Contributor privileges within your Azure subscription in order to execute the above KQL and integrate it within the logic app. For instructions on how to create an SPN, follow my previous blog post.

 

SMTP

You will require an SMTP relay server that uses authentication. It is recommended to use a service account for this purpose.

SMTP Server       Username      Password
smtp.domain.com   svc-service   ********

 

Finish

Congratulations! We have now configured automated ExpressRoute end-of-month reporting leveraging Log Analytics, Logic Apps and an email relay!

Terraform Variables

Like other programming languages, Terraform makes use of variables to keep code dynamic and reusable.

You can keep a dedicated Terraform variables file with the .tfvars extension within your working directory/folder. A file named terraform.tfvars (or any *.auto.tfvars file) is loaded automatically, so its values can be called directly and/or interpolated in your main Terraform code file.

There are several different types of variables in Terraform:

  • String: naming objects or anything requiring simple text.
  • Map: defining lookups such as Azure locations/regions.
  • List: defining Azure subnets or IP addresses. Lists are essentially arrays, for those familiar with programming syntax, i.e. [ ].
  • Boolean: defining true/false values. Bear in mind that in the 0.11-style syntax used here, the type is still declared as a string.

String

variable "server_name {
 default = "nethugo-server"
}

Map

variable "locations" {
 type = "map"
 default = {
  location1 = "Australia Southeast"
  location2 = "westus2"
  }
}

List

variable "subnets" {
 type = "list"
 default = ["10.0.1.10", "10.0.1.11"]
}

Boolean

variable "status" {
 type = "string"
 default = false
}

Calling Variables

We can call the above variables as follows. Ensure you include the quotes.

  • String: "${var.server_name}"
    • Output: nethugo-server
  • Map: "${var.locations["location1"]}"
    • Output: Australia Southeast
  • List: "${var.subnets[0]}"
    • Output: 10.0.1.10
  • Boolean: "${var.status}"
    • Output: false

Examples

.TFVARS variables
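
For example, a terraform.tfvars file might simply contain value assignments such as the following (reusing the values from the variable examples above):

server_name = "nethugo-server"

locations = {
  location1 = "Australia Southeast"
  location2 = "westus2"
}

subnets = ["10.0.1.10", "10.0.1.11"]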

Initialising TFVARS variables in Main .TF file
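
In the main .TF file, each variable is then declared (initialised), with or without a default, for example:

variable "server_name" {}

variable "locations" {
  type = "map"
}

variable "subnets" {
  type = "list"
}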

Calling TFVARS variables in Main .TF file
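
The values can then be called within resources using interpolation, for example (the resource shown here is illustrative):

resource "azurerm_resource_group" "example" {
  name     = "${var.server_name}-rg"
  location = "${var.locations["location1"]}"
}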

How to run Terraform Code

In this tutorial, we will be executing Terraform code to create a new Azure Resource Group.

Prerequisites

Code for creating a Resource Group

resource "azurerm_resource_group" "web_server_rg" {
name="web-rg"
location="westus2"
}

Apply Changes

Within Microsoft Visual Studio Code:

  1. Right-click our TF file. In this example, it is Main.TF.
  2. Click Open in Terminal.
  3. A terminal will now open at the bottom of Microsoft Visual Studio Code.
  4. Type terraform init and press Enter. This initialises a new or existing Terraform configuration.
  5. Now we can preview our Terraform code to determine the changes it will make to our Azure environment using the command terraform plan. Press Enter.
  6. You will receive output within the terminal advising of the changes determined.
  7. If you are satisfied with the plan results, we can commit our changes to production with the command terraform apply. Press Enter.
  8. It will prompt you with the changes to be implemented; Type yes and press Enter to commit changes to production.
  9. Our Terraform Apply code completed successfully!
  10. Upon checking Azure Portal, we can now validate that our new Resource Group was successfully created.
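
For reference, the full command sequence run in the terminal boils down to:

terraform init    # initialise the working directory and provider plugins
terraform plan    # preview the changes Terraform will make
terraform apply   # apply the changes (type "yes" when prompted)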

Revert Changes

Now, if you wish to revert the changes you have made, you can simply execute the command terraform destroy within our Visual Studio Code terminal.

Now we can validate within Azure Portal that our resource group was successfully deleted.

 

REF: https://www.terraform.io/docs/commands/index.html

How to create an Azure SPN for Terraform

A Service Principal Name (SPN) is effectively a service account within Azure.

It is required in this scenario so that the provider block in our Terraform code can connect to our Azure subscription.

  1. Login to https://portal.azure.com.
  2. Click into Azure Active Directory.
  3. Click App Registrations >+ New registration.
  4. Provide a name for your SPN, e.g. terraform-spn, and provide any valid URL within Redirect URI.
  5. Click Register.
  6. We require four pieces of information in order to use Terraform:
    1. client_id: the Azure Application ID.
    2. client_secret: our SPN’s client secret key, which we generate within Azure.
    3. tenant_id: the Azure AD directory ID.
    4. subscription_id: our Azure subscription ID.
  7. Copy the Application ID from our newly created SPN. In Terraform, this is the client_id variable.
  8. Within our SPN blade, click Certificates & secrets > + New client secret.
  9. Provide a name and duration for the secret. Click Add.
  10. Copy the newly created client secret string. In Terraform, this is the client_secret variable.
  11. Click into Azure Active Directory > Properties.
  12. Copy the Directory ID string. In Terraform, this is the tenant_id variable.
  13. Now go into Subscriptions > Overview and copy the Subscription ID. In Terraform, this is the subscription_id variable.
  14. Within Subscriptions > [Your Subscription] > Access control (IAM) > + Add > Add role assignment.
  15. Role: Contributor.
  16. Select our newly created SPN, e.g. terraform-spn, and click Save.
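
As an alternative, the same SPN and role assignment can be created in a single step with the Azure CLI (assuming it is installed and you are logged in with az login; the subscription ID below is a placeholder):

# Output includes appId (client_id), password (client_secret) and tenant (tenant_id)
az ad sp create-for-rbac --name "terraform-spn" --role Contributor --scopes /subscriptions/<subscription-id>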

 

Congratulations, we have now successfully configured an Azure SPN ready for use with Terraform!

How to hook Terraform into Azure

You can hook Terraform into Azure with the following requirements:

  1. client_id: the Application ID of the AAD application registration.
  2. client_secret: a password key generated within the AAD application registration.
  3. tenant_id: the AAD Directory ID.
  4. subscription_id: the Azure Subscription ID.

Once we gather the above details, we can proceed to write the provider block within our Terraform script.

 

Below is an example; replace the values with those from your own tenant.

provider "azurerm" {
version="1.27"
client_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
client_secret="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
tenant_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
subscription_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
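
To avoid hard-coding the client secret, the same block can be fed from variables (see the Terraform Variables post above); a sketch using the same 0.11-style syntax:

variable "client_id" {}
variable "client_secret" {}
variable "tenant_id" {}
variable "subscription_id" {}

provider "azurerm" {
  version         = "1.27"
  client_id       = "${var.client_id}"
  client_secret   = "${var.client_secret}"
  tenant_id       = "${var.tenant_id}"
  subscription_id = "${var.subscription_id}"
}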

How to increase MBR volume more than 2 TB

Ever encounter a scenario with a Windows VM requiring a disk extension over 2 TB, only to discover it cannot be performed?

Don’t fret, as this is by design; you cannot grow an MBR partition style volume beyond 2 TB.

Unfortunately, we do not have the option of converting an MBR partition style volume to GPT on the fly to overcome this limitation; however, here is a viable solution for such scenarios.

Prerequisites

  1. 2.5x the required storage capacity in your hypervisor datastore, e.g. if you need to extend a 2 TB volume to 2.5 TB, ensure your datastore is at least 5 TB with 2.5 TB free. See step 1.1.
  2. OS: Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2 or Windows Server 2016 (although the exact steps may vary).
  3. Patience, as this can be a time-consuming activity.

Solution

Note: In the solution exhibits below, we are using a 127 GB MBR partition and a 135 GB GPT partition.

  1. From the hypervisor layer (ESX/XenServer/Hyper-V), confirm you have a datastore (storage) able to provision the required volume.
    1. I.e. if the original 2 TB volume is required to be extended to 2.5 TB total, ensure you have 2.5 TB of storage readily available in excess of the existing MBR volume.
  2. Provision an additional disk with the required extended size (ie. 2.5 TB) to the VM in question (requiring storage).
  3. RDP to the VM in question (or gain console access, based on your personal preference).
  4. Login using a Local Administrator privileged account.
  5. Launch Disk Management console (diskmgmt.msc).
  6. Within the Disk Management console, you will notice a new, offline disk. This is the disk we provisioned in step 2.
  7. Right-click the offline disk and click Online.
  8. The disk status will now change to Not Initialized. Right-click the disk and click Initialize Disk.
  9. Select GPT (GUID Partition Table) for the partition style and click OK.
  10. Right-click the original MBR volume (in my example, it’s the E:\ drive) and click Add Mirror…
  11. Select our unallocated new volume listed within the Add Mirror window and click Add Mirror.
  12. You will receive the following warning. Click Yes to continue.
  13. You will see both disks change colour to red and enter a resynching state, which then progresses as a percentage per below. Note: the excess storage displays as unallocated space; we will leverage this later. Also, depending on the disk tier (premium SSD or standard HDD), size and contents, this process can range from minutes to days.
  14. Once complete, both disks will be in a Healthy state.
  15. Right-click the MBR volume (the smaller disk) and click Break Mirrored Volume…
  16. Click Yes within the warning window advising of the loss of fault tolerance.
  17. Right-click our (larger) GPT volume, which has now taken the original drive letter (E:\), and click Extend Volume…
  18. Extend the volume with all of the available unallocated space (from step 13) and confirm the larger volume size per the example exhibit below.
  19. Complete. We’re almost there!
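
If you prefer the command line, steps 7 to 9 (bringing the new disk online and initialising it as GPT) can alternatively be performed with diskpart; a quick sketch, where disk 2 is assumed to be the newly provisioned disk:

diskpart
list disk
select disk 2
online disk
attributes disk clear readonly
convert gpt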

Clean-up

Now, to clean up, we can remove the original (smaller) MBR disk and reclaim its storage at the hypervisor layer.

  1. Right-click the original MBR disk (smaller in size) and click Offline.

The disk is now ready to be deleted from the hypervisor layer, reclaiming storage within the datastore.

 

Finished! Great work, you have now grown what was formerly an MBR partition style disk to a capacity larger than 2 TB using the in-built Microsoft method above.