Welcome to the third part of the blog series SFTP Users for Storage Accounts. In the previous blog post, SFTP Users for Storage Accounts - Part 2, we created containers and local users for SFTP access. In this blog post, we will use the Azure API to generate the SFTP credentials for the local users.
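As a preview of that API call, regenerating a local user's SSH password can be sketched with `az rest` against the Storage management API. Everything in angle brackets is a placeholder, and the `api-version` is an assumption; verify both against the current Storage REST API reference before using this:

```shell
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/localUsers/<local-user>/regeneratePassword?api-version=2023-01-01"
```

The response should contain the newly generated password for the local user; treat it as a secret from the moment it is returned.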
SFTP Users for Storage Accounts - Part 2
Welcome back, folks! In the previous blog post, SFTP Users for Storage Accounts - Part 1, we created a Bicep template to deploy an Azure Storage Account and initialize the blob service. In this blog post, we will extend the Bicep template to create containers and local users for SFTP access.
SFTP Users for Storage Accounts - Part 1
Hello Folks,
I'm here to talk about an interesting topic today. I will be sharing my experience on how to create SFTP users for Azure Storage Accounts. This is a three-part series. In this first part, we will cover the basics of SFTP and how to create an SFTP user for an Azure Storage Account using Bicep. In the second part, we will discuss how to create a password for the SFTP user and how to use it to connect to the Azure Storage Account.
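To give a flavor of where part one is heading, here is a minimal Bicep sketch of an SFTP-enabled Storage Account. The account name and API version are illustrative assumptions, not the final template from the series:

```bicep
// Storage Account with the SFTP endpoint enabled.
// SFTP requires the hierarchical namespace (Data Lake Gen2).
resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'stsftpdemo01' // illustrative name
  location: resourceGroup().location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
  properties: {
    isHnsEnabled: true  // hierarchical namespace is a prerequisite for SFTP
    isSftpEnabled: true // turns on the SFTP endpoint
  }
}
```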
Working with Bicep CIDR Functions - Part 2
In the previous blog post, we left off with an example that has parameters for `vNetAddress`, `vSubnetCount`, and `vSubnetRange`. I would like to show what the deployment looks like when we use what-if, and what the output looks like.
Our requirement was to create a virtual network with a given address space and a given number of subnets. We also wanted to specify the range of the subnets. We used the `cidrSubnet` function to create the subnets.
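As a quick refresher, `cidrSubnet(network, cidr, index)` carves the index-th subnet of the given prefix length out of an address space. A minimal sketch using the same parameter names (default values are illustrative):

```bicep
param vNetAddress string = '10.0.0.0/16'
param vSubnetCount int = 4
param vSubnetRange int = 24

// Carve vSubnetCount subnets of size /vSubnetRange out of the address space.
var subnets = [for i in range(0, vSubnetCount): cidrSubnet(vNetAddress, vSubnetRange, i)]

// e.g. ['10.0.0.0/24', '10.0.1.0/24', '10.0.2.0/24', '10.0.3.0/24']
output subnetRanges array = subnets
```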
Working with Bicep CIDR Functions - Part 1
Welcome to the start of our journey with Bicep CIDR functions! This series is something I've been excited to share, offering insights into subnetting and network configurations, especially within the realm of Infrastructure as Code (IaC).
In this part, we're going to cover the basics of Bicep CIDR functions, including how they can be used and in which scenarios they are most applicable. But before we dive into the details, let's begin with a brief introduction to Bicep CIDR functions.
Bicep Deployment Pane - Preview
Sadly, Azure Bicep and ARM Templates lack a built-in option for local deployment trials, particularly when your template involves variables, parameters, functions, and outputs. To test the functionality of certain functions or data structures, deploying them in Azure is still necessary. This challenge persists: each time you wish to experiment with just your variables and outputs, you have to initiate a deployment via the Azure CLI or PowerShell to observe the outcomes.
However, the Bicep team has been working on a new feature that will make this process much more straightforward. The new feature, known as the "Deployment Pane," is currently in preview and available in VSCode. This feature allows you to deploy your Bicep files quickly and easily, without the need to use the Azure CLI or PowerShell.
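A tiny file like the following (purely illustrative) is the kind of thing the Deployment Pane lets you evaluate without a real deployment: tweak the parameter value in the pane and watch the output change immediately.

```bicep
param environment string = 'dev'

var namePrefix = 'app-${environment}'
var regions = ['westeurope', 'northeurope']

// The pane renders this output without deploying anything to Azure.
output resourceNames array = [for region in regions: '${namePrefix}-${region}']
```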
Interacting with the Databricks API using PowerShell
In this post, we will be exploring how to engage with the Databricks API through PowerShell. I would love to cover the following topics as parts of this post:
[✔️] Prerequisites: We'll start with the basics, ensuring you have all the necessary setup done. This includes having the right PowerShell modules and permissions in place, and an overview of the Databricks environment we'll be interacting with.
[✔️] Authenticate to the Databricks API via Azure Access Token: Security is our top priority, so we'll walk through the process of securely authenticating to the Databricks API. We'll see how to obtain and use an Azure access token, which is essential for making API calls.
[✔️] Retrieve the Databricks Resources Using the Databricks API: Once we're in, it's all about getting the information you need. We'll go over how to send requests to the API to retrieve details on your Databricks resources. Whether it's clusters, jobs, or notebooks, you'll learn how to pull the data you're searching for.
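To sketch where this is going: with the Az PowerShell module, authentication and a first API call can look roughly like this. The workspace URL is a placeholder, and the resource GUID is the well-known global Azure Databricks application ID:

```powershell
# Acquire an Azure AD access token for the Azure Databricks resource.
# 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d is the global Databricks application ID.
$token = (Get-AzAccessToken -ResourceUrl '2ff814a6-3304-4ab8-85cb-cd0e6f879c1d').Token

# Call the Databricks REST API, e.g. list all clusters in the workspace.
$workspaceUrl = 'https://adb-1234567890123456.7.azuredatabricks.net' # placeholder
$headers = @{ Authorization = "Bearer $token" }

Invoke-RestMethod -Method Get -Uri "$workspaceUrl/api/2.0/clusters/list" -Headers $headers
```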
Identifying Monitoring Agents via KQL for the AMA Migration - Part 2
Hello Folks, Welcome back to the second part of our journey to transition from the Log Analytics agents to the Azure Monitor Agent (AMA). In the first part, we learned how to find and check the monitoring agents using KQL. In this part, we'll continue our journey by identifying the agents that have reported to the Log Analytics Workspace and then extend our query to include all virtual machines within your subscription or tenant.
Last time, we discovered which virtual machines were running the old MMA or OMS agents. This time, we're refining our search to quickly determine whether a machine uses MMA or the updated AMA.
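A sketch of such a query, based on the `Heartbeat` table (the exact `Category` values are worth verifying against your own workspace):

```kql
// Latest heartbeat per computer, with the agent family that sent it.
Heartbeat
| summarize arg_max(TimeGenerated, *) by Computer
| extend AgentType = iff(Category == "Azure Monitor Agent", "AMA", "MMA/OMS")
| project Computer, Category, AgentType, TimeGenerated
```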
Identifying Monitoring Agents via KQL for the AMA Migration - Part 1
Hello Folks, We're going to look closely at Azure's monitoring tools, focusing on moving from the Log Analytics agents to Azure Monitor Agent (AMA). This is the first step in our journey. We'll learn how to find and check the monitoring agents using KQL to help us identify the agents we need to migrate.
As Microsoft announced the retirement of the Log Analytics agent on August 31, 2024, it's imperative to gear up for what lies ahead. Post-retirement, continuing to use the MMA or OMS agent could lead to operational shifts that we need to be prepared for.
🕐 The Clock is Ticking for MMA and OMS
Why focus on this transition, you might wonder? Moving from MMA (Microsoft Monitoring Agent) and OMS (Operations Management Suite) to AMA isn't just about staying current with Azure's offerings. It's about tapping into improved security, efficiency, and the fresh features that AMA offers. Microsoft's decision to retire MMA and OMS is a strategic step towards enhancing and simplifying the monitoring experience for infrastructure.
Using YAML to Drive Azure Resource Deployment with Bicep: Part 2
Welcome back! In our previous session, we delved into the strengths of YAML as a compelling alternative for orchestrating Azure configurations via Bicep. Today, I'll guide you through deploying Azure resources using a YAML file with Bicep.
🧑‍💻 Using YAML and Bicep Together
Revisiting the previous post, you might remember our YAML file, structured as follows:
```yaml
resourceGroups:
  - name: "app01"
    location: "westeurope"
    tags:
      environment: "dev"
      project: "project01"
  - name: "app02"
    location: "northeurope"
    tags:
      environment: "dev"
```
This file lists two resource groups, `app01` and `app02`. Each resource group has a `name`, a `location`, and a `tags` property. It's like a to-do list for our task. Now, we will write a resource block in Bicep to create these resource groups in Azure.