• How to Set Up Core Services in Microsoft Azure (with Terraform)


    If you’re building an Azure environment for the first time (or rebuilding it correctly), you want a repeatable “core services” foundation: management groups, RBAC, hub-and-spoke networking, policies, logging/monitoring, backup, cost controls, and Defender for Cloud.

    This guide includes Terraform you can copy into a repo and run. You’ll plug in your subscription IDs and region, then deploy a baseline foundation in a consistent way.


    Prerequisites

    • Azure tenant access (Entra ID)
    • Permissions: Management Group + Subscription contributor/owner for the target scope
    • Terraform 1.6+ installed
    • Azure CLI installed and authenticated (az login)

    Repo Layout

    azure-core-foundation/
      versions.tf
      providers.tf
      variables.tf
      main.tf
      terraform.tfvars.example
      modules/
        management-groups/
        rbac/
        network-hub-spoke/
        governance-policy/
        monitoring/
        backup/
        cost-management/
        defender/

    Step 1: Management Groups + Subscription Organization (Terraform)

    Terraform typically does not create Azure subscriptions. Instead, you create subscriptions (Portal / EA / MCA) and Terraform organizes them into management groups with consistent governance.

    modules/management-groups/main.tf

    resource "azurerm_management_group" "corp" {
      display_name = var.mgmt_group_names.corp
    }
    
    resource "azurerm_management_group" "prod" {
      display_name               = var.mgmt_group_names.production
      parent_management_group_id = azurerm_management_group.corp.id
    }
    
    resource "azurerm_management_group" "nonprod" {
      display_name               = var.mgmt_group_names.nonproduction
      parent_management_group_id = azurerm_management_group.corp.id
    }
    
    resource "azurerm_management_group" "shared" {
      display_name               = var.mgmt_group_names.sharedservices
      parent_management_group_id = azurerm_management_group.corp.id
    }
    
    resource "azurerm_management_group_subscription_association" "prod_assoc" {
      management_group_id = azurerm_management_group.prod.id
      # subscription_id expects the full resource ID, e.g. "/subscriptions/<guid>"
      subscription_id     = "/subscriptions/${var.subscription_ids.production}"
    }
    
    resource "azurerm_management_group_subscription_association" "nonprod_assoc" {
      management_group_id = azurerm_management_group.nonprod.id
      subscription_id     = "/subscriptions/${var.subscription_ids.nonproduction}"
    }
    
    resource "azurerm_management_group_subscription_association" "shared_assoc" {
      management_group_id = azurerm_management_group.shared.id
      subscription_id     = "/subscriptions/${var.subscription_ids.sharedservices}"
    }
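The variable shapes referenced above could be declared along these lines; the exact structure is an assumption, adjust it to your naming conventions:

```hcl
# modules/management-groups/variables.tf -- one possible shape for the
# inputs used above (names and structure are illustrative).
variable "mgmt_group_names" {
  type = object({
    corp           = string
    production     = string
    nonproduction  = string
    sharedservices = string
  })
}

variable "subscription_ids" {
  description = "Subscription IDs (GUIDs) keyed by environment."
  type = object({
    production     = string
    nonproduction  = string
    sharedservices = string
  })
}
```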

    Step 2: IAM / RBAC Baseline (Terraform)

    Create Entra ID security groups and assign baseline roles at the management group scope. This gives you repeatable access control aligned with least privilege.

    modules/rbac/main.tf

    resource "azuread_group" "readers" {
      display_name     = var.reader_group_name
      security_enabled = true
    }
    
    resource "azuread_group" "contributors" {
      display_name     = var.contributor_group_name
      security_enabled = true
    }
    
    resource "azurerm_role_assignment" "corp_readers" {
      scope                = var.scope_mgmt_group_id
      role_definition_name = "Reader"
      principal_id         = azuread_group.readers.object_id
    }
    
    resource "azurerm_role_assignment" "corp_contributors" {
      scope                = var.scope_mgmt_group_id
      role_definition_name = "Contributor"
      principal_id         = azuread_group.contributors.object_id
    }
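Because this module uses both the azuread and azurerm providers, the repo's versions.tf should declare them explicitly. A sketch (the version constraints are examples, not requirements):

```hcl
# versions.tf -- pin Terraform and both providers used by the RBAC module.
terraform {
  required_version = ">= 1.6"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.100"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = "~> 2.47"
    }
  }
}
```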

    Step 3: Core Networking (Hub-and-Spoke) (Terraform)

    This creates a hub VNet, two spoke VNets, subnets, and bi-directional VNet peering. It’s a clean baseline you can expand with Azure Firewall, Bastion, VPN Gateway, Private DNS, NSGs, and UDRs.

    modules/network-hub-spoke/main.tf

    resource "azurerm_resource_group" "rg" {
      name     = var.resource_group_name
      location = var.location
      tags     = var.tags
    }
    
    resource "azurerm_virtual_network" "hub" {
      name                = "${var.resource_group_name}-hub-vnet"
      location            = var.location
      resource_group_name = azurerm_resource_group.rg.name
      address_space       = [var.hub_vnet_cidr]
      tags                = var.tags
    }
    
    resource "azurerm_subnet" "hub_subnets" {
      for_each             = var.hub_subnets
      name                 = each.key
      resource_group_name  = azurerm_resource_group.rg.name
      virtual_network_name = azurerm_virtual_network.hub.name
      address_prefixes     = [each.value]
    }
    
    resource "azurerm_virtual_network" "spokes" {
      for_each            = var.spoke_vnets
      name                = "${var.resource_group_name}-${each.key}-spoke-vnet"
      location            = var.location
      resource_group_name = azurerm_resource_group.rg.name
      address_space       = [each.value.cidr]
      tags                = var.tags
    }
    
    locals {
      spoke_subnet_map = merge([
        for vnet_key, vnet in var.spoke_vnets : {
          for sn_key, sn_cidr in vnet.subnets :
          "${vnet_key}.${sn_key}" => {
            vnet_key = vnet_key
            name     = sn_key
            cidr     = sn_cidr
          }
        }
      ]...)
    }
    
    resource "azurerm_subnet" "spokes" {
      for_each             = local.spoke_subnet_map
      name                 = each.value.name
      resource_group_name  = azurerm_resource_group.rg.name
      virtual_network_name = azurerm_virtual_network.spokes[each.value.vnet_key].name
      address_prefixes     = [each.value.cidr]
    }
    
    resource "azurerm_virtual_network_peering" "hub_to_spoke" {
      for_each                     = azurerm_virtual_network.spokes
      name                         = "peer-hub-to-${each.key}"
      resource_group_name          = azurerm_resource_group.rg.name
      virtual_network_name         = azurerm_virtual_network.hub.name
      remote_virtual_network_id    = each.value.id
      allow_virtual_network_access = true
      allow_forwarded_traffic      = true
    }
    
    resource "azurerm_virtual_network_peering" "spoke_to_hub" {
      for_each                     = azurerm_virtual_network.spokes
      name                         = "peer-${each.key}-to-hub"
      resource_group_name          = azurerm_resource_group.rg.name
      virtual_network_name         = each.value.name
      remote_virtual_network_id    = azurerm_virtual_network.hub.id
      allow_virtual_network_access = true
      allow_forwarded_traffic      = true
    }
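The `hub_subnets` and `spoke_vnets` inputs are maps. Example values are shown below; all CIDRs are placeholders, so pick ranges that don't overlap your on-premises or other cloud networks. Note that `AzureFirewallSubnet` and `AzureBastionSubnet` are reserved subnet names Azure requires for those services:

```hcl
# Example inputs for the network module (placeholder CIDRs).
hub_vnet_cidr = "10.0.0.0/22"

hub_subnets = {
  "AzureFirewallSubnet" = "10.0.0.0/26"
  "AzureBastionSubnet"  = "10.0.0.64/26"
  "shared-services"     = "10.0.1.0/24"
}

spoke_vnets = {
  prod = {
    cidr    = "10.1.0.0/22"
    subnets = { app = "10.1.0.0/24", data = "10.1.1.0/24" }
  }
  nonprod = {
    cidr    = "10.2.0.0/22"
    subnets = { app = "10.2.0.0/24", data = "10.2.1.0/24" }
  }
}
```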

    Step 4: Security & Governance (Azure Policy) (Terraform)

    This enforces allowed regions and mandatory tags at the management group scope, preventing common misconfigurations early.

    modules/governance-policy/main.tf

    resource "azurerm_policy_definition" "allowed_locations" {
      name         = "allowed-locations"
      policy_type  = "Custom"
      mode         = "All"
      display_name = "Allowed locations"
      # Define at the management group so it can be assigned at that scope.
      management_group_id = var.mgmt_group_id_corp
    
      policy_rule = jsonencode({
        if = {
          not = {
            field = "location"
            in    = "[parameters('listOfAllowedLocations')]"
          }
        }
        then = { effect = "Deny" }
      })
    
      parameters = jsonencode({
        listOfAllowedLocations = {
          type     = "Array"
          metadata = { displayName = "Allowed locations" }
        }
      })
    }
    
    # Management-group-scoped assignments use azurerm_management_group_policy_assignment
    # (the older azurerm_policy_assignment resource was removed in azurerm 3.x).
    resource "azurerm_management_group_policy_assignment" "allowed_locations" {
      name                 = "pa-allowed-locations"
      management_group_id  = var.mgmt_group_id_corp
      policy_definition_id = azurerm_policy_definition.allowed_locations.id
    
      parameters = jsonencode({
        listOfAllowedLocations = { value = var.allowed_locations }
      })
    }
    
    resource "azurerm_policy_definition" "require_tags" {
      name         = "require-tags"
      policy_type  = "Custom"
      mode         = "Indexed"
      display_name = "Require resource tags"
      management_group_id = var.mgmt_group_id_corp
    
      policy_rule = jsonencode({
        if = {
          anyOf = [
            for t in var.required_tags : {
              field  = "tags[${t}]"
              exists = "false"
            }
          ]
        }
        then = { effect = "Deny" }
      })
    }
    
    resource "azurerm_management_group_policy_assignment" "require_tags" {
      name                 = "pa-require-tags"
      management_group_id  = var.mgmt_group_id_corp
      policy_definition_id = azurerm_policy_definition.require_tags.id
    }

    Step 5: Monitoring & Logging (Log Analytics) (Terraform)

    modules/monitoring/main.tf

    resource "azurerm_resource_group" "rg" {
      name     = var.resource_group_name
      location = var.location
      tags     = var.tags
    }
    
    resource "azurerm_log_analytics_workspace" "law" {
      name                = var.law_name
      location            = var.location
      resource_group_name = azurerm_resource_group.rg.name
      sku                 = "PerGB2018"
      retention_in_days   = 30
      tags                = var.tags
    }
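With the workspace in place, you can stream the subscription Activity Log into it. A minimal sketch follows; `var.subscription_id` is an assumed input, and you would add categories such as "Security" or "Policy" as needed:

```hcl
# Optional: send subscription Activity Log entries to the workspace.
resource "azurerm_monitor_diagnostic_setting" "activity_log" {
  name                       = "activity-log-to-law"
  target_resource_id         = "/subscriptions/${var.subscription_id}"
  log_analytics_workspace_id = azurerm_log_analytics_workspace.law.id

  enabled_log {
    category = "Administrative"
  }
}
```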

    Step 6: Backup & Recovery (Recovery Services Vault) (Terraform)

    modules/backup/main.tf

    resource "azurerm_resource_group" "rg" {
      name     = var.resource_group_name
      location = var.location
      tags     = var.tags
    }
    
    resource "azurerm_recovery_services_vault" "rsv" {
      name                = var.rsv_name
      location            = var.location
      resource_group_name = azurerm_resource_group.rg.name
      sku                 = "Standard"
      soft_delete_enabled = true
      tags                = var.tags
    }
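A vault alone doesn't back anything up; you also need a policy. Here is a minimal daily VM backup policy attached to the vault (the schedule and retention values are illustrative):

```hcl
# Daily VM backup policy with a short retention window (adjust to your RPO).
resource "azurerm_backup_policy_vm" "daily" {
  name                = "daily-vm-backup"
  resource_group_name = azurerm_resource_group.rg.name
  recovery_vault_name = azurerm_recovery_services_vault.rsv.name

  backup {
    frequency = "Daily"
    time      = "23:00"
  }

  retention_daily {
    count = 7
  }
}
```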

    Step 7: Cost Controls (Budgets + Alerts) (Terraform)

    modules/cost-management/main.tf

    resource "azurerm_consumption_budget_subscription" "budget" {
      name            = "monthly-budget"
      # Expects the full resource ID, e.g. "/subscriptions/<guid>"
      subscription_id = "/subscriptions/${var.subscription_id}"
    
      amount     = var.monthly_budget
      time_grain = "Monthly"
    
      time_period {
        start_date = "2025-01-01T00:00:00Z"
        end_date   = "2035-01-01T00:00:00Z"
      }
    
      notification {
        enabled        = true
        threshold      = 80
        operator       = "GreaterThan"
        contact_emails = var.emails
      }
    
      notification {
        enabled        = true
        threshold      = 100
        operator       = "GreaterThan"
        contact_emails = var.emails
      }
    }

    Optional: Defender for Cloud Baseline (Terraform)

    modules/defender/main.tf

    # Note: defining a provider inside a reusable module is discouraged; in
    # larger setups, pass an aliased provider in from the root module instead.
    provider "azurerm" {
      alias           = "sub"
      features {}
      subscription_id = var.subscription_id
    }
    
    resource "azurerm_security_center_subscription_pricing" "vm" {
      provider      = azurerm.sub
      tier          = "Standard"
      resource_type = "VirtualMachines"
    }
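To cover more than virtual machines, the same resource can be iterated over a set of Defender plans. A sketch (extend or trim the set to match your environment):

```hcl
# Enable additional Defender for Cloud plans on the subscription.
resource "azurerm_security_center_subscription_pricing" "plans" {
  for_each = toset(["StorageAccounts", "SqlServers", "KeyVaults", "Containers"])

  provider      = azurerm.sub
  tier          = "Standard"
  resource_type = each.value
}
```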

    Run It

    • Create a terraform.tfvars file (example below)
    • Run: terraform init
    • Run: terraform plan
    • Run: terraform apply
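A starting terraform.tfvars might look like the following; every value is a placeholder to replace with your own region, names, and subscription GUIDs:

```hcl
# terraform.tfvars -- placeholder values only.
location = "eastus2"

mgmt_group_names = {
  corp           = "Corp"
  production     = "Production"
  nonproduction  = "Non-Production"
  sharedservices = "Shared Services"
}

subscription_ids = {
  production     = "00000000-0000-0000-0000-000000000000"
  nonproduction  = "11111111-1111-1111-1111-111111111111"
  sharedservices = "22222222-2222-2222-2222-222222222222"
}

allowed_locations = ["eastus2", "centralus"]
required_tags     = ["environment", "owner", "cost-center"]

tags = {
  environment = "core"
  owner       = "platform-team"
}
```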

    For all Code Files, visit the following GitHub Repository:

    https://github.com/mbtechgru/Azure_Core_Services.git


  • Understanding ROSA: Managed OpenShift Simplified

    https://i0.wp.com/www.vamsitalkstech.com/wp-content/uploads/2021/05/Rosa-2.png

    Overview of ROSA Architecture Diagram


    When I first started working with ROSA, I’ll be honest — I was overthinking it. I was coming from months, maybe over a year now, of dealing with:

    • self-managed Kubernetes,
    • DIY OpenShift installs,
    • control planes that everyone pretends are easy until something breaks at 2 a.m.

    So when someone said, “ROSA is just managed OpenShift on AWS,” my immediate reaction was:

    Okay… but who actually owns what when things go wrong?

    That question is what this post is really about.


    The One Sentence That Changed Everything for Me

    Here’s the sentence that finally made ROSA click:

    Red Hat runs the control plane. I run my workloads in my AWS account.

    That’s it.

    Once I stopped trying to mentally map ROSA as “EKS but different” and instead saw it as a clean ownership split, everything else started to make sense.


    How ROSA Is Really Split (In Practice)

    The Control Plane (Not My Problem — and That’s a Good Thing)

    The first thing I had to accept was that I don’t touch the control plane.

    No:

    • API server patching
    • etcd tuning
    • upgrade choreography
    • “don’t reboot that node yet” moments

    All of that lives in Red Hat–managed AWS accounts.

    At first, that felt uncomfortable — engineers like control.
    But then I realized something important:

    I’ve never once added business value by babysitting a Kubernetes control plane.

    Red Hat:

    • patches it,
    • upgrades it,
    • monitors it,
    • keeps it highly available.

    And I get to focus on things that actually matter to my team and my customers.


    The Worker Nodes (Very Much My Responsibility)

    This is where ROSA started to feel familiar again.

    The worker nodes:

    • live in my AWS account,
    • sit inside my VPC,
    • run in my subnets.

    This is where:

    • applications run,
    • containers execute,
    • data is processed.

    From a security and compliance standpoint, this was huge for me — especially thinking about government and regulated environments.

    My data never leaves my AWS account.

    That one fact alone removes a lot of friction in security conversations.


    The AWS Account Model (Why Security Teams Like ROSA)

    In real-world terms, ROSA usually looks like this:

    • Red Hat AWS account → control plane
    • My AWS account → worker nodes, networking, data

    That separation is intentional.

    When I had to explain ROSA to security stakeholders, this model actually made the conversation easier, not harder. It’s a very clean boundary, and clean boundaries are exactly what auditors like.


    Networking: Where I Stopped Guessing and Started Understanding

    Networking is usually where platforms fall apart mentally. ROSA was no exception — until I traced the traffic flow end to end.

    Here’s the simplified version I keep in my head now:

    User → AWS Load Balancer → OpenShift Router → Service → Pod
    
    

    A few key realizations for me:

    • ROSA uses OpenShift Routes, not raw Kubernetes Ingress
    • AWS still handles the heavy lifting at the edge
    • OpenShift handles how traffic gets to apps inside the cluster

    Once I accepted that Routes are just the OpenShift-native way of doing ingress, I stopped fighting it — and it became one of my favorite features.
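For concreteness, a minimal Route manifest looks like this; the service name and port are illustrative, not from a real cluster:

```yaml
# A minimal OpenShift Route: expose Service "web" over HTTPS
# with edge TLS termination (names and ports are placeholders).
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: web
spec:
  to:
    kind: Service
    name: web
  port:
    targetPort: 8080
  tls:
    termination: edge
```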

    (We’ll go deep on this in a later part of the series.)


    Identity and Access (Less Magic Than It Looks)

    At first glance, ROSA IAM + RBAC looks complex.

    In reality, I think of it like this:

    • AWS IAM decides who can interact with AWS
    • OpenShift RBAC decides what users can do inside the cluster

    That separation is actually really powerful. It lets platform teams control the platform without stepping on application teams — something I wish more environments did by default.


    The Shared Responsibility Model (The Part You Can’t Ignore)

    This was another mindset shift for me.

    Red Hat owns:

    • control plane uptime,
    • platform patching,
    • Kubernetes upgrades.

    I own:

    • applications,
    • app security,
    • configuration,
    • access decisions,
    • data protection.

    Once I stopped assuming “managed” meant “someone else handles everything,” ROSA became very predictable.

    And predictable platforms are the ones that scale well.


    Why This Architecture Grew on Me

    After spending time with ROSA, here’s why I genuinely like this model:

    • I don’t waste energy on undifferentiated platform work
    • Security boundaries are clear and defensible
    • Compliance conversations are simpler
    • Platform teams can focus on enablement instead of firefighting

    It feels like OpenShift grown up for real-world operations.


    No Lab This Time — And That’s Intentional

    This part isn’t about clicking buttons or running commands.

    It’s about getting the mental model right.

    If you understand:

    • where things live,
    • who owns what,
    • how traffic flows,

    then everything we do next will feel logical instead of magical.


  • Part 1: What Is Red Hat OpenShift Service on AWS (ROSA)?


    Introduction

    If you’ve ever thought:

    “Kubernetes is powerful… but running it ourselves is a lot of work”

    That’s exactly where Red Hat OpenShift Service on AWS (ROSA) fits in.

    ROSA gives you a fully managed OpenShift platform running directly on AWS, jointly supported by Red Hat and AWS. You get the benefits of Kubernetes and OpenShift without having to manage the control plane yourself.

    This series will show you how to go from cluster access to running real applications on ROSA, step by step.


    What Is ROSA (Without the Marketing Speak)?

    ROSA is:

    • OpenShift running natively on AWS
    • Managed by Red Hat (OpenShift components)
    • Running inside your AWS account
    • Integrated with AWS networking, IAM, and load balancers

    You:

    • Deploy apps
    • Manage namespaces and workloads
    • Control access and security

    Red Hat:

    • Manages the OpenShift control plane
    • Handles upgrades and platform reliability

    AWS:

    • Provides the infrastructure (VPC, EC2, ELB, storage)

    How ROSA Compares to Amazon EKS

    Feature                      ROSA                       EKS
    Kubernetes Management        Fully managed OpenShift    Managed Kubernetes only
    Built-in CI/CD & Dev Tools   Yes                        No
    Security Controls            Strong defaults            DIY
    Enterprise Support           Red Hat + AWS              AWS only
    Operational Overhead         Lower                      Higher

    Simple rule:
    If you want enterprise Kubernetes with guardrails, ROSA wins.
    If you want raw Kubernetes, EKS may be better.


    Typical ROSA Architecture

    https://d2908q01vomqb2.cloudfront.net/fe2ef495a1152561572949784c16bf23abb28057/2021/06/03/rosa-arch-private-993x630.png

    A standard ROSA deployment includes:

    • An AWS VPC with public and private subnets
    • OpenShift control plane managed by Red Hat
    • Worker nodes in private subnets
    • AWS load balancers exposing apps
    • Native AWS storage and networking

    This makes ROSA a great fit for secure and regulated environments.


    When Should You Use ROSA?

    ROSA is a strong choice if you:

    • Need enterprise Kubernetes
    • Want OpenShift features without managing it
    • Are deploying mission-critical apps
    • Operate in regulated or government environments
    • Want tight AWS integration

    What You’ll Learn in This Series

    By the end of this series, you’ll know how to:

    • Access and manage a ROSA cluster
    • Deploy and expose applications
    • Scale workloads
    • Apply security best practices
    • Operate ROSA in production

    No fluff — just practical steps.


    What’s Next?

    👉 Part 2: Prerequisites and Environment Setup

    In the next post, we’ll:

    • Set up AWS and Red Hat access
    • Install the required CLI tools
    • Verify cluster connectivity
    • Avoid common permission issues
  • Mastering OpenShift on AWS: A Step-by-Step Series


    This series walks you from zero to production-ready on ROSA, without assuming deep OpenShift experience.


    Series Overview

    Part 1 – What Is ROSA and When Should You Use It?

    • What ROSA is (plain English)
    • How it compares to EKS
    • Common enterprise & government use cases
    • Architecture overview

    Part 2 – Prerequisites and Environment Setup

    • AWS & Red Hat accounts
    • IAM permissions
    • Installing CLI tools
    • Verifying access

    Part 3 – Creating and Accessing a ROSA Cluster

    • Cluster sizing choices
    • Networking basics
    • Logging in with oc
    • Understanding projects, users, and roles

    Part 4 – Deploying Your First Application

    • Creating a project
    • Deploying an app from an image
    • Understanding deployments, pods, and services

    Part 5 – Exposing Applications with Routes and Load Balancers

    • OpenShift Routes explained
    • AWS load balancer integration
    • TLS and HTTPS basics

    Part 6 – Scaling and Managing Applications

    • Manual scaling
    • Autoscaling basics
    • Rolling updates

    Part 7 – Security Best Practices for ROSA

    • Security Context Constraints (SCCs)
    • IAM Roles for Service Accounts (IRSA)
    • Network policies
    • Image security

    Part 8 – Monitoring, Logging, and Operations

    • OpenShift monitoring
    • AWS CloudWatch integration
    • Day-2 operations tips

    Part 9 – Production Readiness Checklist

    • High availability
    • Cost optimization
    • Backup considerations
    • Compliance notes

    Stay tuned as I share my experience throughout this journey of OpenShift on AWS.


  • Deploying a NetApp Filer Using Windows PowerShell and the NetApp PowerShell Module

    A PowerShell module for managing and automating NetApp operations.


    Overview

    NetApp-PowerShell is a suite of PowerShell scripts and cmdlets for managing NetApp storage. The module automates essential tasks such as provisioning, monitoring, backup, and reporting, letting administrators and DevOps engineers script their interactions with NetApp storage systems. It can also automate an entire NetApp Filer deployment, making implementation faster and less error-prone.

    Features

    • Connect to NetApp storage controllers
    • Perform common storage tasks like creating/deleting volumes, snapshots, and aggregates
    • Query NetApp system health and performance metrics
    • Automate backup operations
    • Generate reports
    • Integration with CI/CD workflows

    Getting Started

    Prerequisites

    • PowerShell 5.1 or later (Windows, Linux, or macOS)
    • Access to NetApp API (ONTAP)

    Installation

    You can clone this repository and import the module manually:

    git clone https://github.com/mbtechgru/NetApp-PowerShell.git
    Import-Module ./NetApp-PowerShell/NetAppPowerShell.psm1

    Usage

    1. Connect to NetApp system: Connect-NetAppController -Address <controller-address> -Username <username> -Password <password>
    2. List volumes: Get-NetAppVolume
    3. Create a volume: New-NetAppVolume -Name "TestVolume" -Size "100GB"

    For detailed cmdlet documentation, see the module help or usage examples in the docs/ folder (if available).

    Contributing

    Contributions and feature requests are welcome! Please fork the repository and submit a pull request or open an issue for suggestions and bugs.


  • Beginner’s Guide to Kubernetes: What It Is, How It Works, and Why It Matters


    Introduction

    Kubernetes (often shortened to K8s) is the most powerful and widely adopted system for running containerized applications at scale. If Docker helps you package applications, Kubernetes helps you run, scale, update, and maintain those applications in production.

    In this beginner-friendly guide, we’ll break down Kubernetes in simple terms — no prior experience needed.


    🧱 What is Kubernetes?

    Think of Kubernetes as:

    A smart, automated system that ensures your applications are always running — even if servers fail or traffic spikes.

    If your application lives inside containers, Kubernetes is the brain that:

    • Starts containers
    • Repairs containers if they crash
    • Distributes containers across machines
    • Scales replicas up or down
    • Updates apps with zero downtime

    🏗️ Key Kubernetes Concepts

    Image Description: A visual representation of Kubernetes architecture and its components, showcasing how different elements interact within a Kubernetes cluster.

    https://iximiuz.com/kubernetes-vs-age-old-infra-patterns/kubernetes-service-min.png


    1️⃣ Cluster

    A Kubernetes cluster is made up of:

    • Master (control plane) — the brain
    • Worker nodes — where containers run

    2️⃣ Nodes

    A node is a server (virtual or physical).
    Kubernetes spreads workloads across nodes automatically.

    3️⃣ Pods

    Smallest unit in Kubernetes.

    A pod = one or more containers working together.
    If containers need to share storage or network, put them in the same pod.

    4️⃣ Deployments

    A deployment tells Kubernetes:

    • what container image to run
    • how many replicas to maintain
    • how to roll out updates safely

    5️⃣ Services

    A service gives your pods a stable network identity — even when pods restart or move.

    Types:

    • ClusterIP (internal)
    • NodePort (external)
    • LoadBalancer (cloud-integrated)
    • Ingress (HTTP/HTTPS routing)
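As a concrete example, a minimal ClusterIP Service that gives pods labeled `app: hello-world` a stable internal address might look like this (the name and labels are illustrative):

```yaml
# ClusterIP Service: stable in-cluster address for matching pods on port 80.
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: ClusterIP
  selector:
    app: hello-world
  ports:
    - port: 80
      targetPort: 80
```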

    🚀 Why Use Kubernetes? (Benefits)

    ✔️ High Availability

    If a pod or node fails, Kubernetes restarts or relocates it instantly.

    ✔️ Automatic Scaling

    Traffic spike? Kubernetes adds replicas.
    Traffic drops? It scales down to save money.

    ✔️ Zero-Downtime Updates

    Using rolling updates and rollbacks.

    ✔️ Consistent Across Clouds

    Run Kubernetes on:

    • AWS (EKS)
    • Azure (AKS)
    • Google Cloud (GKE)
    • On-Prem or Bare Metal

    ✔️ Community, Ecosystem, and Extensibility

    Thousands of add-ons:

    • Prometheus / Grafana
    • Istio
    • ArgoCD
    • Helm

    ⚙️ How Kubernetes Works (Easy Visualization)

    https://raw.githubusercontent.com/collabnix/dockerlabs/master/kubernetes/beginners/what-is-kubernetes/k8s-architecture.png

    Image Description: Kubernetes Architecture Diagram

    Simple workflow:

    1. You write a deployment YAML describing how your app should run
    2. You apply it to the cluster
    3. Kubernetes scheduler finds appropriate nodes
    4. Pods get created
    5. Services expose the app
    6. Kubernetes continuously monitors health
    7. Autoscaler adjusts replicas based on demand

    🧪 Hands-On Example 101

    Here’s a minimal example deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-world
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: hello-world
      template:
        metadata:
          labels:
            app: hello-world
        spec:
          containers:
          - name: hello-world
            image: nginx
            ports:
            - containerPort: 80
    
    

    Expose it:

    kubectl expose deployment hello-world --type=LoadBalancer --port=80
    
    

    This creates:

    • a deployment with 3 pods
    • a service that exposes them to the internet (if supported by cloud provider)

    🔒 Basic Security Tips for Beginners

    Even on day one, consider these:

    • Always use namespaces (dev, staging, production)
    • Avoid running containers as root
    • Limit resource usage (CPU/memory)
    • Use role-based access control (RBAC)
    • Scan container images

    🌐 Where to Run Kubernetes?

    Cloud Options

    • AWS EKS
    • Azure AKS
    • Google GKE

    Local Options

    • Docker Desktop
    • Minikube
    • kind (Kubernetes in Docker)

    🏁 Conclusion

    Kubernetes is an orchestration system that keeps modern applications healthy, scalable, and resilient. Even though it looks intimidating at first, learning the basics — pods, deployments, services, nodes — unlocks enormous power.


  • AWS Announces a New Feature for the Route 53 Service

    Amazon Web Services has just unveiled an exciting new feature for its Route 53 DNS service, named Accelerated Recovery. This addition comes in response to the recent DNS disruption that affected businesses in the AWS us-east-1 Region, plunging many into operational chaos. With Accelerated Recovery, AWS aims to help organizations recover swiftly from such disruptions, minimizing downtime and keeping business operations running smoothly. It is a significant step toward reinforcing reliability and trust in AWS services, and an essential tool for businesses looking to safeguard their online presence.


    Here is the Original Blog from AWS:

    https://aws.amazon.com/blogs/aws/amazon-route-53-launches-accelerated-recovery-for-managing-public-dns-records


    Enhancing DNS Resilience: A Look at New Route 53 Features

    In today’s digital landscape, ensuring the dependable delivery of online services is paramount. Service disruptions can occur at any time, and being prepared is essential. Amazon Web Services (AWS) has rolled out a new feature that significantly enhances the resilience of Domain Name System (DNS) entries through its Route 53 service.

    Targeted DNS Entries for Faster Recovery

    This new functionality targets public DNS entries specifically, restoring the ability to manage them within 60 minutes of a service disruption. As of today, it is available only in the US East (N. Virginia) Region. This rapid response is crucial for maintaining service continuity and minimizing downtime for users.

    The feature provides seamless access to a range of API actions, particularly when services are failing over to alternate regions, predominantly in US West (Oregon).

    Simple and Straightforward Implementation

    One of the standout aspects of this new feature is its ease of use. According to AWS, there’s no need to change endpoints or recreate any public records in different regions. The operations can be enabled or disabled effortlessly through the AWS Web Console, AWS Command Line Interface (CLI), Software Development Kits (SDKs), or Infrastructure as Code (IaC) tools, including CloudFormation and AWS CDK, as noted in the official documentation.

    This means that developers and system admins can quickly implement necessary changes without the hassle of intricate configurations or downtime.

    Stay Informed

    For those looking to dive deeper into the specifics and capabilities of this new feature, AWS offers comprehensive documentation. By reviewing the full details, users can ensure that they are fully equipped to leverage this powerful toolset to bolster their DNS infrastructure.

    In conclusion, AWS’s enhancements to Route 53 present an invaluable opportunity for businesses seeking to maintain service reliability and enhance their response strategies during disruptions. Stay proactive and informed—it’s the best defense against downtime!

    https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/accelerated-recovery.html


  • How to Deploy and Configure Kali Linux on AWS


    Kali Linux is one of the most popular penetration testing and ethical hacking distributions used by cybersecurity professionals. Running Kali Linux in the cloud, specifically on Amazon Web Services (AWS), provides flexibility, scalability, and the ability to conduct remote security assessments securely. In this post, we’ll walk through step-by-step instructions to deploy and configure Kali Linux on AWS, along with best practices for securing and optimizing your cloud instance.

    Prerequisites

    • An AWS account (with permissions to launch EC2 instances)
    • Basic understanding of Linux commands
    • An SSH key pair or plan to create one
    • Familiarity with AWS EC2 and networking concepts

    Step 1: Log In and Access the AWS Management Console

    • Navigate to AWS Management Console.
    • Search for EC2 and click Launch Instance.
    • Select your region, ideally close to your location for better latency.

    Step 2: Locate the Kali Linux AMI

    • In the Choose an Amazon Machine Image (AMI) section, select AWS Marketplace.
    • Search for Kali Linux.
    • Look for ‘Kali Linux Rolling AMI’ published by Offensive Security or Kali Linux Official.
    • Click Select and review the AMI pricing (usually based on EC2 usage only).

    Step 3: Choose an Instance Type

    • For general penetration testing and learning: t2.medium or t3.medium instances offer a good balance of performance and cost.
    • For resource-heavy operations: consider c5.large, m5.large, or g4dn.xlarge for GPU acceleration.

    Step 4: Configure Network Settings

    • Choose an existing VPC or create a new one.
    • Enable Auto-assign Public IP for SSH access.
    • Add inbound rules: SSH (TCP 22) — allow from your IP only; HTTP/HTTPS — optional.
    • Avoid allowing 0.0.0.0/0 unless using a VPN or proxy.
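The restricted SSH rule above can also be added from the AWS CLI. A minimal sketch, assuming a hypothetical security group ID and a placeholder IP; the script only builds and prints the command so you can review it before running it yourself:

```shell
#!/bin/sh
# Sketch: open SSH only to your own IP. Both values below are placeholders.
# In practice, fetch your IP with: curl -s https://checkip.amazonaws.com
SG_ID="sg-0123456789abcdef0"
MY_IP="203.0.113.25"

# Build the command and print it for review instead of executing it.
CMD="aws ec2 authorize-security-group-ingress --group-id $SG_ID \
--protocol tcp --port 22 --cidr $MY_IP/32"
echo "$CMD"
```

The /32 suffix limits the rule to exactly one address, which is what "allow from your IP only" means in practice.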

    Step 5: Add Storage

    • Set at least 20 GB for root volume to accommodate tools and logs.
    • Optionally, add another volume for storing large capture files or results.

    Step 6: Review and Launch

    • Review your configuration.
    • Click Launch.
    • Select or create a new SSH key pair.

    Download your .pem key and launch the instance.
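The console steps above can also be scripted with the AWS CLI. This is only a sketch: the AMI ID, key name, and security group ID are placeholders you would replace with real values from your account, and the script prints the launch command for review rather than executing it:

```shell
#!/bin/sh
# Sketch: launch a Kali instance from the CLI. All IDs are placeholders.
AMI_ID="ami-0123456789abcdef0"      # Kali Linux AMI from the Marketplace
INSTANCE_TYPE="t3.medium"           # see Step 3 for sizing guidance
KEY_NAME="kali-lab-key"
SG_ID="sg-0123456789abcdef0"

# 20 GB root volume, per Step 5; command is printed, not executed.
CMD="aws ec2 run-instances --image-id $AMI_ID \
--instance-type $INSTANCE_TYPE --key-name $KEY_NAME \
--security-group-ids $SG_ID --associate-public-ip-address \
--block-device-mappings DeviceName=/dev/xvda,Ebs={VolumeSize=20}"
echo "$CMD"
```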


    Connecting to Kali Linux Instance via SSH

    Once the instance is provisioned and ready to use, the next step is connecting to it. AWS provides a few ways to do this, but the most well-known and widely used is SSH.

    First, find your preferred terminal application; whether it's PuTTY, Wave, or even the macOS Terminal, it will work. Once you have your terminal software of choice, follow these steps:

    • Use: ssh -i /path/to/your-key.pem ec2-user@<your-public-ip>

    NOTE: Be sure to store your keys in a safe place and DO NOT share them with anyone.

    • The default username might be ec2-user, ubuntu, or kali. I recently provisioned an instance and the default account was kali.
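Putting the pieces together, here is a small sketch. The key name and IP are placeholders, and a dummy key file is created so the snippet runs end to end; the ssh command itself is printed for review rather than executed:

```shell
#!/bin/sh
# Sketch: lock down the key and build the connection command.
KEY="./kali-lab.pem"            # placeholder for your downloaded key
: > "$KEY"                      # stand-in for the real .pem file
chmod 400 "$KEY"                # SSH refuses keys with loose permissions

SSH_CMD="ssh -i $KEY kali@203.0.113.10"   # placeholder public IP
echo "$SSH_CMD"                 # review, then run it yourself
```

The chmod 400 step matters: OpenSSH will refuse a private key that is readable by other users.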


    Configure Kali Tools and Environment

    Once logged in, the instance may seem flat and slim, lacking tools. This is intentional: it allows users to customize the image based on the tests they need to run. To get a good baseline set of tools, use Kali's metapackages. Here is the command:

    sudo apt update && sudo apt install -y kali-linux-headless

    Running this command installs all of the essential tools you need to get started. There are other metapackages you can use, again depending on what you are trying to accomplish:

    • Install the top tools: sudo apt install kali-linux-top10 -y
    • For all tools: sudo apt install kali-linux-everything -y

    Installing the GUI

    If you are new to Kali Linux or Linux in general, or a Microsoft Windows Kool-Aid drinker, there is a GUI you can install as well. Here is the command:

    sudo apt install kali-desktop-xfce -y

    Once the install completes, you will need to set up remote desktop access with xRDP, VNC, or any other remote desktop tool you have available.


    Hardening and Securing Your Instance

    Even though this is a pentesting instance, security is still a must; therefore, here are some basic hardening steps you can perform to keep those unwanted guests away:

    • Change the default SSH port – By now everyone knows which port SSH uses, so changing the port number can be a good idea.
    • Use key-based authentication only – Key-based authentication will provide greater security over password-based methods, as passwords can be easily cracked if a bad actor compromises your network. Please do not share any keys unless there is a legitimate reason.
    • Disable root login via SSH – You should not need the root account; the kali account provides sufficient elevated privileges through sudo.
    • Enable a firewall – Using the native Linux firewall can be a good idea, but if you are already using a Security Group, NACL, or another firewall technology, that may be sufficient. If you do need the built-in firewall, run these commands: sudo ufw allow 22/tcp; sudo ufw enable (allow SSH before enabling the firewall so you don't lock yourself out).

    NOTE: If you change the SSH port number, replace the 22 with the new SSH port number.

    • Regularly apply system updates – Updates will ensure that the OS as well as all applications are kept secure from any new vulnerabilities or flaws, and will provide new features too.
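The SSH-related items above all land in /etc/ssh/sshd_config. A minimal sketch of the relevant settings (2222 is just an example replacement port); after editing, restart the SSH service for the changes to take effect:

```
# /etc/ssh/sshd_config (sketch; 2222 is an example port, pick your own)
Port 2222
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
```

Remember to update your Security Group inbound rule to the new port before restarting, or your next connection attempt will be blocked.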

    Optional – Create an AMI for Reuse

    Once your instance is in the desired state with all necessary tools and security hardening, create an AMI image for future use. This will save you time by avoiding repeated setups. Here’s how easy it is to create an AMI image for yourself:

    • Go to EC2 Dashboard → Instances.
    • Select your Kali instance → Actions → Image and templates → Create image.
    • Name it (e.g., Kali-Lab-Base).
    • Launch future instances from this image anytime.

    One common practice is the removal of the AMI's backing snapshot, driven by the misconception that it is unnecessary, or done to conserve space and costs. It is advisable not to delete these snapshots, as the AMI relies on them to maintain its point-in-time state. In essence, all modifications, package installations, and hardening procedures are preserved within that snapshot; deleting it may render the AMI invalid or corrupted.
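The same image can be created from the CLI. A sketch, assuming a placeholder instance ID; as with the earlier snippets, the command is printed for review rather than executed:

```shell
#!/bin/sh
# Sketch: create an AMI of the hardened instance. The ID is a placeholder.
INSTANCE_ID="i-0123456789abcdef0"

CMD="aws ec2 create-image --instance-id $INSTANCE_ID \
--name Kali-Lab-Base --description 'Hardened Kali baseline with tools'"
echo "$CMD"
```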


    Conclusion

    Deploying Kali Linux on AWS enables you to perform cloud-based security testing, run automated scans, or practice ethical hacking from anywhere — all while taking advantage of AWS scalability and reliability.

    With proper setup and hardening, you can maintain a powerful and secure remote lab environment for penetration testing, digital forensics, and cybersecurity research.


    Legal Disclaimer

    Kali Linux is a powerful penetration testing and cybersecurity assessment platform. When deploying Kali Linux or using any associated tools on Amazon Web Services (AWS) or any other cloud platform, you are solely responsible for ensuring that all testing activities are conducted in compliance with applicable laws, regulations, and AWS Terms of Service.

    Unauthorized scanning, exploitation, or testing of systems or networks that you do not own or have explicit written permission to test may be considered illegal and could result in civil and criminal penalties.

    This guide is provided for educational and authorized security testing purposes only. The author and publisher assume no liability for any misuse, data loss, or damage resulting from the use of this information or associated configurations.

    Always obtain proper authorization before conducting penetration tests and operate within the boundaries of your organization’s or client’s established security policies.

  • Container Security Best Practices

    Container Security Best Practices

    Containers are a great tool for developers, and they are equally valuable for systems administrators looking to simplify and rapidly deploy applications. Containers offer many other benefits as well. But because the technology is still relatively new for some organizations, it brings a set of challenges: How do we implement it? What is the best use case? Do we have the proper technical skill?

    Among these many challenges, one stands out: how to best secure container deployments.

    In this post, I would like to review some best practices you can follow to build a robust security posture for your container environment.

    1. Secure the Container Images

    • Use trusted base images: Always use official or trusted images from reputable registries.
    • Regularly update images: Stay on top of security updates for base images and rebuild containers often.
    • Scan images for vulnerabilities: Use tools like Trivy, Clair, or Anchore to detect vulnerabilities in images before deploying.
    • Minimize the attack surface: Use minimal images (e.g., Alpine) and remove unnecessary components, libraries, and utilities.
    • Sign images: Use tools like Docker Content Trust or cosign to sign and verify images.

    2. Secure the Build and Deployment Process

    • Implement CI/CD security checks: Scan code and images for vulnerabilities in your CI/CD pipelines.
    • Use Infrastructure as Code (IaC) security tools: Tools like Checkov or KICS can catch insecure configurations in your IaC before deployment.
    • Restrict access to registries: Limit who can push, pull, or change container images in your container registry.
    • Enforce policies: Use admission controllers like OPA/Gatekeeper or Kyverno to enforce security policies during deployments.

    3. Configure Containers Securely

    • Run as non-root: Avoid running containers as the root user.
    • Limit privileges: Use --cap-drop to drop unnecessary Linux capabilities, and avoid the --privileged flag.
    • Use read-only file systems: Set containers to run with read-only file systems unless write access is explicitly needed.
    • Set resource limits: Use Kubernetes requests and limits for CPU and memory to avoid resource exhaustion attacks.
    • Isolate containers: Use namespaces, cgroups, and Pod Security Standards (PSS) to isolate containerized workloads.
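Some of these checks are easy to automate before an image ever ships. Here is a tiny sketch (the sample Dockerfile is illustrative, not a real project) that flags two of the items above: a missing USER directive, which means the container runs as root, and an unpinned :latest base image:

```shell
#!/bin/sh
# Sketch: a minimal Dockerfile lint for two of the checks above.
# The sample Dockerfile is written out here purely for illustration.
cat > /tmp/Dockerfile.sample <<'EOF'
FROM python:3.12-alpine
RUN adduser -D app
USER app
COPY app.py .
CMD ["python", "app.py"]
EOF

FAILS=0
# Check 1: a USER directive must exist, otherwise the container runs as root.
grep -q '^USER ' /tmp/Dockerfile.sample || { echo "missing USER (runs as root)"; FAILS=1; }
# Check 2: the base image should be pinned, not :latest.
grep -q '^FROM .*:latest' /tmp/Dockerfile.sample && { echo "unpinned :latest base"; FAILS=1; }
[ "$FAILS" -eq 0 ] && echo "basic checks passed"
```

In a real pipeline you would run a full scanner such as the ones listed above; this sketch just shows how cheap the simplest policy checks are.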

    4. Secure the Runtime Environment

    • Monitor and log activity: Use tools like Falco, Sysdig, or Datadog to detect suspicious behavior in real-time.
    • Keep the host secure: Regularly patch the host OS and use a container-specific OS like Bottlerocket or Flatcar Linux.
    • Network segmentation: Use Kubernetes Network Policies to control traffic between pods and enforce the principle of least privilege.
    • Enable SELinux/AppArmor: Leverage security modules to add an extra layer of runtime security.

    5. Secure Access and Secrets

    • Use secret management solutions: Tools like HashiCorp Vault, AWS Secrets Manager, or Kubernetes Secrets (with encryption) should manage sensitive data.
    • Use secure authentication: Enable role-based access control (RBAC) for container orchestration tools like Kubernetes.
    • Avoid embedding secrets in images: Use environment variables or volume-mounted secrets instead.

    6. Automate and Audit Security

    • Automate compliance: Use tools like kube-bench or kubeaudit to confirm compliance with CIS Benchmarks and other standards.
    • Perform regular security assessments: Periodically conduct penetration testing and container-focused vulnerability scans.
    • Enable logging and monitoring: Centralize logs with tools like ELK, Fluentd, or Prometheus to detect and respond to incidents.

    7. Use Zero Trust Principles

    • Microsegmentation: Isolate workloads to limit lateral movement.
    • Mutual TLS (mTLS): Use service meshes like Istio or Linkerd to secure communication between services.
    • Limit ingress/egress: Restrict external communication to only what’s necessary.

    8. Educate and Train Teams

    • Secure coding practices: Train developers to write secure code and recognize vulnerabilities.
    • Understand containerization: Make sure your team understands container-specific threats and how to mitigate them.
    • Threat modeling: Regularly conduct threat modeling to foresee risks.

    Key Tools to Use

    • Image Scanning: Trivy, Clair, Anchore
    • Runtime Security: Falco, Sysdig, Aqua Security
    • Policy Enforcement: Kyverno, OPA/Gatekeeper
    • Secret Management: Vault, AWS Secrets Manager
    • Monitoring and Logging: ELK, Fluentd, Prometheus

    By implementing these best practices, you can significantly reduce the risk of vulnerabilities in your containerized environment.

    Cheers!

  • How to implement good API Security 

    How to implement good API Security 

    APIs have become essential tools for application integration, data analysis, automation, and many other technological tasks. Yet this widespread reliance makes them a prime target for hackers and other malicious actors. Without proper security measures, APIs—across production, test, and development environments—are vulnerable to sophisticated attacks that can lead to significant breaches. 

    First, let’s define what an API is. Then we will dive into some of the key elements of securing APIs. We will also discuss some of the major use cases.  

    What Is an API?

    An API, or Application Programming Interface, is a set of rules and tools that allows different software applications to communicate and interact with each other. It defines how requests and responses should be structured, enabling developers to access functionality or data from another service, system, or application without needing to understand its internal workings.

    For example, a weather app might use a weather service’s API to fetch current temperature and forecast data.

    APIs are often used to allow communication between different systems, platforms, or components of an application. They allow developers to access specific features or data of an application, service, or device without exposing its entire codebase. 

    In other words, an API lets developers and engineers interact with software programmatically, typically from the backend, which is great for not exposing or altering any underlying data. 

    Major Use Cases of APIs 

    Here is a list of the most predominant use cases for API platforms, based on my research as well as my experience working with clients and organizations: 

    • Service Integration: Connect apps and services (e.g., payment gateways, social media). 
    • Mobile Apps: Power features like weather data, maps, and more. 
    • Data Sharing: Fetch and exchange data between systems (e.g., news, financial data). 
    • Automation: Automate workflows and tasks (e.g., email marketing, scheduling). 
    • Cloud Services: Manage storage, computing, and other resources (e.g., AWS, Google Cloud). 
    • Authentication: Allow third-party logins (e.g., Google, Facebook OAuth). 
    • E-commerce: Integrate inventory, shipping, and payment features. 
    • IoT Devices: Ease communication for smart devices (e.g., Alexa, Fitbit). 
    • AI/ML: Access AI tools for NLP, image recognition, etc. 
    • Gaming: Support leaderboards, multiplayer, and VR/AR integration. 
    • Finance: Allow open banking, digital wallets, and fintech apps. 
    • Monitoring: Give analytics and performance data (e.g., Google Analytics). 

    Why APIs Are Important 

    APIs are important mainly for the following reasons: 

    • Interoperability: APIs allow systems to work together regardless of platform or language. 
    • Efficiency: APIs streamline processes, eliminating the need for custom-built solutions. 
    • Scalability: APIs allow modular development, making it easier to scale systems. 
    • Innovation: APIs empower developers to create new applications and services by leveraging existing tools and data. 

    I’m sure there are many more, though these four are considered the main reasons for using APIs.

    We now have a comprehensive understanding of APIs, including their major use cases and importance. Let’s dive into the key elements of an effective API security framework.  

    The Key Elements of an Effective API Security Framework 

    1. Authentication and Authorization: Use established standards like OAuth 2.0 for user authentication, with granular access control based on scopes and claims. This offers a great first line of defense: only authorized users will have the permissions needed to carry out the task at hand. Implement strong password policies, consider MFA for a more robust security posture, and make sure a good password rotation policy is in place. 
    2. Encryption: Always use HTTPS to encrypt data transmitted between the API and clients. You would be surprised how often clients use HTTP, mainly in dev and test environments; this should always be a red flag. Nowadays, virtually all API endpoints support HTTPS, and most have stopped supporting HTTP or block it by default. So HTTPS, please! Also consider encrypting sensitive data at rest. This responsibility falls mostly to the infrastructure team, but everyone should confirm that server-side encryption is enabled, and encryption on personal devices is important too. 
    3. Input Validation: Validating all user input parameters is a must; it helps prevent injection-type attacks such as SQL injection or XSS. A source control solution like GitHub helps with versioning, code checks, and collaboration; use it before releasing any code or parameters to any platform. This leads to the next step: sanitize input data before processing. 
    4. API Gateway: Using an API gateway centralizes security controls and lets you enforce access policies. Many cloud providers offer a managed API gateway or endpoint service. 
    5. Monitoring and Logging: Continuously monitor API activity for suspicious patterns and anomalies, and implement detailed logging to track API requests, responses, and errors. Platforms like Splunk make it easy to implement strong observability and logging. 
    6. Security Audits and Penetration Testing: Regularly conduct security audits and penetration tests to find vulnerabilities and potential attack vectors. This is highly recommended: conduct audits every six months as a starting point, though business requirements and API use cases may call for more frequent audits, or at minimum yearly ones.  
    7. API Key Management: Manage API keys securely, with rotation, expiry dates, and limited access. Always keep track of who you share or give keys to; I have often seen keys passed along from user to user. 
    8. Error Handling: Design error responses to avoid leaking sensitive information. Develop a strong error response process. It will help prevent data leaks or environment exposure. This will stop hackers from developing more complex attacks. 
    9. Rate Limits: Implement rate limiting to mitigate brute-force attacks and prevent excessive API usage.  
    10. Zero Trust Architecture: If you haven’t heard the saying “Never trust, always verify,” let me explain. This approach is very effective because it assumes potential threats can come from any source. In other words: TRUST NO ONE. 
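To make item 9 concrete, here is a toy sketch of a fixed-window rate limiter. The limit and request count are arbitrary, and in practice this logic lives in your API gateway or middleware, but the counting idea is the same:

```shell
#!/bin/sh
# Toy sketch: allow at most LIMIT requests per window, reject the rest.
LIMIT=3
COUNT=0
RESULT=""
for req in 1 2 3 4 5; do
  if [ "$COUNT" -lt "$LIMIT" ]; then
    COUNT=$((COUNT + 1))
    RESULT="$RESULT $req:allowed"
  else
    RESULT="$RESULT $req:rejected"   # a real API would return HTTP 429
  fi
done
RESULT=${RESULT# }   # trim the leading space
echo "$RESULT"       # prints: 1:allowed 2:allowed 3:allowed 4:rejected 5:rejected
```

A real rate limiter would key the counter by client or API key and reset it at each window boundary; managed gateways handle this for you.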

    Some Key Considerations: 

    Now that we have covered some of the most important elements of an effective API security framework, here are some important considerations when building yours: 

    • API Design – As a best practice, always focus on security first, including for your API platform. Clear documentation is a must; it helps maintain consistency in how you run, protect, and maintain your API platform. Incorporating robust access control keeps things tight and well controlled, which helps you sustain a high security posture across your environment and keeps out bad actors, ransomware, and other cyberattacks that can have a very negative impact on your business or organization. 
    • Least Privilege Principle – Do not give away the keys to the kingdom, and do not give root or admin accounts to anyone. Grant only the minimum necessary access, and provide additional access only as needed and through an approval process. 
    • Versioning – Keep your old versions locked down too, as they can also leak data or critical information about your infrastructure. Avoid sharing or storing old versions in unsecured or unencrypted storage or any other unencrypted system. 
    • Compliance – Follow your industry's security standards and regulations; they offer great insight and guidance on how to maintain good security practices. If your organization has no compliance framework to follow, look at a business like yours and see which compliance body it answers to. If you have customers, find out what compliance standards they must adhere to and adopt them.

    I hope this serves as a helpful starting point for adopting a solid API security framework. Stay tuned as I dive deeper into each of these elements in future posts—there’s much more to explore!  

    Cheers.