<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[CloudDecode]]></title><description><![CDATA[CloudDecode simplifies cloud &amp; DevOps—covering Azure, AWS, Kubernetes, Terraform, CI/CD &amp; more—with clear guides to help you decode, learn, and build wi]]></description><link>https://clouddecode.in</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1758291519339/009d7095-ec8e-4814-9589-b85295c4e509.png</url><title>CloudDecode</title><link>https://clouddecode.in</link></image><generator>RSS for Node</generator><lastBuildDate>Fri, 24 Apr 2026 17:20:57 GMT</lastBuildDate><atom:link href="https://clouddecode.in/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[HashiCorp Vault Secrets & GitHub Actions: Centralized Secret Management]]></title><description><![CDATA[Managing sensitive credentials like API keys across multiple GitHub repositories is challenging. It leads to duplicate secrets, a lack of versioning, and potential inconsistencies between environments.
The solution is to use HashiCorp Vault Secrets o...]]></description><link>https://clouddecode.in/hashicorp-vault-secrets-and-github-actions-centralized-secret-management</link><guid isPermaLink="true">https://clouddecode.in/hashicorp-vault-secrets-and-github-actions-centralized-secret-management</guid><category><![CDATA[hashicorp-vault]]></category><category><![CDATA[secrets management]]></category><category><![CDATA[github-actions]]></category><dc:creator><![CDATA[Abhay Patil]]></dc:creator><pubDate>Sun, 16 Nov 2025 03:53:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763265169750/f62aa0a8-c33a-48d4-b14b-eb2ab6c87b85.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Managing sensitive credentials like API keys across multiple GitHub repositories is challenging. It leads to <strong>duplicate secrets</strong>, a lack of <strong>versioning</strong>, and potential <strong>inconsistencies</strong> between environments.</p>
<p>The solution is to use <strong>HashiCorp Vault Secrets</strong> on the HashiCorp Cloud Platform (HCP) to create a single, secure source of truth that automatically <strong>synchronizes</strong> secrets to your GitHub Actions workflows.</p>
<hr />
<h2 id="heading-step-1-define-the-problem-with-a-failing-workflow">Step 1: Define the Problem with a Failing Workflow</h2>
<p>First, we establish a baseline by creating a simple GitHub Actions workflow that will <strong>fail</strong> because the necessary secret is missing. This mirrors the real-world issue before centralization.</p>
<h3 id="heading-example-workflow-githubworkflowsvault-demoyaml">Example Workflow (<code>.github/workflows/vault-demo.yaml</code>)</h3>
<p>This workflow is manually triggered and checks for the existence of an <code>AWS_API_KEY</code>.</p>
<pre><code class="lang-plaintext">name: Vault Demo
on:
  # Manually trigger the workflow from the GitHub UI
  workflow_dispatch: 

jobs:
  echo-vault-secret:
    runs-on: ubuntu-latest
    steps:
      - name: Verify AWS_API_KEY exists
        run: |
          # Check if the secret is empty (i.e., not found)
          if [[ -z "${{ secrets.AWS_API_KEY }}" ]]; then
            echo "::error::Secret Not Found"
            # Exit with a non-zero code to indicate failure
            exit 1 
          else
            echo "::notice::Secret Found"
          fi
</code></pre>
<h3 id="heading-purpose">Purpose:</h3>
<p>When you run this workflow, it will immediately <strong>fail</strong> because the <code>AWS_API_KEY</code> secret has not been defined in the repository settings.</p>
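<p>You can reproduce the failure locally before touching GitHub at all. The following plain-bash sketch mirrors the workflow's emptiness check (<code>check_secret</code> is a helper invented for illustration; in the workflow the value comes from <code>secrets.AWS_API_KEY</code>):</p>
<pre><code class="lang-plaintext"># Local simulation of the workflow's check, outside any GitHub context
check_secret() {
  if [[ -z "$1" ]]; then
    echo "Secret Not Found"
  else
    echo "Secret Found"
  fi
}

check_secret ""                         # missing secret, as in the failing run
check_secret "one-two-three-four-five"  # after the secret is synced
</code></pre>
<p>The first call prints <code>Secret Not Found</code>, which is exactly the state Step 2 fixes.</p>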
<hr />
<h2 id="heading-step-2-provision-and-configure-the-secret-in-hcp-vault">Step 2: Provision and Configure the Secret in HCP Vault</h2>
<p>We use HashiCorp Vault Secrets on the HashiCorp Cloud Platform (HCP) to create a secure, centralized store.</p>
<h3 id="heading-details-and-purpose">Details and Purpose:</h3>
<ol>
<li><p><strong>Access HCP Vault Secrets:</strong> Log in to the HCP dashboard and select <strong>Vault Secrets</strong> (the fully managed service).</p>
</li>
<li><p><strong>Create an Application:</strong> Create a new application (e.g., <strong>Secret App</strong>) within your project. This acts as a logical container for your secrets.</p>
</li>
<li><p><strong>Add the Secret:</strong> Within the application, add the sensitive key-value pair.</p>
<ul>
<li><p><strong>Key:</strong> <code>AWS_API_KEY</code></p>
</li>
<li><p><strong>Value:</strong> <code>one-two-three-four-five</code> (Use a secure, complex value in a real scenario)</p>
</li>
</ul>
</li>
</ol>
<p>This action ensures that your secret is now stored centrally with <strong>versioning</strong> and <strong>audit logs</strong>.</p>
<hr />
<h2 id="heading-step-3-integrate-vault-secrets-with-github-actions">Step 3: Integrate Vault Secrets with GitHub Actions</h2>
<p>Now we establish the automatic synchronization connection between Vault and your GitHub repository.</p>
<h3 id="heading-details-and-purpose-1">Details and Purpose:</h3>
<ol>
<li><p><strong>Navigate to Integrations:</strong> In the Vault Secrets console, go to <strong>Integrations</strong> on the left menu.</p>
</li>
<li><p><strong>Select GitHub Actions:</strong> Choose the <strong>GitHub Actions</strong> option for integration.</p>
</li>
<li><p><strong>Authorize GitHub:</strong> You will be prompted to authorize Vault's access to your GitHub account.</p>
</li>
<li><p><strong>Configure Sync Destination:</strong> Select the <strong>specific repository</strong> (e.g., <code>action-one-repository</code>) that contains the <code>Vault Demo</code> workflow.</p>
</li>
<li><p><strong>Save &amp; Sync:</strong> Configure the sync destination and click <strong>Save and Sync Secrets</strong>.</p>
<ul>
<li><strong>Purpose:</strong> Vault automatically pushes the <code>AWS_API_KEY</code> secret to the target repository's secrets store. This process <strong>synchronizes</strong> the secret, eliminating the need to manually update it in GitHub.</li>
</ul>
</li>
</ol>
<hr />
<h2 id="heading-step-4-verify-success-and-centralized-management">Step 4: Verify Success and Centralized Management</h2>
<p>The final step is to confirm that the synchronization worked and that your workflow can now successfully access the secret.</p>
<h3 id="heading-verification">Verification:</h3>
<ol>
<li><p><strong>Check GitHub Secrets:</strong> Navigate to your GitHub repository's <strong>Settings</strong> → <strong>Secrets and variables</strong> → <strong>Actions</strong>. The <code>AWS_API_KEY</code> secret, added automatically by the Vault integration, should now be present.</p>
</li>
<li><p><strong>Rerun the Workflow:</strong> Rerun the <strong>Vault Demo</strong> workflow from the GitHub Actions tab.</p>
</li>
</ol>
<h3 id="heading-expected-result">Expected Result:</h3>
<p>The workflow will now successfully execute. In the logs, you will see the output confirming that the secret was found:</p>
<pre><code class="lang-plaintext">::notice::Secret Found
</code></pre>
<p><strong>Note:</strong> GitHub's security features ensure that the actual value of <code>${{ secrets.AWS_API_KEY }}</code> is automatically masked with asterisks (<code>***</code>) in the logs, even if you try to print it.</p>
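<p>To see the masking in action, you could add a hypothetical step like this to the workflow; the log shows only the mask, never the value:</p>
<pre><code class="lang-plaintext">- name: Try to print the secret
  run: echo "Key is ${{ secrets.AWS_API_KEY }}"
  # Log output: Key is ***
</code></pre>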
<h3 id="heading-key-benefits">Key Benefits</h3>
<p>By centralizing, you gain:</p>
<ul>
<li><p><strong>Single Source of Truth:</strong> Manage secrets for all repositories from one place.</p>
</li>
<li><p><strong>Automatic Synchronization:</strong> Changes in Vault are instantly reflected in GitHub.</p>
</li>
<li><p><strong>Auditability:</strong> Vault provides a detailed log of all secret access and changes.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Qwen: The AI-Powered Assistant for DevOps and Beyond]]></title><description><![CDATA[Introduction
DevOps engineers today spend countless hours context-switching between dashboards, logs, cloud consoles, and documentation. Traditional AI tools like ChatGPT are great at answering questions, but they lack direct access to your infrastru...]]></description><link>https://clouddecode.in/qwen-the-ai-powered-assistant-for-devops-and-beyond</link><guid isPermaLink="true">https://clouddecode.in/qwen-the-ai-powered-assistant-for-devops-and-beyond</guid><category><![CDATA[#qwen]]></category><category><![CDATA[AI]]></category><category><![CDATA[AIAssistant]]></category><category><![CDATA[Devops]]></category><category><![CDATA[mcp]]></category><dc:creator><![CDATA[Abhay Patil]]></dc:creator><pubDate>Sat, 20 Sep 2025 12:07:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758369910313/c4cf60b9-cb81-41c5-87e9-83223eb498e4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>DevOps engineers today spend countless hours context-switching between dashboards, logs, cloud consoles, and documentation. Traditional AI tools like ChatGPT are great at answering questions, but they <strong>lack direct access to your infrastructure</strong>. That’s where <strong>Qwen</strong> changes the game.</p>
<p>Qwen isn’t just another coding assistant—it’s a purpose-built AI system that combines <strong>code generation, operations management, and real-time infrastructure awareness</strong> into one unified platform.</p>
<hr />
<h2 id="heading-what-is-qwen">What is Qwen?</h2>
<p>Qwen is an <strong>AI coding + DevOps assistant</strong> designed to work directly inside your environment. It leverages the <strong>Model Context Protocol (MCP)</strong> to connect seamlessly with tools like:</p>
<ul>
<li><p><strong>AWS CloudFormation MCP</strong> → Query and manage AWS resources</p>
</li>
<li><p><strong>AWS Docs MCP</strong> → Fetch documentation, best practices, and API references</p>
</li>
<li><p><strong>Terraform MCP</strong> → Automate Infrastructure-as-Code with built-in compliance scanning</p>
</li>
</ul>
<p>With Qwen, you don’t just ask questions—you execute real commands, validate changes, and fix issues in minutes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758371378446/28191798-4682-4ba6-8717-5bf31cca3fea.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-why-qwen-matters-for-devops">Why Qwen Matters for DevOps</h2>
<p>Here’s how Qwen compares with other AI assistants:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<th>Tool</th><th>Strength</th><th>Limitation for DevOps</th></tr>
</thead>
<tbody>
<tr>
<td>ChatGPT / Claude</td><td>Great for explanations</td><td>No direct environment access</td></tr>
<tr>
<td>GitHub Copilot</td><td>Excellent for code completion</td><td>Limited to IDE context</td></tr>
<tr>
<td>Cursor</td><td>Powerful editor integration</td><td>Focused on development, not ops</td></tr>
<tr>
<td><strong>Qwen</strong></td><td>Built for coding + DevOps</td><td>None of the above: direct CLI + toolchain access</td></tr>
</tbody>
</table>
</div><p><strong>Qwen stands out because it:</strong></p>
<ul>
<li><p>Executes commands directly in your environment</p>
</li>
<li><p>Integrates with <strong>MCP out of the box</strong></p>
</li>
<li><p>Switches between AI models depending on complexity</p>
</li>
<li><p>Supports specialized <strong>agents</strong> for tasks like compliance checks, log analysis, and cost optimization</p>
</li>
</ul>
<hr />
<h2 id="heading-qwen-in-action-a-real-example">Qwen in Action: A Real Example</h2>
<p>Imagine your Kubernetes cluster is broken. Here’s how Qwen helps:</p>
<ol>
<li><p><strong>Diagnose</strong>:</p>
<pre><code class="lang-plaintext"> qwen ask "List failing pods in production namespace"
</code></pre>
<p> → Instantly fetches cluster state using Kubernetes CLI integration.</p>
</li>
<li><p><strong>Research</strong>:</p>
<pre><code class="lang-plaintext"> qwen ask "What are the best practices for fixing ImagePullBackOff errors in ECR?"
</code></pre>
<p> → Pulls real-time guidance from AWS Documentation MCP.</p>
</li>
<li><p><strong>Fix</strong>:</p>
<pre><code class="lang-plaintext"> qwen apply terraform plan ./ec2_setup.tf
</code></pre>
<p> → Deploys infrastructure securely with Terraform MCP + compliance scanning.</p>
</li>
</ol>
<p>In minutes, you move from <strong>problem → context → solution → fix</strong>, without leaving the CLI.</p>
<hr />
<h2 id="heading-key-features-of-qwen">Key Features of Qwen</h2>
<ul>
<li><p><strong>MCP Integration</strong>: Bridges AI with your AWS, Terraform, and toolchain environments.</p>
</li>
<li><p><strong>Prompt-Aware Execution</strong>: AI doesn’t just generate commands—it explains them before execution.</p>
</li>
<li><p><strong>Specialized Models</strong>:</p>
<ul>
<li><p><code>qwen3-coder-plus</code>: Complex debugging &amp; deep analysis</p>
</li>
<li><p><code>grok-code-fast-1</code>: Fast responses for quick lookups</p>
</li>
</ul>
</li>
<li><p><strong>Agent Ecosystem</strong>: Deploy AI agents for compliance, monitoring, cost optimization, or log analysis.</p>
</li>
</ul>
<hr />
<h2 id="heading-who-should-use-qwen">Who Should Use Qwen?</h2>
<ul>
<li><p><strong>DevOps Engineers</strong> → Automate troubleshooting and incident response.</p>
</li>
<li><p><strong>Cloud Architects</strong> → Optimize infrastructure with AI-driven insights.</p>
</li>
<li><p><strong>Platform Teams</strong> → Build self-healing systems with specialized AI agents.</p>
</li>
<li><p><strong>Developers</strong> → Get environment-aware debugging without endless context switching.</p>
</li>
</ul>
<hr />
<h2 id="heading-final-thoughts">Final Thoughts</h2>
<p>Qwen represents the next step in AI-powered DevOps: <strong>not just advice, but action.</strong></p>
<p>Instead of juggling logs, dashboards, and docs, you can ask Qwen one question and get a <strong>verified, executable solution</strong> tailored to your environment.</p>
<p>The result?</p>
<ul>
<li><p>Faster fixes</p>
</li>
<li><p>Fewer mistakes</p>
</li>
<li><p>More time for strategy and innovation</p>
</li>
</ul>
<hr />
<p>In short: <strong>Qwen brings clarity to the chaos of DevOps.</strong></p>
]]></content:encoded></item><item><title><![CDATA[Understanding GitHub Actions Concurrency]]></title><description><![CDATA[When working with GitHub Actions, you may face situations where multiple workflow runs overlap, consume unnecessary resources, or even cause conflicts in deployment. This is where concurrency comes into play.
Concurrency in GitHub Actions allows you ...]]></description><link>https://clouddecode.in/understanding-github-actions-concurrency</link><guid isPermaLink="true">https://clouddecode.in/understanding-github-actions-concurrency</guid><category><![CDATA[concurrency]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[github workflow]]></category><category><![CDATA[GitHub]]></category><dc:creator><![CDATA[Abhay Patil]]></dc:creator><pubDate>Sat, 20 Sep 2025 11:58:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758369428932/83e8dcac-cb0e-4677-8385-873474557ca3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When working with GitHub Actions, you may face situations where multiple workflow runs overlap, consume unnecessary resources, or even cause conflicts in deployment. This is where <strong>concurrency</strong> comes into play.</p>
<p>Concurrency in GitHub Actions allows you to control how workflow runs are handled when a new run is triggered before a previous one finishes. By defining concurrency rules, you can decide whether to cancel, queue, or allow multiple runs simultaneously.</p>
<hr />
<h2 id="heading-why-concurrency-matters">Why Concurrency Matters</h2>
<p>Imagine the following scenarios:</p>
<ul>
<li><p><strong>CI pipelines</strong>: A developer pushes multiple commits rapidly. Without concurrency control, each commit triggers a workflow, resulting in unnecessary duplicate builds.</p>
</li>
<li><p><strong>Deployments</strong>: Two workflows triggered close together may attempt to deploy to the same environment at the same time, leading to inconsistent states.</p>
</li>
<li><p><strong>Resource Management</strong>: Avoiding multiple parallel runs saves GitHub-hosted runner minutes and keeps pipelines efficient.</p>
</li>
</ul>
<hr />
<h2 id="heading-basic-concurrency-syntax">Basic Concurrency Syntax</h2>
<p>You can define concurrency in your workflow YAML file using the <code>concurrency</code> key:</p>
<pre><code class="lang-plaintext">name: CI Pipeline

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    concurrency:
      group: ci-build-${{ github.ref }}
      cancel-in-progress: true

    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Build
        run: echo "Running build for $GITHUB_REF"
</code></pre>
<hr />
<h2 id="heading-key-components">Key Components</h2>
<h3 id="heading-1-group">1. <code>group</code></h3>
<ul>
<li><p>Defines the "bucket" that runs belong to.</p>
</li>
<li><p>Can be static (e.g., <code>"deploy"</code>) or dynamic (e.g., <code>"deploy-${{ github.ref }}"</code>).</p>
</li>
<li><p>Runs in the same group respect concurrency rules.</p>
</li>
</ul>
<h3 id="heading-2-cancel-in-progress">2. <code>cancel-in-progress</code></h3>
<ul>
<li><p>If <code>true</code>: cancels any currently running jobs in the same group before starting a new one.</p>
</li>
<li><p>If <code>false</code> (default): the new run is queued until the current run in the group finishes. Note that GitHub keeps at most one pending run per group, so when several runs queue up, only the most recent pending run survives.</p>
</li>
</ul>
<hr />
<h2 id="heading-examples">Examples</h2>
<h3 id="heading-example-1-cancel-previous-runs">Example 1: Cancel Previous Runs</h3>
<p>Useful for CI pipelines where only the latest commit matters:</p>
<pre><code class="lang-plaintext">concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: true
</code></pre>
<h3 id="heading-example-2-sequential-deployments">Example 2: Sequential Deployments</h3>
<p>Ensures only one deployment per environment happens at a time:</p>
<pre><code class="lang-plaintext">concurrency:
  group: deploy-production
  cancel-in-progress: false
</code></pre>
<h3 id="heading-example-3-matrix-jobs-with-concurrency">Example 3: Matrix Jobs with Concurrency</h3>
<pre><code class="lang-plaintext">jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node: [14, 16, 18]
    concurrency:
      group: test-${{ matrix.node }}
      cancel-in-progress: true
</code></pre>
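<p>Concurrency can also be declared once at the workflow level rather than per job, so a single rule covers every job in the run:</p>
<pre><code class="lang-plaintext">name: CI

on:
  push:
    branches:
      - main

# Workflow-level concurrency: applies to the entire run, all jobs included
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
</code></pre>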
<hr />
<h2 id="heading-best-practices">Best Practices</h2>
<ol>
<li><p><strong>Use meaningful group names</strong><br /> Use <code>${{ github.ref }}</code> or <code>${{ github.workflow }}</code> to scope groups appropriately.</p>
</li>
<li><p><strong>Cancel aggressively for CI</strong><br /> Saves time and runner minutes when frequent commits occur.</p>
</li>
<li><p><strong>Don’t cancel for deployments</strong><br /> Deployment pipelines should finish in order to maintain state consistency.</p>
</li>
<li><p><strong>Mix with environments</strong><br /> You can combine concurrency with environment protection rules for more reliable workflows.</p>
</li>
</ol>
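<p>The last two practices combine naturally. Here is a sketch of a deployment job that pairs a protected <code>environment</code> with a non-cancelling concurrency group:</p>
<pre><code class="lang-plaintext">jobs:
  deploy:
    runs-on: ubuntu-latest
    # Environment protection rules (required reviewers, wait timers) apply here...
    environment: production
    # ...while concurrency ensures deployments run one at a time, in order
    concurrency:
      group: deploy-production
      cancel-in-progress: false
    steps:
      - name: Deploy
        run: echo "Deploying to production"
</code></pre>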
<hr />
<h2 id="heading-docker-example-with-concurrency">Docker Example with Concurrency</h2>
<p>Suppose you are building and pushing a Docker image on every push to <code>main</code>. Without concurrency, multiple builds may try to push at the same time.</p>
<pre><code class="lang-plaintext">name: Docker Build &amp; Push

on:
  push:
    branches:
      - main

jobs:
  docker-build:
    runs-on: ubuntu-latest
    concurrency:
      group: docker-main
      cancel-in-progress: true

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Log in to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Build and Push
        run: |
          docker build -t myapp:latest .
          docker push myapp:latest
</code></pre>
<p>Here, if multiple pushes happen rapidly, only the latest Docker build will continue.</p>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>Concurrency in GitHub Actions is a powerful feature to:</p>
<ul>
<li><p>Avoid duplicate work,</p>
</li>
<li><p>Prevent deployment conflicts,</p>
</li>
<li><p>Save runner minutes.</p>
</li>
</ul>
<p>By properly configuring <code>group</code> and <code>cancel-in-progress</code>, you can make your pipelines faster, cheaper, and more reliable.</p>
]]></content:encoded></item><item><title><![CDATA[Organization-level Environment Variables and Secrets in GitHub Actions]]></title><description><![CDATA[When teams scale, managing workflows across multiple repositories becomes more complex. Each project often shares common configuration values, cloud credentials, or deployment secrets. Instead of duplicating these across repositories, GitHub provides...]]></description><link>https://clouddecode.in/organization-level-environment-variables-and-secrets-in-github-actions</link><guid isPermaLink="true">https://clouddecode.in/organization-level-environment-variables-and-secrets-in-github-actions</guid><category><![CDATA[Git]]></category><category><![CDATA[Actions]]></category><category><![CDATA[secrets]]></category><category><![CDATA[GitHub Actions]]></category><category><![CDATA[variables]]></category><dc:creator><![CDATA[Abhay Patil]]></dc:creator><pubDate>Wed, 17 Sep 2025 16:36:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758179969955/8d0e323a-1521-4b46-a94c-7a03c4815360.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When teams scale, managing workflows across multiple repositories becomes more complex. Each project often shares common configuration values, cloud credentials, or deployment secrets. Instead of duplicating these across repositories, GitHub provides a centralized solution: <strong>organization-level environment variables and secrets</strong>.</p>
<p>In this article, we’ll explore how they work, when to use them, and how to integrate them into your CI/CD pipelines.</p>
<hr />
<h2 id="heading-why-organization-level-variables-and-secrets">Why Organization-level Variables and Secrets?</h2>
<p>Repository-level variables and secrets are useful, but they only apply to a single repository. If you manage multiple repositories within the same organization (for example, microservices architecture), repeating the same values across repositories is inefficient and error-prone.</p>
<p>Organization-level variables and secrets allow you to:</p>
<ul>
<li><p>Define once and use in <strong>all repositories</strong> within the organization.</p>
</li>
<li><p>Standardize configurations across projects.</p>
</li>
<li><p>Reduce maintenance overhead.</p>
</li>
<li><p>Strengthen security by managing credentials in a single place.</p>
</li>
</ul>
<hr />
<h2 id="heading-types-of-organization-level-data">Types of Organization-level Data</h2>
<ol>
<li><p><strong>Organization Variables</strong></p>
<ul>
<li><p>Non-sensitive values such as <code>DOCKER_REGISTRY_URL</code>, <code>NODE_VERSION</code>, or <code>DEPLOY_REGION</code>.</p>
</li>
<li><p>Available in all workflows in repositories under the organization.</p>
</li>
<li><p>Accessed using <code>${{ vars.VAR_NAME }}</code>.</p>
</li>
</ul>
</li>
<li><p><strong>Organization Secrets</strong></p>
<ul>
<li><p>Encrypted, hidden values like API keys, service account credentials, and cloud tokens.</p>
</li>
<li><p>Available to all repositories in the organization, unless access is restricted.</p>
</li>
<li><p>Accessed using <code>${{ secrets.SECRET_NAME }}</code>.</p>
</li>
</ul>
</li>
</ol>
<hr />
<h2 id="heading-how-to-set-them">How to Set Them</h2>
<ol>
<li><p>Go to your GitHub <strong>organization’s page</strong>.</p>
</li>
<li><p>Navigate to: <strong>Settings → Secrets and variables → Actions</strong>.</p>
</li>
<li><p>You’ll see two tabs:</p>
<ul>
<li><p><strong>Variables</strong> → for plain-text values.</p>
</li>
<li><p><strong>Secrets</strong> → for sensitive data.</p>
</li>
</ul>
</li>
<li><p>Choose whether they should apply to <strong>all repositories</strong> or to a <strong>selected set of repositories</strong>.</p>
</li>
</ol>
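<p>If you prefer scripting over the UI, the same setup can be done with the GitHub CLI (a sketch: <code>my-org</code> and the values are placeholders; flag support varies by <code>gh</code> version, so check <code>gh help variable</code> and <code>gh help secret</code>):</p>
<pre><code class="lang-plaintext"># Organization variable, visible to all repositories (plain-text value)
gh variable set DEPLOY_REGION --org my-org --body "us-east-1" --visibility all

# Organization secret, restricted to selected repositories
gh secret set DOCKER_PASSWORD --org my-org --visibility selected \
  --repos "service-a,service-b"
</code></pre>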
<hr />
<h2 id="heading-example-use-case-multi-service-deployment-with-docker">Example Use Case: Multi-service Deployment with Docker</h2>
<p>Imagine an organization managing multiple microservices, each in its own repository. They all deploy Docker images to the same registry. Instead of configuring credentials in every repo, you can define them once at the organization level.</p>
<p><strong>Organization Variables:</strong></p>
<ul>
<li><p><code>DOCKER_REGISTRY = ghcr.io/my-org</code></p>
</li>
<li><p><code>DEPLOY_ENV = production</code></p>
</li>
</ul>
<p><strong>Organization Secrets:</strong></p>
<ul>
<li><p><code>DOCKER_USERNAME</code></p>
</li>
<li><p><code>DOCKER_PASSWORD</code></p>
</li>
</ul>
<p><strong>Workflow Example:</strong></p>
<pre><code class="lang-plaintext">name: Build and Push Docker

on: [push]

jobs:
  docker:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Log in to Docker registry
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login ${{ vars.DOCKER_REGISTRY }} -u ${{ secrets.DOCKER_USERNAME }} --password-stdin

      - name: Build Docker image
        run: docker build -t ${{ vars.DOCKER_REGISTRY }}/myservice:${{ github.sha }} .

      - name: Push Docker image
        run: docker push ${{ vars.DOCKER_REGISTRY }}/myservice:${{ github.sha }}
</code></pre>
<h3 id="heading-whats-happening-here">What’s happening here?</h3>
<ul>
<li><p>Organization-level variable <code>${{ vars.DOCKER_REGISTRY }}</code> ensures all repositories use the same registry.</p>
</li>
<li><p>Secrets <code>${{ secrets.DOCKER_USERNAME }}</code> and <code>${{ secrets.DOCKER_PASSWORD }}</code> handle authentication securely.</p>
</li>
<li><p>Any new repository added to the org can reuse this configuration instantly.</p>
</li>
</ul>
<hr />
<h2 id="heading-benefits-of-organization-level-variables-and-secrets">Benefits of Organization-level Variables and Secrets</h2>
<ul>
<li><p><strong>Centralized Management</strong>: Define once, use everywhere.</p>
</li>
<li><p><strong>Security</strong>: Sensitive data is encrypted and scoped properly.</p>
</li>
<li><p><strong>Consistency</strong>: Prevents mismatches across repositories.</p>
</li>
<li><p><strong>Scalability</strong>: Ideal for organizations with dozens of repositories.</p>
</li>
</ul>
<hr />
<h2 id="heading-key-takeaways">Key Takeaways</h2>
<ul>
<li><p>Use <strong>organization variables</strong> for non-sensitive shared values.</p>
</li>
<li><p>Use <strong>organization secrets</strong> for sensitive credentials shared across repositories.</p>
</li>
<li><p>Control which repositories have access for tighter security.</p>
</li>
<li><p>Great for scaling CI/CD pipelines across multiple projects.</p>
</li>
</ul>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>Organization-level environment variables and secrets simplify configuration management at scale. By centralizing values and credentials, you can eliminate redundancy, reduce risk, and improve maintainability across all repositories.</p>
<p>If your team is managing multiple repositories under one organization, adopting organization-level variables and secrets is a best practice for efficiency and security.</p>
]]></content:encoded></item><item><title><![CDATA[Repository-level Environment Variables and Secrets in GitHub Actions]]></title><description><![CDATA[When working with GitHub Actions, environment variables and secrets are powerful tools that help you configure workflows in a flexible and secure way. While workflow, job, and step-level variables are useful, sometimes you need variables that are ava...]]></description><link>https://clouddecode.in/repository-level-environment-variables-and-secrets-in-github-actions</link><guid isPermaLink="true">https://clouddecode.in/repository-level-environment-variables-and-secrets-in-github-actions</guid><category><![CDATA[GitHub]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[Environment variables]]></category><category><![CDATA[secrets]]></category><dc:creator><![CDATA[Abhay Patil]]></dc:creator><pubDate>Mon, 15 Sep 2025 16:27:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757950989949/1ef2f17d-9fbf-4f6e-bd5a-12ee60ea7a05.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When working with GitHub Actions, environment variables and secrets are powerful tools that help you configure workflows in a flexible and secure way. While workflow, job, and step-level variables are useful, sometimes you need variables that are available across all workflows in a repository. This is where <strong>repository-level environment variables and secrets</strong> come in.</p>
<p>In this article, we’ll break down how repository-level variables and secrets work, why they are useful, and how you can manage them effectively. We’ll also see how to use them in a real-world example: building and pushing a Docker image.</p>
<hr />
<h2 id="heading-what-are-repository-level-variables">What are Repository-level Variables?</h2>
<p><strong>Repository-level environment variables</strong> are key-value pairs defined in your repository’s settings. They are accessible to all workflows in that repository, without needing to redefine them in every workflow file.</p>
<p>For example:</p>
<ul>
<li><p>Common Docker tags</p>
</li>
<li><p>Application environment names (like <code>STAGING</code>, <code>PROD</code>)</p>
</li>
<li><p>URLs or constants that rarely change</p>
</li>
</ul>
<p>They’re stored in <strong>plain text</strong>, so they are suitable for non-sensitive data.</p>
<hr />
<h2 id="heading-what-are-repository-level-secrets">What are Repository-level Secrets?</h2>
<p><strong>Secrets</strong> are similar to variables, but they are <strong>encrypted and hidden</strong>. They are designed for storing sensitive data such as:</p>
<ul>
<li><p>API keys</p>
</li>
<li><p>Database passwords</p>
</li>
<li><p>Cloud provider credentials</p>
</li>
<li><p>DockerHub tokens</p>
</li>
</ul>
<p>Secrets are masked in logs and cannot be retrieved once set, making them the secure way to pass sensitive information into workflows.</p>
<hr />
<h2 id="heading-how-to-set-them">How to Set Them</h2>
<ol>
<li><p>Go to your GitHub repository.</p>
</li>
<li><p>Navigate to:<br /> <strong>Settings → Secrets and variables → Actions</strong></p>
</li>
<li><p>You’ll find two tabs:</p>
<ul>
<li><p><strong>Variables</strong> → for non-sensitive values</p>
</li>
<li><p><strong>Secrets</strong> → for sensitive values</p>
</li>
</ul>
</li>
<li><p>Add your desired key-value pairs.</p>
</li>
</ol>
<p>For example:</p>
<ul>
<li><p>Variable: <code>DOCKER_IMAGE_NAME = myapp</code></p>
</li>
<li><p>Secret: <code>DOCKERHUB_TOKEN = &lt;your token&gt;</code></p>
</li>
</ul>
<hr />
<h2 id="heading-using-them-in-workflows">Using Them in Workflows</h2>
<p>Once defined, you can access these directly in your YAML workflow:</p>
<pre><code class="lang-plaintext">name: Docker Build and Push

on: [push]

jobs:
  docker:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Log in to DockerHub
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u ${{ secrets.DOCKERHUB_USER }} --password-stdin

      - name: Build Docker image
        run: docker build -t ${{ vars.DOCKER_IMAGE_NAME }}:${{ github.sha }} .

      - name: Push Docker image
        run: docker push ${{ vars.DOCKER_IMAGE_NAME }}:${{ github.sha }}
</code></pre>
<hr />
<h2 id="heading-key-points-to-remember">Key Points to Remember</h2>
<ul>
<li><p>Use <strong>repository-level variables</strong> for constants that don’t need to be hidden.</p>
</li>
<li><p>Use <strong>secrets</strong> for anything sensitive.</p>
</li>
<li><p>Variables are available using <code>${{ vars.NAME }}</code>.</p>
</li>
<li><p>Secrets are available using <code>${{ secrets.NAME }}</code>.</p>
</li>
<li><p>Both variables and secrets are available in all workflows in the repository.</p>
</li>
</ul>
<hr />
<h2 id="heading-real-world-example-docker-deployment">Real-world Example: Docker Deployment</h2>
<p>Imagine you’re deploying a service using DockerHub. Instead of hardcoding image names and credentials in multiple workflow files:</p>
<ul>
<li><p>Store <code>DOCKER_IMAGE_NAME</code> as a variable.</p>
</li>
<li><p>Store <code>DOCKERHUB_USER</code> and <code>DOCKERHUB_TOKEN</code> as secrets.</p>
</li>
<li><p>Reference them directly in your workflows.</p>
</li>
</ul>
<p>This keeps your workflows clean, reusable, and secure.</p>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>Repository-level variables and secrets are essential for building maintainable and secure GitHub Actions workflows. They allow you to centralize configuration, avoid duplication, and keep sensitive data safe. Whether you’re building Docker images, deploying to cloud services, or running CI pipelines, understanding how to use repository-level variables and secrets will save you time and headaches.</p>
]]></content:encoded></item><item><title><![CDATA[Environment Variables in GitHub Actions (with Docker Example)]]></title><description><![CDATA[Automation is at the heart of modern software delivery, and GitHub Actions has become a go-to solution for CI/CD pipelines. One of the most powerful features in GitHub Actions is the ability to manage environment variables (envs). These variables hel...]]></description><link>https://clouddecode.in/environment-variables-in-github-actions-with-docker-example</link><guid isPermaLink="true">https://clouddecode.in/environment-variables-in-github-actions-with-docker-example</guid><category><![CDATA[GitHub]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[Environment variables]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Abhay Patil]]></dc:creator><pubDate>Sat, 13 Sep 2025 05:21:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757740729952/39d5e632-349d-481e-8c7a-a9538935a8f4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Automation is at the heart of modern software delivery, and GitHub Actions has become a go-to solution for CI/CD pipelines. One of the most powerful features in GitHub Actions is the ability to manage environment variables (envs). These variables help control behavior, store configuration, and keep sensitive data secure.</p>
<p>In this article, we’ll explore different ways to declare and use environment variables in GitHub Actions, and we’ll tie it all together with a Docker build and push example.</p>
<hr />
<h2 id="heading-what-are-environment-variables">What are Environment Variables?</h2>
<p>Environment variables are key-value pairs that provide configuration settings to jobs, steps, or even the whole workflow. They can be used to:</p>
<ul>
<li><p>Pass configurations (for example, Node.js version, Docker image name)</p>
</li>
<li><p>Control workflow behavior</p>
</li>
<li><p>Store sensitive information (API keys, tokens)</p>
</li>
</ul>
<hr />
<h2 id="heading-types-of-environment-variables-in-github-actions">Types of Environment Variables in GitHub Actions</h2>
<h3 id="heading-1-workflow-level">1. Workflow-level</h3>
<p>Defined at the top of the workflow, available in all jobs and steps.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">env:</span>
  <span class="hljs-attr">WORKFLOW_ENV:</span> <span class="hljs-string">"workflow-scope"</span>
</code></pre>
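<p>Inside any step, a workflow-level value can be read either from the shell environment or via the <code>env</code> context. A minimal sketch:</p>
<pre><code class="lang-yaml">steps:
  - name: Read workflow-level env
    run: |
      echo "From the shell: $WORKFLOW_ENV"
      echo "From the env context: ${{ env.WORKFLOW_ENV }}"
</code></pre>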
<h3 id="heading-2-job-level">2. Job-level</h3>
<p>Defined under a job, only available within that job.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">env:</span>
      <span class="hljs-attr">JOB_ENV:</span> <span class="hljs-string">"job-scope"</span>
</code></pre>
<h3 id="heading-3-step-level">3. Step-level</h3>
<p>Defined under a step, only available within that step.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">steps:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Step</span> <span class="hljs-string">with</span> <span class="hljs-string">env</span>
    <span class="hljs-attr">env:</span>
      <span class="hljs-attr">STEP_ENV:</span> <span class="hljs-string">"step-scope"</span>
    <span class="hljs-attr">run:</span> <span class="hljs-string">echo</span> <span class="hljs-string">"Step var: $STEP_ENV"</span>
</code></pre>
<h3 id="heading-4-matrix-level">4. Matrix-level</h3>
<p>Useful for testing across multiple environments (for example, OS or language versions).</p>
<pre><code class="lang-yaml"><span class="hljs-attr">strategy:</span>
  <span class="hljs-attr">matrix:</span>
    <span class="hljs-attr">os:</span> [<span class="hljs-string">ubuntu-latest</span>, <span class="hljs-string">windows-latest</span>]
    <span class="hljs-attr">version:</span> [<span class="hljs-number">14</span>, <span class="hljs-number">16</span>]
</code></pre>
<p>Use with <code>${{ matrix.os }}</code> or <code>${{ matrix.version }}</code>.</p>
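<p>A matrix expands into one job per combination, and the values are consumed through the <code>matrix</code> context. A minimal sketch:</p>
<pre><code class="lang-yaml">jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
        version: [14, 16]
    runs-on: ${{ matrix.os }}   # 4 jobs: one per os/version combination
    steps:
      - run: echo "Node ${{ matrix.version }} on ${{ matrix.os }}"
</code></pre>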
<h3 id="heading-5-dynamic-environment-variables">5. Dynamic Environment Variables</h3>
<p>Created at runtime with <code>GITHUB_ENV</code>.</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span> <span class="hljs-string">echo</span> <span class="hljs-string">"BUILD_ID=$<span class="hljs-template-variable">{{ github.run_id }}</span>"</span> <span class="hljs-string">&gt;&gt;</span> <span class="hljs-string">$GITHUB_ENV</span>
</code></pre>
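<p>A value appended to <code>$GITHUB_ENV</code> becomes available in <em>subsequent</em> steps of the same job, not in the step that sets it:</p>
<pre><code class="lang-yaml">steps:
  - name: Set a dynamic variable
    run: echo "BUILD_ID=${{ github.run_id }}" &gt;&gt; $GITHUB_ENV

  - name: Use it in a later step
    run: echo "This build is $BUILD_ID"
</code></pre>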
<h3 id="heading-6-secrets">6. Secrets</h3>
<p>Used for sensitive values like tokens or API keys.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">env:</span>
  <span class="hljs-attr">DOCKER_PASSWORD:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKER_PASSWORD</span> <span class="hljs-string">}}</span>
</code></pre>
<hr />
<h2 id="heading-real-world-example-docker-build-amp-push">Real-world Example: Docker Build &amp; Push</h2>
<p>Let’s bring it together with a Docker pipeline.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Docker</span> <span class="hljs-string">CI/CD</span>

<span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span> [ <span class="hljs-string">main</span> ]

<span class="hljs-attr">env:</span>
  <span class="hljs-attr">IMAGE_NAME:</span> <span class="hljs-string">myapp</span>

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">docker:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">strategy:</span>
      <span class="hljs-attr">matrix:</span>
        <span class="hljs-attr">version:</span> [<span class="hljs-string">"1.0"</span>, <span class="hljs-string">"2.0"</span>]

    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v3</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Log</span> <span class="hljs-string">in</span> <span class="hljs-string">to</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Hub</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">docker/login-action@v2</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">username:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKER_USERNAME</span> <span class="hljs-string">}}</span>
          <span class="hljs-attr">password:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKER_PASSWORD</span> <span class="hljs-string">}}</span>

      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">Docker</span> <span class="hljs-string">image</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          docker build . -t ${{ secrets.DOCKER_USERNAME }}/$IMAGE_NAME:${{ matrix.version }}
</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Push</span> <span class="hljs-string">Docker</span> <span class="hljs-string">image</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|</span>
          <span class="hljs-string">docker</span> <span class="hljs-string">push</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKER_USERNAME</span> <span class="hljs-string">}}/$IMAGE_NAME:${{</span> <span class="hljs-string">matrix.version</span> <span class="hljs-string">}}</span>
</code></pre>
<h3 id="heading-explanation-of-this-example">Explanation of this example:</h3>
<ul>
<li><p>Workflow-level: <code>IMAGE_NAME</code> is available everywhere.</p>
</li>
<li><p>Matrix-level: Builds multiple versions of the Docker image.</p>
</li>
<li><p>Secrets: Docker credentials are securely injected from GitHub Secrets.</p>
</li>
</ul>
<hr />
<h2 id="heading-visualizing-env-scopes">Visualizing Env Scopes</h2>
<pre><code class="lang-plaintext">Workflow → Job → Step
         ↓        ↓
     Matrix    Dynamic/Secrets
</code></pre>
<p>This flow shows how variables cascade down from workflow to job to step, while matrix, dynamic, and secrets act as overlays.</p>
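<p>When the same name is defined at several scopes, the most specific one wins: a step-level value overrides a job-level one, which overrides a workflow-level one. A minimal sketch:</p>
<pre><code class="lang-yaml">env:
  GREETING: "from workflow"

jobs:
  demo:
    runs-on: ubuntu-latest
    env:
      GREETING: "from job"
    steps:
      - run: echo "$GREETING"        # prints "from job"
      - env:
          GREETING: "from step"
        run: echo "$GREETING"        # prints "from step"
</code></pre>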
<hr />
<h2 id="heading-key-takeaways">Key Takeaways</h2>
<ul>
<li><p>Use workflow-level environment variables for common configurations.</p>
</li>
<li><p>Use job-level environment variables to scope variables to a specific job.</p>
</li>
<li><p>Use step-level environment variables sparingly for unique cases.</p>
</li>
<li><p>Use matrix environment variables to test multiple versions or environments.</p>
</li>
<li><p>Use dynamic environment variables when values need to be generated at runtime.</p>
</li>
<li><p>Use secrets for sensitive information such as credentials and tokens.</p>
</li>
</ul>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>Environment variables are a cornerstone of GitHub Actions workflows. Whether you’re building, testing, or deploying with Docker, managing environment variables correctly ensures your pipeline is clean, secure, and maintainable.</p>
<p>By combining these scopes effectively, you can design workflows that adapt easily to new environments and use cases.</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Splat Expressions in Terraform]]></title><description><![CDATA[When writing Terraform code to manage cloud infrastructure, we often deal with multiple similar resources — for example, a group of EC2 instances, multiple storage accounts, or several subnets.
So how do we easily access the same attribute (like ID o...]]></description><link>https://clouddecode.in/splat-expressions-in-terraform-45bba3bc306a</link><guid isPermaLink="true">https://clouddecode.in/splat-expressions-in-terraform-45bba3bc306a</guid><category><![CDATA[Terraform]]></category><category><![CDATA[splatexpressions]]></category><dc:creator><![CDATA[clouddecode]]></dc:creator><pubDate>Sun, 15 Jun 2025 16:34:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/hGV2TfOh0ns/upload/67567ce3ea4552f1cb24c0bc8e96a8c7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When writing Terraform code to manage cloud infrastructure, we often deal with <strong>multiple similar resources</strong> — for example, a group of EC2 instances, multiple storage accounts, or several subnets.</p>
<p>So how do we <strong>easily access the same attribute (like ID or name) from all of them</strong>?</p>
<p>This is where Terraform’s <strong>splat expressions</strong> shine.</p>
<p>In this blog, we’ll break down:</p>
<ul>
<li><p>What splat expressions are</p>
</li>
<li><p>When and why to use them</p>
</li>
<li><p>Simple examples using <code>count</code> and <code>for_each</code></p>
</li>
<li><p>When to use <code>for</code> loop instead</p>
</li>
<li><p>Tips and best practices</p>
</li>
</ul>
<h2 id="heading-what-is-a-splat-expression"><strong>What Is a Splat Expression?</strong></h2>
<p>In Terraform, a <strong>splat expression</strong> uses the asterisk symbol (<code>*</code>) to quickly <strong>extract values from a list or map of resources</strong>.</p>
<p>In short: use <code>[*]</code> to grab the same property from <strong>all resources in a group</strong>.</p>
<h2 id="heading-scenario-1-using-count-to-create-multiple-resources"><strong>Scenario 1: Using</strong> <code>count</code> to Create Multiple Resources</h2>
<p>Let’s say you want to create 3 EC2 instances.</p>
<pre><code class="lang-plaintext">resource "aws_instance" "example" {
  count         = 3
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}
</code></pre>
<p>Each instance will have its own ID, IP, and so on.</p>
<p>Now, if you want to get all 3 instance IDs, instead of writing:</p>
<pre><code class="lang-plaintext">[
  aws_instance.example[0].id,
  aws_instance.example[1].id,
  aws_instance.example[2].id
]
</code></pre>
<p>You can just write:</p>
<pre><code class="lang-plaintext">output "instance_ids" {
  value = aws_instance.example[*].id
}
</code></pre>
<p><strong>This is a splat expression.</strong></p>
<p>It goes through every instance in the list and pulls out the <code>.id</code> value — and gives you a list like:</p>
<pre><code class="lang-plaintext">["i-abc123", "i-def456", "i-ghi789"]
</code></pre>
<h2 id="heading-scenario-2-using-foreach-to-create-multiple-resources"><strong>Scenario 2: Using</strong> <code>for_each</code> to Create Multiple Resources</h2>
<p>Let’s create multiple S3 buckets using <code>for_each</code>:</p>
<pre><code class="lang-plaintext">resource "aws_s3_bucket" "example" {
  for_each = toset(["my-bucket-1", "my-bucket-2"])
  bucket   = each.key
  acl      = "private"
}
</code></pre>
<p>Now you want the <strong>IDs of all buckets</strong>. Since <code>for_each</code> uses a <strong>map</strong>, you can’t directly use the <code>[*]</code> shortcut. Instead, combine it with <code>values()</code>:</p>
<pre><code class="lang-plaintext">output "bucket_ids" {
  value = values(aws_s3_bucket.example)[*].id
}
</code></pre>
<p>Here:</p>
<ul>
<li><p><code>values(...)</code> turns the map into a list</p>
</li>
<li><p><code>[*].id</code> extracts all IDs from that list</p>
</li>
</ul>
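<p>If the <code>values()</code> step feels indirect, a <code>for</code> expression produces the same list while iterating the map directly:</p>
<pre><code class="lang-plaintext">output "bucket_ids" {
  value = [for b in aws_s3_bucket.example : b.id]
}
</code></pre>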
<h2 id="heading-behind-the-scenes-what-does-do"><strong>Behind the Scenes — What Does <code>[*]</code> Do?</strong></h2>
<p>A splat expression is just a <strong>shortcut</strong>. It’s equivalent to a loop that collects a specific attribute from every item in the list.</p>
<p>So this:</p>
<pre><code class="lang-plaintext">aws_instance.example[*].id
</code></pre>
<p>Is roughly like:</p>
<pre><code class="lang-plaintext">[for instance in aws_instance.example : instance.id]
</code></pre>
<h2 id="heading-when-to-use-a-for-expression-instead"><strong>When to Use a <code>for</code> Expression Instead</strong></h2>
<p>If you want to do something <strong>more custom</strong>, like filtering or transforming values, use a <code>for</code> loop:</p>
<pre><code class="lang-plaintext">output "instance_names" {
  value = [for inst in aws_instance.example : "instance-${inst.id}"]
}
</code></pre>
<p>This gives you a custom list like:</p>
<pre><code class="lang-plaintext">["instance-i-abc123", "instance-i-def456", ...]
</code></pre>
<h1 id="heading-summary"><strong>Summary</strong></h1>
<table>
<thead><tr><th>Feature</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>[*]</code> (splat)</td><td>Shortcut to extract a field from a list of resources</td></tr>
<tr><td>With <code>count</code></td><td>Directly use <code>resource_name[*].field</code></td></tr>
<tr><td>With <code>for_each</code></td><td>Use <code>values(resource_name)[*].field</code></td></tr>
<tr><td>Prefer <code>for</code> loop</td><td>When you need filtering or custom formatting</td></tr>
</tbody>
</table>
<h1 id="heading-final-tip"><strong>Final Tip</strong></h1>
<p>If you’re working with many dynamic resources, splat expressions make your Terraform code <strong>clean, readable, and easy to maintain</strong>.</p>
<p>They save time and reduce repetition — especially when you’re managing infrastructure at scale.</p>
]]></content:encoded></item><item><title><![CDATA[Terraform + Azure Storage Backend: The Right Way to Migrate State]]></title><description><![CDATA[While configuring a remote backend in Terraform using Azure Storage, I hit an error that might look familiar:
If you wish to attempt automatic migration of the state, use "terraform init -migrate-state".
If you wish to store the current configuration...]]></description><link>https://clouddecode.in/terraform-azure-storage-backend-the-right-way-to-migrate-state-498ac02cf500</link><guid isPermaLink="true">https://clouddecode.in/terraform-azure-storage-backend-the-right-way-to-migrate-state-498ac02cf500</guid><dc:creator><![CDATA[clouddecode]]></dc:creator><pubDate>Sun, 13 Apr 2025 17:17:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/FHnnjk1Yj7Y/upload/5f82e867037590abdf3eb94c6396499c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>While configuring a remote backend in Terraform using Azure Storage, I hit an error that might look familiar:</p>
<pre><code class="lang-plaintext">If you wish to attempt automatic migration of the state, use "terraform init -migrate-state".
If you wish to store the current configuration with no changes to the state, use "terraform init -reconfigure".
</code></pre>
<p>This happened because I updated the <code>backend</code> block in <code>main.tf</code> <strong>before</strong> moving the existing <code>terraform.tfstate</code> file to Azure storage.</p>
<p><strong>✅ Correct sequence to avoid this issue:</strong></p>
<ol>
<li><strong>Upload the existing state file manually</strong> to Azure Blob Storage:</li>
</ol>
<pre><code class="lang-plaintext">az storage blob upload \
  --account-name &lt;storage-account&gt; \
  --container-name &lt;container&gt; \
  --name &lt;key&gt;.tfstate \
  --file terraform.tfstate
</code></pre>
<p><strong>2. Then add the backend block</strong> in <code>main.tf</code>:</p>
<pre><code class="lang-plaintext">terraform {
  backend "azurerm" {
    resource_group_name  = "..."
    storage_account_name = "..."
    container_name       = "..."
    key                  = "prod.terraform.tfstate"
  }
}
</code></pre>
<p><strong>3. Run the reconfigure command</strong> to connect Terraform to the new backend (the state is already in the container, so no migration is needed):</p>
<p><code>terraform init -reconfigure</code></p>
<p><strong>4. Now you’re good to go</strong>:</p>
<p><code>terraform plan</code></p>
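<p>As the error message itself suggests, there is also an automatic path: keep the state file where it is, add the backend block first, and let Terraform copy it for you:</p>
<pre><code class="lang-plaintext"># Prompts before copying the local terraform.tfstate into the new azurerm backend
terraform init -migrate-state
</code></pre>
<p>The manual upload gives you a chance to inspect the state first, which is why I prefer the sequence above.</p>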
<p><strong>Tip</strong>: Always move your state before updating backend configs — it avoids migration prompts and keeps your infrastructure consistent.</p>
]]></content:encoded></item><item><title><![CDATA[Understanding Access Tokens vs. Refresh Tokens in Kubernetes Kubeconfig]]></title><description><![CDATA[🔹 Have you ever faced authentication failures in Kubernetes?🔹 Wondered why your kubectl commands suddenly stop working?🔹 Confused about how tokens in kubeconfig actually work?
If so, you’re not alone! Kubernetes authentication can be tricky, espec...]]></description><link>https://clouddecode.in/understanding-access-tokens-vs-refresh-tokens-in-kubernetes-kubeconfig-f787e2e3f160</link><guid isPermaLink="true">https://clouddecode.in/understanding-access-tokens-vs-refresh-tokens-in-kubernetes-kubeconfig-f787e2e3f160</guid><category><![CDATA[accesstoken]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[refreshtoken]]></category><dc:creator><![CDATA[clouddecode]]></dc:creator><pubDate>Sun, 23 Feb 2025 05:16:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/hpjSkU2UYSU/upload/990d50216d3eb64da13afbddf6d06adf.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>🔹 <strong>Have you ever faced authentication failures in Kubernetes?</strong><br />🔹 <strong>Wondered why your</strong> <code>kubectl</code> <strong>commands suddenly stop working?</strong><br />🔹 <strong>Confused about how tokens in</strong> <code>kubeconfig</code> <strong>actually work?</strong></p>
<p>If so, you’re not alone! <strong>Kubernetes authentication</strong> can be tricky, especially when dealing with tokens. One of the most common sources of confusion is the difference between an <strong>Access Token</strong> and a <strong>Refresh Token</strong> in a <code>kubeconfig</code> file.</p>
<p>This article will break down their roles, differences, and how they function in real-world Kubernetes authentication.</p>
<p><strong>🔹 What is an Access Token in Kubernetes?</strong></p>
<p>An <strong>Access Token</strong> is a credential that is <strong>passed with every request</strong> to the Kubernetes API server to prove the identity of the user. It is typically a <strong>JWT (JSON Web Token)</strong> issued by an <strong>Identity Provider (IdP)</strong> such as:</p>
<p>✅ <strong>Azure Active Directory (AAD) — for AKS</strong><br />✅ <strong>Google Cloud IAM — for GKE</strong><br />✅ <strong>AWS IAM/OIDC — for EKS</strong><br />✅ <strong>OpenID Connect (OIDC) providers like Keycloak</strong></p>
<p><strong>Key Characteristics of an Access Token:</strong></p>
<p>🔹 <strong>Short-lived</strong> — Typically expires in minutes or hours.<br />🔹 <strong>Used for API authentication</strong> — Sent in the request header for every <code>kubectl</code> command.<br />🔹 <strong>Stored in the</strong> <code>kubeconfig</code> <strong>file</strong> under <code>users[].user.token</code>.<br />🔹 <strong>Requires renewal</strong> once expired.</p>
<p><strong>Example: Access Token in Kubeconfig</strong></p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Config
users:
  - name: my-cluster-user
    user:
      token: eyJhbGciOiJIUzI1...
</code></pre>
<p>Here, the <code>token</code> is an <strong>Access Token</strong> that <code>kubectl</code> sends with every API request.</p>
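<p>Under the hood, every <code>kubectl</code> command is just an HTTPS request that carries the Access Token as a bearer credential. A rough sketch (the server address and CA file are placeholders):</p>
<pre><code class="lang-plaintext">curl --cacert ca.crt \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1..." \
  https://my-cluster.example.com:6443/api/v1/namespaces/default/pods
</code></pre>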
<h2 id="heading-how-access-tokens-work-in-kubernetes-authentication"><strong>How Access Tokens Work in Kubernetes Authentication</strong></h2>
<p>1️⃣ You log in using an authentication method (<code>kubectl login</code>, <code>az aks get-credentials</code>, etc.).<br />2️⃣ A <strong>short-lived Access Token</strong> is issued and stored in <code>kubeconfig</code>.<br />3️⃣ Every time you run a command (<code>kubectl get pods</code>), this token is sent to the Kubernetes API server.<br />4️⃣ Once the token <strong>expires</strong>, the request <strong>fails with a 401 Unauthorized error</strong>.</p>
<p>At this point, Kubernetes does <strong>not</strong> automatically renew the token — this is where a <strong>Refresh Token</strong> comes in.</p>
<h2 id="heading-what-is-a-refresh-token-in-kubernetes"><strong>🔹 What is a Refresh Token in Kubernetes?</strong></h2>
<p>A <strong>Refresh Token</strong> is a long-lived credential that is <strong>not used for direct authentication</strong> but can be exchanged for a new <strong>Access Token</strong> when the current one expires.</p>
<p><strong>Key Characteristics of a Refresh Token:</strong></p>
<p>🔹 <strong>Long-lived</strong> — Can last days, weeks, or longer.<br />🔹 <strong>Not sent with API requests</strong> — Only used for obtaining a new Access Token.<br />🔹 <strong>Not stored in</strong> <code>kubeconfig</code> – Instead, managed by external authentication mechanisms.<br />🔹 <strong>Used in OIDC-based authentication</strong> – Works with authentication plugins to renew tokens seamlessly.</p>
<h2 id="heading-how-refresh-tokens-work-in-kubernetes-authentication">How Refresh Tokens Work in Kubernetes Authentication</h2>
<p>1️⃣ When an <strong>Access Token expires</strong>, Kubernetes denies API requests.<br />2️⃣ If you are using an <strong>OIDC provider or authentication plugin</strong>, <code>kubectl</code> automatically requests a new Access Token using the <strong>Refresh Token</strong>.<br />3️⃣ The new <strong>Access Token</strong> replaces the expired one in <code>kubeconfig</code>, allowing commands to continue working.<br />4️⃣ If the <strong>Refresh Token itself expires</strong>, you must reauthenticate manually (e.g., using <code>az aks get-credentials</code> or <code>kubectl oidc login</code>).</p>
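<p>With OIDC-based authentication, both tokens can appear side by side in the <code>kubeconfig</code> under an <code>auth-provider</code> block. A simplified sketch (all values are placeholders; newer clusters typically delegate this to an exec-based credential plugin such as <code>kubelogin</code> for AKS):</p>
<pre><code class="lang-plaintext">users:
  - name: my-oidc-user
    user:
      auth-provider:
        name: oidc
        config:
          idp-issuer-url: https://login.example.com
          client-id: kubernetes
          id-token: eyJhbGciOiJSUzI1...       # short-lived Access Token
          refresh-token: eyJhbGciOiJSUzI1...  # long-lived Refresh Token
</code></pre>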
<h2 id="heading-key-differences-access-token-vs-refresh-token"><strong>Key Differences: Access Token vs. Refresh Token</strong></h2>
<table>
<thead><tr><th>Feature</th><th>Access Token</th><th>Refresh Token</th></tr></thead>
<tbody>
<tr><td><strong>Purpose</strong></td><td>Authenticates API requests</td><td>Used to get a new Access Token</td></tr>
<tr><td><strong>Lifetime</strong></td><td>Short-lived (minutes/hours)</td><td>Long-lived (days/weeks)</td></tr>
<tr><td><strong>Usage</strong></td><td>Sent with each <code>kubectl</code> API request</td><td>Used when the Access Token expires</td></tr>
<tr><td><strong>Storage in</strong> <code>kubeconfig</code></td><td>Stored under <code>users[].user.token</code></td><td>Typically not stored, managed externally</td></tr>
<tr><td><strong>Renewal</strong></td><td>Cannot renew itself, expires quickly</td><td>Used to get a fresh Access Token</td></tr>
<tr><td><strong>Security Risk</strong></td><td>If exposed, can be used for API access</td><td>If exposed, can be used to generate new tokens</td></tr>
</tbody>
</table>
<h2 id="heading-real-world-example-kubernetes-authentication-with-azure-kubernetes-service-aks"><strong>Real-World Example: Kubernetes Authentication with Azure Kubernetes Service (AKS)</strong></h2>
<p>If you use <strong>Azure Kubernetes Service (AKS)</strong>, you’ve likely run this command:</p>
<pre><code class="lang-plaintext">az aks get-credentials --resource-group my-rg --name my-cluster
</code></pre>
<p>Here’s what happens behind the scenes:</p>
<p>✅ <strong>Step 1:</strong> Azure AD issues an <strong>Access Token</strong> and stores it in <code>kubeconfig</code>.<br />✅ <strong>Step 2:</strong> You run <code>kubectl get pods</code>, and the <strong>Access Token</strong> is sent to the API server.<br />✅ <strong>Step 3:</strong> If the token expires, Azure CLI can <strong>automatically renew it</strong> using a <strong>Refresh Token</strong>.<br />✅ <strong>Step 4:</strong> If the Refresh Token also expires, you must re-run <code>az aks get-credentials</code> to reauthenticate.</p>
<p>This automatic renewal is handled by Azure’s <strong>authentication plugin</strong>, making the process seamless for developers.</p>
<h2 id="heading-why-does-this-matter"><strong>Why Does This Matter?</strong></h2>
<p>✅ <strong>Prevents authentication failures</strong> — Knowing how tokens work helps avoid downtime.<br />✅ <strong>Improves security</strong> — Refresh Tokens reduce the risk of long-lived access credentials being compromised.<br />✅ <strong>Enhances troubleshooting</strong> — If <code>kubectl</code> stops working, checking the token expiration can quickly diagnose the issue.<br />✅ <strong>Cloud-agnostic knowledge</strong> – The same concepts apply to AKS, GKE, EKS, and other Kubernetes clusters using OIDC authentication.</p>
<h2 id="heading-best-practices-for-managing-tokens-in-kubernetes"><strong>Best Practices for Managing Tokens in Kubernetes</strong></h2>
<p>🔹 <strong>Use short-lived Access Tokens</strong> — Avoid long-lived API tokens that pose security risks.<br />🔹 <strong>Enable automatic token renewal</strong> — Use an OIDC authentication plugin to refresh tokens seamlessly.<br />🔹 <strong>Monitor expiration times</strong> — If <code>kubectl</code> fails unexpectedly, check token validity.<br />🔹 <strong>Avoid storing sensitive tokens in scripts or hardcoded files</strong> – Use secrets management solutions.</p>
<h2 id="heading-final-thoughts"><strong>Final Thoughts</strong></h2>
<p>Understanding <strong>Access Tokens</strong> vs. <strong>Refresh Tokens</strong> is crucial for anyone managing Kubernetes clusters. By leveraging <strong>OIDC authentication</strong> and <strong>token refresh mechanisms</strong>, you can ensure a <strong>seamless and secure</strong> experience when working with <code>kubectl</code> and cloud-based Kubernetes services.</p>
]]></content:encoded></item><item><title><![CDATA[Struggling with Resource Limits in Kubernetes? Here’s What You Need to Know!]]></title><description><![CDATA[Kubernetes is a powerful container orchestration platform, but managing resources efficiently is key to ensuring fair resource distribution and avoiding performance bottlenecks. Two important resource management mechanisms in Kubernetes are LimitRang...]]></description><link>https://clouddecode.in/struggling-with-resource-limits-in-kubernetes-heres-what-you-need-to-know-b60e73087448</link><guid isPermaLink="true">https://clouddecode.in/struggling-with-resource-limits-in-kubernetes-heres-what-you-need-to-know-b60e73087448</guid><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[clouddecode]]></dc:creator><pubDate>Sun, 02 Feb 2025 06:24:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Oaqk7qqNh_c/upload/59c82d18bf81139bd98d58bed797f410.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Kubernetes is a powerful container orchestration platform, but managing resources efficiently is key to ensuring fair resource distribution and avoiding performance bottlenecks. Two important resource management mechanisms in Kubernetes are <strong>LimitRange</strong> and <strong>ResourceQuota</strong>. Understanding their differences and use cases is crucial for optimizing cluster performance.</p>
<h2 id="heading-what-is-limitrange"><strong>What is LimitRange?</strong></h2>
<p><strong>LimitRange</strong> is used to control the resource consumption of individual containers or pods within a namespace. It ensures that containers do not overconsume CPU, memory, or ephemeral storage, which could lead to instability in the cluster.</p>
<h3 id="heading-how-limitrange-works"><strong>How LimitRange Works</strong></h3>
<ul>
<li><p>Sets <strong>default</strong> CPU/memory requests and limits for containers.</p>
</li>
<li><p>Defines <strong>minimum</strong> and <strong>maximum</strong> resource constraints per container or pod.</p>
</li>
<li><p>Ensures that resource usage is balanced within the namespace.</p>
</li>
</ul>
<h3 id="heading-example-limitrange-yaml"><strong>Example LimitRange YAML</strong></h3>
<pre><code class="lang-plaintext">apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: my-namespace
spec:
  limits:
  - default:
      cpu: "500m"
      memory: "256Mi"
    defaultRequest:
      cpu: "250m"
      memory: "128Mi"
    type: Container
</code></pre>
<h3 id="heading-why-use-limitrange"><strong>Why Use LimitRange?</strong></h3>
<p>✅ Prevents pods from consuming excessive resources.<br />✅ Ensures fair resource distribution among workloads.<br />✅ Provides default resource requests and limits for containers.</p>
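<p>With the LimitRange above applied, a container that declares no resources gets the defaults injected at admission time. A minimal sketch (the <code>nginx</code> image is just an illustration); this pod would be admitted with 250m/128Mi requests and 500m/256Mi limits:</p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Pod
metadata:
  name: demo
  namespace: my-namespace
spec:
  containers:
    - name: app
      image: nginx   # no resources block: LimitRange defaults apply
</code></pre>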
<h3 id="heading-what-is-resourcequota"><strong>What is ResourceQuota?</strong></h3>
<p><strong>ResourceQuota</strong> is used to <strong>limit the total resource consumption</strong> within a namespace. It prevents any single team or application from monopolizing cluster resources, ensuring fair distribution.</p>
<h3 id="heading-how-resourcequota-works"><strong>How ResourceQuota Works</strong></h3>
<ul>
<li><p>Enforces <strong>global limits</strong> on CPU, memory, storage, and object counts (pods, services, PVCs, etc.) at the namespace level.</p>
</li>
<li><p>Helps administrators <strong>prevent over-provisioning</strong> of cluster resources.</p>
</li>
<li><p>Ensures <strong>multi-tenant clusters</strong> have controlled resource usage.</p>
</li>
</ul>
<h3 id="heading-example-resourcequota-yaml"><strong>Example ResourceQuota YAML</strong></h3>
<pre><code class="lang-yaml">apiVersion: v1
kind: ResourceQuota
metadata:
  name: namespace-quota
  namespace: my-namespace
spec:
  hard:
    pods: "10"
    requests.cpu: "2"
    requests.memory: "4Gi"
    limits.cpu: "4"
    limits.memory: "8Gi"
</code></pre>
<h3 id="heading-why-use-resourcequota"><strong>Why Use ResourceQuota?</strong></h3>
<p>✅ Prevents a single namespace from consuming all cluster resources.<br />✅ Ensures fair distribution among multiple applications.<br />✅ Helps administrators enforce resource policies.</p>
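<p>You can check how much of a quota a namespace has already consumed with <code>kubectl describe</code>. (Sketch: assumes the ResourceQuota above is saved as <code>resourcequota.yaml</code> and applied against a running cluster.)</p>
<pre><code class="lang-plaintext">kubectl apply -f resourcequota.yaml

# Shows each hard limit alongside current usage in the namespace
kubectl describe resourcequota namespace-quota -n my-namespace

# Requests that would push the namespace past a hard limit are
# rejected at admission time with an "exceeded quota" error, e.g.:
#   Error from server (Forbidden): ... exceeded quota: namespace-quota,
#   requested: pods=1, used: pods=10, limited: pods=10
</code></pre>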
<h2 id="heading-key-differences-between-limitrange-and-resourcequota"><strong>Key Differences Between LimitRange and ResourceQuota</strong></h2>
<table>
<thead>
<tr><th>Feature</th><th>LimitRange</th><th>ResourceQuota</th></tr>
</thead>
<tbody>
<tr><td><strong>Scope</strong></td><td>Pod/Container level</td><td>Namespace level</td></tr>
<tr><td><strong>Purpose</strong></td><td>Limits resources per container/pod</td><td>Limits total namespace resource usage</td></tr>
<tr><td><strong>Controls</strong></td><td>Default &amp; max CPU/memory for containers</td><td>Max CPU/memory, pods, services, PVCs, etc.</td></tr>
<tr><td><strong>Use Case</strong></td><td>Prevents a single pod from consuming all resources</td><td>Prevents a namespace from consuming all cluster resources</td></tr>
<tr><td><strong>Example</strong></td><td>Set default CPU/memory per pod</td><td>Limit total CPU/memory for namespace</td></tr>
</tbody>
</table>
<h2 id="heading-when-to-use-what"><strong>When to Use What?</strong></h2>
<ul>
<li><p><strong>Use</strong> <code>LimitRange</code> if you want to <strong>enforce per-container limits</strong> to avoid rogue workloads.</p>
</li>
<li><p><strong>Use</strong> <code>ResourceQuota</code> if you want to <strong>limit total resources per namespace</strong> to ensure fair distribution.</p>
</li>
</ul>
<p>💡 <strong>Pro Tip:</strong> You can use both together! Apply <code>ResourceQuota</code> to control overall namespace limits and <code>LimitRange</code> to ensure fair pod-level resource allocation.</p>
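<p>A sketch of that combined setup, reusing the two example manifests above in a single file: this matters in practice because once a namespace has a CPU/memory quota, pods that omit requests and limits are rejected at admission time unless a LimitRange supplies defaults for them.</p>
<pre><code class="lang-yaml"># namespace-policy.yaml — overall ceiling for the namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: namespace-quota
  namespace: my-namespace
spec:
  hard:
    requests.cpu: "2"
    requests.memory: "4Gi"
    limits.cpu: "4"
    limits.memory: "8Gi"
---
# Per-container defaults, so pods that omit requests/limits
# still get values and count against the quota above
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: my-namespace
spec:
  limits:
  - default:
      cpu: "500m"
      memory: "256Mi"
    defaultRequest:
      cpu: "250m"
      memory: "128Mi"
    type: Container
</code></pre>
<p>Applying this one file with <code>kubectl apply -f namespace-policy.yaml</code> gives you both the namespace-wide ceiling and sane per-container defaults in a single step.</p>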
<h2 id="heading-final-thoughts"><strong>Final Thoughts</strong></h2>
<p>Efficient resource allocation is crucial for maintaining a stable Kubernetes environment. By implementing <strong>LimitRange</strong> and <strong>ResourceQuota</strong> effectively, you can prevent resource overuse, avoid unexpected application crashes, and ensure a well-balanced cluster.</p>
<p><strong>How do you manage resource limits in your Kubernetes clusters? Share your thoughts in the comments!</strong></p>
]]></content:encoded></item></channel></rss>