<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Anup's Blog]]></title><description><![CDATA[Anup's Blog]]></description><link>https://blog.anupkafle.com.np</link><image><url>https://cdn.hashnode.com/uploads/logos/6338fc85e594fa4ba3b90a69/d555b450-0c79-4296-9930-df2f9727c98d.png</url><title>Anup&apos;s Blog</title><link>https://blog.anupkafle.com.np</link></image><generator>RSS for Node</generator><lastBuildDate>Sat, 18 Apr 2026 23:19:57 GMT</lastBuildDate><atom:link href="https://blog.anupkafle.com.np/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[IAM Permissions Not Working in EKS? Here's How to Fix It (With IRSA)
]]></title><description><![CDATA[You have a working Amazon EKS cluster and deployed pods. Now, your pods need secure, fine-grained access to AWS services like S3, DynamoDB, or Secrets Manager, but:

You don't want to hardcode AWS cre]]></description><link>https://blog.anupkafle.com.np/iam-permissions-not-working-in-eks-here-s-how-to-fix-it-with-irsa</link><guid isPermaLink="true">https://blog.anupkafle.com.np/iam-permissions-not-working-in-eks-here-s-how-to-fix-it-with-irsa</guid><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Tue, 01 Jul 2025 21:55:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/6166737e-9c11-4ffb-a8b7-8cd6ba9de627.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You have a working Amazon EKS cluster and deployed pods. Now, your pods need secure, fine-grained access to AWS services like S3, DynamoDB, or Secrets Manager, but:</p>
<ul>
<li><p>You don't want to hardcode AWS credentials in pods.</p>
</li>
<li><p>You want to follow AWS best practices using IAM roles for Kubernetes service accounts (IRSA).</p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/b243a616-3c5b-430d-a1e3-eebb275eeddf.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Solution: Use IRSA (IAM Roles for Service Accounts):</strong></p>
<p>With IRSA, each pod uses a Kubernetes service account that is linked to an IAM role, which defines what that pod can access in AWS.</p>
<p>This blog walks through enabling IRSA for your existing pods.</p>
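<p>Behind the scenes, IRSA works because the IAM role trusts your cluster's OIDC provider. The trust policy that eksctl generates for you in Step 3 looks roughly like this (all angle-bracket values are placeholders):</p>
<pre><code class="language-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::&lt;account-id&gt;:oidc-provider/oidc.eks.&lt;region&gt;.amazonaws.com/id/&lt;OIDC_ID&gt;"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.&lt;region&gt;.amazonaws.com/id/&lt;OIDC_ID&gt;:sub": "system:serviceaccount:&lt;namespace&gt;:&lt;serviceaccount-name&gt;"
        }
      }
    }
  ]
}
</code></pre>
<p>The sub condition is what pins the role to exactly one service account in one namespace.</p>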
<p><strong>Assumptions</strong></p>
<ul>
<li><p>You have an EKS cluster already running </p>
</li>
<li><p>Your pods are already deployed </p>
</li>
<li><p>You're using AWS CLI, kubectl, and eksctl</p>
</li>
<li><p>You have access to create IAM roles and policies</p>
</li>
</ul>
<p><strong>Step-by-Step Guide</strong></p>
<p><strong>Step 1: Associate an IAM OIDC Provider with Your EKS Cluster</strong></p>
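<p>First, check whether your cluster already exposes an OIDC issuer (every recent EKS cluster has one, but the matching IAM provider is not created automatically):</p>
<pre><code class="language-shell">aws eks describe-cluster \
  --name &lt;cluster-name&gt; \
  --region &lt;aws-region&gt; \
  --query "cluster.identity.oidc.issuer" \
  --output text
</code></pre>
<p>If no IAM OIDC provider is associated with that issuer yet, associate one:</p>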
<pre><code class="language-shell">eksctl utils associate-iam-oidc-provider \
  --region &lt;aws-region&gt; \
  --cluster &lt;cluster-name&gt; \
  --approve
</code></pre>
<p> <strong>Step 2: Create an IAM Policy with Required Permissions</strong></p>
<p>For example, for S3 read access:</p>
<ul>
<li>Create s3-read-policy.json:</li>
</ul>
<pre><code class="language-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["&lt;service-action&gt;"],
      "Resource": ["&lt;arn&gt;"]
    }
  ]
}
</code></pre>
<p>And here is a concrete version of that policy for a bucket named <code>rambunct-app</code>:</p>
<pre><code class="language-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::rambunct-app"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::rambunct-app/*"]
    }
  ]
}
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/ec32c347-821e-408d-837a-db6b98cf6937.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Create a policy</strong></li>
</ul>
<pre><code class="language-shell">aws iam create-policy \
  --policy-name &lt;your-policy-name&gt; \
  --policy-document file://s3-read-policy.json
</code></pre>
<p><strong>Step 3: Create a Kubernetes Service Account Linked to IAM Role</strong></p>
<p>This account will be used by your pod.</p>
<pre><code class="language-shell">eksctl create iamserviceaccount \
  --name &lt;serviceaccount-name&gt; \
  --namespace &lt;namespace&gt; \
  --cluster &lt;cluster-name&gt; \
  --region &lt;region&gt; \
  --attach-policy-arn arn:aws:iam::&lt;account-id&gt;:policy/&lt;policy-name&gt; \
  --approve \
  --override-existing-serviceaccounts
</code></pre>
<p><strong>Step 4: Patch the Pod (or Deployment) to Use the New Service Account</strong></p>
<p><strong>If you’re using plain pods (not Deployments):</strong></p>
<pre><code class="language-shell">kubectl delete pod &lt;your-pod&gt;
</code></pre>
<p><strong>Then update your YAML:</strong></p>
<pre><code class="language-yaml">spec:
  serviceAccountName: &lt;your-serviceaccount&gt;
</code></pre>
<p><strong>Apply it:</strong></p>
<pre><code class="language-shell">kubectl apply -f &lt;your-pod&gt;.yaml
</code></pre>
<p> <strong>Step 5: Verify from Pod Logs</strong></p>
<pre><code class="language-shell">kubectl logs &lt;your-pod&gt;
</code></pre>
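<p>Inside the pod, the EKS pod identity webhook injects two environment variables that the AWS SDKs use to assume the role. A quick sanity check you can run with the pod's Python runtime (a minimal sketch, not part of the original setup):</p>
<pre><code class="language-python">import os

def irsa_env():
    """Return the env vars the EKS pod identity webhook injects for IRSA.

    AWS_ROLE_ARN points at the IAM role; AWS_WEB_IDENTITY_TOKEN_FILE is the
    projected service account token the SDK exchanges via STS.
    """
    return {
        "role_arn": os.environ.get("AWS_ROLE_ARN"),
        "token_file": os.environ.get("AWS_WEB_IDENTITY_TOKEN_FILE"),
    }

if __name__ == "__main__":
    for key, value in irsa_env().items():
        print(f"{key}: {value or 'MISSING - is serviceAccountName set?'}")
</code></pre>
<p>If either value prints as missing, the pod is not using the annotated service account.</p>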
<h3><strong>Conclusion</strong></h3>
<p>By using IAM Roles for Service Accounts (IRSA), you can give your existing pods secure access to AWS services without compromising credentials. This is a production-grade method to connect workloads running in EKS with services like S3, Secrets Manager, and DynamoDB.</p>
<p>If your app needs AWS access, don’t use long-lived keys; use IRSA instead. It's more secure, scalable, and AWS-native.</p>
]]></content:encoded></item><item><title><![CDATA[ Multi-Tenancy on AWS EKS: Separating Teams with Namespaces and IAM]]></title><description><![CDATA[As your team and projects grow, managing a single EKS cluster can get complicated. You'll likely have different teams like a "Backend Team" and a "Frontend Team" all needing to deploy their applicatio]]></description><link>https://blog.anupkafle.com.np/multi-tenancy-on-aws-eks-separating-teams-with-namespaces-and-iam</link><guid isPermaLink="true">https://blog.anupkafle.com.np/multi-tenancy-on-aws-eks-separating-teams-with-namespaces-and-iam</guid><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Tue, 06 May 2025 08:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/ca658d99-2fe4-4a46-a742-473fe4ae9cd5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As your team and projects grow, managing a single EKS cluster can get complicated. You'll likely have different teams like a "<strong>Backend Team</strong>" and a "<strong>Frontend Team</strong>" all needing to deploy their applications. If everyone deploys to the same place, things can get messy. Resources can be over-consumed, security becomes a concern, and it's hard to tell who owns what.</p>
<p>The solution to this common problem is multi-tenancy. Multi-tenancy is the practice of having multiple tenants (in this case, teams or applications) share the same EKS cluster while being logically isolated from each other. This is a highly efficient and cost-effective way to manage your Kubernetes infrastructure.</p>
<p>This blog post will walk you through a simple yet powerful way to achieve multi-tenancy on AWS EKS using two fundamental concepts: Kubernetes Namespaces and AWS IAM for EKS. By the end, you will have a practical setup that provides clear separation and security for different teams.</p>
<h3><strong>The Problem: A Single, Shared Cluster</strong></h3>
<p>Imagine a scenario where both the Backend Team and Frontend Team deploy their applications to the default namespace.</p>
<ul>
<li><p><strong>Chaos</strong>: Deployments might accidentally have the same names, leading to conflicts.</p>
</li>
<li><p><strong>Resource Hogs</strong>: The Frontend Team's application could suddenly get a lot of traffic and use up all the cluster's CPU, making the Backend Team's services slow or unresponsive.</p>
</li>
<li><p><strong>Security Risks</strong>: An engineer from the Backend Team might have permissions to accidentally delete a critical service belonging to the Frontend Team.</p>
</li>
<li><p><strong>No Ownership</strong>: When something goes wrong, it's difficult to quickly figure out which team is responsible.</p>
</li>
</ul>
<h3><strong>The Solution: Namespaces + IAM</strong></h3>
<p>Our solution is to create a dedicated namespace for each team. A namespace is a virtual partition inside a Kubernetes cluster. It gives you a way to divide cluster resources and provide a scope for names. But just creating a namespace isn't enough; we need to enforce who can access what. This is where AWS IAM comes in.</p>
<p>AWS EKS has a powerful feature that lets you map AWS IAM roles directly to Kubernetes RBAC (Role-Based Access Control) permissions. This allows you to say, "The IAM role for the Backend Team's developers can only access the backend-team namespace, and nothing else."</p>
<p>Here's the plan:</p>
<ul>
<li><p><strong>Create an EKS Cluster:</strong> We'll start with a standard EKS cluster.</p>
</li>
<li><p><strong>Create Namespaces:</strong> We'll create two namespaces, backend-team and frontend-team.</p>
</li>
<li><p><strong>Create IAM Roles:</strong> We'll set up two IAM roles, BackendTeamDeveloper and FrontendTeamDeveloper.</p>
</li>
<li><p><strong>Map IAM to RBAC:</strong> We'll use EKS's built-in access management to grant each IAM role specific permissions for their respective namespaces.</p>
</li>
<li><p><strong>Deploy and Demo:</strong> We'll deploy a simple application for each team and demonstrate how they are isolated.</p>
</li>
</ul>
<h3><strong>Practical Demo: Step-by-Step</strong></h3>
<p>This demo assumes you have the AWS CLI and kubectl configured.</p>
<p><strong>Step 1: Create an EKS Cluster</strong></p>
<pre><code class="language-yaml">apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: multi-tenancy-demo
  region: us-east-1
  version: "1.28"

managedNodeGroups:
- name: standard-nodes
  instanceType: t3.medium
  desiredCapacity: 2
</code></pre>
<p>Now, create the cluster:</p>
<pre><code class="language-shell">eksctl create cluster -f cluster.yaml
</code></pre>
<p><strong>Step 2: Create Namespaces</strong></p>
<p>Once your cluster is active, let's create a dedicated namespace for each team.</p>
<pre><code class="language-shell">kubectl create namespace backend-team
kubectl create namespace frontend-team
</code></pre>
<p>You can verify the namespaces are created with:</p>
<pre><code class="language-shell">kubectl get namespaces
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/7111a968-d03c-487e-ad46-09745f6721ea.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Step 3: Create IAM Roles</strong></p>
<p>In the AWS Console, navigate to the IAM service and create two IAM roles:</p>
<ul>
<li><p>BackendTeamDeveloper</p>
</li>
<li><p>FrontendTeamDeveloper</p>
</li>
</ul>
<p>These roles are what your team members will assume to interact with the cluster. For this demo, you can leave the roles without any specific permissions initially. The EKS access management will handle the permissions for the cluster itself.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/d7198816-3112-4906-affe-30fa50826f4b.png" alt="" style="display:block;margin:0 auto" />

<h4><strong>Step 4: Map IAM Roles to Kubernetes Roles and RoleBindings</strong></h4>
<p>This is the most critical step for security. We'll use EKS's Access Management feature to map our IAM roles to specific permissions within the cluster.</p>
<p>You need to use the AWS CLI to create an Access Entry and Access Policy for each team. Replace 123456789 with your AWS account ID.</p>
<pre><code class="language-shell"># For the Backend Team
eksctl create accessentry --cluster multi-tenancy-demo \
  --principal-arn arn:aws:iam::123456789:role/BackendTeamDeveloper \
  --type STANDARD

# For the Frontend Team
eksctl create accessentry --cluster multi-tenancy-demo \
  --principal-arn arn:aws:iam::123456789:role/FrontendTeamDeveloper \
  --type STANDARD
</code></pre>
<p>Now, we'll create the RBAC resources. Save the following two manifests as backend-team-rbac.yaml and frontend-team-rbac.yaml:</p>
<pre><code class="language-yaml"># This Role defines permissions for the backend team within their namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: backend-team-developer-role
  namespace: backend-team
rules:
- apiGroups: ["", "apps", "extensions"] 
  resources: ["pods", "deployments", "services", "configmaps"] 
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
# This RoleBinding links the IAM principal to the Role
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: backend-team-developer-rolebinding
  namespace: backend-team
subjects:
- kind: User
  name: arn:aws:iam::123456789:role/BackendTeamDeveloper # This must match the principal ARN
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: backend-team-developer-role
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<pre><code class="language-yaml"># This Role defines permissions for the frontend team within their namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: frontend-team-developer-role
  namespace: frontend-team
rules:
- apiGroups: ["", "apps", "extensions"]
  resources: ["pods", "deployments", "services", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# This RoleBinding links the IAM principal to the Role
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: frontend-team-developer-rolebinding
  namespace: frontend-team
subjects:
- kind: User
  name: arn:aws:iam::123456789:role/FrontendTeamDeveloper # This must match the principal ARN
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: frontend-team-developer-role
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>Apply these files to your cluster:</p>
<pre><code class="language-shell">kubectl apply -f backend-team-rbac.yaml
kubectl apply -f frontend-team-rbac.yaml
</code></pre>
<h3>Step 5: Demoing the Isolation</h3>
<p>Let's see how this works in practice. First, assume the BackendTeamDeveloper IAM role using the AWS CLI.</p>
<pre><code class="language-shell">aws sts assume-role \
  --role-arn arn:aws:iam::123456789:role/BackendTeamDeveloper \
  --role-session-name BackendTeamSession
</code></pre>
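<p>The assume-role call returns a <code>Credentials</code> block. One way to use it is to export its three values (shown here as placeholders) so subsequent CLI and kubectl calls run as the role:</p>
<pre><code class="language-shell">export AWS_ACCESS_KEY_ID=&lt;AccessKeyId&gt;
export AWS_SECRET_ACCESS_KEY=&lt;SecretAccessKey&gt;
export AWS_SESSION_TOKEN=&lt;SessionToken&gt;
</code></pre>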
<p>Use the returned credentials to configure your AWS CLI session, then verify your identity:</p>
<pre><code class="language-shell">aws sts get-caller-identity
</code></pre>
<p>Now, try to deploy an Nginx server into the backend-team namespace.</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-app
  namespace: backend-team
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-app
  template:
    metadata:
      labels:
        app: backend-app
    spec:
      containers:
      - name: nginx
        image: nginx:latest
</code></pre>
<p>Deploy the application:</p>
<pre><code class="language-shell">kubectl apply -f backend-app.yaml
</code></pre>
<p>This should work perfectly. The pod will be created in the backend-team namespace.</p>
<p>Now, with the same IAM role (BackendTeamDeveloper), try to list pods in the frontend-team namespace:</p>
<pre><code class="language-plaintext">$ kubectl get pods -n frontend-team
Error from server (Forbidden): pods is forbidden: User "arn:aws:sts::123456789:assumed-role/BackendTeamDeveloper/..." cannot list resource "pods" in API group "" in the namespace "frontend-team"
</code></pre>
<p>This Forbidden error confirms our security model is working: the Backend Team's role has no visibility into the Frontend Team's namespace.</p>
<h3><strong>Conclusion</strong></h3>
<p>This hands-on guide confirms that secure multi-tenancy on AWS EKS is built on a two-step process: using eksctl to map IAM roles to Kubernetes identities, then using Kubernetes RBAC to enforce granular permissions. The Forbidden errors we encountered were the ultimate proof of a successful setup, confirming that our security model is fully functional and providing a robust foundation for shared EKS clusters.</p>
]]></content:encoded></item><item><title><![CDATA[VPC-Connected AWS Lambdas Are Slower? Here’s How to Fix It]]></title><description><![CDATA[Serverless computing with AWS Lambda is one of the most powerful ways to run code in the cloud without managing servers. But when a Lambda function is placed inside a VPC (Virtual Private Cloud) to co]]></description><link>https://blog.anupkafle.com.np/vpc-connected-aws-lambdas-are-slower-here-s-how-to-fixed-it</link><guid isPermaLink="true">https://blog.anupkafle.com.np/vpc-connected-aws-lambdas-are-slower-here-s-how-to-fixed-it</guid><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Thu, 27 Mar 2025 09:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/a6d9384c-467c-483e-9a37-ddbbf1d3e14f.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Serverless computing with AWS Lambda is one of the most powerful ways to run code in the cloud without managing servers. But when a Lambda function is placed inside a VPC (Virtual Private Cloud) to connect to services like Amazon RDS, ElastiCache, or private APIs, many developers notice one frustrating change:</p>
<blockquote>
<p>⚠️ Cold starts become slower, historically increasing from ~100 ms to nearly 1 second.</p>
</blockquote>
<p>This slowdown has long been considered one of the trade-offs of using Lambda with VPCs. But how bad is it today? Does AWS still suffer from large cold start delays inside a VPC?</p>
<p>To find out, we ran a real-world experiment comparing cold starts of the same Lambda function inside a VPC and outside a VPC, measured execution times, and explored why these differences happen and how to fix them.</p>
<h3>Why Are Lambda Cold Starts Slower in a VPC?</h3>
<p>A “cold start” occurs when AWS spins up a new execution environment for your Lambda function. This usually happens when the function hasn’t been invoked for a while or when scaling up to handle more requests.</p>
<p>When a Lambda function runs outside a VPC, AWS handles networking internally, so the container can start almost immediately.</p>
<p>When it’s placed inside a VPC, however, Lambda must first:</p>
<ul>
<li><p>Create and attach an Elastic Network Interface (ENI) to your VPC subnet</p>
</li>
<li><p>Assign private IP addresses</p>
</li>
<li><p>Apply security groups</p>
</li>
</ul>
<p>This ENI setup adds latency to the cold start. Historically, this could add 600–1200 ms to initialization.</p>
<p>AWS has since improved this process with Hyperplane ENIs, which reuse network interfaces more efficiently, but the only way to know the real impact is to measure it yourself.</p>
<h3>Step 1 : Writing a Sample Lambda Function</h3>
<p>Let’s create a simple Python Lambda function to measure cold start time and container reuse.</p>
<pre><code class="language-python">import time
import datetime

# This runs once when the container is initialized (cold start)
INIT_TIME = time.time()

def lambda_handler(event, context):
    start = time.time()
    # Measure how long the container has been alive
    uptime_since_init = start - INIT_TIME

    # Simulate a small workload
    time.sleep(0.1)

    end = time.time()
    execution_duration = end - start

    return {
        "statusCode": 200,
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "execution_duration": f"{execution_duration:.4f} seconds",
        "uptime_since_init": f"{uptime_since_init:.4f} seconds",
        "message": "Hello from Lambda!"
    }
</code></pre>
<h3>What the Code Does</h3>
<ul>
<li><p><strong>INIT_TIME</strong> is set once when the Lambda container starts; this is how we detect cold starts.</p>
</li>
<li><p><strong>uptime_since_init</strong> tells us how long the container has been alive:</p>
<ul>
<li><p>Near 0 → cold start</p>
</li>
<li><p>Large number → warm start</p>
</li>
</ul>
</li>
<li><p><strong>execution_duration</strong> measures how long the function itself takes to execute.</p>
</li>
<li><p>We also add a small time.sleep(0.1) to simulate a minimal workload.</p>
</li>
</ul>
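<p>To make the cold/warm distinction concrete, the classification we apply when reading the results is essentially this (illustrative; the 1-second threshold is an assumption that comfortably separates the two cases measured below):</p>
<pre><code class="language-python">def classify_start(uptime_since_init, threshold=1.0):
    """Classify an invocation from the container's age at handler entry.

    A freshly initialized container reports a near-zero uptime (cold start);
    a reused one has been alive for many seconds (warm start).
    """
    return "cold" if uptime_since_init &lt; threshold else "warm"
</code></pre>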
<h3>Step 2 : Deploy Two Versions of the Lambda</h3>
<p>To compare performance:</p>
<ol>
<li><p>Lambda A : Outside VPC: Create a Lambda and leave the VPC setting as “No VPC.”</p>
</li>
<li><p>Lambda B : Inside VPC: Create another Lambda in a private subnet of a VPC.</p>
</li>
</ol>
<p>Both functions used 128 MB memory and the same code above.</p>
<h3>Step 3 : Invoke Both Functions 10 Times</h3>
<p>Invoke each function 10 times at 30-second intervals. We focus on the first invocation (cold start) and subsequent ones (warm starts).</p>
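<p>For example, a small loop like this (the function name is a placeholder) invokes the function and prints each response:</p>
<pre><code class="language-shell">for i in $(seq 1 10); do
  aws lambda invoke --function-name &lt;function-name&gt; /tmp/out.json
  cat /tmp/out.json
  sleep 30
done
</code></pre>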
<p><strong>Results: Outside VPC</strong></p>
<p>First invocation (cold start):</p>
<pre><code class="language-plaintext">Init Duration: 114.98 ms
execution_duration: ~0.1001 seconds
uptime_since_init: ~0.0058 seconds
</code></pre>
<p>Subsequent invocations (warm):</p>
<pre><code class="language-plaintext">Init Duration: — (not shown)
execution_duration: ~0.1001 seconds
uptime_since_init: increasing
</code></pre>
<p>Outside a VPC, cold starts were about 115 ms, and warm invocations stayed around 102 ms consistently.</p>
<p><strong>Results: Inside VPC</strong></p>
<p>First invocation (cold start):</p>
<pre><code class="language-plaintext">Init Duration: 820 ms
execution_duration: ~0.100 s
uptime_since_init: ~0.006 s
</code></pre>
<p>Subsequent invocations (warm):</p>
<pre><code class="language-plaintext">Init Duration: — (not shown)
execution_duration: ~0.100 s
uptime_since_init: increasing
</code></pre>
<p>Inside a VPC, cold starts were about 820 ms, and warm invocations stayed around 102 ms consistently.</p>
<h3>Side-by-Side Comparison</h3>
<table>
<thead>
<tr>
<th>Environment</th>
<th>Cold Start (Init Duration)</th>
<th>Warm Execution Time</th>
</tr>
</thead>
<tbody><tr>
<td>Outside VPC</td>
<td>~115 ms</td>
<td>~102 ms</td>
</tr>
<tr>
<td>Inside VPC</td>
<td>~820 ms</td>
<td>~102 ms</td>
</tr>
</tbody></table>
<h3>Observation</h3>
<p>Lambda functions outside a VPC start up nearly 7× faster during cold starts. Once warm, both perform about the same, but that cold start latency can significantly impact real-world applications.</p>
<h3>Why the Gap Exists</h3>
<p>The big gap is almost entirely due to ENI setup inside a VPC. AWS must create and attach a network interface before the function can run. That adds several hundred milliseconds of delay, especially if the Lambda has been idle or deployed to a new subnet.</p>
<p>Even with AWS’s Hyperplane ENIs and reuse improvements, ENI creation still happens under certain conditions (e.g. first invocation, long idle times), and you’ll feel that delay.</p>
<h3>Step 4 : Reducing Cold Start Latency Even More</h3>
<p>If your Lambda must run inside a VPC (for example, to reach a database), here’s how to reduce the impact:</p>
<ol>
<li>Enable Provisioned Concurrency</li>
</ol>
<p>Provisioned concurrency keeps execution environments warm and ready:</p>
<pre><code class="language-shell">aws lambda put-provisioned-concurrency-config \
  --function-name MyVpcLambda \
  --qualifier prod \
  --provisioned-concurrent-executions 5
</code></pre>
<p>This eliminates most cold start delays, even inside a VPC.</p>
<p>2. Use VPC Endpoints</p>
<p>If your Lambda calls other AWS services (e.g., S3, DynamoDB), create VPC endpoints to avoid NAT Gateway latency.</p>
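<p>For example, a gateway endpoint for S3 (the VPC, region, and route table IDs are placeholders) can be created with:</p>
<pre><code class="language-shell">aws ec2 create-vpc-endpoint \
  --vpc-id &lt;vpc-id&gt; \
  --service-name com.amazonaws.&lt;region&gt;.s3 \
  --route-table-ids &lt;route-table-id&gt;
</code></pre>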
<p>3. Increase Memory Size</p>
<p>Higher memory allocation gives your Lambda more CPU power, reducing cold start times. Even increasing from 128 MB → 512 MB can cut cold start time by 30–40%.</p>
<p><strong>Key Takeaways</strong></p>
<ul>
<li><p>Cold starts happen when AWS spins up a new container for your Lambda.</p>
</li>
<li><p>Lambdas outside a VPC are significantly faster (~100–150 ms) because they don’t require ENI setup.</p>
</li>
<li><p>Lambdas inside a VPC can be 5–8× slower (~600–1200 ms) during cold starts due to ENI creation.</p>
</li>
<li><p>Improvements like Hyperplane ENIs help, but they don’t eliminate the problem.</p>
</li>
<li><p>Provisioned Concurrency, VPC Endpoints, and memory tuning can help reduce latency inside a VPC.</p>
</li>
</ul>
<h3>Conclusion</h3>
<p>If low latency is critical, keeping Lambda functions outside a VPC is the best choice. They start faster, respond more quickly, and avoid the overhead of ENI setup. Running Lambda inside a VPC is necessary only when connecting to private resources but you should expect and plan for slower cold starts in that scenario. Even with AWS’s improvements, ENI creation can still add hundreds of milliseconds to startup time.</p>
]]></content:encoded></item><item><title><![CDATA[Deep Dive & Step-By-Step: Deploying a Flask App to Amazon ECS with GitHub Actions & OpenID Connect]]></title><description><![CDATA[Modern developers want fast CI/CD but without storing long-lived AWS keys. By combining Amazon ECS (Fargate), Amazon ECR, GitHub Actions, and OpenID Connect (OIDC) you can deploy securely and automatic]]></description><link>https://blog.anupkafle.com.np/deep-dive-step-by-step-deploying-a-flask-app-to-amazon-ecs-with-github-actions-openid-connect</link><guid isPermaLink="true">https://blog.anupkafle.com.np/deep-dive-step-by-step-deploying-a-flask-app-to-amazon-ecs-with-github-actions-openid-connect</guid><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Mon, 03 Feb 2025 09:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/6944beb1-b7f2-4784-a2b0-06eb2d6dc60d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Modern developers want fast CI/CD but without storing long-lived AWS keys. By combining Amazon ECS (Fargate), Amazon ECR, GitHub Actions, and OpenID Connect (OIDC), you can deploy securely and automatically.</p>
<p>This blog explains how to deploy a containerized app to ECS with GitHub Actions CI/CD and why OIDC is the modern, secure way to let your pipeline assume AWS IAM roles.</p>
<h2><strong>System Architecture</strong></h2>
<img src="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/5ced0650-5853-4a4a-8830-5fd8a1585592.png" alt="" style="display:block;margin:0 auto" />

<h3><strong>Step 1 : Build and Containerize the App</strong></h3>
<p><strong>Why</strong>: ECS runs containers; your app must be packaged into one. So, let's create a minimal Flask app.</p>
<pre><code class="language-python">#app.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "🚀 Hello from my-app on Amazon ECS using Github actions auto deployment!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
</code></pre>
<pre><code class="language-plaintext">#requirements.txt
flask
</code></pre>
<pre><code class="language-dockerfile">#Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 80
CMD ["python", "app.py"]
</code></pre>
<h2><strong>Step 2 : Set Up AWS Infrastructure</strong></h2>
<p>We need a place to store images (ECR), a cluster to run them (ECS), and a network.</p>
<h3>2.1 Create ECR Repository</h3>
<pre><code class="language-shell">aws ecr create-repository --repository-name my-app
</code></pre>
<p>This returns a repository URI like:</p>
<p><code>XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/my-app</code></p>
<p>ECR is a private Docker registry. ECS will pull images from here.</p>
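<p>Before CI/CD is wired up, you can build and push an image manually to confirm the repository works (account ID is a placeholder):</p>
<pre><code class="language-shell">aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin &lt;ACCOUNT_ID&gt;.dkr.ecr.us-east-1.amazonaws.com
docker build -t &lt;ACCOUNT_ID&gt;.dkr.ecr.us-east-1.amazonaws.com/my-app:latest .
docker push &lt;ACCOUNT_ID&gt;.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
</code></pre>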
<h3>2.2 Create ECS Cluster</h3>
<pre><code class="language-shell">aws ecs create-cluster --cluster-name my-app-cluster --region &lt;aws-region&gt;
</code></pre>
<p>A cluster is just a logical group of capacity (in Fargate you don’t manage servers).</p>
<h3><strong>2.3 Define Task</strong></h3>
<p>Create ecs-task-def.json:</p>
<pre><code class="language-json">{
  "family": "my-app-task",
  "executionRoleArn": "arn:aws:iam::&lt;ACCOUNT_ID&gt;:role/ecsTaskExecutionRole",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "&lt;ACCOUNT_ID&gt;.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
      "cpu": 256,
      "memory": 512,
      "essential": true,
      "portMappings": [{ "containerPort": 80 }]
    }
  ],
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512"
}
</code></pre>
<p>Register:</p>
<pre><code class="language-shell">aws ecs register-task-definition --cli-input-json file://ecs-task-def.json
</code></pre>
<p>The <strong>execution role</strong> lets the task pull from ECR and send logs. The image is a placeholder; GitHub Actions will later push and update it.</p>
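<p>If <code>ecsTaskExecutionRole</code> doesn't exist in your account yet, you can create it and attach the AWS-managed execution policy:</p>
<pre><code class="language-shell">aws iam create-role --role-name ecsTaskExecutionRole \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ecs-tasks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name ecsTaskExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
</code></pre>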
<h3><strong>2.4 Networking (Default VPC)</strong></h3>
<pre><code class="language-shell"># Get default VPC
aws ec2 describe-vpcs --filters "Name=isDefault,Values=true" --query "Vpcs[0].VpcId" --output text
# Get subnets in that VPC
aws ec2 describe-subnets --filters "Name=vpc-id,Values=&lt;VPC_ID&gt;" --query "Subnets[*].SubnetId" --output text
# Get default SG
aws ec2 describe-security-groups --filters "Name=vpc-id,Values=&lt;VPC_ID&gt;" --query "SecurityGroups[?GroupName=='default'].GroupId" --output text
# Open HTTP if needed
aws ec2 authorize-security-group-ingress --group-id &lt;SG_ID&gt; --protocol tcp --port 80 --cidr 0.0.0.0/0
</code></pre>
<h3><strong>2.5 Create Service</strong></h3>
<pre><code class="language-shell">aws ecs create-service \
  --cluster my-app-cluster \
  --service-name my-app-service \
  --task-definition my-app-task \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaa,subnet-bbb],securityGroups=[sg-xxx],assignPublicIp=ENABLED}"
</code></pre>
<p>The <strong>Service</strong> keeps one copy running and updates it whenever a new task definition revision arrives.</p>
<h2><strong>Step 3 : Configure OpenID Connect (OIDC)</strong></h2>
<p><strong>Why:</strong> avoid putting permanent AWS keys in GitHub.</p>
<h3><strong>3.1 Add GitHub as OIDC Identity Provider</strong></h3>
<ul>
<li><p>In IAM → Identity providers → Add provider</p>
</li>
<li><p>Type: <strong>OpenID Connect</strong></p>
</li>
<li><p>URL: <a href="https://token.actions.githubusercontent.com">https://token.actions.githubusercontent.com</a></p>
</li>
<li><p>Audience: <a href="http://sts.amazonaws.com">sts.amazonaws.com</a></p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/8d3db4c7-8c54-47c2-a2c9-7bf8a9dca5f4.png" alt="" style="display:block;margin:0 auto" />

<h3><strong>3.2 Create an IAM Role for GitHub Actions</strong></h3>
<p>Create an IAM role (the workflow below assumes it is named GitHub_Actions_Role) and attach a <strong>trust policy</strong> like:</p>
<pre><code class="language-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::&lt;ACCOUNT_ID&gt;:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:your-username/ecsapp:ref:refs/heads/main"
        },
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/29ff0233-5237-4497-901c-3f3e53845fbd.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/4a335722-1e76-4084-9f90-e50c74fb46fd.png" alt="" style="display:block;margin:0 auto" />

<p>Attach only what’s needed (I’m granting full access here for simplicity, but you should grant only the required permissions):</p>
<ul>
<li>AmazonECS_FullAccess, AmazonEC2ContainerRegistryFullAccess</li>
</ul>
<h3><strong>3.3 How OIDC Works at Runtime</strong></h3>
<ol>
<li><p>GitHub runner asks <a href="http://token.actions.githubusercontent.com">token.actions.githubusercontent.com</a> for a JWT.</p>
</li>
<li><p>The JWT includes claims such as:<br /><code>iss=https://token.actions.githubusercontent.com</code><br /><code>sub=repo:you/ecsapp:ref:refs/heads/master</code><br /><code>aud=sts.amazonaws.com</code></p>
</li>
<li><p>Action configure-aws-credentials calls AssumeRoleWithWebIdentity with JWT.</p>
</li>
<li><p>AWS STS validates signature + claims + trust policy.</p>
</li>
<li><p>STS returns <strong>temporary keys</strong> (valid ~1h).</p>
</li>
<li><p>These keys are used by the workflow for ECR/ECS.</p>
</li>
</ol>
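<p>To make step 2 concrete, the sketch below assembles a token with the same claim layout and decodes its middle segment, which is the part STS checks against the trust policy. The claim values are placeholders; in a real run the runner fetches a signed, base64url-encoded token from GitHub's OIDC endpoint.</p>
<pre><code class="language-shell"># Illustrative only: claim values are placeholders, and a real JWT
# is base64url-encoded and cryptographically signed by GitHub.
claims='{"iss":"https://token.actions.githubusercontent.com","sub":"repo:you/ecsapp:ref:refs/heads/master","aud":"sts.amazonaws.com"}'
jwt="header.$(printf '%s' "$claims" | base64 | tr -d '\n').signature"

# STS decodes the middle (payload) segment and compares each claim
# with the conditions in the role's trust policy:
printf '%s' "$jwt" | cut -d. -f2 | base64 -d
echo
</code></pre>
<p>If the <code>sub</code> or <code>aud</code> claim does not satisfy the trust policy conditions, <code>AssumeRoleWithWebIdentity</code> is denied and no credentials are issued.</p>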
<h2><strong>Step 4: GitHub Actions Workflow</strong></h2>
<p>Create <code>.github/workflows/deployment.yaml</code>:</p>
<pre><code class="language-yaml">name: Deploy my-app to Amazon ECS

on:
  push:
    branches: [ master ]

jobs:
  deploy:                   
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::943653636298:role/GitHub_Actions_Role
          aws-region: us-east-1

      - name: Login to Amazon ECR
        id: ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push Docker image
        run: |
          IMAGE_URI=${{ steps.ecr.outputs.registry }}/my-app:${{ github.sha }}
          docker build -t $IMAGE_URI .
          docker push $IMAGE_URI

      - name: Render Amazon ECS task definition
        id: taskdef
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: ecs-task-def.json
          container-name: my-app
          image: ${{ steps.ecr.outputs.registry }}/my-app:${{ github.sha }}

      - name: Deploy Amazon ECS task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@v2
        with:
          task-definition: ${{ steps.taskdef.outputs.task-definition }}
          service: my-app-service
          cluster: my-app-cluster
          wait-for-service-stability: true
</code></pre>
<p>Push to GitHub:</p>
<pre><code class="language-shell">git add .
git commit -m "add CI/CD"
git push origin master
</code></pre>
<h2><strong>Step 5: First Deployment Flow</strong></h2>
<ol>
<li><p>Push → triggers workflow.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/3b9f0e3d-a786-4959-b626-2571fb1ac091.png" alt="" style="display:block;margin:0 auto" />
</li>
<li><p>GitHub gets OIDC token → AWS STS returns temporary creds.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/3230b952-a292-4abb-85af-555959fb6709.png" alt="" style="display:block;margin:0 auto" />
</li>
<li><p>Docker image builds and is pushed to ECR.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/3ed26232-c5b3-4e48-953f-27d743c7469c.png" alt="" style="display:block;margin:0 auto" />
</li>
<li><p>ECS task definition updated with new image tag.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/6fc7439b-d4b3-4c01-a26a-883db1621158.png" alt="" style="display:block;margin:0 auto" />
</li>
<li><p>ECS service does a rolling deployment → new container goes live.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/b766f1e5-f5ec-4f2c-a595-ce77ee88c6b7.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/ecd1c650-4041-4dd1-90ab-229148fe9183.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/0039de16-72f8-4722-8c2d-469c786f9013.png" alt="" style="display:block;margin:0 auto" />
</li>
<li><p>Check ECS console → Cluster → Service → Task → copy Public IP → open in browser.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338fc85e594fa4ba3b90a69/8f927063-650f-4506-ae45-41409b84ef53.png" alt="" style="display:block;margin:0 auto" /></li>
</ol>
<h2>Conclusion</h2>
<p>With this setup, every push to your GitHub repository triggers a secure, automated deployment to Amazon ECS. Using OpenID Connect, your workflow gets short-lived AWS credentials on demand, eliminating static keys and reducing risk. ECS Fargate runs your containers without managing servers, while ECR stores immutable images for each build. This approach is simple, scalable, and secure: a modern DevOps pattern that combines fast CI/CD with least-privilege, keyless authentication.</p>
]]></content:encoded></item><item><title><![CDATA[Learn Bash Scripting: A Beginner's Guide]]></title><description><![CDATA[Learn Bash Scripting: A Beginner's Guide
Table of Contents

Introduction

Bash Scripting Basics

Variables and User Input

Conditional Statements

Loops and Iterations

Functions

Working with Files

Scripting Best Practices

Conclusion



1. Introdu...]]></description><link>https://blog.anupkafle.com.np/learn-bash-scripting-a-beginners-guide</link><guid isPermaLink="true">https://blog.anupkafle.com.np/learn-bash-scripting-a-beginners-guide</guid><category><![CDATA[Bash]]></category><category><![CDATA[Linux]]></category><category><![CDATA[linux for beginners]]></category><category><![CDATA[AWS]]></category><category><![CDATA[AWS Community Builder]]></category><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Fri, 06 Dec 2024 23:00:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1732989141842/22f11a9e-cbfc-47d9-9474-24d0d46f4204.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-learn-bash-scripting-a-beginners-guide"><strong>Learn Bash Scripting: A Beginner's Guide</strong></h1>
<h2 id="heading-table-of-contents"><strong>Table of Contents</strong></h2>
<ol>
<li><p>Introduction</p>
</li>
<li><p>Bash Scripting Basics</p>
</li>
<li><p>Variables and User Input</p>
</li>
<li><p>Conditional Statements</p>
</li>
<li><p>Loops and Iterations</p>
</li>
<li><p>Functions</p>
</li>
<li><p>Working with Files</p>
</li>
<li><p>Scripting Best Practices</p>
</li>
<li><p>Conclusion</p>
</li>
</ol>
<hr />
<h2 id="heading-1-introduction"><strong>1. Introduction</strong></h2>
<p>Bash scripting is a powerful tool for automating tasks in Linux and macOS environments. It enables users to write programs in the Bash shell to perform system tasks such as file manipulation, process control, and automating repetitive tasks. This guide will walk you through the basics of Bash scripting, starting from the fundamentals and progressing to more advanced topics.</p>
<hr />
<h2 id="heading-2-bash-scripting-basics"><strong>2. Bash Scripting Basics</strong></h2>
<h3 id="heading-what-is-bash"><strong>What is Bash?</strong></h3>
<p>Bash (Bourne Again SHell) is the default shell in many Unix-like operating systems. It allows users to write scripts for automating tasks and processing commands directly in the terminal.</p>
<h3 id="heading-creating-your-first-bash-script"><strong>Creating Your First Bash Script</strong></h3>
<p>To create a Bash script, simply write commands in a text file and save it with a <code>.sh</code> extension. Here’s the basic structure of a script:</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>
<span class="hljs-comment"># This is a comment</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello, World!"</span>
</code></pre>
<ul>
<li><p><code>#!/bin/bash</code>: This line specifies the path to the Bash interpreter.</p>
</li>
<li><p><code>echo "Hello, World!"</code>: This command outputs the text "Hello, World!" to the terminal.</p>
</li>
</ul>
<h4 id="heading-sample-output"><strong>Sample Output:</strong></h4>
<pre><code class="lang-bash">Hello, World!
</code></pre>
<h3 id="heading-making-a-script-executable"><strong>Making a Script Executable</strong></h3>
<p>Before you can run your script, you need to make it executable:</p>
<pre><code class="lang-bash">chmod +x script-name.sh
</code></pre>
<p>To run the script, use:</p>
<pre><code class="lang-bash">./script-name.sh
</code></pre>
<hr />
<h2 id="heading-3-variables-and-user-input"><strong>3. Variables and User Input</strong></h2>
<h3 id="heading-defining-variables"><strong>Defining Variables</strong></h3>
<p>Variables in Bash are used to store data. To assign a value to a variable, use the following syntax:</p>
<pre><code class="lang-bash">name=<span class="hljs-string">"Anup"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello, <span class="hljs-variable">$name</span>"</span>
</code></pre>
<h4 id="heading-sample-output-1"><strong>Sample Output:</strong></h4>
<pre><code class="lang-bash">Hello, Anup
</code></pre>
<h3 id="heading-user-input"><strong>User Input</strong></h3>
<p>To get input from the user, you can use the <code>read</code> command:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"Enter your name:"</span>
<span class="hljs-built_in">read</span> name
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello, <span class="hljs-variable">$name</span>!"</span>
</code></pre>
<h4 id="heading-sample-output-2"><strong>Sample Output:</strong></h4>
<pre><code class="lang-bash">Enter your name:
Anup
Hello, Anup!
</code></pre>
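<p>A common shorthand combines the prompt and the read in a single <code>read -p</code>; here the input is supplied through a here-string so the example runs without typing:</p>
<pre><code class="lang-bash"># -p prints the prompt, -r keeps backslashes literal; the here-string
# stands in for interactive input
read -r -p "Enter your name: " name &lt;&lt;&lt; "Anup"
echo "Hello, $name!"
</code></pre>
<p>This prints <code>Hello, Anup!</code>, matching the two-step version above.</p>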
<hr />
<h2 id="heading-4-conditional-statements"><strong>4. Conditional Statements</strong></h2>
<p>Conditional statements allow you to execute code based on certain conditions.</p>
<h3 id="heading-if-statements"><strong>If Statements</strong></h3>
<p>The <code>if</code> statement checks if a condition is true. If it is, the code inside the <code>if</code> block is executed.</p>
<pre><code class="lang-bash">number=12
<span class="hljs-keyword">if</span> [ <span class="hljs-variable">$number</span> -gt 10 ]; <span class="hljs-keyword">then</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"The number is greater than 10"</span>
<span class="hljs-keyword">fi</span>
</code></pre>
<h4 id="heading-sample-output-3"><strong>Sample Output:</strong></h4>
<pre><code class="lang-bash">The number is greater than 10
</code></pre>
<h3 id="heading-else-and-elif"><strong>Else and Elif</strong></h3>
<p>You can extend an <code>if</code> statement with <code>else</code> and <code>elif</code> (else if) to handle multiple conditions:</p>
<pre><code class="lang-bash">number=10
<span class="hljs-keyword">if</span> [ <span class="hljs-variable">$number</span> -gt 10 ]; <span class="hljs-keyword">then</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"The number is greater than 10"</span>
<span class="hljs-keyword">elif</span> [ <span class="hljs-variable">$number</span> -eq 10 ]; <span class="hljs-keyword">then</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"The number is equal to 10"</span>
<span class="hljs-keyword">else</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"The number is less than 10"</span>
<span class="hljs-keyword">fi</span>
</code></pre>
<h4 id="heading-sample-output-4"><strong>Sample Output:</strong></h4>
<pre><code class="lang-bash">The number is equal to 10
</code></pre>
<h3 id="heading-case-statements"><strong>Case Statements</strong></h3>
<p>The <code>case</code> statement is useful when you have many conditions to check:</p>
<pre><code class="lang-bash">choice=<span class="hljs-string">"apple"</span>
<span class="hljs-keyword">case</span> <span class="hljs-variable">$choice</span> <span class="hljs-keyword">in</span>
  <span class="hljs-string">"apple"</span>)
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"You selected apple"</span>
    ;;
  <span class="hljs-string">"banana"</span>)
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"You selected banana"</span>
    ;;
  *)
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Invalid selection"</span>
    ;;
<span class="hljs-keyword">esac</span>
</code></pre>
<h4 id="heading-sample-output-5"><strong>Sample Output:</strong></h4>
<pre><code class="lang-bash">You selected apple
</code></pre>
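<p>Case patterns are shell globs, not just literal strings, so <code>case</code> also works well for dispatching on things like file extensions (an extra illustration):</p>
<pre><code class="lang-bash">file="notes.txt"
case $file in
  *.txt)
    echo "text file"
    ;;
  *.sh)
    echo "shell script"
    ;;
  *)
    echo "unknown type"
    ;;
esac
</code></pre>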
<hr />
<h2 id="heading-5-loops-and-iterations"><strong>5. Loops and Iterations</strong></h2>
<p>Loops are essential for repeating tasks, especially when dealing with lists or conditions.</p>
<h3 id="heading-for-loop"><strong>For Loop</strong></h3>
<p>A <code>for</code> loop repeats a command for a specified number of times or through a list:</p>
<pre><code class="lang-bash"><span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> {1..5}; <span class="hljs-keyword">do</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"Iteration <span class="hljs-variable">$i</span>"</span>
<span class="hljs-keyword">done</span>
</code></pre>
<h4 id="heading-sample-output-6"><strong>Sample Output:</strong></h4>
<pre><code class="lang-bash">Iteration 1
Iteration 2
Iteration 3
Iteration 4
Iteration 5
</code></pre>
<h3 id="heading-while-loop"><strong>While Loop</strong></h3>
<p>A <code>while</code> loop continues to execute as long as a given condition is true:</p>
<pre><code class="lang-bash">count=1
<span class="hljs-keyword">while</span> [ <span class="hljs-variable">$count</span> -le 5 ]; <span class="hljs-keyword">do</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"Iteration <span class="hljs-variable">$count</span>"</span>
  ((count++))
<span class="hljs-keyword">done</span>
</code></pre>
<h4 id="heading-sample-output-7"><strong>Sample Output:</strong></h4>
<pre><code class="lang-bash">Iteration 1
Iteration 2
Iteration 3
Iteration 4
Iteration 5
</code></pre>
<h3 id="heading-until-loop"><strong>Until Loop</strong></h3>
<p>An <code>until</code> loop is the opposite of a <code>while</code> loop—it runs until the condition becomes true:</p>
<pre><code class="lang-bash">count=1
until [ <span class="hljs-variable">$count</span> -gt 5 ]; <span class="hljs-keyword">do</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"Iteration <span class="hljs-variable">$count</span>"</span>
  ((count++))
<span class="hljs-keyword">done</span>
</code></pre>
<h4 id="heading-sample-output-8"><strong>Sample Output:</strong></h4>
<pre><code class="lang-bash">Iteration 1
Iteration 2
Iteration 3
Iteration 4
Iteration 5
</code></pre>
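<p>Bash also supports a C-style <code>for</code> loop, which is useful when you need arithmetic control over the counter:</p>
<pre><code class="lang-bash"># initializer; condition; increment
for (( i = 1; i &lt;= 5; i++ )); do
  echo "Iteration $i"
done
</code></pre>
<p>The output is the same five <code>Iteration</code> lines as the loops above.</p>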
<hr />
<h2 id="heading-6-functions"><strong>6. Functions</strong></h2>
<p>Functions help organize your code into reusable blocks.</p>
<h3 id="heading-defining-a-function"><strong>Defining a Function</strong></h3>
<p>To define a function, use the following syntax:</p>
<pre><code class="lang-bash"><span class="hljs-function"><span class="hljs-title">greet</span></span>() {
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello, <span class="hljs-variable">$1</span>"</span>
}
greet <span class="hljs-string">"Anup"</span>
</code></pre>
<h4 id="heading-sample-output-9"><strong>Sample Output:</strong></h4>
<pre><code class="lang-bash">Hello, Anup
</code></pre>
<h3 id="heading-returning-values-from-functions"><strong>Returning Values from Functions</strong></h3>
<p>You can also return values from functions using <code>echo</code> and capture them using command substitution:</p>
<pre><code class="lang-bash"><span class="hljs-function"><span class="hljs-title">add</span></span>() {
  result=$(( <span class="hljs-variable">$1</span> + <span class="hljs-variable">$2</span> ))
  <span class="hljs-built_in">echo</span> <span class="hljs-variable">$result</span>
}

sum=$(add 5 7)
<span class="hljs-built_in">echo</span> <span class="hljs-string">"The sum is <span class="hljs-variable">$sum</span>"</span>
</code></pre>
<h4 id="heading-sample-output-10"><strong>Sample Output:</strong></h4>
<pre><code class="lang-bash">The sum is 12
</code></pre>
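<p>A function can also signal success or failure through its exit status (set with <code>return</code> or by the status of its last command), which lets you use it directly in an <code>if</code>:</p>
<pre><code class="lang-bash">is_even() {
  # exit status 0 (success) for even numbers, 1 (failure) otherwise
  (( $1 % 2 == 0 ))
}

if is_even 4; then
  echo "4 is even"
fi
</code></pre>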
<hr />
<h2 id="heading-7-working-with-files"><strong>7. Working with Files</strong></h2>
<p>Bash provides several commands for file manipulation. Here are a few common operations:</p>
<h3 id="heading-creating-files"><strong>Creating Files</strong></h3>
<p>To create an empty file, use the <code>touch</code> command:</p>
<pre><code class="lang-bash">touch newfile.txt
</code></pre>
<h3 id="heading-reading-files"><strong>Reading Files</strong></h3>
<p>You can read the contents of a file using <code>cat</code>:</p>
<pre><code class="lang-bash">cat file.txt
</code></pre>
<h4 id="heading-sample-output-11"><strong>Sample Output:</strong></h4>
<pre><code class="lang-bash">This is a file <span class="hljs-keyword">for</span> blog.anupkafle.com.np.
</code></pre>
<h3 id="heading-redirecting-output-to-a-file"><strong>Redirecting Output to a File</strong></h3>
<p>You can redirect the output of a command to a file using <code>&gt;</code>:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"Welcome to blog.anupkafle.com.np"</span> &gt; output.txt
</code></pre>
<h3 id="heading-appending-to-files"><strong>Appending to Files</strong></h3>
<p>To append data to a file, use <code>&gt;&gt;</code>:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"New line"</span> &gt;&gt; output.txt
</code></pre>
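<p>To process a file line by line, the usual pattern is a <code>while read</code> loop over a redirect (the file name here is illustrative):</p>
<pre><code class="lang-bash">printf 'first line\nsecond line\n' &gt; lines.txt

# IFS= and -r preserve leading whitespace and backslashes in each line
while IFS= read -r line; do
  echo "Read: $line"
done &lt; lines.txt

rm lines.txt
</code></pre>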
<hr />
<h2 id="heading-8-scripting-best-practices"><strong>8. Scripting Best Practices</strong></h2>
<h3 id="heading-use-comments"><strong>Use Comments</strong></h3>
<p>Always add comments to explain your code. This makes it easier to understand and maintain.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># This script prints a greeting</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello, World!"</span>
</code></pre>
<h4 id="heading-sample-output-12"><strong>Sample Output:</strong></h4>
<pre><code class="lang-bash">Hello, World!
</code></pre>
<h3 id="heading-use-meaningful-variable-names"><strong>Use Meaningful Variable Names</strong></h3>
<p>Choose variable names that describe their purpose. Avoid single-letter variables unless absolutely necessary.</p>
<pre><code class="lang-bash">user_name=<span class="hljs-string">"Anup"</span>
</code></pre>
<h3 id="heading-error-handling"><strong>Error Handling</strong></h3>
<p>Use proper error handling to ensure your script behaves as expected even when something goes wrong. The <code>exit</code> command can be used to terminate a script with an exit status:</p>
<pre><code class="lang-bash"> <span class="hljs-keyword">if</span> [ ! -f <span class="hljs-string">"file.txt"</span> ]; <span class="hljs-keyword">then</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"File not found!"</span>
  <span class="hljs-built_in">exit</span> 1
<span class="hljs-keyword">fi</span>
</code></pre>
<h4 id="heading-sample-output-13"><strong>Sample Output:</strong></h4>
<pre><code class="lang-bash">File not found!
</code></pre>
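<p>A common hardening convention (beyond the explicit checks above) is to enable strict modes at the top of a script so it stops at the first failure instead of continuing in a bad state:</p>
<pre><code class="lang-bash">#!/bin/bash
# -e: exit on any failing command; -u: error on unset variables;
# -o pipefail: a pipeline fails if any command in it fails
set -euo pipefail

tmp=$(mktemp)
echo "data" &gt; "$tmp"
cp "$tmp" "$tmp.bak" &amp;&amp; echo "backup created"
rm -f "$tmp" "$tmp.bak"
</code></pre>
<p>With <code>set -e</code>, a failed <code>mktemp</code> or <code>cp</code> aborts the script immediately instead of letting later commands run against missing files.</p>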
<h3 id="heading-test-your-scripts"><strong>Test Your Scripts</strong></h3>
<p>Always test your scripts with different inputs and edge cases to ensure they work as expected.</p>
<hr />
<h2 id="heading-9-conclusion"><strong>9. Conclusion</strong></h2>
<p>Bash scripting is an essential skill for automating tasks and simplifying repetitive operations on Unix-based systems. By understanding the basics such as variables, conditionals, loops, and functions, you can write efficient and powerful scripts to manage your system more effectively.</p>
<p>Remember to always test your scripts, use comments, and follow best practices to ensure your code is clean, readable, and maintainable.</p>
<p>Happy scripting!</p>
]]></content:encoded></item><item><title><![CDATA[Journey with Git and GitHub]]></title><description><![CDATA[As a DevOps engineer, managing code effectively and ensuring smooth integration, testing, and deployment processes are essential responsibilities. Git and GitHub are the foundation for these workflows, providing the tools necessary for seamless colla...]]></description><link>https://blog.anupkafle.com.np/journey-with-git-and-github</link><guid isPermaLink="true">https://blog.anupkafle.com.np/journey-with-git-and-github</guid><category><![CDATA[GitHub]]></category><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Sat, 30 Nov 2024 13:03:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1732971715491/fdc27d2e-b61d-4aae-be24-73f5a4c82949.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As a DevOps engineer, managing code effectively and ensuring smooth integration, testing, and deployment processes are essential responsibilities. Git and GitHub are the foundation for these workflows, providing the tools necessary for seamless collaboration, automation, and deployment. In this blog, we'll explore a DevOps engineer's journey with Git and GitHub, from creating repositories to deploying code in a production environment.</p>
<h3 id="heading-starting-with-version-control-understanding-git-basics">Starting with Version Control: Understanding Git Basics</h3>
<p>Q. <strong>What is Git?</strong></p>
<p>Git is a distributed version control system that allows developers to track changes, collaborate on code, and manage multiple versions of a project efficiently.</p>
<p>Q. <strong>What is a Version Control System (VCS)?</strong></p>
<p>A <strong>Version Control System (VCS)</strong> is a software tool that helps developers manage changes to source code or other files over time. It enables collaboration, tracks changes, and provides a history of modifications, allowing teams to work on projects efficiently while minimizing conflicts.</p>
<h3 id="heading-installing-and-setting-up-git-a-guide-for-windows-and-linux"><strong>Installing and Setting Up Git: A Guide for Windows and Linux</strong></h3>
<p><strong>Installing Git on Windows</strong></p>
<h4 id="heading-step-1-download-git">Step 1: Download Git</h4>
<ul>
<li><p>Visit the official <a target="_blank" href="https://git-scm.com/">Git website</a> and download the latest version for Windows.</p>
</li>
<li><p>Choose the appropriate version for your system (32-bit or 64-bit).</p>
</li>
</ul>
<h4 id="heading-step-2-install-git"><strong>Step 2: Install Git</strong></h4>
<ol>
<li><p>Open the downloaded installer and follow the setup wizard.</p>
</li>
<li><p>Choose your preferred editor for Git (default is Vim, but you can select Notepad++ or VS Code).</p>
</li>
</ol>
<h4 id="heading-step-3-verify-installation"><strong>Step 3: Verify Installation</strong></h4>
<ul>
<li><p>Open Command Prompt or PowerShell.</p>
</li>
<li><p>Type the following command to verify the installation:</p>
<pre><code class="lang-bash">  git --version
</code></pre>
</li>
<li><p>You should see the installed Git version.</p>
</li>
</ul>
<h4 id="heading-step-4-configure-git"><strong>Step 4: Configure Git</strong></h4>
<ul>
<li><p>Set your name and email:</p>
<pre><code class="lang-bash">  git config --global user.name <span class="hljs-string">"Your Name"</span>
  git config --global user.email <span class="hljs-string">"hello@anupkafle.com.np"</span>  <span class="hljs-comment"># enter your email</span>
</code></pre>
</li>
</ul>
<p><strong>Installing Git on Linux</strong></p>
<h4 id="heading-step-1-update-your-system"><strong>Step 1: Update Your System</strong></h4>
<ul>
<li><p>Before installing Git, ensure your system is up-to-date.</p>
<pre><code class="lang-bash">  sudo apt update &amp;&amp; sudo apt upgrade
</code></pre>
</li>
</ul>
<h4 id="heading-step-2-install-git-1"><strong>Step 2: Install Git</strong></h4>
<ul>
<li><p>The installation commands vary based on your Linux distribution:</p>
<p>  <strong>For Ubuntu/Debian:</strong></p>
<pre><code class="lang-bash">  sudo apt install git
</code></pre>
<p>  <strong>For Fedora/RHEL:</strong></p>
<pre><code class="lang-bash">  sudo dnf install git
</code></pre>
<p>  <strong>For Arch Linux:</strong></p>
<pre><code class="lang-bash">  sudo pacman -S git
</code></pre>
</li>
</ul>
<h4 id="heading-step-3-verify-installation-1"><strong>Step 3: Verify Installation</strong></h4>
<ul>
<li><p>Check if Git is installed and its version:</p>
<pre><code class="lang-bash">  git --version
</code></pre>
</li>
</ul>
<h4 id="heading-step-4-configure-git-1"><strong>Step 4: Configure Git</strong></h4>
<ul>
<li><p>Similar to Windows, set your global username and email:</p>
<pre><code class="lang-bash">  git config --global user.name <span class="hljs-string">"Your Name"</span>
  git config --global user.email <span class="hljs-string">"hello@anupkafle.com.np"</span>  <span class="hljs-comment"># enter your email</span>
</code></pre>
</li>
</ul>
<h3 id="heading-check-your-git-configuration">Check Your Git Configuration</h3>
<p>After configuring your name and email in Git, it’s important to verify the settings to ensure everything is set up correctly. These details are crucial as they are attached to every commit you make and help identify who made the changes.</p>
<h4 id="heading-check-your-global-configuration"><strong>Check Your Global Configuration</strong></h4>
<p>Run the following command to view your global Git configuration, including your name and email:</p>
<pre><code class="lang-bash">git config --global --list
</code></pre>
<p>You should see output similar to this:</p>
<pre><code class="lang-bash">msi@anup:~$ git config --global --list
user.name=anupkafle
user.email=hello@anupkafle.com.np
</code></pre>
<h4 id="heading-check-configuration-for-a-specific-repository"><strong>Check Configuration for a Specific Repository</strong></h4>
<p>If you want to check the configuration for a specific repository, navigate to the repository folder and use:</p>
<pre><code class="lang-bash">git config --list
</code></pre>
<p>This will show both global and repository-specific configurations. If you’ve overridden the global configuration for the repository, the repository-specific settings will appear here.</p>
<h3 id="heading-git-commands-with-detailed-explanations-and-examples">Git Commands with Detailed Explanations and Examples</h3>
<h2 id="heading-1-initializing-a-repository"><strong>1. Initializing a Repository</strong></h2>
<pre><code class="lang-bash">git init
</code></pre>
<h4 id="heading-what-it-does">What It Does:</h4>
<p>Creates a new Git repository in the current directory by initializing a <code>.git</code> folder.</p>
<h4 id="heading-example">Example:</h4>
<pre><code class="lang-bash">mkdir my-project
<span class="hljs-built_in">cd</span> my-project
git init
</code></pre>
<p>This initializes an empty Git repository in the <code>my-project</code> directory.</p>
<hr />
<h3 id="heading-2-cloning-a-repository"><strong>2. Cloning a Repository</strong></h3>
<h4 id="heading-command">Command:</h4>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> &lt;repository-url&gt;
</code></pre>
<h4 id="heading-what-it-does-1">What It Does:</h4>
<p>Copies a repository (and its history) from a remote location to your local machine.</p>
<h4 id="heading-example-1">Example:</h4>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/anupkafle/testrepo.git
</code></pre>
<p>This clones the repository at <a target="_blank" href="https://github.com/anupkafle/testrepo.git">https://github.com/anupkafle/testrepo.git</a> into a local folder named <code>testrepo</code>.</p>
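<p>You can also pass a second argument to choose the folder name. The sketch below uses a throwaway local repository in place of the remote URL so it runs offline (all paths are illustrative):</p>
<pre><code class="lang-bash"># create a local repository to stand in for the remote
src=$(mktemp -d)
git -C "$src" init -q
git -C "$src" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"

# clone into a custom folder name instead of the default
git clone -q "$src" my-local-copy
ls -d my-local-copy/.git

rm -rf "$src" my-local-copy
</code></pre>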
<hr />
<h3 id="heading-3-checking-the-status-of-your-repository"><strong>3. Checking the Status of Your Repository</strong></h3>
<h4 id="heading-command-1">Command:</h4>
<pre><code class="lang-bash">git status
</code></pre>
<h4 id="heading-what-it-does-2">What It Does:</h4>
<p>Shows the status of your working directory and staging area, including:</p>
<ul>
<li><p>Files modified but not staged.</p>
</li>
<li><p>Files staged but not committed.</p>
</li>
<li><p>Untracked files.</p>
</li>
</ul>
<h4 id="heading-example-2">Example:</h4>
<pre><code class="lang-bash">git status
</code></pre>
<p>Output might look like:</p>
<pre><code class="lang-bash">msi@msi:~/Desktop/my-project$ git status
On branch master

No commits yet

Untracked files:
  (use <span class="hljs-string">"git add &lt;file&gt;..."</span> to include <span class="hljs-keyword">in</span> what will be committed)
    file1.txt

nothing added to commit but untracked files present (use <span class="hljs-string">"git add"</span> to track)
</code></pre>
<hr />
<h3 id="heading-4-adding-files-to-the-staging-area"><strong>4. Adding Files to the Staging Area</strong></h3>
<h4 id="heading-command-2">Command:</h4>
<pre><code class="lang-bash">git add &lt;file-name&gt;    <span class="hljs-comment"># Add a specific file</span>
git add .              <span class="hljs-comment"># Add all files</span>
</code></pre>
<h4 id="heading-what-it-does-3">What It Does:</h4>
<p>Moves changes from the working directory to the staging area, marking them for the next commit.</p>
<h4 id="heading-example-3">Example:</h4>
<pre><code class="lang-bash">git add file1.txt
git add .
</code></pre>
<p>This stages the <em>file1.txt</em> file or all files in the directory, respectively.</p>
<hr />
<h3 id="heading-5-committing-changes"><strong>5. Committing Changes</strong></h3>
<h4 id="heading-command-3">Command:</h4>
<pre><code class="lang-bash">git commit -m <span class="hljs-string">"Your commit message"</span>
</code></pre>
<h4 id="heading-what-it-does-4">What It Does:</h4>
<p>Saves the changes in the staging area to the repository with a descriptive message.</p>
<h4 id="heading-example-4">Example:</h4>
<pre><code class="lang-bash">git commit -m <span class="hljs-string">"Added a new feature to the project"</span>
</code></pre>
<p>This commits all staged changes with the message "Added a new feature to the project."</p>
<p>Output might look like:</p>
<pre><code class="lang-bash">[master (root-commit) 2e0d35b] Added a new feature to the project
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 file1.txt
</code></pre>
<hr />
<h3 id="heading-6-viewing-commit-history"><strong>6. Viewing Commit History</strong></h3>
<h4 id="heading-command-4">Command:</h4>
<pre><code class="lang-bash">git <span class="hljs-built_in">log</span>
</code></pre>
<h4 id="heading-what-it-does-5">What It Does:</h4>
<p>Shows the commit history for the repository, including commit hashes, author details, dates, and messages.</p>
<h4 id="heading-example-5">Example:</h4>
<pre><code class="lang-bash">git <span class="hljs-built_in">log</span>
</code></pre>
<p>Output:</p>
<pre><code class="lang-bash">commit 2e0d35bf87987f7bdcd8bd9cc6ac6051e8b12ab6 (HEAD -&gt; master)
Author: anupkafle &lt;hello@anupkafle.com.np&gt;
Date:   Fri Nov 29 22:15:56 2024 +0100

    Added a new feature to the project
</code></pre>
<hr />
<h3 id="heading-7-creating-and-switching-branches"><strong>7. Creating and Switching Branches</strong></h3>
<h4 id="heading-commands">Commands:</h4>
<pre><code class="lang-bash">git branch &lt;branch-name&gt;       <span class="hljs-comment"># Create a new branch</span>
git checkout &lt;branch-name&gt;     <span class="hljs-comment"># Switch to an existing branch</span>
git checkout -b &lt;branch-name&gt;  <span class="hljs-comment"># Create and switch to a new branch</span>
</code></pre>
<h4 id="heading-what-they-do">What They Do:</h4>
<ul>
<li><p><strong>Branch creation:</strong> Creates a new branch to isolate work on a specific feature or bug.</p>
</li>
<li><p><strong>Switching branches:</strong> Moves you to another branch in the repository.</p>
</li>
</ul>
<h4 id="heading-example-6">Example:</h4>
<pre><code class="lang-bash">git checkout -b feature/new-feature
</code></pre>
<p>This creates and switches to a branch named <code>feature/new-feature</code>.</p>
<p>Output:</p>
<pre><code class="lang-bash">Switched to a new branch <span class="hljs-string">'feature/new-feature'</span>
</code></pre>
<hr />
<h3 id="heading-8-merging-branches"><strong>8. Merging Branches</strong></h3>
<h4 id="heading-command-5">Command:</h4>
<pre><code class="lang-bash">git merge &lt;branch-name&gt;
</code></pre>
<h4 id="heading-what-it-does-6">What It Does:</h4>
<p>Combines changes from another branch into the current branch.</p>
<h4 id="heading-example-7">Example:</h4>
<pre><code class="lang-bash">git checkout master
git merge feature/new-feature
</code></pre>
<p>This merges the <code>feature/new-feature</code> branch into the <code>master</code> branch.</p>
<hr />
<h3 id="heading-9-pushing-changes-to-a-remote-repository"><strong>9. Pushing Changes to a Remote Repository</strong></h3>
<h4 id="heading-command-6">Command:</h4>
<pre><code class="lang-bash">git push origin &lt;branch-name&gt;
</code></pre>
<h4 id="heading-what-it-does-7">What It Does:</h4>
<p>Uploads local commits from the specified branch to a remote repository.</p>
<h4 id="heading-example-8">Example:</h4>
<pre><code class="lang-bash">git push origin master
</code></pre>
<p>Pushes the <code>master</code> branch to the remote repository. Before you can push, you must create a remote repository and link your local repository to it.</p>
<p>To link your local repository to a remote repository, use the following command:</p>
<pre><code class="lang-bash">git remote add origin &lt;remote-repository-url&gt;
</code></pre>
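<p>The full first-push sequence can be sketched end to end. The following self-contained demo uses a local bare repository as a stand-in for a hosted remote such as GitHub; all names and paths are illustrative, and the <code>-b</code> flag to <code>git init</code> needs Git 2.28 or newer:</p>

```bash
# Demo in a throwaway directory; a local bare repo stands in for a hosted remote.
cd "$(mktemp -d)"
git init -q --bare remote.git                # stand-in for e.g. https://github.com/user/repo.git
mkdir my-project && cd my-project
git init -q -b master .
git config user.email you@example.com && git config user.name "Your Name"
echo "# My Project" > README.md
git add README.md && git commit -qm "initial commit"
git remote add origin ../remote.git          # link the local repo to the remote
git push -u origin master                    # -u records the upstream branch
```

<p>After the <code>-u</code> (set-upstream) push, later pushes from this branch need only a plain <code>git push</code>.</p>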
<hr />
<h3 id="heading-10-pulling-changes-from-a-remote-repository"><strong>10. Pulling Changes from a Remote Repository</strong></h3>
<h4 id="heading-command-7">Command:</h4>
<pre><code class="lang-bash">git pull
</code></pre>
<h4 id="heading-what-it-does-8">What It Does:</h4>
<p>Fetches and merges changes from the remote repository into the current branch.</p>
<h4 id="heading-example-9">Example:</h4>
<pre><code class="lang-bash">git pull
</code></pre>
<p>Updates the local branch with the latest changes from the remote branch.</p>
<hr />
<h3 id="heading-11-viewing-differences"><strong>11. Viewing Differences</strong></h3>
<h4 id="heading-command-8">Command:</h4>
<pre><code class="lang-bash">git diff
</code></pre>
<h4 id="heading-what-it-does-9">What It Does:</h4>
<p>Shows changes between the working directory and the repository (or between branches).</p>
<h4 id="heading-example-10">Example:</h4>
<pre><code class="lang-bash">git diff
</code></pre>
<p>Displays line-by-line differences for modified files.</p>
<p>Output:</p>
<pre><code class="lang-bash">msi@msi:~/Desktop/my-project$ git diff
diff --git a/file1.txt b/file1.txt
index e69de29..537e61e 100644
--- a/file1.txt
+++ b/file1.txt
@@ -0,0 +1 @@
+hi this is <span class="hljs-built_in">test</span> file
</code></pre>
<p>The <code>git diff</code> output shows the differences between the current working directory and the latest commit. In this case, the file <code>file1.txt</code> was modified, transitioning from an empty state (indicated by <code>e69de29</code>) to containing a single line, <code>hi this is test file</code>. The <code>+</code> symbol marks the addition of this line in the file.</p>
<h3 id="heading-12-discarding-changes"><strong>12. Discarding Changes</strong></h3>
<h4 id="heading-commands-1">Commands:</h4>
<pre><code class="lang-bash">git checkout -- &lt;file-name&gt;  <span class="hljs-comment"># Reverts changes to a file</span>
git reset &lt;file-name&gt;        <span class="hljs-comment"># Removes a file from staging</span>
</code></pre>
<h4 id="heading-what-they-do-1">What They Do:</h4>
<ul>
<li><p><strong>Reverting:</strong> Discards changes in the working directory.</p>
</li>
<li><p><strong>Resetting:</strong> Removes changes from the staging area without deleting them.</p>
</li>
</ul>
<h4 id="heading-example-11">Example:</h4>
<pre><code class="lang-bash">msi@msi:~/Desktop/my-project$ cat &gt; file1.txt 
Hello
msi@msi:~/Desktop/my-project$ cat file1.txt 
Hello
msi@msi:~/Desktop/my-project$ git checkout -- file1.txt
msi@msi:~/Desktop/my-project$ cat file1.txt 
msi@msi:~/Desktop/my-project$
</code></pre>
<hr />
<h3 id="heading-13-deleting-a-branch"><strong>13. Deleting a Branch</strong></h3>
<h4 id="heading-command-9">Command:</h4>
<pre><code class="lang-bash">git branch -d &lt;branch-name&gt;
</code></pre>
<h4 id="heading-what-it-does-10">What It Does:</h4>
<p>Deletes a branch that is no longer needed.</p>
<h4 id="heading-example-12">Example:</h4>
<pre><code class="lang-bash">msi@msi:~/Desktop/my-project$ git branch
  feature/new-feature
* master
msi@msi:~/Desktop/my-project$ git branch -d feature/new-feature
Deleted branch feature/new-feature (was 2e0d35b).
msi@msi:~/Desktop/my-project$ git branch
* master
msi@msi:~/Desktop/my-project$
</code></pre>
<p>Deletes the <code>feature/new-feature</code> branch.</p>
<hr />
<h3 id="heading-14-checking-remote-repositories"><strong>14. Checking Remote Repositories</strong></h3>
<h4 id="heading-command-10">Command:</h4>
<pre><code class="lang-bash">git remote -v
</code></pre>
<h4 id="heading-what-it-does-11">What It Does:</h4>
<p>Lists remote repositories linked to your local repository.</p>
<h4 id="heading-example-13">Example:</h4>
<pre><code class="lang-bash">git remote -v
</code></pre>
<p>Output:</p>
<pre><code class="lang-bash">msi@msi:~/Desktop/my-project$ git remote -v
origin    https://github.com/anupkafle/testrepo.git (fetch)
origin    https://github.com/anupkafle/testrepo.git (push)
</code></pre>
<hr />
<h3 id="heading-15-tagging-releases"><strong>15. Tagging Releases</strong></h3>
<h4 id="heading-command-11">Command:</h4>
<pre><code class="lang-bash">git tag -a &lt;tag-name&gt; -m <span class="hljs-string">"Tag message"</span>
</code></pre>
<h4 id="heading-what-it-does-12">What It Does:</h4>
<p>Marks a specific commit with a version number or release label.</p>
<h4 id="heading-example-14">Example:</h4>
<pre><code class="lang-bash">msi@msi:~/Desktop/my-project$ git tag -a v1.0.0 -m <span class="hljs-string">"Version 1.0.0"</span>
msi@msi:~/Desktop/my-project$ git <span class="hljs-built_in">log</span>
commit 2e0d35bf87987f7bdcd8bd9cc6ac6051e8b12ab6 (HEAD -&gt; master, tag: v1.0.0)
Author: anupkafle &lt;anupkafle24@gmail.com&gt;
Date:   Fri Nov 29 22:15:56 2024 +0100

    Added a new feature to the project
</code></pre>
<p>Creates an annotated <code>v1.0.0</code> tag on the current commit. Note that tags are not pushed by default; publish one to the remote repository with <code>git push origin v1.0.0</code>.</p>
<hr />
<h3 id="heading-16-stashing-changes"><strong>16. Stashing Changes</strong></h3>
<p>Git stash is a feature that allows you to temporarily save changes in your working directory that you don’t want to commit yet. This is useful when you need to switch branches or perform other tasks without losing your uncommitted changes. The changes are saved in a stack-like structure, and you can reapply them later.</p>
<h4 id="heading-command-12">Command:</h4>
<pre><code class="lang-bash">git stash
</code></pre>
<h4 id="heading-what-it-does-13">What It Does:</h4>
<p>Temporarily saves changes without committing.</p>
<h4 id="heading-example-15">Example:</h4>
<pre><code class="lang-bash">git stash
</code></pre>
<p>To apply stashed changes later:</p>
<pre><code class="lang-bash">git stash apply
</code></pre>
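<p>The workflow can be demonstrated end to end in a throwaway repository (file names and messages here are illustrative):</p>

```bash
# Self-contained stash demo in a temporary repository.
cd "$(mktemp -d)" && git init -q -b master .
git config user.email you@example.com && git config user.name "Your Name"
echo base > notes.txt && git add notes.txt && git commit -qm "initial commit"
echo "work in progress" >> notes.txt   # an uncommitted change
git stash                              # save it away; the working tree is clean again
git stash list                         # shows the saved entry as stash@{0}
git stash pop                          # reapply the change and drop the stash entry
```

<p><code>git stash pop</code> is equivalent to <code>git stash apply</code> followed by <code>git stash drop</code>; use <code>apply</code> when you want to keep the entry on the stack.</p>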
<hr />
<h3 id="heading-17-undoing-changes"><strong>17. Undoing Changes</strong></h3>
<h4 id="heading-command-13">Command:</h4>
<pre><code class="lang-bash">git reset --hard &lt;commit-hash&gt;
</code></pre>
<h4 id="heading-what-it-does-14">What It Does:</h4>
<p>Resets the repository to a specific commit, discarding all changes after it.</p>
<h4 id="heading-example-16">Example:</h4>
<pre><code class="lang-bash">git reset --hard 2e0d35b
</code></pre>
<p>Rolls back to the specified commit.</p>
<hr />
<h3 id="heading-18-help-and-documentation"><strong>18. Help and Documentation</strong></h3>
<h4 id="heading-command-14">Command:</h4>
<pre><code class="lang-bash">git <span class="hljs-built_in">help</span> &lt;<span class="hljs-built_in">command</span>&gt;
</code></pre>
<h4 id="heading-what-it-does-15">What It Does:</h4>
<p>Provides detailed documentation for any Git command.</p>
<h4 id="heading-example-17">Example:</h4>
<pre><code class="lang-bash">git <span class="hljs-built_in">help</span> commit
</code></pre>
<p>Displays help information for the <code>commit</code> command.</p>
<hr />
<h3 id="heading-19git-rebase"><strong>19. Git Rebase</strong></h3>
<h4 id="heading-command-15">Command:</h4>
<pre><code class="lang-bash">git rebase &lt;branch-name&gt;
</code></pre>
<h4 id="heading-what-it-does-16">What It Does:</h4>
<ul>
<li><p>Integrates changes from one branch into another by <strong>moving</strong> the base of your branch to the latest commit on the target branch.</p>
</li>
<li><p>Keeps a linear commit history by replaying your changes on top of the target branch.</p>
</li>
</ul>
<h4 id="heading-example-18">Example:</h4>
<pre><code class="lang-bash">git rebase main
</code></pre>
<p>Rebases your current branch on top of the <code>main</code> branch.</p>
<h4 id="heading-use-case">Use Case:</h4>
<p>When you want to synchronize your feature branch with the latest updates from the <code>main</code> branch while avoiding merge commits.</p>
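<p>A self-contained demonstration of that use case, with illustrative branch and file names (the <code>-b</code> flag to <code>git init</code> needs Git 2.28 or newer):</p>

```bash
# Build a small history where main and feature have diverged, then rebase.
cd "$(mktemp -d)" && git init -q -b main .
git config user.email you@example.com && git config user.name "Your Name"
echo a > a.txt && git add . && git commit -qm "main: first commit"
git checkout -q -b feature
echo f > f.txt && git add . && git commit -qm "feature: add f.txt"
git checkout -q main
echo b > b.txt && git add . && git commit -qm "main: second commit"
git checkout -q feature
git rebase main       # replay the feature commit on top of main's latest commit
git log --oneline     # linear history: the feature commit now sits on main's tip
```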
<hr />
<h3 id="heading-20-git-cherry-pick"><strong>20. Git Cherry-Pick</strong></h3>
<h4 id="heading-command-16">Command:</h4>
<pre><code class="lang-bash">git cherry-pick &lt;commit-hash&gt;
</code></pre>
<h4 id="heading-what-it-does-17">What It Does:</h4>
<ul>
<li>Applies a specific commit from one branch onto the current branch.</li>
</ul>
<h4 id="heading-example-19">Example:</h4>
<pre><code class="lang-bash">git cherry-pick abc1234
</code></pre>
<p>Applies the commit with hash <code>abc1234</code> onto your current branch.</p>
<h4 id="heading-use-case-1">Use Case:</h4>
<p>When you need a specific change from another branch without merging the entire branch.</p>
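<p>A self-contained sketch of that use case; here the commit hash is captured with <code>git rev-parse</code> instead of being copied by hand:</p>

```bash
# Create a fix on a feature branch, then cherry-pick just that commit onto main.
cd "$(mktemp -d)" && git init -q -b main .
git config user.email you@example.com && git config user.name "Your Name"
echo a > a.txt && git add . && git commit -qm "initial commit"
git checkout -q -b feature
echo fix > fix.txt && git add . && git commit -qm "hotfix: add fix.txt"
FIX=$(git rev-parse HEAD)     # hash of the one commit we want
git checkout -q main
git cherry-pick "$FIX"        # apply just that commit; the rest of feature stays unmerged
```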
<h3 id="heading-conclusion"><strong>Conclusion</strong></h3>
<p>Git is an indispensable tool for modern software development, enabling teams to collaborate efficiently, maintain code integrity, and streamline project management. Its powerful commands, such as <code>git commit</code>, <code>git branch</code>, <code>git rebase</code>, and <code>git stash</code>, provide flexibility and control over your codebase, making it easier to manage complex projects. By mastering Git’s workflows—whether it’s basic version control, branch management, or advanced rebasing techniques—you can significantly enhance your productivity and contribute to a cleaner, more maintainable project history.</p>
<p>Incorporating Git into your development process not only helps you work better individually but also ensures seamless teamwork, especially when paired with platforms like GitHub or GitLab. Understanding Git’s capabilities and best practices is a crucial step in your journey toward becoming a proficient developer or DevOps engineer. With a strong foundation in Git, you are equipped to tackle challenges in version control, collaborate effectively, and deliver high-quality software.</p>
<p><strong>FAQs</strong></p>
<ol>
<li><p>What is the difference between git pull, git fetch, and git clone?</p>
<p> <code>git pull</code>, <code>git fetch</code>, and <code>git clone</code> serve different purposes in Git workflows. <code>git pull</code> combines <code>git fetch</code> and <code>git merge</code>, fetching changes from the remote repository and immediately merging them into your current branch, making it ideal for quick synchronization. <code>git fetch</code>, on the other hand, only downloads changes from the remote repository without merging, allowing you to review updates before integrating them. Finally, <code>git clone</code> is used to create a local copy of an entire remote repository, typically when setting up a repository for the first time. Each command has its unique use case: pull for immediate updates, fetch for careful review, and clone for starting fresh.</p>
</li>
</ol>
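<p>The distinction is easy to see in a short, self-contained session, again using a local bare repository as a stand-in for a hosted remote:</p>

```bash
cd "$(mktemp -d)"
git init -q --bare remote.git             # stand-in for a hosted repository URL
git clone -q remote.git work && cd work   # clone: a full local copy, done once
git config user.email you@example.com && git config user.name "Your Name"
git checkout -q -b main
echo hello > a.txt && git add . && git commit -qm "first commit"
git push -q -u origin main
git fetch origin        # download new remote commits; the local branch is untouched
git pull origin main    # fetch and merge into the current branch in one step
```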
]]></content:encoded></item><item><title><![CDATA[From DevOps to NoOps: The Future of Automation in Software Development]]></title><description><![CDATA[Introduction:
The evolution from DevOps to NoOps represents a shift toward eliminating traditional IT operations through extensive automation, thus reducing the operational burden on developers and allowing them to focus more on delivering higher-val...]]></description><link>https://blog.anupkafle.com.np/from-devops-to-noops-the-future-of-automation-in-software-development</link><guid isPermaLink="true">https://blog.anupkafle.com.np/from-devops-to-noops-the-future-of-automation-in-software-development</guid><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[AWSCommunity]]></category><category><![CDATA[#AWSCommunityBuilders #CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs #kubernetes]]></category><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Wed, 06 Nov 2024 23:00:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730218173080/bd7a2601-ec21-43f5-9905-47af74021a9b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction:</h2>
<p>The evolution from DevOps to NoOps represents a shift toward eliminating traditional IT operations through extensive automation, thus reducing the operational burden on developers and allowing them to focus more on delivering higher-value aspects of software development. Companies like Netflix, Airbnb, Spotify, and Slack are at the forefront of this movement, adopting various NoOps principles to streamline operations.</p>
<h3 id="heading-the-evolution-from-devops-to-noops">The Evolution from DevOps to NoOps:</h3>
<p>DevOps has transformed the collaboration between development and operational teams, optimizing both the creation and deployment phases of software. NoOps extends this philosophy to its logical endpoint: the automation of operational tasks to such an extent that the operational activities are almost invisible.</p>
<h3 id="heading-core-principles-of-noops">Core Principles of NoOps:</h3>
<ul>
<li><p><strong>Full Automation:</strong> Everything from resource allocation to application deployment and network adjustments is automated.</p>
</li>
<li><p><strong>Proactive Monitoring and Self-Healing:</strong> Systems are not only monitored automatically but can correct themselves without human intervention.</p>
</li>
<li><p><strong>Cloud-native Infrastructure:</strong> Emphasizes using solutions like serverless computing to minimize traditional operational management.</p>
</li>
</ul>
<h3 id="heading-purpose">Purpose:</h3>
<ol>
<li>The main purpose of this article is to compare the performance of the NoOps and DevOps models using AWS services.</li>
</ol>
<h3 id="heading-background">Background:</h3>
<p>In this section, we will become familiar with the DevOps and NoOps cloud models in Amazon Web Services (AWS).</p>
<h3 id="heading-devops-based-model">DevOps Based Model</h3>
<p>In the AWS ecosystem, DevOps methodologies are greatly enhanced by services like Amazon EC2 and Amazon EKS, which cater to distinct needs while supporting rapid and efficient application development and deployment. Amazon EC2 (Elastic Compute Cloud) is a cornerstone of AWS, offering flexible, virtual computing environments that users can customize and control. EC2 allows for the configuration of processing power, memory, and storage that suits diverse application needs, supported by various pricing options including On-Demand, Reserved, and Spot Instances, which help optimize costs based on usage patterns. Additionally, EC2's integration with AWS security and networking services ensures robust protection and scalability for applications.</p>
<p>On the other hand, Amazon EKS (Elastic Kubernetes Service) streamlines the deployment, scaling, and management of containerized applications using Kubernetes, without the need for installing or operating Kubernetes control planes. This service is especially beneficial for modern applications designed around microservices architectures that require dynamic scaling and high availability. EKS seamlessly integrates with essential AWS services such as Elastic Load Balancing, Amazon VPC, and IAM, enhancing both the security and functionality of container deployments. Furthermore, although EKS introduces an additional cost for managing the Kubernetes infrastructure, it significantly reduces the complexity and overhead associated with manual Kubernetes management.</p>
<h3 id="heading-noops-based-model">NoOps Based Model</h3>
<p>The NoOps (no operations) model, exemplified by AWS Lambda and Function as a Service (FaaS), represents a paradigm shift in software development, minimizing or eliminating traditional operational tasks through automation and abstraction. AWS Lambda, a serverless computing service, allows developers to run code in response to events without managing servers, aligning with the NoOps approach by automating the scaling, patching, and management of compute resources. Lambda supports multiple programming languages, triggers functions via events from AWS services or third-party apps, and automatically scales to match demand. This model drastically reduces operational management, enabling rapid deployment and significant cost efficiency through its pay-for-what-you-use pricing structure. Lambda suits a variety of applications, from web backends to real-time data processing and IoT services, letting teams focus on creating business value rather than managing infrastructure and thereby accelerating innovation and efficiency in development.</p>
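<p>To make this concrete: a Lambda function is just a handler that receives an event and returns a response, while AWS provisions, patches, and scales everything around it. A minimal Python sketch (the event shape assumes an API Gateway trigger; all names here are illustrative):</p>

```python
import json

def lambda_handler(event, context):
    """Entry point AWS Lambda invokes per event; there is no server to manage."""
    # With an API Gateway trigger, query parameters arrive inside the event dict.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```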
<h3 id="heading-methodology">Methodology</h3>
<p>To understand the performance differences between the DevOps and NoOps deployment strategies, we will create an app that calculates the prime numbers within a given interval, deploy it on both AWS Lambda and AWS Elastic Kubernetes Service, and load-test it using JMeter, an API testing tool by Apache.</p>
<h3 id="heading-prime-number-calculator">Prime Number Calculator:</h3>
<p>To effectively compare the performance and various metrics of NoOps and DevOps approaches, I utilized Amazon Web Services (AWS) to demonstrate each model. Specifically, AWS Elastic Kubernetes Service (EKS) was employed as an example of a DevOps-oriented service, and AWS Lambda was used to illustrate the NoOps model.</p>
<p>The application chosen for this comparison was a simple prime number calculator that identifies all prime numbers within a given range, specified by a start and an end interval. This application was developed using Flask, a lightweight web framework in Python.</p>
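<p>A minimal sketch of the application's core logic (the function names are my own; the Flask and Lambda wiring described below is omitted):</p>

```python
def is_prime(n: int) -> bool:
    """Trial division up to the square root of n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

def primes_between(start: int, end: int) -> list[int]:
    """All primes in the inclusive interval [start, end]."""
    return [n for n in range(max(start, 2), end + 1) if is_prime(n)]
```

<p>For example, <code>primes_between(1, 20)</code> returns <code>[2, 3, 5, 7, 11, 13, 17, 19]</code>; the endpoint simply reads <code>start</code> and <code>end</code> from the query string and returns this list as JSON.</p>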
<p><strong>Here’s how the deployment proceeded:</strong></p>
<ul>
<li><p><strong>Development of the Flask Application:</strong> The application was coded in Python and designed to accept two parameters: 'start' and 'end'. It calculated and returned all prime numbers within this interval.</p>
</li>
<li><p><strong>Deployment on AWS Lambda (NoOps):</strong> The Flask application was packaged and deployed on AWS Lambda, which abstracts away most server management tasks. This deployment was used to evaluate how the NoOps approach simplifies operations, particularly in terms of server provisioning, scaling, and management.</p>
</li>
<li><p><strong>Deployment on AWS EKS (DevOps):</strong> The same Flask application was containerized using Docker and deployed on a Kubernetes cluster managed by AWS EKS. This setup highlighted the level of control and flexibility provided by a DevOps approach, which can be essential for handling complex application architectures that require fine-tuned scaling and management.</p>
</li>
<li><p><strong>Performance Metrics and Comparison:</strong> After deploying the application on both platforms, key performance metrics such as deployment time, scalability, response time, and resource utilization were analyzed and compared. This comparison aimed to illustrate the practical impacts of choosing NoOps versus DevOps in a cloud environment.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730217614135/df385758-7cdd-404c-aafe-25d72e8241e1.png" alt class="image--center mx-auto" /></p>
<p>Figure: API to calculate prime number</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730217643064/aaae388a-547c-4eef-a0c8-7de7da7abc44.png" alt class="image--center mx-auto" /></p>
<p>Table: Different metrics for lambda api on /api/primes endpoint</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730217671243/82bb3bff-8605-408d-acbe-dad020a64848.png" alt class="image--center mx-auto" /></p>
<p>Figure: Aggregate Graph for Lambda Endpoint</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730217698107/018ed124-a283-470f-864b-385536a632e7.png" alt class="image--center mx-auto" /></p>
<p>Table: Different metrics for Kubernetes api on /api/primes endpoint</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730217719068/5e307627-a265-4272-b30f-f419ab2dfa38.png" alt class="image--center mx-auto" /></p>
<p>Figure: Aggregate Graph for Kubernetes Endpoint</p>
<h3 id="heading-performance-metrics">Performance Metrics:</h3>
<ol>
<li><p><strong>Load Time, Connect Time, and Latency:</strong></p>
<ul>
<li><p>Lambda API: Load time and latency for the Lambda API are nearly identical, suggesting that the major component of load time is the actual processing (latency). There is a noticeable increase in both metrics as the number of primes (End value) increases, peaking at 100,000 with a load time and latency of 2146 ms.</p>
</li>
<li><p>Kubernetes API: Similarly, Kubernetes shows increased load time and latency with the number of primes, but notably, the values spike drastically at 100,000 primes to 24150 ms, suggesting a significant degradation in performance under high computational load.</p>
</li>
</ul>
</li>
<li><p><strong>Connect Time</strong></p>
<ul>
<li>The connect time for both Lambda and Kubernetes is relatively low compared to load times, indicating that the overhead for establishing a connection is minimal. Kubernetes does show slightly lower connection times, which could be due to differences in network configuration or the persistent nature of containers compared to the stateless deployment of Lambda.</li>
</ul>
</li>
<li><p><strong>Error Rates</strong></p>
<ul>
<li>The Lambda API shows an error rate of 2.97%, which suggests some failures under certain conditions. In contrast, the Kubernetes API shows a 0.00% error rate, indicating more stable performance in handling requests without failures.</li>
</ul>
</li>
<li><p><strong>Throughput</strong></p>
<ul>
<li>The Lambda API shows a higher throughput of 54.5 requests per second compared to Kubernetes' 3.3 requests per second. This significant difference suggests that Lambda can handle a greater number of requests concurrently, likely benefiting from AWS's auto-scaling capabilities inherent to Lambda services.</li>
</ul>
</li>
</ol>
<h3 id="heading-analysis">Analysis</h3>
<ul>
<li><p><strong>Scalability:</strong> Lambda demonstrates better scalability with higher throughput, suggesting it is more capable of handling spikes in traffic without manual intervention for scaling.</p>
</li>
<li><p><strong>Performance Under Load:</strong> Kubernetes API performance deteriorates significantly when dealing with a large number (100,000) of primes, indicating potential issues in handling heavy computational tasks efficiently.</p>
</li>
<li><p><strong>Reliability:</strong> Kubernetes shows a better error rate, indicating it might be more reliable for consistent performance, particularly in scenarios where error rates are critical.</p>
</li>
<li><p><strong>Cost Implications:</strong> While not explicitly mentioned, Lambda's pricing model (based on requests and compute time) could potentially be more cost-effective for applications with variable traffic. Kubernetes may incur higher costs due to continuous running of cluster nodes.</p>
</li>
</ul>
<h3 id="heading-conclusion">Conclusion</h3>
<p>The performance metrics comparison between AWS Lambda and AWS Kubernetes (EKS) for the <strong>/api/primes</strong> Flask API reveals distinct advantages and disadvantages depending on the specific use case and operational requirements:</p>
<ul>
<li><p><strong>AWS Lambda</strong> excels in scalability and throughput, demonstrating robust performance particularly under variable load conditions. It is capable of handling a high number of requests per second, indicating efficient use of resources and rapid auto-scaling. This makes Lambda an excellent choice for applications with unpredictable traffic patterns and those that do not require a persistent state. However, there is a concern with a higher error rate, which suggests that for extremely sensitive applications, more robust error handling or configuration may be needed.</p>
</li>
<li><p><strong>AWS Kubernetes (EKS),</strong> on the other hand, provides a more stable error-free operation as indicated by the zero error rate in the tests, but struggles with performance under heavy computational loads, as seen with the significant latency increase at 100,000 primes. This points to Kubernetes being more suitable for applications where consistent performance and more extensive customization are required. Additionally, it might be better for long-running processes where the overhead of starting up a new instance, as in the case of Lambda, could be detrimental.</p>
</li>
<li><p><strong>Cost-effectiveness will vary:</strong> Lambda may generally be more cost-efficient for workloads with high peaks and low troughs of activity due to its pay-per-use model. Kubernetes may entail higher baseline costs due to the need to run some resources continuously but could potentially offer cost savings through more detailed control over resource allocation and usage.</p>
</li>
<li><p><strong>Throughput and Performance Under Load:</strong> Lambda's superior throughput makes it suitable for high-demand applications, while Kubernetes may require additional optimizations and resource allocation strategies to handle similar loads effectively.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Simplifying User Authentication with Amazon Cognito and Google Integration]]></title><description><![CDATA[Introduction
User authentication is a critical component of modern web and mobile applications. Implementing secure and user-friendly authentication can be a complex task. However, with the powerful combination of Amazon Cognito and Google integratio...]]></description><link>https://blog.anupkafle.com.np/simplifying-user-authentication-with-amazon-cognito-and-google-integration</link><guid isPermaLink="true">https://blog.anupkafle.com.np/simplifying-user-authentication-with-amazon-cognito-and-google-integration</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWSCommunity]]></category><category><![CDATA[Cognito]]></category><category><![CDATA[#AWSCommunityBuilders #CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs #kubernetes]]></category><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Mon, 07 Oct 2024 17:12:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1728319857012/e482f471-768d-41ef-8ee4-f50ea2df8c6c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction"><strong>Introduction</strong></h3>
<p>User authentication is a critical component of modern web and mobile applications. Implementing secure and user-friendly authentication can be a complex task. However, with the powerful combination of Amazon Cognito and Google integration, developers can streamline the authentication process while leveraging the security and convenience of Google accounts. In this blog, we will explore how to integrate Google Sign-In with Amazon Cognito, allowing you to enhance your application's user experience and security.</p>
<h3 id="heading-why-choose-amazon-cognito">Why Choose Amazon Cognito?</h3>
<p>Amazon Cognito is a fully managed service by AWS that provides authentication, authorization, and user management for your applications. It offers several benefits:</p>
<ol>
<li><p>Scalability: Amazon Cognito can handle millions of users, ensuring your application scales effortlessly as your user base grows.</p>
</li>
<li><p>Security: It supports industry-standard protocols, including OpenID Connect and OAuth 2.0, ensuring secure authentication and authorization flows.</p>
</li>
<li><p>Flexibility: Amazon Cognito supports various authentication methods, including social identity providers like Google, enabling you to offer multiple login options to your users.</p>
</li>
<li><p>User Management: It provides comprehensive user management features, such as user registration, user profile management, and password resets, reducing the development effort required for these functionalities.</p>
</li>
</ol>
<h3 id="heading-steps-to-integrate-google-sign-in-with-amazon-cognito">Steps to Integrate Google Sign-In with Amazon Cognito:</h3>
<h3 id="heading-step1-set-up-google-developer-account"><strong>Step 1: Set Up Google Developer Account</strong></h3>
<ul>
<li><strong>Create a project in the Google Developers Console.</strong></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686804317833/4df474d1-ed9c-4ca5-a001-6798cf2788ab.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Enable the Google Sign-In API for your project. Set up the OAuth consent screen to configure and register the application.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686804571454/cfccc9f4-c31d-4ead-b883-d956eedbd88a.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Select the <strong>External</strong> and click on <strong>CREATE.</strong></p>
<p>App information, app logo, app domain, developer contact information, and test users must be configured as required to register the app.</p>
<p>Configure the authorized JavaScript origins and redirect URIs for your application.</p>
<ul>
<li><strong>Setup the Credentials to access the enabled APIs with OAuth client ID</strong></li>
</ul>
<p>In the application type, multiple options are available, as of now select <strong>Web application.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686805154593/dcac0639-f9a7-4ec2-b704-09420dc34c27.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Next, click <strong>CREATE</strong> to generate the OAuth client ID. Authorized redirect URIs will be set up later.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686805125776/431bac4d-d6e3-4e17-b656-8c1d90ffb211.png?auto=compress,format&amp;format=webp" alt /></p>
<p>After this, a Client ID and Client Secret will be provided; these will be used to set up Google as a federated identity provider in a later step.</p>
<h3 id="heading-step2-create-an-amazon-cognito-user-pool"><strong>Step 2: Create an Amazon Cognito User Pool</strong></h3>
<ul>
<li><p><strong>Create a user pool in Amazon Cognito to manage user registration and authentication.</strong></p>
<p>  Search the Amazon Cognito service and click on <strong>Create user pool</strong></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686805308188/17dcd223-c690-4063-85c0-81f568104aed.png?auto=compress,format&amp;format=webp" alt /></p>
<p>In the authentication providers select <strong>Federated Identity providers</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686805440863/eff942f7-7b01-488c-9efe-d8da8ec6a428.png?auto=compress,format&amp;format=webp" alt /></p>
<ul>
<li><strong>In the Cognito user pool sign-in option choose the attribute that will be used to sign in. In the federated sign-in options select Google</strong></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686805960103/79629a41-aa33-418f-b9f5-179a1e56fd15.png?auto=compress,format&amp;format=webp" alt /></p>
<ul>
<li><strong>Configure the user pool settings, such as password policies, email verification, user attributes, sign-up experience, and message delivery.</strong></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686807497631/3d14b9e3-edf7-45c3-bb21-61bce3970898.png?auto=compress,format&amp;format=webp" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686807542189/c4312fd9-6d80-4eaa-b91c-474c976fddcb.png?auto=compress,format&amp;format=webp" alt /></p>
<ul>
<li><strong>In the Connect federated identity provider step, set up Google federation with this user pool.</strong> Provide the client ID and client secret obtained from the Google Developers Console. Authorized scopes can be selected as needed, for instance <strong>openid email profile</strong>.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686807626954/10dbef5a-6e98-4f97-955b-c2b674f8de3a.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Now map the attributes between Google and the user pool.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686807738532/6743f227-6d99-431f-b2f1-f3f171ce390c.png?auto=compress,format&amp;format=webp" alt /></p>
<ul>
<li><strong>Integrate App and set up client:</strong></li>
</ul>
<p>Provide a suitable user pool name.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686807821523/d2ab6f09-5d06-40b5-861f-f2c0cfa269b2.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Set up the app client and choose whether or not to generate a client secret.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686807841445/e0858586-5eee-4bd8-b95b-e9e1837a83e6.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Provide the domain name for Hosted UI and OAuth 2.0 endpoints. The domain name must be unique.</p>
<p>Set up the callback URL to redirect the user back after authentication.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686808241682/3b0b0cea-e099-4793-bed2-5df68116a478.png?auto=compress,format&amp;format=webp" alt /></p>
<p>In the advanced app client settings, select the identity provider and OAuth 2.0 grant types. The Implicit Grant is an OAuth 2.0 authorization flow used in web applications. It suits JavaScript-based applications running in web browsers or other environments where client secrets cannot be securely stored: the client application obtains the access token directly from the authorization server, without a separate token-exchange step. The flow involves redirecting the user to the authorization server, authenticating, and granting permission.</p>
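<p>As an illustration of the implicit grant, the browser app sends the user to the Hosted UI authorize endpoint with <code>response_type=token</code> and receives the access token directly in the redirect. A minimal sketch of building that URL — the domain, client ID, and redirect URI below are hypothetical placeholders, not values from this setup:</p>

```shell
# All values below are hypothetical placeholders for illustration.
DOMAIN="https://myapp.auth.us-east-1.amazoncognito.com"
CLIENT_ID="1234example"
REDIRECT_URI="https://www.example.com/callback"

# Implicit grant: response_type=token returns the access token directly
# in the redirect after the user authenticates, with no token-exchange step.
AUTHORIZE_URL="${DOMAIN}/oauth2/authorize?response_type=token&client_id=${CLIENT_ID}&redirect_uri=${REDIRECT_URI}&scope=openid+email+profile"
echo "$AUTHORIZE_URL"
```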
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686808318118/869c50d9-32cc-4c1d-b3fe-c604c7012943.png?auto=compress,format&amp;format=webp" alt /></p>
<p>Finally, click <strong>Create user pool</strong>.</p>
<h3 id="heading-step3set-up-authorised-redirect-uris"><strong>Step 3: Set up authorized redirect URIs</strong></h3>
<p>Go to the Google developer console, and under Authorized redirect URIs, provide the URI as:</p>
<pre><code class="lang-bash">https://yourDomainPrefix.auth.region.amazoncognito.com/oauth2/idpresponse
</code></pre>
<p>Replace <strong>yourDomainPrefix</strong> and <strong>region</strong> with the corresponding values from your user pool, then click SAVE.</p>
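<p>For example, with a hypothetical domain prefix of <code>myapp</code> in <code>us-east-1</code>, the redirect URI can be assembled as:</p>

```shell
# Hypothetical values; substitute your own user pool's domain prefix and region.
DOMAIN_PREFIX="myapp"
REGION="us-east-1"

# This is the endpoint Cognito uses to receive the response from Google.
IDP_RESPONSE_URI="https://${DOMAIN_PREFIX}.auth.${REGION}.amazoncognito.com/oauth2/idpresponse"
echo "$IDP_RESPONSE_URI"
```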
<h3 id="heading-step4-check-the-hosted-ui"><strong>Step 4: Check the Hosted UI</strong></h3>
<p>Select the user pool created above and click <strong>App integration</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686808754114/f450bbb8-a4de-4853-a3da-2b10facee5a8.png?auto=compress,format&amp;format=webp" alt /></p>
<p>At the bottom, select the <strong>app client</strong> you created earlier.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686808905264/5a39da21-e461-4917-9e4f-c7ba0a48c2a3.png?auto=compress,format&amp;format=webp" alt /></p>
<p>In the Hosted UI section, click <strong>View Hosted UI</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686808964858/ad9efcc3-f028-48eb-8d72-2cc561fe6c89.png?auto=compress,format&amp;format=webp" alt /></p>
<p>The output can be seen below, where Sign in with Google is available.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686808607814/0409445b-b350-48a7-8f3d-bc343944e7b7.png?auto=compress,format&amp;format=webp" alt /></p>
<p>User authentication with Amazon Cognito and Google integration is now complete. The same setup can also be integrated with your own hosted UI on your own website.</p>
]]></content:encoded></item><item><title><![CDATA[Overcoming Scalability Challenges in a Shoe E-Commerce Platform with AWS Auto Scaling]]></title><description><![CDATA[As a developer, one of the most critical challenges you might face is ensuring your application can scale to meet demand. Scalability is no longer just a luxury; it's necessary in today's fast-paced digital environment. Whether you're building a soci...]]></description><link>https://blog.anupkafle.com.np/overcoming-scalability-challenges-in-a-shoe-e-commerce-platform-with-aws-auto-scaling</link><guid isPermaLink="true">https://blog.anupkafle.com.np/overcoming-scalability-challenges-in-a-shoe-e-commerce-platform-with-aws-auto-scaling</guid><category><![CDATA[AWS]]></category><category><![CDATA[autoscaling]]></category><category><![CDATA[autoscaling group]]></category><category><![CDATA[#AWSCommunityBuilders #CloudEngineering #CloudComputing #AmazonWebServices #AWSArchitecture #DevOps #CloudSolutions #CloudSecurity #InfrastructureAsCode #AWSCertification #Serverless #AWSCommunity #TechBlogs #CloudExperts #CloudMigration #CloudOps #AWSJobs #TechIndustry #CareerInTech #InnovationInCloud #devops #cloudengineerjobs #devopsjobs #azure #gcp #oci #cloudjobs #kubernetes]]></category><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Tue, 13 Aug 2024 04:58:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723524171038/4feab27f-7388-407c-b28b-b8482778b537.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As a developer, one of the most critical challenges you might face is ensuring your application can scale to meet demand. Scalability is no longer just a luxury; it's necessary in today's fast-paced digital environment. Whether you're building a social media app, a financial service platform, or an e-commerce website specializing in shoes, the ability to scale seamlessly can make or break your application's success.</p>
<p>I experienced this firsthand while working on a shoe e-commerce platform that catered to customers looking for the latest trends in footwear. During regular days, the traffic was steady and manageable. However, the traffic would surge unpredictably whenever we ran promotional events, offered discounts, or had flash sales. Initially, we tried to handle this by provisioning more servers manually before each event, but this approach was neither efficient nor effective. We often found ourselves over-provisioning and wasting resources or under-provisioning and facing downtimes, which frustrated customers and led to lost sales.</p>
<p>The need for a dynamic solution that automatically adjusts resources based on real-time demand became apparent. This is where AWS Auto Scaling came into play, transforming how we handled traffic spikes and ensuring our platform could meet users' needs, no matter the load.</p>
<h3 id="heading-the-challenge-unpredictable-traffic-and-performance-bottlenecks"><strong>The Challenge: Unpredictable Traffic and Performance Bottlenecks</strong></h3>
<p>Our platform was designed to handle several hundred users concurrently during non-peak hours. But as soon as we announced a sale on popular shoe brands, the number of concurrent users would skyrocket, sometimes even reaching tens of thousands. This led to a range of issues:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723524306243/79935c72-1092-408b-8ff9-7ad5ded2201a.jpeg" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>Server Overload</strong>: Our EC2 instances struggled to manage the increased load, resulting in slow response times and, in some cases, server crashes.</p>
</li>
<li><p><strong>Manual Intervention</strong>: To prevent crashes, we manually added more instances before every significant event. This was labor-intensive and prone to errors, as accurately predicting the resources needed was difficult.</p>
</li>
<li><p><strong>Cost Inefficiency</strong>: After the traffic spike subsided, the additional instances would sit idle, consuming resources unnecessarily and increasing our operational costs.</p>
</li>
</ul>
<p>We needed a scalable solution that could adapt to traffic in real-time, ensuring optimal performance without manual intervention or wasted resources.</p>
<h3 id="heading-the-solution-implementing-aws-auto-scaling"><strong>The Solution: Implementing AWS Auto Scaling</strong></h3>
<p>After researching various solutions, we decided to implement AWS Auto Scaling. This feature automatically adjusts the number of Amazon EC2 instances based on the current demand, ensuring that the application can handle the load efficiently while optimizing costs.</p>
<p>Here's how we implemented AWS Auto Scaling to solve our scalability challenges:</p>
<ol>
<li><p><strong>Configuring Auto Scaling Groups</strong>: We created an Auto Scaling group (ASG) for our EC2 instances. The ASG allowed us to define the minimum and maximum number of instances running simultaneously. We set a minimum of two instances to ensure redundancy and a maximum of 20 instances, sufficient to handle our highest anticipated load.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723524422419/52162fe6-f3f2-485b-92d6-f78cdd19e3d6.png" alt class="image--center mx-auto" /></p>
<p> <em>figure: Amazon EC2 Auto Scaling</em></p>
</li>
<li><p><strong>Setting Up Scaling Policies</strong>: The next step was to define scaling policies based on CPU utilization. For example, we set a policy to add an EC2 instance when the average CPU utilization across our instances exceeded 70%. Similarly, if CPU utilization dropped below 30%, the ASG would terminate unnecessary instances to reduce costs. These policies ensured that our infrastructure could automatically scale in response to real-time demand without human intervention.</p>
</li>
<li><p><strong>Integrating Elastic Load Balancing (ELB)</strong>: To distribute incoming traffic evenly across all available instances, we integrated Elastic Load Balancing (ELB) with our Auto Scaling group. ELB automatically routes traffic to the healthiest instances, ensuring no single instance is overwhelmed with requests. This improved our application's reliability and enhanced the user experience by providing faster response times.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723524752139/113cb7e4-2939-4e92-9f79-e8b2e3f36384.png" alt class="image--center mx-auto" /></p>
<p> <em>figure: ELB with Auto Scaling</em></p>
</li>
<li><p><strong>Monitoring and Optimization with CloudWatch</strong>: We used Amazon CloudWatch to monitor the performance of our Auto Scaling setup. CloudWatch provided real-time insights into CPU utilization, memory usage, and response times. Analyzing these metrics allowed us to fine-tune our scaling policies, ensuring our infrastructure was constantly optimized for performance and cost.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723524530790/e2be3025-28fa-4e3a-80f5-aaa07b4638a4.png" alt class="image--center mx-auto" /></p>
<p> <em>Image source :</em> <a target="_blank" href="https://aws.plainenglish.io/observability-in-aws-cloudwatch-ed60a2c4fdcd">plainenglish</a></p>
</li>
<li><p><strong>Scaling Across Multiple Availability Zones</strong>: To further enhance availability and fault tolerance, we configured our Auto Scaling group to launch instances across multiple Availability Zones (AZs). This ensured that even if one AZ experienced issues, our application would remain operational and continue serving users from other AZs.</p>
</li>
<li><p><strong>Utilizing Spot Instances for Cost Savings</strong>: To optimize costs further, we leveraged Spot Instances for non-critical workloads. Spot Instances are available at a significant discount compared to On-Demand Instances, allowing us to save on costs without compromising performance. We set up a mixed instances policy within our Auto Scaling group, combining On-Demand and Spot Instances to balance cost efficiency and availability.</p>
<p> <img src="https://miro.medium.com/v2/resize:fit:800/1*FTTiy27_uRMc-ZQWLAQWww.png" alt="AWS Spot Instance Guide: 7 Things You Should Know | by Jay Chapel | Medium" /></p>
</li>
</ol>
<p><em>figure: AWS EC2 Spot Instances</em></p>
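<p>The CPU-based scaling policy described in step 2 boils down to a simple threshold rule. The sketch below only simulates that decision locally, for illustration — in practice CloudWatch alarms and the Auto Scaling service make these decisions automatically, not your code:</p>

```shell
# Illustrative only: mimics the CPU-threshold policy described above
# (scale out above 70% average CPU, scale in below 30%).
scale_decision() {
  cpu="$1"
  if [ "$cpu" -gt 70 ]; then
    echo "scale-out"    # the ASG would add an EC2 instance
  elif [ "$cpu" -lt 30 ]; then
    echo "scale-in"     # the ASG would terminate an instance
  else
    echo "no-change"
  fi
}

scale_decision 85   # high load during a flash sale
scale_decision 20   # idle after the event
```

The real policy also respects the group's minimum and maximum sizes (two and 20 instances in our case), so scale-in never drops below the redundancy floor.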
<h3 id="heading-the-outcome-seamless-scaling-and-cost-efficiency"><strong>The Outcome: Seamless Scaling and Cost Efficiency</strong></h3>
<p>Implementing AWS Auto Scaling was a game-changer for our shoe e-commerce platform. The results were immediate and impactful:</p>
<ul>
<li><p><strong>Automatic Scaling</strong>: During our next promotional event featuring discounts on popular shoe brands, we witnessed the true power of AWS Auto Scaling. As traffic surged, the Auto Scaling group automatically added more EC2 instances to handle the load. Once the event ended and traffic returned to normal, the group scaled down, ensuring we only paid for the needed resources.</p>
</li>
<li><p><strong>Improved Performance</strong>: With Auto Scaling and ELB in place, our application handled peak traffic without downtime or slow response times. This led to a better user experience, higher customer satisfaction, and increased successful transactions.</p>
</li>
<li><p><strong>Cost Optimization</strong>: We significantly reduced our infrastructure costs by automating the scaling process and incorporating Spot Instances. We no longer had to over-provision resources "just in case" or worry about idle instances draining our budget.</p>
</li>
<li><p><strong>Operational Efficiency</strong>: The automation provided by AWS Auto Scaling freed up our team to focus on other critical tasks. We no longer had to manually adjust infrastructure before and after events, which reduced the risk of human error and allowed us to invest more time in developing new features and improvements for our platform.</p>
</li>
</ul>
<h3 id="heading-lessons-learned-and-best-practices"><strong>Lessons Learned and Best Practices</strong></h3>
<p>Through this experience, we learned several valuable lessons and developed best practices for using AWS Auto Scaling:</p>
<ol>
<li><p><strong>Start Small and Scale</strong>: Start with conservative scaling policies and closely monitor performance. Adjust policies gradually based on real-world data to avoid unnecessary costs or under-provisioning.</p>
</li>
<li><p><strong>Leverage Multiple Metrics</strong>: While CPU utilization is a common trigger for scaling, consider other metrics like memory usage, disk I/O, or request count, depending on your application's behavior.</p>
</li>
<li><p><strong>Use Mixed Instances for Cost Efficiency</strong>: Combining On-Demand and Spot Instances can provide the best balance between availability and cost. However, ensure that your application can handle the potential interruption of Spot Instances.</p>
</li>
<li><p><strong>Regularly Review and Optimize</strong>: AWS environments are dynamic, and so should your scaling policies be. Review CloudWatch metrics regularly and adjust your Auto Scaling configuration to ensure it remains aligned with your business needs.</p>
</li>
<li><p><strong>Test Under Load</strong>: Before going live with a significant event, simulate high traffic conditions to test your Auto Scaling setup. This will help you identify bottlenecks or configuration issues before they impact real users.</p>
</li>
</ol>
<h3 id="heading-conclusion-future-proofing-your-applications-with-aws-auto-scaling"><strong>Conclusion: Future-Proofing Your Applications with AWS Auto Scaling</strong></h3>
<p>Scalability is a fundamental requirement for any successful application, especially in today's competitive landscape, where user expectations are high, and downtime can result in significant financial losses. AWS Auto Scaling provided a robust, automated solution to effortlessly scale our shoe e-commerce platform, ensuring we could handle any traffic spike while optimizing costs and maintaining high performance. For any developer or organization looking to future-proof their applications, AWS Auto Scaling is an essential tool in your arsenal. By embracing automation, you can build systems that are not only scalable but also resilient, cost-effective, and capable of delivering an excellent user experience regardless of the demand.</p>
]]></content:encoded></item><item><title><![CDATA[Using a Single Application Load Balancer for Multiple Microservices: A Cost-Saving Strategy]]></title><description><![CDATA[In today's cloud-driven world, managing costs is crucial to maintaining a sustainable and efficient environment. One of my clients recently faced a challenge with their AWS bill, particularly in their development environment. They were running 14-15 ...]]></description><link>https://blog.anupkafle.com.np/using-a-single-application-load-balancer-for-multiple-microservices-a-cost-saving-strategy</link><guid isPermaLink="true">https://blog.anupkafle.com.np/using-a-single-application-load-balancer-for-multiple-microservices-a-cost-saving-strategy</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[AWSCommunity]]></category><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Tue, 16 Jul 2024 18:15:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1725777470642/c5e14177-589c-41d0-929a-8829b85bfb5b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In today's cloud-driven world, managing costs is crucial to maintaining a sustainable and efficient environment. One of my clients recently faced a challenge with their AWS bill, particularly in their development environment. They were running 14-15 Application Load Balancers (ALBs) per region to manage microservices, significantly inflating their costs. Recognizing an opportunity for optimization, I implemented a solution that drastically reduced their AWS bill by consolidating all their microservices under a single ALB in each region. This blog will walk you through achieving this cost-efficient setup using host-based routing.</p>
<h4 id="heading-the-problem-multiple-albs-leading-to-high-costs">The Problem: Multiple ALBs Leading to High Costs</h4>
<p>The client had a microservices architecture spread across multiple regions, and each microservice was behind its own ALB. While this setup ensured isolation and flexibility, it also came with a hefty price tag, especially in the development environment where cost optimization is often overlooked.</p>
<p>Each ALB incurs a base cost, plus additional charges based on the amount of traffic processed. With 14-15 ALBs per region, the costs quickly added up, making it clear that a more efficient solution was needed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725685509403/aa4b2a66-b47a-4fa4-b688-69cdb8d89364.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725685538335/8d460b6b-f473-4d1e-8fb8-018866f6e684.png" alt class="image--center mx-auto" /></p>
<p>Figure: Architecture for Deployed Microservices</p>
<p><strong>The Solution: A Single ALB with Host-Based Routing</strong></p>
<p>To address this, I proposed and implemented the use of a single ALB per region, utilizing host-based routing to manage traffic to the various microservices. This approach not only simplified the architecture but also significantly reduced costs. Here’s how you can do it too.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725777444880/1ba14a57-652d-4f4e-a06a-18c80f6612e5.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-step-by-step-implementation">Step-by-Step Implementation</h4>
<ol>
<li><p>Setup the Application Load Balancer</p>
<ul>
<li><p>First, create a single ALB in the AWS region where your microservices are hosted. This ALB will handle traffic for all the microservices in that region.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725686144593/cfbd1e79-f2fc-4279-bb81-af2f80d57f52.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725686242857/480b6107-3809-48f5-841d-59ef17dcb950.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p>Configure Listeners</p>
<ul>
<li><p>ALBs operate at Layer 7, allowing them to inspect HTTP/HTTPS headers. Set up a listener on port 80 (HTTP) that redirects to port 443 (HTTPS), and a listener on port 443 (HTTPS).</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725686427019/13003249-f3ff-47e1-b75d-002e231aed3c.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725686597205/fb4d550c-fe72-414d-8f66-5ef56c6e4e4b.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725686614130/82ec3436-f2d7-4527-855c-8989cadc9b84.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725686739335/d15d0801-2759-4248-bb71-4cd5d405d6f1.png" alt class="image--right mx-auto mr-0" /></p>
<ul>
<li><p>Forward incoming traffic to the appropriate target groups based on host headers.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725686917557/4ed1dda5-dab6-4e5e-8480-7eca36f6cbf0.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725686943419/8f2ff987-b85b-4479-b60f-497f079b6b07.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725686991636/2fda69c2-ddcc-49c6-a1b6-2bed73009a87.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725687015227/595a7369-c48c-44e1-aa25-653e5661bcad.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725687039566/af2e8351-c6d8-44c6-922c-7cb1924cf880.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Repeat the same process for website 2 and any others.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725687121764/38b42a16-e6fa-4ad9-8c1a-2b9771e07bb7.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725687147178/942336ba-6796-4ec0-bfc8-c4fca229b40b.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
</ul>
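<p>Conceptually, each listener rule maps an incoming Host header to a target group. The local sketch below only illustrates that mapping — the service hostnames and target group names are hypothetical, and on AWS the ALB performs this lookup, not your code:</p>

```shell
# Illustration of ALB host-based routing: the ALB inspects the HTTP Host
# header and forwards the request to the matching target group.
# All names below are hypothetical.
route_host() {
  case "$1" in
    service1.example.com) echo "tg-service1" ;;
    service2.example.com) echo "tg-service2" ;;
    *)                    echo "tg-default"  ;;  # default/fallback rule
  esac
}

route_host "service1.example.com"
route_host "unknown.example.com"
```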
<ol start="3">
<li><p>DNS Configuration</p>
<ul>
<li><p>Update your DNS settings to point the different subdomains (e.g., <a target="_blank" href="http://service1.example.com"><code>service1.example.com</code></a>) to the ALB. This can be done using Route 53 or any other DNS service you’re using.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725687405373/cd0a2255-4bb1-4fd8-9f97-754928246677.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725687351654/b9d2d5fe-ad86-4c3f-b359-56b7a5e6a6f9.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p>Testing and Validation</p>
<ul>
<li><p>After setting up the ALB and routing rules, thoroughly test the setup to ensure that traffic is correctly routed to the respective microservices. Validate that each service is accessible through its designated domain.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725687575368/6655ee8b-9628-44be-8437-40d87861cabc.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725687601759/82515ea3-8a9b-4deb-acce-29e36a634908.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-the-results-significant-cost-savings"><strong>The Results: Significant Cost Savings</strong></h3>
<p>  After implementing this single ALB setup, the client saw a substantial reduction in their AWS bill. By consolidating the load balancers, we eliminated the redundant costs associated with maintaining multiple ALBs per region. The simplified architecture also made it easier to manage and monitor the environment, leading to operational efficiencies.</p>
</li>
<li><h3 id="heading-conclusion">Conclusion</h3>
<p>  If you're managing a microservices architecture on AWS and are looking for ways to reduce costs, consider consolidating your Application Load Balancers using host-based routing. This approach not only cuts down on unnecessary expenses but also simplifies your architecture, making it easier to maintain and scale.</p>
<p>  This solution worked wonders for my client, and I’m confident it can do the same for others facing similar challenges. By sharing this implementation strategy, I hope to help others optimize their AWS environments and save on their cloud bills.</p>
</li>
</ul>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Simplifying Access: Configuring Password Authentication for AWS EC2 Instances]]></title><description><![CDATA[Enabling password authentication for AWS EC2 instances is a common requirement for users who prefer or need to use passwords instead of SSH key pairs for remote access. However, it's essential to note that using password authentication can introduce ...]]></description><link>https://blog.anupkafle.com.np/simplifying-access-configuring-password-authentication-for-aws-ec2-instances</link><guid isPermaLink="true">https://blog.anupkafle.com.np/simplifying-access-configuring-password-authentication-for-aws-ec2-instances</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS Community Builder]]></category><category><![CDATA[ec2]]></category><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Fri, 24 May 2024 18:15:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1721923684484/916bdd70-d087-46af-8664-6281874f9c75.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Enabling password authentication for AWS EC2 instances is a common requirement for users who prefer or need to use passwords instead of SSH key pairs for remote access. However, it's essential to note that using password authentication can introduce security risks, and AWS recommends using SSH key pairs for enhanced security. If you still need to enable password authentication, follow these steps carefully.</p>
<h3 id="heading-prerequisites">Prerequisites</h3>
<ul>
<li><p>An AWS account with access to EC2.</p>
</li>
<li><p>An existing EC2 instance running a Linux-based operating system.</p>
</li>
<li><p>SSH access to your EC2 instance using a key pair.</p>
</li>
</ul>
<h2 id="heading-step-by-step-guide">Step-by-Step Guide</h2>
<ol>
<li><p>Connect to Your EC2 Instance</p>
<p> First, you need to connect to your EC2 instance using SSH. Use the terminal (Linux/macOS) or an SSH client like PuTTY (Windows).</p>
<pre><code class="lang-bash"> ssh -i /path/to/your-key.pem ec2-user@your-instance-public-dns
</code></pre>
<p> Replace <code>/path/to/your-key.pem</code> with the path to your SSH key, and <code>ec2-user</code> with the appropriate username for your instance (e.g., <code>ubuntu</code> for Ubuntu instances).</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721922870798/0b6b6d9e-6df1-4726-b339-ad82d799cfd3.png" alt class="image--center mx-auto" /></p>
<p>2. Switch to the Root User</p>
<p>Once logged in, switch to the root user to ensure you have the necessary permissions to make configuration changes.</p>
<pre><code class="lang-bash">sudo su -
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721922987848/7c2f68e8-83d0-40eb-a599-45fe9c0ece8f.png" alt class="image--center mx-auto" /></p>
<p>3. Edit the SSH Configuration File</p>
<p>Open the SSH configuration file using a text editor like <code>vi</code> or <code>nano</code>.</p>
<pre><code class="lang-bash">nano /etc/ssh/sshd_config
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721923034814/d8f3ab1c-e0c7-4040-80fd-343ef26b052c.png" alt class="image--center mx-auto" /></p>
<p>4. Modify SSH Configuration for Password Authentication</p>
<p>Find the following line in the <code>sshd_config</code> file:</p>
<pre><code class="lang-bash">PasswordAuthentication no
</code></pre>
<p>Change <code>no</code> to <code>yes</code>:</p>
<pre><code class="lang-bash">PasswordAuthentication yes
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721923121442/d5b65a77-c27e-4f13-8e75-e0d630b3b948.png" alt class="image--center mx-auto" /></p>
<p>Additionally, ensure that the following line is present and set to <code>no</code>:</p>
<pre><code class="lang-bash">ChallengeResponseAuthentication no
</code></pre>
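<p>The two configuration edits can also be scripted. The sketch below operates on a throwaway demo file for illustration; on a real instance you would run the same <code>sed</code> against <code>/etc/ssh/sshd_config</code> as root and restart sshd afterwards:</p>

```shell
# Demo copy of the two relevant sshd_config lines (not the real file).
cfg="$(mktemp)"
printf 'PasswordAuthentication no\nChallengeResponseAuthentication no\n' > "$cfg"

# Flip PasswordAuthentication to yes; ChallengeResponseAuthentication stays no.
sed -i 's/^PasswordAuthentication no$/PasswordAuthentication yes/' "$cfg"

RESULT="$(grep '^PasswordAuthentication' "$cfg")"
echo "$RESULT"
rm -f "$cfg"
```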
<p><strong>5. Set a Password for the User</strong></p>
<p>You need to set a password for the user you wish to enable password authentication for. For example, to set a password for the <code>ec2-user</code>, run:</p>
<pre><code class="lang-bash">passwd ec2-user
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721923205386/d674dbcb-3e80-4f41-ad8e-ade45ee9891e.png" alt class="image--center mx-auto" /></p>
<p>You'll be prompted to enter and confirm a new password.</p>
<p>6. Restart the SSH Service</p>
<p>To apply the changes, restart the SSH service:</p>
<pre><code class="lang-bash">service sshd restart
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721923282023/d68ddf67-9aea-46e9-b8de-82cb7a87cec5.png" alt class="image--center mx-auto" /></p>
<p>7. Update Security Groups (Optional)</p>
<p>Ensure your EC2 instance's security group allows inbound SSH (port 22) traffic. You can do this through the AWS Management Console:</p>
<ol>
<li><p>Navigate to <strong>EC2 Dashboard</strong> &gt; <strong>Instances</strong>.</p>
</li>
<li><p>Select your instance and click on the <strong>Security</strong> tab.</p>
</li>
<li><p>Click on the <strong>Security Groups</strong> link.</p>
</li>
<li><p>Add or ensure an inbound rule exists for <strong>SSH</strong> with <strong>Source</strong> set to your preferred IP range.</p>
</li>
</ol>
<p>8. Test Password Authentication</p>
<p>Disconnect from the instance and attempt to reconnect using the password:</p>
<pre><code class="lang-bash">ssh ec2-user@your-instance-public-dns
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721923402149/7109af02-c33e-4703-885f-e3b6c87bbcf8.png" alt class="image--center mx-auto" /></p>
<p>Enter the password you set earlier when prompted.</p>
<h2 id="heading-important-security-considerations">Important Security Considerations</h2>
<ul>
<li><p><strong>Security Risks</strong>: Enabling password authentication increases the risk of brute-force attacks. Consider using complex passwords and limit the source IP range for SSH access.</p>
</li>
<li><p><strong>Alternative Authentication</strong>: Consider using Multi-Factor Authentication (MFA) or a bastion host to improve security.</p>
</li>
<li><p><strong>Logging and Monitoring</strong>: Enable logging and monitoring to detect unauthorized access attempts.</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>While enabling password authentication on AWS EC2 instances is straightforward, it is crucial to understand the security implications and follow best practices to mitigate potential risks. Whenever possible, prefer using SSH key pairs for secure and efficient authentication.</p>
<p>By following the steps outlined in this guide, you can enable password authentication on your EC2 instances and ensure you have proper security measures.</p>
]]></content:encoded></item><item><title><![CDATA[Recovering Lost EC2 Key Pair: A Step-by-Step Guide to Creating a New Key Pair]]></title><description><![CDATA[Introduction:
Amazon Elastic Compute Cloud (EC2) is a powerful and flexible cloud computing service that allows users to run virtual servers in the cloud. EC2 instances are secured using key pairs, which consist of a public key that is stored on the ...]]></description><link>https://blog.anupkafle.com.np/recovering-lost-ec2-key-pair-a-step-by-step-guide-to-creating-a-new-key-pair</link><guid isPermaLink="true">https://blog.anupkafle.com.np/recovering-lost-ec2-key-pair-a-step-by-step-guide-to-creating-a-new-key-pair</guid><category><![CDATA[awscommunitybuilder]]></category><category><![CDATA[AWS]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Sun, 25 Feb 2024 15:57:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1708876903383/30ee4139-3599-4dab-aaa1-29aafe5ea662.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction:</h1>
<p>Amazon Elastic Compute Cloud (EC2) is a powerful and flexible cloud computing service that allows users to run virtual servers in the cloud. EC2 instances are secured using key pairs, which consist of a public key that is stored on the instance and a private key that the user securely keeps. Losing access to the private key can be challenging, but fear not – Amazon Web Services (AWS) provides a straightforward process to recover from this predicament. In this blog post, we will guide you through recovering a lost EC2 key pair by creating a new one.</p>
<p><strong>Step1: Launch a New Temporary Instance:</strong></p>
<ul>
<li><p>In your AWS Management Console, create a new temporary instance.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708874962111/bc9666ac-71f0-4d71-9653-8fa75cb17f57.png" alt class="image--center mx-auto" /></p>
<p>  <strong>Step2: Create a New Key Pair:</strong></p>
<ul>
<li><p>Generate a new key pair and give it a name during the creation process.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875011658/30e7bd3b-c81e-47ae-a9e1-d004417f66a8.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875022868/48057771-c629-4aa2-b644-4b40b37eca87.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
</ul>
<p>    <strong>Step3: Stop the Old Instance:</strong></p>
<ul>
<li><p>Navigate to your old instance (<a target="_blank" href="http://blog.anupkafle.com.np">blog.anupkafle.com.np</a>) and stop it.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875113275/dab3f0be-8524-4058-aebd-613e78a44bc4.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>    <strong>Step4: Go to your Volume:</strong></p>
<ul>
<li><p>After the instance is stopped, go to storage, and click on the volume ID.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875418054/dc87458f-b887-4cc0-ba5e-2eeb1f1344f3.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>    <strong>Step5: Rename the Volume (optional):</strong></p>
<ul>
<li><p>Rename your volume so that you can recognize it easily.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875456796/d3720f6a-9417-429a-8988-bbffc6f4b6d8.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>    <strong>Step6: Detach Old Volume:</strong></p>
<ul>
<li><p>In the Volume, go to Actions, and detach it.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875519396/408ca674-4454-41c1-a603-7aae558b6d62.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875545998/d5d6731f-9ed0-400d-9800-ed7cf59d6724.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>    <strong>Step7: Attach Volume to Temporary Instance:</strong></p>
<ul>
<li><p>Attach the detached volume to your new temporary instance.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875555329/b11f3bb3-92ea-4ed6-98c1-ebf8452fe51f.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875669812/305d506d-ce97-4f33-afda-152640519d79.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>    <strong>Step8: Connect to Temporary Instance:</strong></p>
<ul>
<li><p>Utilize AWS Instance Connect or SSH from your terminal to connect to the temporary instance.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875583390/977cb1bd-d3f2-4d59-bd2e-9ea48685abec.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875593709/47f1f448-a616-4d39-86c9-26e3761740be.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>    <strong>Step9: Prepare for Disk Operations:</strong></p>
<ul>
<li><p>Create a directory:</p>
<p>  <code>mkdir -p /var/anupblog-disk</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875631234/47069ef4-ba21-47fc-886a-ef2a5d58aa9a.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>    <strong>Step10: Mount Old Disk:</strong></p>
<ul>
<li><p>Mount the old disk to the temporary instance.</p>
<pre><code class="lang-bash">  mount -o nouuid /dev/xvdf1 /var/anupblog-disk
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875768710/27aaf3a0-d1a2-4880-b4a9-de4e3476d99d.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Use <code>lsblk</code> to confirm the volume attachment.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875778046/83b72e63-4f8c-4b90-b531-b3be8f315807.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>    <strong>Step11: Copy New Public Key to Mounted Disk:</strong></p>
<ul>
<li><p>Copy the new public key:</p>
<pre><code class="lang-bash">  cat /home/ec2-user/.ssh/authorized_keys &gt;&gt; /var/anupblog-disk/home/ec2-user/.ssh/authorized_keys
</code></pre>
</li>
</ul>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875813638/52a2661e-b3dd-4159-8c74-63e54cdacc15.png" alt class="image--center mx-auto" /></p>
<p>    <strong>Step12: Unmount the Disk:</strong></p>
<ul>
<li><p>Safely unmount the disk:</p>
<pre><code class="lang-bash">  umount /var/anupblog-disk
</code></pre>
</li>
</ul>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875841601/3b001027-36c1-49b0-bbf8-d99d3f1f011e.png" alt class="image--center mx-auto" /></p>
<p>    <strong>Step13: Detach Volume from Temporary Instance:</strong></p>
<ul>
<li><p>In the AWS Console, detach the volume from the temporary instance.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875954051/70eedba7-97b1-4e95-b669-e99cb431d9fe.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>    <strong>Step14: Attach Volume to Old Instance:</strong></p>
<ul>
<li><p>Attach the volume to the old instance, ensuring the device name remains the same.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875968698/057732e0-3e8e-4e48-88ca-033d64164154.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708875975012/dc9cb76c-29fa-4916-9d50-435d39efc2bd.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>    <strong>Step15: Start the Old Instance:</strong></p>
<ul>
<li><p>Start your old instance.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708876015556/d86970e4-2f58-4934-bf8c-84abf3a53a5d.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>    <strong>Step16: SSH into the Instance:</strong></p>
<ul>
<li>Using the key created for the temporary instance, SSH into your old instance.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708876028033/90e8a76f-6e00-4fa5-acbb-07ef305e3bda.png" alt class="image--center mx-auto" /></p>
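<p>For reference, steps 9 through 12 on the temporary instance amount to a short shell session (device name and mount point as used above; run as root or prefix with <code>sudo</code>):</p>
<pre><code class="lang-bash"># Step 9: create a mount point for the old disk.
mkdir -p /var/anupblog-disk
# Step 10: mount the old root partition (skip the UUID check for XFS clones).
mount -o nouuid /dev/xvdf1 /var/anupblog-disk
lsblk  # confirm the volume is attached and mounted
# Step 11: append the new public key to the old disk's authorized_keys.
cat /home/ec2-user/.ssh/authorized_keys &gt;&gt; /var/anupblog-disk/home/ec2-user/.ssh/authorized_keys
# Step 12: unmount before detaching the volume.
umount /var/anupblog-disk
</code></pre>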
<h1 id="heading-conclusion">Conclusion:</h1>
<p>Losing access to an EC2 instance due to a lost key pair can be a nerve-wracking experience, but AWS provides a clear and effective process for recovery. By following the step-by-step guide outlined in this blog post, you can create a new key pair, associate it with your EC2 instance, and regain control of your virtual server. Remember to maintain secure practices for managing and storing your key pairs to prevent future issues.</p>
]]></content:encoded></item><item><title><![CDATA[Hostman Unleashed: Harnessing Simplicity and Expanding Horizons - Features and User-Suggested Enhancements]]></title><description><![CDATA[Introduction:
In the ever-evolving landscape of cloud service providers, finding a balance between user-friendliness and comprehensive support is crucial for meeting the diverse needs of users. This blog explores the experiences of users with Hostman...]]></description><link>https://blog.anupkafle.com.np/hostman-unleashed-harnessing-simplicity-and-expanding-horizons-features-and-user-suggested-enhancements</link><guid isPermaLink="true">https://blog.anupkafle.com.np/hostman-unleashed-harnessing-simplicity-and-expanding-horizons-features-and-user-suggested-enhancements</guid><category><![CDATA[Hostman]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[review]]></category><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Sun, 11 Feb 2024 17:06:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1707670814088/cc3ea18f-3f01-4bad-8c1c-de74c1fd90ac.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Introduction:</strong></p>
<p>In the ever-evolving landscape of cloud service providers, finding a balance between user-friendliness and comprehensive support is crucial for meeting the diverse needs of users. This blog explores the experiences of users with <a target="_blank" href="https://hostman.com/">Hostman</a>, a cloud service provider lauded for its friendly interface, while also acknowledging feedback regarding documentation gaps and a somewhat limited service offering.</p>
<p><strong>Features and Strengths:</strong></p>
<ol>
<li><p><strong>User-Friendly Interface:</strong></p>
<ul>
<li><p>Hostman boasts a user-friendly interface, making it an ideal choice for beginners and startups entering the realm of cloud computing. The platform's simplicity facilitates easy onboarding and management of cloud resources, providing a seamless experience for users with varying levels of technical expertise.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707670902953/4c9adb9e-c03d-4447-8548-46f4d2f019f1.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Cloud Servers with Global Presence:</strong></p>
<ul>
<li>The platform offers dedicated computing resources through Cloud Servers, strategically located in Poland and the Netherlands. With plans to expand to additional regions, Hostman ensures reliable performance and accessibility for users across the globe.</li>
</ul>
</li>
<li><p><strong>Ready-Made Setups for Various Purposes:</strong></p>
<ul>
<li><p>Hostman provides over 25 ready-made setups with pre-installed environments and software, simplifying the setup process for different purposes. This feature allows users to deploy configurations tailored to their specific needs without extensive technical knowledge.</p>
</li>
</ul>
</li>
<li><p><strong>Instant Setup for Popular Databases:</strong></p>
<ul>
<li><p>The platform supports instant setup for popular database management systems, including MySQL, PostgreSQL, MongoDB, and Redis. This seamless integration enables efficient database management for applications and services.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707671042273/2e471e0f-7207-4d4c-9e56-713e29abe8c7.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Support for Various Development Frameworks:</strong></p>
<ul>
<li><p>Hostman supports various web development frameworks, such as React, Angular, Vue, Next.js, Ember, etc. Users can connect their repositories from platforms like Github, Gitlab, or Bitbucket, providing flexibility in testing and deploying projects.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707670948333/c6322d91-b776-42af-baea-f02c4d726947.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>"Refer a Friend" Program:</strong></p>
<p> Hostman sweetens the deal with its "Refer a Friend" program, where simplicity meets generosity. Share your unique referral link, and when your friend signs up and adds $10 or more to their account, they receive an extra $50, and you earn $100 – a win-win! There's no cap on the number of friends you can invite, so the more, the merrier. Just remember to fuel your account with a genuine $10 for those bonuses, use them across services, and note that they don't stack with other ongoing promos. Ready to amplify your Hostman experience? Start referring and unlocking rewards effortlessly!</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707726818887/a7e7c289-9d03-4ee0-93f2-484121f26fac.png" alt class="image--center mx-auto" /></p>
<p> Feel free to use my referral link: <a target="_blank" href="https://hostman.com/r/vk99958">https://hostman.com/r/vk99958</a> – a win-win opportunity awaits us both!</p>
<p> <strong>Feedback and Areas for Improvement:</strong></p>
<ol>
<li><p><strong>Limited Documentation:</strong></p>
<ul>
<li>Users express concern about the lack of extensive documentation, potentially hindering their ability to troubleshoot issues independently. Clear and comprehensive documentation could empower users to address common problems without solely relying on customer support.</li>
</ul>
</li>
<li><p><strong>Dependency on Support for Assistance:</strong></p>
<ul>
<li>Users note a need to contact support for assistance, indicating a potential reliance on customer support for issue resolution. Establishing a more self-service-oriented approach through enhanced documentation could reduce dependence on support for routine queries.</li>
</ul>
</li>
<li><p><strong>Limited Service Offering:</strong></p>
<ul>
<li>Hostman offers a limited range of services. Expanding the service offerings could attract a broader user base with diverse requirements, positioning Hostman as a more comprehensive solution for various business needs.</li>
</ul>
</li>
</ol>
</li>
</ol>
<p>        <strong>Conclusion:</strong></p>
<p>        Hostman presents a compelling case as a user-friendly cloud service provider with strengths in simplicity, global accessibility, and ready-made setups. Addressing feedback regarding documentation and expanding service offerings can further enhance the platform's appeal, providing users with a more self-sufficient and versatile experience. Hostman's journey illustrates the delicate balance required to cater to both beginners seeking simplicity and experienced users requiring comprehensive features in the dynamic landscape of cloud services.</p>
<p>    <strong>References:</strong></p>
<ol>
<li><p>Hostman Official Website. (<a target="_blank" href="https://www.hostman.com/">https://www.hostman.com/</a>)</p>
</li>
<li><p>User Feedback from Hostman Community Forums and Social Media Platforms.</p>
</li>
<li><p>Industry Analyst Reports on Cloud Service Providers.</p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Amazon Q: Unleashing the Power of AI to Transform Workflows and Empower Organizations]]></title><description><![CDATA[Image : amazon
Introduction
A Comprehensive Look at Enabling New Workflows, Enhancing Observability, and Boosting Productivity In today's data-driven world, organizations constantly seek new ways to optimize their operations, enhance productivity, an...]]></description><link>https://blog.anupkafle.com.np/amazon-q-unleashing-the-power-of-ai-to-transform-workflows-and-empower-organizations</link><guid isPermaLink="true">https://blog.anupkafle.com.np/amazon-q-unleashing-the-power-of-ai-to-transform-workflows-and-empower-organizations</guid><category><![CDATA[amazonq]]></category><category><![CDATA[AWS]]></category><category><![CDATA[awcommunitybuilder]]></category><category><![CDATA[reInvent]]></category><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Sat, 02 Dec 2023 14:06:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1701525729047/00d99f13-46bd-4efc-9b64-ee27cd5290d8.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Image : <a target="_blank" href="https://aws.amazon.com/q/">amazon</a></p>
<h1 id="heading-introduction">Introduction</h1>
<p><em>A Comprehensive Look at Enabling New Workflows, Enhancing Observability, and Boosting Productivity</em></p>
<p>In today's data-driven world, organizations constantly seek new ways to optimize their operations, enhance productivity, and extract deeper insights from their data. Amazon Q, a generative AI-powered assistant, emerges as a game-changer, revolutionizing how we interact with and optimize our work processes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701525591426/84b5dcf1-5d8e-4f3a-9243-a9f0bcbc0b91.png" alt class="image--center mx-auto" /></p>
<p>Let's delve into scenarios where Amazon Q can significantly impact your organization.</p>
<h1 id="heading-1-unleashing-new-workflow-horizons"><strong>1. Unleashing New Workflow Horizons</strong></h1>
<p>Amazon Q empowers users to establish entirely new workflows, transcending the limitations of traditional approaches. Its natural language processing (NLP) capabilities enable users to interact with it in plain English, asking questions and receiving immediate, actionable responses. This intuitive interface paves the way for creating novel workflows that were previously unimaginable.</p>
<p><strong>A practical example:</strong> Imagine our marketing team utilizing Amazon Q to generate personalized email campaigns tailored to specific customer segments. By analyzing customer data and preferences, Amazon Q can craft email content that resonates with each individual, enhancing the campaign's effectiveness.</p>
<h1 id="heading-2unveiling-workflow-blind-spots"><strong>2. Unveiling Workflow Blind Spots</strong></h1>
<p>Amazon Q introduces a paradigm shift in workflow observability, giving users unprecedented visibility into their work processes. By seamlessly integrating with various AWS services, Amazon Q can monitor workflow execution, identify potential bottlenecks, and proactively alert users to anomalies. This real-time visibility enables users to optimize workflows, minimize disruptions, and maximize efficiency.</p>
<p><strong>Envision this scenario:</strong> Consider a financial services organization utilizing Amazon Q to monitor its loan processing workflow. Amazon Q can identify patterns in loan applications, detect potential fraud attempts, and alert loan officers to potential issues. This proactive approach ensures timely interventions, prevents errors, and enhances customer satisfaction.</p>
<h1 id="heading-3unleashing-productivity-potential"><strong>3. Unleashing Productivity Potential</strong></h1>
<p>Amazon Q catalyzes productivity enhancement, transforming current workflows and enabling users to achieve more in less time. By automating repetitive tasks, providing real-time insights, and facilitating seamless collaboration, Amazon Q streamlines work processes, eliminates redundancies, and empowers users to focus on higher-value activities.</p>
<p>Imagine a customer service team employing Amazon Q to handle routine inquiries and resolve common issues. Amazon Q can answer customer questions, provide step-by-step troubleshooting guides, and even escalate complex issues to the appropriate personnel. This automation frees customer service representatives to focus on more complex cases, improving customer satisfaction and overall team productivity.</p>
<h1 id="heading-conclusion-embracing-the-future-of-work"><strong>Conclusion: Embracing the Future of Work</strong></h1>
<p>Amazon Q stands at the forefront of AI-powered workflow automation, offering a transformative solution for organizations seeking to enhance efficiency, gain deeper observability, and boost productivity. By enabling new workflows, providing real-time insights, and streamlining repetitive tasks, Amazon Q empowers organizations to achieve unprecedented operational excellence. As AI continues to evolve, Amazon Q is poised to revolutionize how we work, paving the way for a future of unparalleled productivity and innovation. Amazon Q represents a significant step in our journey towards a more efficient, data-driven, and human-centered work environment. By harnessing the power of AI, Amazon Q is empowering organizations to break free from traditional limitations and embrace a new era of work optimization and productivity.</p>
<h1 id="heading-references">References</h1>
<ol>
<li><p><a target="_blank" href="https://aws.amazon.com/q/">https://aws.amazon.com/q/</a></p>
</li>
<li><p><a target="_blank" href="https://aws.amazon.com/blogs/aws/introducing-amazon-q-a-new-generative-ai-powered-assistant-preview/">https://aws.amazon.com/blogs/aws/introducing-amazon-q-a-new-generative-ai-powered-assistant-preview/</a></p>
</li>
<li><p><a target="_blank" href="https://aws.amazon.com/blogs/aws/amazon-q-brings-generative-ai-powered-assistance-to-it-pros-and-developers-preview/">https://aws.amazon.com/blogs/aws/amazon-q-brings-generative-ai-powered-assistance-to-it-pros-and-developers-preview/</a></p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Demystifying Multipart Upload on Amazon S3]]></title><description><![CDATA[Introduction
Amazon S3 (Simple Storage Service) is a scalable, secure, and highly durable object storage service that Amazon Web Services (AWS) provides. It allows you to store and retrieve large amounts of data efficiently. One feature that makes Am...]]></description><link>https://blog.anupkafle.com.np/demystifying-multipart-upload-on-amazon-s3</link><guid isPermaLink="true">https://blog.anupkafle.com.np/demystifying-multipart-upload-on-amazon-s3</guid><category><![CDATA[AWS]]></category><category><![CDATA[S3]]></category><category><![CDATA[multipart upload]]></category><category><![CDATA[awcommunitybuilder]]></category><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Tue, 17 Oct 2023 14:32:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/MrVEedTZLwM/upload/c7df55a2fb9a667f93542afaa4b0eafe.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction"><strong>Introduction</strong></h3>
<p><strong>Amazon S3</strong> (Simple Storage Service) is a scalable, secure, and highly durable object storage service that Amazon Web Services (AWS) provides. It allows you to store and retrieve large amounts of data efficiently. One feature that makes Amazon S3 powerful is multipart upload, particularly useful when dealing with large files or when network interruptions or errors are common. This blog will explore multipart upload, its advantages, and how to implement it on Amazon S3.</p>
<h3 id="heading-what-is-multipart-upload">What is Multipart Upload?</h3>
<p>Multipart upload is a feature of Amazon S3 that enables the efficient and reliable uploading of large objects by breaking them into smaller parts. These smaller parts are uploaded in parallel and assembled to create the final object. This approach offers several benefits:</p>
<ol>
<li><p><strong>Resumable Uploads:</strong> If an error occurs during the upload process, you can retry uploading only the failed parts instead of the entire file. This is crucial for large files and unreliable network connections.</p>
</li>
<li><p><strong>Improved Throughput:</strong> Multipart uploads can significantly improve upload speeds as you can upload multiple parts simultaneously.</p>
</li>
<li><p><strong>Optimal Memory Usage:</strong> Uploading large files as a single object might consume a lot of memory. Multipart uploads allow you to work with smaller parts, reducing the memory footprint.</p>
</li>
<li><p><strong>Metadata Updates:</strong> You can set object metadata on individual parts, especially for custom metadata or permissions.</p>
</li>
</ol>
<h3 id="heading-when-to-use-multipart-upload">When to Use Multipart Upload?</h3>
<p>Multipart upload is particularly useful in the following scenarios:</p>
<ol>
<li><p><strong>Large Files:</strong> AWS recommends using multipart upload for files larger than 100 MB.</p>
</li>
<li><p><strong>Unreliable Network Connections:</strong> In situations where network interruptions are common, multipart upload minimizes the risk of data loss and allows you to retry uploading specific parts.</p>
</li>
<li><p><strong>Streaming Data:</strong> For applications that require streaming data directly to Amazon S3, multipart upload can be a more efficient choice.</p>
</li>
<li><p><strong>Custom Metadata:</strong> If you need to set specific metadata or access control settings for different parts of an object, multipart upload provides this flexibility.</p>
</li>
</ol>
<h3 id="heading-best-practices-for-multipart-upload"><strong>Best Practices for Multipart Upload</strong></h3>
<p>Here are some best practices to follow when implementing multipart upload on Amazon S3:</p>
<ol>
<li><p><strong>Part Size:</strong> Choose an optimal part size based on your use case. Smaller parts are better for unreliable connections, while larger parts can improve upload efficiency.</p>
</li>
<li><p><strong>Error Handling:</strong> Implement robust error handling, as failures can occur during any part of the multipart upload process.</p>
</li>
<li><p><strong>Optimal Concurrency:</strong> Consider the number of parts to upload in parallel depending on your available resources. AWS SDKs often provide concurrency options to fine-tune this.</p>
</li>
<li><p><strong>Monitoring and Logging:</strong> Use AWS CloudWatch and Amazon S3 access logs to monitor and log your multipart uploads for tracking and troubleshooting.</p>
</li>
<li><p><strong>Lifecycle Policies:</strong> Implement lifecycle policies to manage incomplete multipart uploads, ensuring that you don't leave unfinished uploads consuming storage.</p>
</li>
</ol>
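<p>For the last point, S3 lifecycle rules can abort stale multipart uploads automatically. A minimal policy sketch (the rule ID, the file name <code>lifecycle.json</code>, and the 7-day window are illustrative choices):</p>
<pre><code class="lang-json">{
  "Rules": [
    {
      "ID": "abort-stale-multipart-uploads",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
</code></pre>
<p>Such a policy can be applied with <code>aws s3api put-bucket-lifecycle-configuration --bucket your-unique-bucket-name --lifecycle-configuration file://lifecycle.json</code>.</p>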
<h3 id="heading-how-to-implement-multipart-upload-on-amazon-s3"><strong>How to Implement Multipart Upload on Amazon S3</strong></h3>
<p>Let's go through the steps to implement a multipart upload on Amazon S3:</p>
<ol>
<li><p><strong>Split the Video File:</strong> Split the video file into smaller parts using the <code>split</code> command.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697551433093/35d0695e-cee2-4592-9b2e-464ac3577b12.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697551454433/25a31290-84fe-46e1-bba0-342e84386693.png" alt class="image--center mx-auto" /></p>
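<p>The <code>split</code> invocation from the screenshots looks like the sketch below; it is demonstrated here on a small scratch file, while for a real video you would use larger chunks such as <code>-b 100M</code>:</p>
<pre><code class="lang-bash"># Create a 2 MiB scratch file standing in for the video.
dd if=/dev/zero of=videoplayback.mp4 bs=1024 count=2048
# Split it into 1 MiB chunks; default output names are xaa, xab, ...
split -b 1M videoplayback.mp4
ls x*
# Sanity check: the concatenated parts match the original byte-for-byte.
cat x* | cmp - videoplayback.mp4
</code></pre>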
<ol start="2">
<li><p><strong>Create an S3 Bucket:</strong> If you haven't already, create an S3 bucket where you want to upload the video parts. Replace <code>multipartupload-demo-oct</code> with your desired bucket name. Remember that S3 bucket names must be globally unique.</p>
<pre><code class="lang-bash"> aws s3api create-bucket --bucket multipartupload-demo-oct
</code></pre>
</li>
<li><p><strong>Initiate a Multipart Upload:</strong> Initiate the multipart upload for the video file. You will receive an upload-id, which you will need for subsequent steps.</p>
<pre><code class="lang-bash"> aws s3api create-multipart-upload --bucket your-unique-bucket-name --key videoplayback.mp4
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697551641294/389cdca5-b50f-45da-8c41-7dd0be97de56.png" alt class="image--center mx-auto" /></p>
<p> This command will return an upload-id like <code>nVQBtP3g0teF_D5JuOz16emWZAhS97y8wyuQx6Q0GRNSn7Ogaz6hkCyphgx4lhinf_2iAX5iWUwQeZo8JHLSBA--</code></p>
<ol start="4">
<li><p><strong>Upload the Parts:</strong> Now, you'll upload the two video parts, <code>xaa</code> and <code>xab</code>, as individual parts to Amazon S3. You must specify the part number and provide the UploadId obtained in the previous step.</p>
<p> (I) Upload the first part (<code>xaa</code>):</p>
<p> This will return an ETag for the uploaded part, something like <code>"9d15c7757230b2f88a47906bdc254c07"</code>.</p>
</li>
</ol>
</li>
</ol>
<pre><code class="lang-bash">    aws s3api upload-part --bucket your-unique-bucket-name --key videoplayback.mp4 --part-number 1 --body xaa --upload-id nVQBtP3g0teF_D5JuOz16emWZAhS97y8wyuQx6Q0GRNSn7Ogaz6hkCyphgx4lhinf_2iAX5iWUwQeZo8JHLSBA--
</code></pre>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697551847645/0af5fee4-eade-4d85-aced-4d00b2931c52.png" alt class="image--center mx-auto" /></p>
<p>(II) Upload the second part (<code>xab</code>):</p>
<p>This will also return an ETag for the second part, something like "b104d08a3208c1b16ce4dbccca9d8d34".</p>
<pre><code class="lang-bash">aws s3api upload-part --bucket your-unique-bucket-name --key videoplayback.mp4 --part-number 2 --body xab --upload-id nVQBtP3g0teF_D5JuOz16emWZAhS97y8wyuQx6Q0GRNSn7Ogaz6hkCyphgx4lhinf_2iAX5iWUwQeZo8JHLSBA--
</code></pre>
<ol start="5">
<li><p><strong>List ETags:</strong> You can use the <code>aws s3api list-parts</code> command to list the ETags of the uploaded parts:</p>
<pre><code class="lang-bash"> aws s3api list-parts --bucket your-unique-bucket-name --key videoplayback.mp4 --upload-id nVQBtP3g0teF_D5JuOz16emWZAhS97y8wyuQx6Q0GRNSn7Ogaz6hkCyphgx4lhinf_2iAX5iWUwQeZo8JHLSBA--
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697552276189/b98bc84a-cb38-4b91-9095-cf3af22393a0.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Create a JSON File:</strong> Save the part numbers and ETags in a JSON file (e.g., <code>multipart.json</code>).</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697552332511/b4a12316-fabd-42f2-a0ef-505299a92f09.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697552373698/9162a12e-5154-4cc2-84fb-bfa4f0353ac2.png" alt class="image--center mx-auto" /></p>
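<p>Assuming the file is saved as <code>multipart.json</code> (the name referenced in the final step), its contents follow this shape, using the example ETags returned earlier:</p>
<pre><code class="lang-json">{
  "Parts": [
    { "ETag": "9d15c7757230b2f88a47906bdc254c07", "PartNumber": 1 },
    { "ETag": "b104d08a3208c1b16ce4dbccca9d8d34", "PartNumber": 2 }
  ]
}
</code></pre>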
<ol start="7">
<li><p><strong>Complete the Multipart Upload:</strong> After uploading all parts, you must request Amazon S3 to complete the multipart upload. Use the upload-id and a list of ETags (ETags from the two parts) to indicate which parts belong to this upload.</p>
<pre><code class="lang-bash"> aws s3api complete-multipart-upload --multipart-upload file://multipart.json --bucket multipartupload-demo-oct --key videoplayback.mp4 --upload-id nVQBtP3g0teF_D5JuOz16emWZAhS97y8wyuQx6Q0GRNSn7Ogaz6hkCyphgx4lhinf_2iAX5iWUwQeZo8JHLSBA--
</code></pre>
<p> This command will complete the upload, and your video file, videoplayback.mp4, will be available in your S3 bucket.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697552551836/7b270051-953a-4109-a371-09ee0062b0f7.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<p>That's it! You've successfully performed a multipart upload for your video file using the AWS CLI. This approach ensures the efficient and reliable transfer of extensive data to Amazon S3, even when network interruptions occur.</p>
<h3 id="heading-conclusion">Conclusion:</h3>
<p>Multipart upload on Amazon S3 is valuable for anyone working with large files, ensuring a reliable, efficient, and flexible data transfer process. By understanding the concept and following best practices, you can harness the full power of multipart upload, ensuring the secure storage of your data in the cloud, even under challenging circumstances.</p>
]]></content:encoded></item><item><title><![CDATA[Deploy a Static Website in AWS S3 Using Terraform]]></title><description><![CDATA[Individuals and businesses must have a web presence in today's digital world. Whether you're a developer trying to exhibit your portfolio or a firm launching a new product, rapidly and efficiently creating a website is critical. Amazon Web Services (...]]></description><link>https://blog.anupkafle.com.np/deploy-a-static-website-in-aws-s3-using-terraform</link><guid isPermaLink="true">https://blog.anupkafle.com.np/deploy-a-static-website-in-aws-s3-using-terraform</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[AWS Community Builder]]></category><category><![CDATA[Devops]]></category><category><![CDATA[hashicorp]]></category><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Sat, 23 Sep 2023 08:24:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695457319540/5977abd0-70b5-4e7a-89ae-1d158fee67f0.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Individuals and businesses must have a web presence in today's digital world. Whether you're a developer trying to exhibit your portfolio or a firm launching a new product, rapidly and efficiently creating a website is critical. Amazon Web Services (AWS) delivers a dependable and cost-effective hosting platform. Terraform is a sophisticated infrastructure-as-code tool for defining and provisioning AWS resources. This blog will walk you through deploying a static website in AWS S3 using Terraform.</p>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<p>Before we dive into the deployment process, make sure you have the following prerequisites in place:</p>
<ol>
<li><p><strong>AWS Account</strong>: You need an AWS account to access AWS services.</p>
</li>
<li><p><strong>Terraform Installed</strong>: Download and install Terraform from the <a target="_blank" href="https://www.terraform.io/downloads.html">official website</a>.</p>
</li>
<li><p><strong>AWS CLI</strong>: Install the AWS Command Line Interface (CLI) and configure it with your AWS credentials. You can install it from <a target="_blank" href="https://aws.amazon.com/cli/">here</a>.</p>
</li>
<li><p><strong>Static Website Files</strong>: Prepare the static website files (HTML, CSS, JavaScript, images, etc.) you want to deploy. Place them in a directory.</p>
</li>
</ol>
<h2 id="heading-steps-to-deploy-a-static-website-in-aws-s3-using-terraform"><strong>Steps to Deploy a Static Website in AWS S3 Using Terraform</strong></h2>
<p>Let's start by setting up your Terraform configuration. Create a file named <code>main.tf</code> and add the following code:</p>
<h3 id="heading-step-1-aws-provider-configuration"><strong>Step 1: AWS Provider Configuration</strong></h3>
<pre><code class="lang-hcl">provider <span class="hljs-string">"aws"</span> {
  region     = <span class="hljs-attr">"us-east-1"</span> # Change this to your desired region
  access_key = <span class="hljs-attr">"your key"</span> # Add your AWS access key ID here
  secret_key = <span class="hljs-attr">"your secret key"</span> # Add your AWS secret access key here
}
</code></pre>
<p>This block configures the AWS provider, specifying the AWS region and your access key and secret key. It tells Terraform which cloud provider to interact with and how to authenticate. Note that hardcoding credentials is acceptable for a quick demo, but should be avoided in shared or production code.</p>
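<p>If you've already run <code>aws configure</code>, a safer sketch is to point the provider at a named CLI profile instead of embedding keys (the profile name here is an assumption; use whichever profile holds your credentials):</p>
<pre><code class="lang-hcl">provider "aws" {
  region  = "us-east-1"
  profile = "default" # reads credentials from ~/.aws/credentials
}
</code></pre>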
<h3 id="heading-step-2-s3-bucket-resource">Step 2: S3 Bucket Resource</h3>
<pre><code class="lang-hcl">resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"my-static-website"</span> {
  bucket = <span class="hljs-attr">"mydemostaticwebsite-2324413"</span> # give a unique bucket name
  tags = {
    Name = <span class="hljs-attr">"my-demo-static-website"</span>
  }
}
</code></pre>
<p>Here, you define an S3 bucket resource using the <code>aws_s3_bucket</code> resource type. It creates an S3 bucket with the specified name and sets tags to provide metadata for the bucket.</p>
<h3 id="heading-step-3-s3-bucket-website-configuration"><strong>Step 3: S3 Bucket Website Configuration</strong></h3>
<pre><code class="lang-hcl">resource <span class="hljs-string">"aws_s3_bucket_website_configuration"</span> <span class="hljs-string">"my-static-website"</span> {
  bucket = aws_s3_bucket.my-static-website.id

  index_document {
    suffix = <span class="hljs-attr">"index.html"</span>
  }

  error_document {
    key = <span class="hljs-attr">"error.html"</span>
  }
}
</code></pre>
<p>This block configures the S3 bucket to act as a static website. It specifies the index document (the default file to load when accessing the website's root) and the error document (the page to display for 404 errors).</p>
<h3 id="heading-step-4-s3-bucket-ownership-controls"><strong>Step 4: S3 Bucket Ownership Controls</strong></h3>
<pre><code class="lang-hcl">resource <span class="hljs-string">"aws_s3_bucket_ownership_controls"</span> <span class="hljs-string">"my-static-website"</span> {
  bucket = aws_s3_bucket.my-static-website.id
  rule {
    object_ownership = <span class="hljs-attr">"BucketOwnerPreferred"</span>
  }
}
</code></pre>
<p>This block sets up ownership controls for the S3 bucket, ensuring that the bucket owner is preferred for objects within the bucket.</p>
<h3 id="heading-step-5-s3-bucket-public-access-block"><strong>Step 5: S3 Bucket Public Access Block</strong></h3>
<pre><code class="lang-hcl">resource <span class="hljs-string">"aws_s3_bucket_public_access_block"</span> <span class="hljs-string">"my-static-website"</span> {
  bucket = aws_s3_bucket.my-static-website.id

  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}
</code></pre>
<p>Here, you configure the public access block settings for the S3 bucket. In this example, all four public access blocks are disabled so the bucket can be made public. Adjust these settings based on your security requirements.</p>
<h3 id="heading-step-6-s3-bucket-acl"><strong>Step 6: S3 Bucket ACL</strong></h3>
<pre><code class="lang-hcl">resource <span class="hljs-string">"aws_s3_bucket_acl"</span> <span class="hljs-string">"my-static-website"</span> {
  depends_on = [
    aws_s3_bucket_ownership_controls.my-static-website,
    aws_s3_bucket_public_access_block.my-static-website,
  ]

  bucket = aws_s3_bucket.my-static-website.id
  acl    = <span class="hljs-attr">"public-read"</span>
}
</code></pre>
<p>This block sets the bucket access control list (ACL) for the S3 bucket, allowing public read access. It depends on the previous two blocks to ensure proper ownership controls and public access settings.</p>
<h3 id="heading-step-7-create-an-indexhtml-and-upload-the-indexhtml-file-to-s3-bucket"><strong>Step 7: Create an index.html and upload the index.html File to S3 Bucket</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695455700271/d03f8a0b-1e16-4847-bcd6-f57882d8bd17.png" alt class="image--center mx-auto" /></p>
<p>Keep <code>index.html</code> in the same directory where <code>main.tf</code> is located.</p>
<pre><code class="lang-hcl">resource <span class="hljs-string">"aws_s3_object"</span> <span class="hljs-string">"index_html"</span> {
  bucket       = aws_s3_bucket.my-static-website.id
  key          = <span class="hljs-attr">"index.html"</span>  # The name you want for the file in the S3 bucket
  source       = <span class="hljs-attr">"index.html"</span>  # The path to your local index.html file
  content_type = <span class="hljs-attr">"text/html"</span>

  # Make the object publicly accessible
  acl = <span class="hljs-attr">"public-read"</span>
}
</code></pre>
<p>This block uploads the <code>index.html</code> file from your local directory to the S3 bucket. It also sets the content type and makes the object publicly accessible.</p>
<h3 id="heading-step-8-s3-static-website-url-output"><strong>Step 8: S3 Static Website URL Output</strong></h3>
<pre><code class="lang-hcl">output <span class="hljs-string">"website_url"</span> {
  value = <span class="hljs-attr">"http://${aws_s3_bucket.my-static-website.bucket}.s3-website.us-east-1.amazonaws.com"</span>
}
</code></pre>
<p>This block defines an output variable that displays the URL of your static website after Terraform applies the configuration. The URL is constructed using the bucket name and the S3 website endpoint.</p>
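<p>Hardcoding the region into the URL is a little brittle. As an alternative sketch, the website configuration resource exposes a <code>website_endpoint</code> attribute, so the same output can be built directly from the resources defined above:</p>
<pre><code class="lang-hcl">output "website_url" {
  value = "http://${aws_s3_bucket_website_configuration.my-static-website.website_endpoint}"
}
</code></pre>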
<h3 id="heading-step-9-s3-bucket-policy"><strong>Step 9: S3 Bucket Policy</strong></h3>
<pre><code class="lang-hcl">resource <span class="hljs-string">"aws_s3_bucket_policy"</span> <span class="hljs-string">"bucket-policy"</span> {
  bucket = aws_s3_bucket.my-static-website.id

  policy = &lt;&lt;POLICY
{
  <span class="hljs-attr">"Id"</span>: <span class="hljs-string">"Policy"</span>,
  <span class="hljs-attr">"Statement"</span>: [
    {
      <span class="hljs-attr">"Action"</span>: [
        <span class="hljs-string">"s3:DeleteObject"</span>,
        <span class="hljs-string">"s3:GetObject"</span>,
        <span class="hljs-string">"s3:ListBucket"</span>,
        <span class="hljs-string">"s3:PutObject"</span>,
        <span class="hljs-string">"s3:PutObjectAcl"</span>
      ],
      <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
      <span class="hljs-attr">"Resource"</span>: [
        <span class="hljs-string">"arn:aws:s3:::mydemostaticwebsite-2324413/*"</span>,
        <span class="hljs-string">"arn:aws:s3:::mydemostaticwebsite-2324413"</span>
        ],
      <span class="hljs-attr">"Principal"</span>: <span class="hljs-string">"*"</span>
    }
  ]
}
POLICY
}
</code></pre>
<p>This block defines an S3 bucket policy that allows various actions on objects within the bucket and sets the principal to <code>"*"</code> (anyone). This policy ensures public access to the bucket's contents.</p>
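<p>Since visitors to a static website only need to read objects, a tighter policy is worth considering. A least-privilege sketch that grants the public <code>s3:GetObject</code> alone (same bucket name as above) would look like this:</p>
<pre><code class="lang-json">{
  "Version": "2012-10-17",
  "Id": "PublicReadPolicy",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mydemostaticwebsite-2324413/*"
    }
  ]
}
</code></pre>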
<h3 id="heading-step-10-overall-code"><strong>Step 10: Overall Code</strong></h3>
<p>Putting it all together, the complete code looks like this:</p>
<pre><code class="lang-hcl"># main.tf

# Configure the AWS provider with your credentials and desired region
provider <span class="hljs-string">"aws"</span> {
  region     = <span class="hljs-attr">"us-east-1"</span> # Change this to your desired region
  access_key = <span class="hljs-attr">"your key"</span>  # Add your AWS access key ID
  secret_key = <span class="hljs-attr">"your secret key"</span>  # Add your AWS secret access key here
}

# Create an S3 bucket and website configuration
resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"my-static-website"</span> {
  bucket = <span class="hljs-attr">"mydemostaticwebsite-2324413"</span> # give a unique bucket name
  tags = {
    Name = <span class="hljs-attr">"my-demo-static-website"</span>
  }
}

resource <span class="hljs-string">"aws_s3_bucket_website_configuration"</span> <span class="hljs-string">"my-static-website"</span> {
  bucket = aws_s3_bucket.my-static-website.id

  index_document {
    suffix = <span class="hljs-attr">"index.html"</span>
  }

  error_document {
    key = <span class="hljs-attr">"error.html"</span>
  }
}

resource <span class="hljs-string">"aws_s3_bucket_ownership_controls"</span> <span class="hljs-string">"my-static-website"</span> {
  bucket = aws_s3_bucket.my-static-website.id
  rule {
    object_ownership = <span class="hljs-attr">"BucketOwnerPreferred"</span>
  }
}

resource <span class="hljs-string">"aws_s3_bucket_public_access_block"</span> <span class="hljs-string">"my-static-website"</span> {
  bucket = aws_s3_bucket.my-static-website.id

  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}

resource <span class="hljs-string">"aws_s3_bucket_acl"</span> <span class="hljs-string">"my-static-website"</span> {
  depends_on = [
    aws_s3_bucket_ownership_controls.my-static-website,
    aws_s3_bucket_public_access_block.my-static-website,
  ]

  bucket = aws_s3_bucket.my-static-website.id
  acl    = <span class="hljs-attr">"public-read"</span>
}

# Upload index.html from the current directory to the S3 bucket and make it public
resource <span class="hljs-string">"aws_s3_object"</span> <span class="hljs-string">"index_html"</span> {
  bucket       = aws_s3_bucket.my-static-website.id
  key          = <span class="hljs-attr">"index.html"</span>  # The name you want for the file in the S3 bucket
  source       = <span class="hljs-attr">"index.html"</span>  # The path to your local index.html file
  content_type = <span class="hljs-attr">"text/html"</span>

  # Make the object publicly accessible
  acl = <span class="hljs-attr">"public-read"</span>
}

# S3 static website URL
output <span class="hljs-string">"website_url"</span> {
  value = <span class="hljs-attr">"http://${aws_s3_bucket.my-static-website.bucket}.s3-website.us-east-1.amazonaws.com"</span>
}

# S3 bucket policy
resource <span class="hljs-string">"aws_s3_bucket_policy"</span> <span class="hljs-string">"bucket-policy"</span> {
  bucket = aws_s3_bucket.my-static-website.id

  policy = &lt;&lt;POLICY
{
  <span class="hljs-attr">"Id"</span>: <span class="hljs-string">"Policy"</span>,
  <span class="hljs-attr">"Statement"</span>: [
    {
      <span class="hljs-attr">"Action"</span>: [
        <span class="hljs-string">"s3:DeleteObject"</span>,
        <span class="hljs-string">"s3:GetObject"</span>,
        <span class="hljs-string">"s3:ListBucket"</span>,
        <span class="hljs-string">"s3:PutObject"</span>,
        <span class="hljs-string">"s3:PutObjectAcl"</span>
      ],
      <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
      <span class="hljs-attr">"Resource"</span>: [
        <span class="hljs-string">"arn:aws:s3:::mydemostaticwebsite-2324413/*"</span>,
        <span class="hljs-string">"arn:aws:s3:::mydemostaticwebsite-2324413"</span>
        ],
      <span class="hljs-attr">"Principal"</span>: <span class="hljs-string">"*"</span>
    }
  ]
}
POLICY
}
</code></pre>
<h3 id="heading-step-11-deploying-your-static-website"><strong>Step 11: Deploying Your Static Website</strong></h3>
<ol>
<li><p>Open your terminal and navigate to the directory containing your <code>main.tf</code> file.</p>
</li>
<li><p>Run the following Terraform commands:</p>
<pre><code class="lang-bash"> terraform init
</code></pre>
<p> <code>terraform init</code> primarily initializes the AWS provider, installs the required provider plugin, validates the configuration files, sets up a local backend for state management, and prepares the project directory for further Terraform commands like <code>terraform plan</code> and <code>terraform apply</code>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695456143821/bca5105b-c599-4c59-aca0-aa9a2bf0fe35.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash"> terraform apply --auto-approve
</code></pre>
<p> In the provided Terraform code, running <code>terraform apply --auto-approve</code> applies the infrastructure changes defined in the configuration without prompting for manual confirmation. It automates the approval of changes, executes the planned actions, and prints the outputs, including the URL of the static website, as specified in the configuration. Use this flag cautiously and review your configuration carefully before applying changes, especially in production environments.</p>
<pre><code class="lang-text"> Apply complete! Resources: <span class="hljs-number">2</span> added, <span class="hljs-number">0</span> changed, <span class="hljs-number">0</span> destroyed.

 Outputs:

 website_url = <span class="hljs-string">"http://mydemostaticwebsite-2324413.s3-website.us-east-1.amazonaws.com"</span>
</code></pre>
<p> Once the deployment is complete, Terraform will display the URL of your static website. You can access your website using this URL.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695456559241/e180f766-1857-45b8-ba74-b96f641877ad.png" alt class="image--center mx-auto" /></p>
<p> On clicking the website_url, you can access your webpage.</p>
</li>
</ol>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>In this blog post, you've learned how to create a static website on Amazon S3 using Terraform. By automating the infrastructure setup with Terraform, you can quickly deploy and manage your static websites on AWS. This approach provides a cost-effective and scalable solution for hosting your web content. Start building and deploying your static websites on AWS today!</p>
]]></content:encoded></item><item><title><![CDATA[Streamlining User Management in Your Applications with Amazon Cognito]]></title><description><![CDATA[Amazon Cognito is a fully managed identity service that makes it easy to add user sign-up, sign-in, and access control to your web and mobile apps. It provides a secure and scalable way to authenticate users and supports social and enterprise identit...]]></description><link>https://blog.anupkafle.com.np/streamlining-user-management-in-your-applications-with-amazon-cognito</link><guid isPermaLink="true">https://blog.anupkafle.com.np/streamlining-user-management-in-your-applications-with-amazon-cognito</guid><category><![CDATA[AWS]]></category><category><![CDATA[Cognito]]></category><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Sun, 04 Jun 2023 16:50:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685897352035/7b20c6d2-8876-4ffc-94a7-9cfd39bb3e02.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Amazon Cognito is a fully managed identity service that makes it easy to add user sign-up, sign-in, and access control to your web and mobile apps. It provides a secure and scalable way to authenticate users and supports social and enterprise identity providers.</p>
<p><strong>Benefits of Amazon Cognito:</strong></p>
<ul>
<li><p><strong>It's easy to use</strong>: Amazon Cognito provides a simple API that makes it easy to add user authentication and access control to your apps.</p>
</li>
<li><p><strong>It's secure</strong>: Amazon Cognito uses industry-standard security practices to protect your users' data.</p>
</li>
<li><p><strong>It's scalable:</strong> Amazon Cognito can handle millions of users and is designed to scale with your app.</p>
</li>
<li><p><strong>Support for multiple identity providers:</strong> Users can sign in through social providers (such as Google, Facebook, and Amazon) as well as SAML and OIDC enterprise identity providers.</p>
</li>
<li><p><strong>User pools:</strong> Amazon Cognito user pools are a great way to manage user accounts for your app. User pools provide a central place to store user data, such as passwords, email addresses, and phone numbers.</p>
</li>
<li><p><strong>Federated identities:</strong> Amazon Cognito federated identities (identity pools) allow your app to obtain temporary AWS credentials for users, without the user having to sign in to your app.</p>
</li>
</ul>
<h1 id="heading-lets-dive-into-the-demonstration"><strong>Let’s dive into the Demonstration:</strong></h1>
<p>Step 1: Go to the AWS Management Console, search for Cognito, and click on it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685895612882/f309aca9-8e0e-4963-acfc-e459d3e0b07f.png" alt class="image--center mx-auto" /></p>
<p>Step 2: Click on Create user pool.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685895634166/146b8ecd-3e76-46a2-b610-fafac7234a24.png" alt class="image--center mx-auto" /></p>
<p>Step 3: Under the Authentication Providers:</p>
<ul>
<li><p>Choose <strong>Cognito User pool</strong></p>
</li>
<li><p>Select <strong>Email</strong> from Cognito user pool sign-in options.</p>
</li>
<li><p>Click on <strong>Next</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685895698390/d42b33ce-fc87-400e-8e4f-005df2c85d96.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>Step 4: From the Password policy, select the <strong>Cognito defaults.</strong> You can also create your custom password policy.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685895736129/52fc57fd-7e91-407d-9597-71d16afa154d.png" alt class="image--center mx-auto" /></p>
<p>Step 5: Make sure that you Enable self-service account recovery.</p>
<ul>
<li><p>From the Delivery method for user account recovery messages, <strong>choose Email</strong></p>
</li>
<li><p>Click on <strong>Next</strong>.</p>
</li>
</ul>
<p>Step 6: From the Multi-factor authentication</p>
<ul>
<li><p>choose <strong>No MFA</strong></p>
</li>
<li><p>select on <strong>Enable self-service account recovery</strong></p>
</li>
<li><p>Select <strong>email only</strong> from Delivery method for user account recovery messages.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685895886036/0152a6fb-2a38-4830-a927-f2d4a99839c7.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>Step 7: Click on <strong>Next</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685895925985/5fd8d607-fa77-496d-b107-6c70c3c07fde.png" alt class="image--center mx-auto" /></p>
<p>Step 8: Under Configure Sign-up experience, <strong>Enable self-registration</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685895964444/d35300da-a970-4baf-ad06-5df62683b26b.png" alt class="image--center mx-auto" /></p>
<p>Step 9: From the Configure message delivery</p>
<ul>
<li><p>Select <strong>send email with cognito</strong></p>
</li>
<li><p>Click on <strong>Next</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685896023645/c7a09661-0dc4-4ede-aa88-e831c19e126c.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>Step 10: Under Integrate your app</p>
<ul>
<li><p>Give a name to the user pool; for now, call it <strong>“demo-app-user-pool”</strong></p>
</li>
<li><p>Select the Use the <strong>Cognito hosted UI</strong></p>
</li>
<li><p>From Domain, select Use a Cognito domain and enter a domain prefix, let's say <strong>mywebsite</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685896138171/0de8857f-9a24-461b-b95d-d37c25a45586.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685896166302/9382bf16-e96f-4580-9a6a-664636081997.png" alt class="image--center mx-auto" /></p>
<p>Step 11: Under the Initial app client,</p>
<ul>
<li><p>choose <strong>Public client</strong> from App type</p>
</li>
<li><p>Give it an app client name, let's say “<strong>DemoApp</strong>”</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685896271880/78b283b3-5e1e-46e1-baa4-ea618a9ff2c9.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>Step 12: Under the app client settings, we must provide the Callback URL, so for now use:</p>
<p><code>http://localhost:8000/logged_in.html</code></p>
<p><strong>Note:</strong> We will create a file named <strong>logged_in.html</strong>, which is why the callback points at localhost. After entering our credentials, we will land on this logged_in.html page.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685896320391/8f48857c-0eb4-4967-80c4-6341da911bae.png" alt class="image--center mx-auto" /></p>
<p>Step 13: Click on Next</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685896339237/d363736f-bccb-4ffd-9d01-781927d8e36e.png" alt class="image--center mx-auto" /></p>
<p>Step 14: From the <strong>Review and Create</strong> go to the end and Click on <strong>Create user pool.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685896404386/c8c39f8b-cdf6-4207-adcf-aa83e5b87ea2.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685896424887/f4cbdda6-ee38-4f7c-bf49-fd84fbb05bca.png" alt class="image--center mx-auto" /></p>
<p>Step 15: You will see the user pool named “<strong>Demo-app-user-pool</strong>”. Click on it.</p>
<ul>
<li><p>Select the App integration</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685896534492/74e4a9a0-c7a9-4e69-805d-304a5e150775.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>Step 16: Scroll down to the end; you will see the App clients list, where you’ll find the app client named “<strong>DemoApp</strong>”</p>
<ul>
<li><p>Click on it.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685896573678/77a6bfa0-f369-41cb-b1cd-20ff2060bb55.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>Step 17: Under “<strong>DemoApp</strong>” you will see “<strong>Hosted UI</strong>”, and on the right side you’ll find “<strong>View Hosted UI</strong>”. <strong>Click on it.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685896652457/8255de76-783f-4b13-8190-107b12bb4366.png" alt class="image--center mx-auto" /></p>
<p>You will be redirected to the hosted sign-in page in your browser. Now <strong>copy the link</strong> of this login form page.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685896734295/8768016f-9b99-4b20-8d00-5b48513ca558.png" alt class="image--center mx-auto" /></p>
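<p>If you ever need to reconstruct that link by hand, the Cognito hosted UI sign-in URL generally follows this pattern (the domain prefix, region, client ID, and callback URL below are placeholders for your own values):</p>
<pre><code class="lang-text">https://&lt;your-domain-prefix&gt;.auth.&lt;region&gt;.amazoncognito.com/login?client_id=&lt;app-client-id&gt;&amp;response_type=code&amp;redirect_uri=&lt;callback-url&gt;
</code></pre>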
<p>Step 18: Open VS Code, make a folder, and inside it create two files named <strong>index.html</strong> and <strong>logged_in.html</strong>. Paste the following code in the index.html file:</p>
<pre><code class="lang-xml"><span class="hljs-tag">&lt;<span class="hljs-name">html</span>&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">body</span>&gt;</span>
   <span class="hljs-tag">&lt;<span class="hljs-name">h3</span>&gt;</span>Welcome to my Website<span class="hljs-tag">&lt;/<span class="hljs-name">h3</span>&gt;</span>
   <span class="hljs-tag">&lt;<span class="hljs-name">a</span> <span class="hljs-attr">href</span>=<span class="hljs-string">"#"</span>&gt;</span>Register|Login <span class="hljs-tag">&lt;/<span class="hljs-name">a</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">body</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">html</span>&gt;</span>
</code></pre>
<ul>
<li><p>In the <strong>href</strong> attribute, paste the link you copied from the login form page.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685896916119/762c71b9-d45c-4a90-8d0d-a3472c63cd50.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>In the logged_in.html Paste the following code</p>
<pre><code class="lang-xml">  <span class="hljs-tag">&lt;<span class="hljs-name">html</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">body</span>&gt;</span>
     <span class="hljs-tag">&lt;<span class="hljs-name">h1</span>&gt;</span>Congratulations!!<span class="hljs-tag">&lt;/<span class="hljs-name">h1</span>&gt;</span>  
     <span class="hljs-tag">&lt;<span class="hljs-name">p</span>&gt;</span>You are logged in...<span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>
  <span class="hljs-tag">&lt;/<span class="hljs-name">body</span>&gt;</span>
  <span class="hljs-tag">&lt;/<span class="hljs-name">html</span>&gt;</span>
</code></pre>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685897000520/0fb7aa61-e5b8-4c2a-9248-b21e37881fe2.png" alt class="image--center mx-auto" /></p>
<p>Step 19: Now open the terminal and type the following command.</p>
<ul>
<li><p><code>python3 -m http.server</code></p>
</li>
<li><p>Now, click on the <strong>http://</strong> URL shown in the terminal.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685897050464/2add32ee-e2ab-4498-bc28-52d1f91f9004.jpeg" alt class="image--center mx-auto" /></p>
</li>
<li><p>You will be redirected to this page.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685897089456/99491eaf-4427-4193-8191-51e397cb05ef.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click on the Register|Login and you will be redirected to the login and sign up page.</p>
</li>
<li><p>Since you are accessing it for the first time, click on <strong>Sign up</strong>. You will receive a <strong>confirmation code</strong> in your email; <strong>verify it</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685897156436/dfa7f3bf-9ef4-41d3-bfc6-61e3cf422520.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685897175240/42eae745-30e7-4ee3-b2b1-1dc35b6e1b30.png" alt class="image--center mx-auto" /></p>
<p>Step 20: After you verify, navigate back to the <strong>“Demo-app-user-pool”</strong>, and under “<strong>Users</strong>” you can see the user with status <strong>Confirmed</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685897207515/e1c114d0-c93c-47f0-aeab-b606f01440a3.png" alt class="image--center mx-auto" /></p>
<p>Step 21: When you enter your credentials, you will be redirected to the logged_in.html page.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685897227844/be79937a-5524-4006-a2c1-f70d07547734.jpeg" alt class="image--center mx-auto" /></p>
<p>Finally, you have successfully completed this demonstration. You can now connect your website with Amazon Cognito to add user sign-up, sign-in, and access control, allowing only authenticated users.</p>
]]></content:encoded></item><item><title><![CDATA["Scaling Microservices with AWS: Challenges Faced and Overcoming Them"]]></title><description><![CDATA[Microservices have become increasingly popular, allowing developers to break down complex applications into smaller, more manageable pieces. However, with the rise of microservices, scaling them has become a significant challenge. This is where Amazo...]]></description><link>https://blog.anupkafle.com.np/scaling-microservices-with-aws-challenges-faced-and-overcoming-them</link><guid isPermaLink="true">https://blog.anupkafle.com.np/scaling-microservices-with-aws-challenges-faced-and-overcoming-them</guid><category><![CDATA[aws lambda]]></category><category><![CDATA[AWS]]></category><category><![CDATA[AWS Community Builder]]></category><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Fri, 21 Apr 2023 04:04:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/SYTO3xs06fU/upload/d70012083b4c0f2fb6a620754b2cd7ce.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Microservices have become increasingly popular, allowing developers to break down complex applications into smaller, more manageable pieces. However, with the rise of microservices, scaling them has become a significant challenge. This is where Amazon Web Services (AWS) comes in. AWS provides a suite of tools that can help developers quickly scale microservices. This blog post will discuss some challenges i faced when scaling microservices and how AWS can help overcome them.</p>
<h3 id="heading-challenge-1-service-discovery">Challenge #1: Service Discovery</h3>
<p>In a microservices architecture, services need to communicate with each other, but as the number of services grows, it becomes challenging to manage and discover them. Service discovery is crucial for scaling microservices, and AWS solves this problem with Amazon Elastic Container Service (ECS). ECS is a container orchestration service that allows developers to manage, deploy, and scale Docker containers. ECS includes Amazon ECS Service Discovery, which allows services to discover and communicate with each other easily. ECS Service Discovery automatically registers new services and updates DNS records, making it easy to scale microservices without worrying about the underlying infrastructure.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1682049051525/2b3f7531-16d7-479e-a84a-a3b6787a6515.png" alt class="image--center mx-auto" /></p>
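<p>As a rough sketch of how this looks in code (the cluster, service, and registry ARN below are hypothetical placeholders, not real resources), an ECS service can be linked to a Cloud Map registry at creation time so that its tasks are registered in DNS automatically:</p>

```python
# Sketch: the request body for ecs.create_service with service discovery.
# All names and ARNs here are illustrative, not real resources.
def ecs_service_params(cluster, service_name, task_def, registry_arn):
    """Build create_service parameters that attach a Cloud Map registry."""
    return {
        "cluster": cluster,
        "serviceName": service_name,
        "taskDefinition": task_def,
        "desiredCount": 2,
        # serviceRegistries ties the ECS service to a Cloud Map service,
        # so each new task gets a DNS record without manual bookkeeping.
        "serviceRegistries": [{"registryArn": registry_arn}],
    }

params = ecs_service_params(
    "prod-cluster", "orders", "orders:1",
    "arn:aws:servicediscovery:us-east-1:123456789012:service/srv-example",
)
# The request would then be sent with boto3.client("ecs").create_service(**params).
```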
<h3 id="heading-challenge-2-load-balancing">Challenge #2: Load Balancing</h3>
<p>Load balancing is another critical challenge when scaling microservices. AWS offers Elastic Load Balancing (ELB), a managed load balancing service that automatically distributes incoming application traffic across multiple targets, such as EC2 instances, containers, and IP addresses. ELB includes the Application Load Balancer (ALB), which allows developers to route traffic based on content, URL, or IP address. ALB also provides features like SSL termination, health checks, and sticky sessions, making it easy to scale microservices without worrying about load balancing.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1682050106281/f6c934d8-218c-4fc7-857b-8c081b0b1ea3.jpeg" alt class="image--center mx-auto" /></p>
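<p>To make the content-based routing concrete, here is a hedged sketch (the listener and target group ARNs are placeholders) of a path-based ALB routing rule:</p>

```python
# Sketch: parameters for elbv2.create_rule that forward /api/* traffic
# to a dedicated target group. The ARNs are illustrative placeholders.
def alb_path_rule(listener_arn, path_pattern, target_group_arn, priority):
    """Route requests whose path matches path_pattern to one target group."""
    return {
        "ListenerArn": listener_arn,
        "Priority": priority,  # lower numbers are evaluated first
        "Conditions": [{"Field": "path-pattern", "Values": [path_pattern]}],
        "Actions": [{"Type": "forward", "TargetGroupArn": target_group_arn}],
    }
```

<p>With a rule like this, one load balancer can front several microservices, each behind its own target group.</p>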
<h3 id="heading-challenge-3-auto-scaling">Challenge #3: Auto Scaling</h3>
<p>Matching capacity to demand is another critical challenge when scaling microservices. AWS offers Auto Scaling, a service that automatically adjusts the number of EC2 instances or containers in response to changes in demand. Auto Scaling allows developers to set minimum and maximum thresholds for the number of instances or containers. It can automatically scale up or down based on CPU utilization, network traffic, or application metrics.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1682050068033/b93ddb60-9f94-45b9-8400-5b125431183e.jpeg" alt class="image--center mx-auto" /></p>
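<p>For example, a target-tracking policy (shown here as a plain parameter dictionary; the 60% target is an arbitrary illustrative value) keeps average CPU utilization near a chosen target by adding or removing capacity:</p>

```python
# Sketch: a TargetTrackingScaling policy configuration, as it would be
# passed to autoscaling.put_scaling_policy. The 60% target is arbitrary.
def cpu_target_tracking_policy(target_percent=60.0):
    """Scale the group so that average CPU stays near target_percent."""
    return {
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": target_percent,
        },
    }
```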
<h3 id="heading-challenge-4-monitoring-and-logging">Challenge #4: Monitoring and Logging</h3>
<p>Monitoring and logging are essential for scaling microservices. AWS provides CloudWatch, a monitoring and logging service that allows developers to collect and track metrics, collect and monitor log files, and set alarms. CloudWatch provides detailed insights into application performance and helps developers diagnose and troubleshoot issues quickly. With CloudWatch, developers can confidently scale microservices, knowing they can monitor and log applications at scale.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1682049200887/81610c8a-de47-4395-8311-1b239a631bd1.png" alt class="image--center mx-auto" /></p>
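<p>As one concrete example of a CloudWatch alarm (the instance ID, threshold, and periods below are placeholder values), this is roughly what an alarm on sustained high CPU looks like:</p>

```python
# Sketch: parameters for cloudwatch.put_metric_alarm. Values are illustrative.
def high_cpu_alarm(instance_id, threshold=80.0):
    """Alarm when average CPU exceeds threshold for two 5-minute periods."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,            # seconds per evaluation window
        "EvaluationPeriods": 2,   # must breach twice in a row
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }
```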
<h3 id="heading-challenge-5-security">Challenge #5: Security</h3>
<p>Security is another critical challenge when scaling microservices. AWS provides several security services, including Amazon Virtual Private Cloud (VPC), which allows developers to create a virtual network isolated from the internet and other AWS resources. VPC allows developers to set up security groups and network access control lists (ACLs) to control inbound and outbound traffic to and from their microservices. AWS also offers AWS Identity and Access Management (IAM), which provides granular control over who can access AWS resources and what actions they can perform.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1682049372461/d45057d9-2648-406b-b569-cc57dd5d01c5.png" alt class="image--center mx-auto" /></p>
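<p>IAM's granular control is easiest to see in a policy document. A minimal sketch (the bucket name is a hypothetical placeholder) granting one microservice read-only access to a single S3 bucket:</p>

```python
import json

# Sketch: a least-privilege IAM policy for one hypothetical bucket.
def readonly_s3_policy(bucket):
    """Allow listing the bucket and reading its objects, nothing else."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",    # the bucket itself (ListBucket)
                f"arn:aws:s3:::{bucket}/*",  # its objects (GetObject)
            ],
        }],
    })
```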
<h3 id="heading-conclusion">Conclusion</h3>
<p>Scaling microservices can be challenging, but with the tools and services provided by AWS, it can be made much more manageable. This blog post discussed some challenges when scaling microservices and how AWS can help overcome them. By leveraging AWS services such as ECS, ELB, Auto Scaling, CloudWatch, and VPC, developers can confidently scale microservices, knowing they have the tools they need to ensure their applications perform reliably and securely.</p>
]]></content:encoded></item><item><title><![CDATA[Hosting static website on AWS Amplify]]></title><description><![CDATA[Introduction
AWS Amplify is a set of tools and services for building and deploying cloud-powered applications. It includes a library of pre-built UI components, a command-line interface for building and deploying your applications, and a backend plat...]]></description><link>https://blog.anupkafle.com.np/hosting-static-website-on-aws-amplify</link><guid isPermaLink="true">https://blog.anupkafle.com.np/hosting-static-website-on-aws-amplify</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS Amplify]]></category><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Mon, 09 Jan 2023 16:43:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673282559447/554878e1-e01f-4e70-a54b-d40b6a05d295.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>AWS Amplify is a set of tools and services for building and deploying cloud-powered applications. It includes a library of pre-built UI components, a command-line interface for building and deploying your applications, and a backend platform for hosting and scaling your app. AWS Amplify makes it easy to build and deploy applications quickly, with a focus on improving the developer experience.</p>
<h1 id="heading-pricing">Pricing</h1>
<p>AWS Amplify is free to use, and many of the services and features it provides are also free. For example, the AWS Amplify CLI is free to use, as are the pre-built UI components and the backend platform.</p>
<p>However, some of the services and features provided by AWS Amplify do have costs associated with them. For example, if you use the storage or database features of AWS Amplify, you may be charged for the use of underlying AWS services like Amazon S3 or Amazon DynamoDB. Similarly, if you use the authentication or authorization features of AWS Amplify, you may be charged for the use of underlying AWS services like Amazon Cognito.</p>
<h1 id="heading-steps-for-hosting">Steps for Hosting</h1>
<ul>
<li><p>Sign in to the AWS Management Console and navigate to the Amplify service.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673280583419/9736ee88-5abe-44b9-8e11-9916f6a4390c.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click the "Get started" button to start a new Amplify project.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673280626109/2f3c6fda-abd0-4220-8236-7c01e2d20b30.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673280706921/6cf9271d-1387-4908-80cb-e69a0d6b0669.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Build your static website using your preferred tools and frameworks.</p>
</li>
<li><p>Upload your static website files to the Amplify hosting environment, either by connecting the Git repository that contains your source code or by uploading the files directly.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673280804805/e324c44d-0b56-4cae-bd8c-9517946fa303.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Authorize AWS Amplify to access your Git provider account.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673281072106/e3f2535e-af5a-43b8-a17d-73a3be07afd2.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Choose the repository and branch where your code resides.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673281160525/8ada95b3-8c2e-416e-b99a-f2917cd127f1.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Give the app a name, then save and deploy.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673281221015/3ee943a2-60c7-4656-8a04-6a2a6e4b8dab.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673281617655/52d481cd-1116-4829-b27f-b1fb58b93881.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Allow some time for the website to be deployed.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673281996170/cdbfac7f-4041-4e35-a576-90626c707ded.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673282054453/45c06a06-cff0-4c4a-b304-da79f70a1ed1.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Click on the link shown and browse your static website.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673282429975/8e53a454-9335-4c98-810f-c3ffc907de31.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
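<p>The console steps above can also be driven programmatically. A hedged sketch (the app name and repository URL are placeholders; the Git provider access token is omitted) of the request Amplify would need to create the app:</p>

```python
# Sketch: parameters for amplify.create_app, mirroring the console flow above.
# The name and repository are hypothetical; a Git provider token would also
# be required (passed as "accessToken" or "oauthToken") and is elided here.
def amplify_app_params(name, repo_url):
    return {
        "name": name,
        "repository": repo_url,
        "platform": "WEB",
        "enableBranchAutoBuild": True,  # redeploy on every push to the branch
    }
```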
<h1 id="heading-conclusion">Conclusion</h1>
<p>That's it! You should now have a static website hosted on AWS Amplify. If you need more detailed instructions or run into any issues, you can refer to the <a target="_blank" href="https://docs.aws.amazon.com/amplify/">AWS Amplify documentation</a> for more information.</p>
]]></content:encoded></item><item><title><![CDATA[Setup email hosting on AWS with WorkMail]]></title><description><![CDATA[Overview
Amazon WorkMail is a secure, managed business email and calendar service with support for existing desktop and mobile email client applications. Users can access their email, contacts, and calendars using the client application of their choi...]]></description><link>https://blog.anupkafle.com.np/setup-email-hosting-on-aws-with-workmail</link><guid isPermaLink="true">https://blog.anupkafle.com.np/setup-email-hosting-on-aws-with-workmail</guid><category><![CDATA[workmail]]></category><category><![CDATA[AWS]]></category><category><![CDATA[route53]]></category><category><![CDATA[domain]]></category><dc:creator><![CDATA[Anup kafle]]></dc:creator><pubDate>Mon, 31 Oct 2022 15:27:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1667229995947/ZxgRYITKN.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-overview">Overview</h2>
<p>Amazon WorkMail is a secure, managed business email and calendar service with support for existing desktop and mobile email client applications. Users can access their email, contacts, and calendars using the client application of their choice, including Microsoft Outlook or any client application supporting the IMAP protocol.</p>
<h2 id="heading-pricing">Pricing</h2>
<p>Amazon WorkMail costs $4.00 per user per month and includes 50 GB of mailbox storage for each user. You can get started with a 30-day free trial for up to 25 users.</p>
<blockquote>
<p>Note: If a user is created after the first of a month, then the monthly fee for that mailbox will be adjusted on a pro-rata basis from the first day it was active to the end of that month. If a user is terminated or deleted before the end of a month, then the monthly fee for that user will still apply through the end of the month.</p>
</blockquote>
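<p>The pro-rata adjustment above is straightforward to compute. A small sketch of the arithmetic (assuming the $4.00 list price and simple day-based proration):</p>

```python
import calendar

# Sketch of the pro-rata billing described above: a mailbox created
# mid-month is charged from its first active day to the end of the month.
def prorated_workmail_fee(year, month, active_from_day, monthly_fee=4.00):
    days_in_month = calendar.monthrange(year, month)[1]
    active_days = days_in_month - active_from_day + 1
    return round(monthly_fee * active_days / days_in_month, 2)

# A mailbox created on 16 Nov (a 30-day month) is active for 15 days:
# 4.00 * 15 / 30 = 2.00
```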
<h1 id="heading-lets-get-started">Let's get started!</h1>
<h3 id="heading-step1-login-to-your-aws-account-and-search-for-workmail">Step 1: Log in to your AWS account and search for WorkMail</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667225186680/pZwlmpcWr.PNG" alt /></p>
<h3 id="heading-step2-click-on-create-organization">Step 2: Click on Create organization</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667225286605/jU-diZB3J.PNG" alt /></p>
<blockquote>
<p>Note: There are several options, but I am using my domain parked on Route 53. Don't worry, you can add an external domain later if you don't have a domain now.</p>
</blockquote>
<ul>
<li><p>Select an Existing Route 53 domain</p>
</li>
<li><p>Choose Route53 hosted zone</p>
</li>
<li><p>Setup alias</p>
</li>
<li><p>Create organization</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667226985400/oN3gg5W-M.PNG" alt /></p>
</li>
<li><p>You will see the dashboard like below:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667227121415/llye_GPFb.PNG" alt /></p>
<h3 id="heading-step3-adding-custom-domain">Step 3: Adding a custom domain</h3>
<ul>
<li>Click on your organization name</li>
</ul>
<p><em>You will see a dashboard like below.</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667227435413/mua1jM-D8.PNG" alt /></p>
<ul>
<li><p>Click on Domains in the left bar of the dashboard.</p>
<blockquote>
<p>Note: I have already added my domain shoeasy.me, but you will see a domain like alias.awsapps.com</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667227766762/73jQh1DeD.png" alt /></p>
</li>
<li><p>Click on Add domain <em>(optional if you don't want a custom domain email)</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667227941808/mUZgYUijE.png" alt /></p>
</li>
<li><p>Choose the domain and click Add domain</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667228115285/QaaDrs1To.png" alt /></p>
</li>
<li><p>Click on Update all records in route 53</p>
<blockquote>
<p>This will add all the DNS records required for WorkMail automatically. After the records are added, you can set your domain as the default domain.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667228231211/GWSLRIUnw.PNG" alt /></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667228287002/XqjqbIIil.PNG" alt /></p>
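<p>If you are curious what "Update all records" creates, the MX record is the core one. A sketch of the equivalent Route 53 change batch (the domain and region are placeholders; WorkMail also needs autodiscover, SPF, DKIM, and DMARC records of the same shape):</p>

```python
# Sketch: a Route 53 change batch for the WorkMail MX record, as passed to
# route53.change_resource_record_sets. Domain and region are illustrative.
def workmail_mx_change(domain, region="us-east-1"):
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "MX",
                "TTL": 600,
                "ResourceRecords": [
                    # Priority 10, pointing at WorkMail's inbound SMTP endpoint
                    {"Value": f"10 inbound-smtp.{region}.amazonaws.com."}
                ],
            },
        }]
    }
```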
<h3 id="heading-step4-setup-mobile-policy-optional">Step 4: Set up a mobile policy <em>(optional)</em></h3>
<blockquote>
<p>Here you can set password policies and other mobile policies. I kept all of the defaults, but you can change them to suit your needs. </p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667228709736/Oa6K8QmdC.PNG" alt /></p>
<h3 id="heading-step5-creating-users">Step 5: Creating users</h3>
<ul>
<li><p>Click Users from the left menu of your organization dashboard. You will see a page like the one below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667228944948/P0-myhZA9.PNG" alt /></p>
</li>
<li><p>Click on Create user and create a user by filling in the required details.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667229005757/1lVB25ri8.PNG" alt /></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667229153379/fzlCSdeNV.PNG" alt /></p>
<ul>
<li>For the email address, choose your custom domain <em>(if you have not added a custom domain, go with the domain provided by AWS)</em>, choose a password, and create the user.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667229166769/MCKsEvuAO.PNG" alt /></p>
<p>You have successfully created a user. You can add more users according to your needs.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667229915109/PEQihIOm6.PNG" alt /></p>
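<p>For bulk user creation, the same step can be scripted. A hedged sketch (the organization ID and credentials below are placeholders) of the request parameters for the boto3 WorkMail call:</p>

```python
# Sketch: request parameters for workmail.create_user. After creation, the
# user still needs workmail.register_to_work_mail(OrganizationId=...,
# EntityId=..., Email=...) to receive a mailbox. All values are placeholders.
def workmail_user_params(org_id, name, display_name, password):
    return {
        "OrganizationId": org_id,
        "Name": name,
        "DisplayName": display_name,
        "Password": password,
    }
```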
<h3 id="heading-step6user-login">Step 6: User login</h3>
<ul>
<li><p>You will find a login link on your organization dashboard (for example, https://shoeasymail.awsapps.com/mail); go to that address.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667229519003/ust-GGfkc.PNG" alt /></p>
</li>
<li><p>Use the credentials you set while creating the user to sign in.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667229687956/fjRInUEj5.PNG" alt /></p>
</li>
<li><p>You are all done; you can send and receive mail now.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667229739028/P68pX0t6a.PNG" alt /></p>
</li>
</ul>
<h2 id="heading-enjoy">Enjoy!</h2>
]]></content:encoded></item></channel></rss>