<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Untitled Publication]]></title><description><![CDATA[DevOps | Hackathons | AWS | Travel | Tech Evangelism]]></description><link>https://sagaruprety.com.np</link><generator>RSS for Node</generator><lastBuildDate>Sat, 11 Apr 2026 11:14:51 GMT</lastBuildDate><atom:link href="https://sagaruprety.com.np/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[How to implement CI/CD in AWS with AWS CodePipeline?]]></title><description><![CDATA[Introduction
Continuous Integration/Continuous Deployment (CI/CD) is a set of software development practices that involves automating the process of integrating, testing, and deploying code changes to production. CI/CD pipelines have become the indus...]]></description><link>https://sagaruprety.com.np/how-to-implement-cicd-in-aws-with-aws-codepipeline</link><guid isPermaLink="true">https://sagaruprety.com.np/how-to-implement-cicd-in-aws-with-aws-codepipeline</guid><category><![CDATA[AWS]]></category><category><![CDATA[awscodepipeline]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[aws cicd]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Sagar Uprety]]></dc:creator><pubDate>Wed, 17 Jan 2024 18:15:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706197341507/23e3bcf9-a375-4e3a-823f-c419847f7453.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction"><strong>Introduction</strong></h3>
<p>Continuous Integration/Continuous Deployment (CI/CD) is a set of software development practices that involves automating the process of integrating, testing, and deploying code changes to production. CI/CD pipelines have become the industry standard practice in modern software development, allowing organizations to deliver software faster and with higher quality.</p>
<p><strong>Continuous Integration (CI)</strong> refers to the process of integrating code changes into a shared repository such as GitHub, building the application, and testing it automatically to ensure that the code is error-free and ready to be deployed. The aim is to catch any conflicts and errors early in the development phase.</p>
<p><strong>Continuous Delivery (CD)</strong> goes a step further and automatically deploys these changes to a development or production environment. The goal is to deliver the application to the end user as quickly as possible and shorten the feedback loop.</p>
<h3 id="heading-why-is-cicd-needed"><strong>Why is CI/CD needed?</strong></h3>
<p>Here’s a closer look into the benefits of CI/CD and why they have become the industry standard process.</p>
<ol>
<li><p><strong>Short release time:</strong> Automating the process of integrating and deploying code changes allows for quicker and more frequent releases, enabling faster time-to-market and the ability to respond to user feedback and change requirements more efficiently.</p>
</li>
<li><p><strong>Early detection of issues:</strong> By integrating and testing code changes regularly, CI/CD helps catch bugs, errors, and conflicts early in the development process, reducing the risk of issues reaching production.</p>
</li>
<li><p><strong>Increased collaboration:</strong> CI/CD fosters collaboration among team members by encouraging regular integration of code changes and resolving conflicts early. This leads to better communication, coordination, and teamwork among developers.</p>
</li>
<li><p><strong>Improved software quality:</strong> By automating testing and deployment, CI/CD helps ensure that only high-quality, thoroughly tested code reaches production, resulting in more stable, secure, and reliable software.</p>
</li>
<li><p><strong>Greater agility and innovation:</strong> CI/CD enables teams to quickly adapt to changing requirements and market conditions, promoting innovation and the ability to continuously improve the software product.</p>
</li>
</ol>
<h3 id="heading-demo-aws-codepipeline-for-a-java-web-app"><strong>Demo : AWS CodePipeline for a Java Web app</strong></h3>
<p>There are several CI/CD tools available today, such as Jenkins, GitHub Actions, GitLab CI/CD, CircleCI, Bamboo, and Azure DevOps, each with its own strengths and weaknesses. In this blog, we will use AWS CodePipeline, which integrates well with other AWS services, to create a CI/CD pipeline for a sample Java web app served by Tomcat. Feel free to read on, as the process is very similar for other web applications such as Node.js, Django, .NET, and Laravel.</p>
<h4 id="heading-prerequisites"><strong>Prerequisites</strong></h4>
<p>Before we begin, make sure you have the following prerequisites:</p>
<ol>
<li><p>An AWS account with appropriate permissions to create and configure CodePipeline, CodeBuild, and Elastic Beanstalk resources.</p>
</li>
<li><p>Source code for your application. You can use our sample application for demo purposes.</p>
 <div data-node-type="callout">
 <div data-node-type="callout-emoji">💡</div>
 <div data-node-type="callout-text">GitHub Repo: <a target="_blank" href="https://github.com/adexltd/aws-ci-cd">GitHub - adexltd/aws-ci-cd: Sample Java Maven Application for AWS CI/CD Demo!</a></div>
 </div>


</li>
</ol>
<p>We will use the following AWS services to create the pipeline:</p>
<ol>
<li><p><strong>AWS CodePipeline:</strong> A fully managed CI/CD service that automates the building, testing, and deployment of applications.</p>
</li>
<li><p><strong>AWS CodeBuild:</strong> A fully managed build service that compiles source code, runs tests, and produces software packages ready to deploy.</p>
</li>
<li><p><strong>AWS Elastic Beanstalk:</strong> A fully managed service that makes it easy to deploy, manage, and scale applications in the AWS Cloud. It handles provisioning and maintaining the required underlying infrastructure, such as EC2 instances, S3 buckets, and Auto Scaling groups, based on our application environment.</p>
</li>
</ol>
<h3 id="heading-high-level-architecture-and-process-flow"><strong>High-Level Architecture and Process Flow</strong></h3>
<p><img src="https://ds0xrsm6llh5h.cloudfront.net/blogs/image_65b8fb69-49f5-4e7d-87fb-53c583aa2ade_20240118065758.png" alt="CI-CD Process Overview" /></p>
<p><strong>Flow Description</strong></p>
<p>As the diagram above shows, the pipeline is triggered automatically whenever there is a code change in the source repository. AWS CodeBuild then takes this source code and builds the application with the applied configurations. This generates an application package (artifact) and stores it in an S3 bucket. Finally, in the deployment phase, Elastic Beanstalk pulls the artifact from S3 and launches the application from the bundle in the pre-defined environment. Additionally, CodeBuild streams its logs to CloudWatch, which we can use to monitor the build process. That concludes our flow.</p>
<h3 id="heading-step-by-step-implementation-guide"><strong>Step by Step Implementation Guide</strong></h3>
<p><strong>Step 1: Create an Elastic Beanstalk Environment</strong></p>
<ol>
<li><p>Open the AWS Management Console, navigate to the Elastic Beanstalk service, and click the <em>"Create application"</em> button. This will launch a configuration wizard.</p>
</li>
<li><p>In the first step, choose the <em>"Web Server environment"</em> as the environment tier.</p>
</li>
<li><p>Now, select the appropriate options for the application, such as platform, platform version, application code source, and presets. We will go with the <em>“Single Instance”</em> preset as it is free tier eligible.</p>
</li>
</ol>
<p><img src="https://ds0xrsm6llh5h.cloudfront.net/blogs/image_4358fff2-ce65-4462-a0e0-92034129d035_20240118065817.png" alt="Elastic Beanstalk configuration" class="image--center mx-auto" /></p>
<p><img src="https://ds0xrsm6llh5h.cloudfront.net/blogs/image_3f85d189-20ef-4776-a779-d6e30fdcabe6_20240118065934.png" alt="Elastic Beanstalk Configuration" class="image--center mx-auto" /></p>
<pre><code class="lang-bash">For our sample application, we have the following settings:

Application name : ci-cd-demo
Environment name : Cicddemo-env
Platform: Tomcat 8.5 
Platform Version : 4.3.7
</code></pre>
<p>4. Next, you can configure service access. Here, we will create a new service role. Optionally, you can add an EC2 key pair to access the EC2 servers deployed by Elastic Beanstalk.</p>
<p><img src="https://ds0xrsm6llh5h.cloudfront.net/blogs/image_5c495045-eeba-4d75-81de-a9eefc98841d_20240118070101.png" alt="Elastic Beanstalk configuration" class="image--center mx-auto" /></p>
<p>5. Next, we will keep the default options and select <em>“Skip to review”</em>. If you wish, you can set up networking (selecting custom VPCs, assigning public IPs to the EC2 instances) and create databases and tags. You can also choose the capacity of the Auto Scaling group and configure monitoring and logging through CloudWatch metrics.</p>
<p>6. Review the configurations and click the <em>"Submit"</em> button to create the Elastic Beanstalk environment.</p>
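<p>For readers who prefer scripting over the console, the same choices can be expressed as parameters to Elastic Beanstalk's <code>create_environment</code> API. The sketch below only assembles the request and does not call AWS; the solution stack name is illustrative only and should be looked up with <code>aws elasticbeanstalk list-available-solution-stacks</code> for your region.</p>

```python
# A sketch of the console choices from Step 1 as create_environment parameters.
# Nothing here talks to AWS; with boto3 configured, these could be passed as
# boto3.client("elasticbeanstalk").create_environment(**params).
params = {
    "ApplicationName": "ci-cd-demo",
    "EnvironmentName": "Cicddemo-env",
    # Illustrative only -- fetch the exact name for your region with
    # `aws elasticbeanstalk list-available-solution-stacks`.
    "SolutionStackName": "64bit Amazon Linux 2 v4.3.7 running Tomcat 8.5 Corretto 11",
    # The "Single instance" preset: no load balancer in front of the instance.
    "OptionSettings": [
        {
            "Namespace": "aws:elasticbeanstalk:environment",
            "OptionName": "EnvironmentType",
            "Value": "SingleInstance",
        }
    ],
}
print(params["ApplicationName"], params["EnvironmentName"])
```

<p>This mirrors the wizard's fields one-to-one, which makes the console walkthrough easy to translate into infrastructure-as-code later.</p>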
<h4 id="heading-step-2-create-a-buildspecyml-file-for-codebuild"><strong>Step 2: Create a buildspec.yml file for CodeBuild</strong></h4>
<p>The <em>buildspec.yml</em> file is a configuration file that defines the build steps for your application and is used by CodeBuild to execute the build process (we will see the use of CodeBuild later). In your source code repository, create a buildspec.yml file at the root level of your project directory. This file contains the build steps for your application, such as building and testing your code.</p>
<p>Here is a <strong>buildspec.yml</strong> file for our sample application:</p>
<pre><code class="lang-bash">version: 0.2
phases:
  pre_build:
    commands:
    - <span class="hljs-built_in">echo</span> <span class="hljs-string">"Pre-Build Phase"</span>
  build:
    commands:
      - <span class="hljs-built_in">echo</span> <span class="hljs-string">"Build Phase Started"</span>
      - mvn clean package
  post_build:
    commands:
      - <span class="hljs-built_in">echo</span> <span class="hljs-string">"Build Succeeded"</span>
artifacts:
  files:
    - target/aws-ci-cd*/*
  discard-paths: yes
</code></pre>
<p>This <strong>buildspec.yml</strong> file specifies three build phases: <em>pre_build, build,</em> and <em>post_build</em>. If you are using another web framework such as Node.js, you can install your dependencies in the <em>pre_build</em> phase.</p>
<p>Here, we build our Maven package in the build phase, which creates the build artifact in the <code>target/aws-ci-cd</code> directory. The <code>artifacts</code> section of the build specification file provides details about where to store the artifacts and the format in which they should be stored.</p>
<p>By default, CodeBuild stores the build artifacts in an S3 bucket created and managed by CodeBuild. The S3 bucket is named with a prefix <code>codepipeline-*</code> followed by a unique identifier for the CodeBuild project.</p>
<p>You can also configure the build project to use a custom S3 bucket for storing the build artifacts. In our case, <code>target/aws-ci-cd*/*</code> pattern will be used to include files from the <code>target</code> directory that match the <code>aws-ci-cd*/*</code> pattern. Once the build process is complete, the resulting artifact files will be uploaded to the S3 bucket created by CodeBuild.</p>
<p><img src="https://ds0xrsm6llh5h.cloudfront.net/blogs/image_29d2f1de-62f5-4b45-8e90-e9e088da5d34_20240118070125.png" alt="Build Artifact stored in S3" class="image--center mx-auto" /></p>
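<p>You can preview locally which files the artifact pattern selects, and what <code>discard-paths: yes</code> does to their paths, without running CodeBuild at all. The snippet below is a rough local analogue in plain Python; the directory and file names are stand-ins for real Maven output.</p>

```python
import glob
import os
import tempfile

# Recreate (roughly) the layout `mvn clean package` leaves in target/:
# an exploded aws-ci-cd-<version> directory containing the web app files.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "target", "aws-ci-cd-1.0-SNAPSHOT")
os.makedirs(os.path.join(pkg, "WEB-INF"))
open(os.path.join(pkg, "index.jsp"), "w").close()
open(os.path.join(pkg, "WEB-INF", "web.xml"), "w").close()

# The same pattern as the buildspec's artifacts.files entry. Note it matches
# everything one level below target/aws-ci-cd*: files *and* subdirectories.
matches = glob.glob(os.path.join(root, "target", "aws-ci-cd*", "*"))

# `discard-paths: yes` uploads the matched entries without their directory
# prefixes -- locally, that is just the basename of each match.
flattened = sorted(os.path.basename(m) for m in matches)
print(flattened)  # ['WEB-INF', 'index.jsp']
```

<p>Running this prints <code>['WEB-INF', 'index.jsp']</code>: the version-specific directory prefix is gone, which is exactly the flat bundle layout Elastic Beanstalk receives.</p>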
<p>We can customize the buildspec.yml file based on the requirements of the application, such as adding additional build steps, tests, or deployment instructions. Make sure to commit and push the buildspec.yml file to your source code repository if you have not already.</p>
<p><strong>Step 3: Set up a CodePipeline</strong></p>
<p>Now, let’s set up a pipeline that glues together everything we have done so far and deploys our application to Elastic Beanstalk.</p>
<ol>
<li><p>Open the AWS Management Console, navigate to the AWS CodePipeline service, and click the <em>"Create pipeline"</em> button.</p>
</li>
<li><p>Enter a pipeline name, and create a new service role.</p>
</li>
<li><p>Next, in the Source Stage, select your source provider (GitHub v2), and choose the repository and branch that you want to use for your application source code.</p>
</li>
<li><p>You might need to create a connection to GitHub if you are doing this for the first time. Give the connection a name and click on “Install a new app”, which will install the AWS Connector for GitHub. Once connected, make sure the change detection option is checked so that a source code change triggers the pipeline.</p>
</li>
</ol>
<p><img src="https://ds0xrsm6llh5h.cloudfront.net/blogs/image_b9bf7582-d059-4ce9-8836-3428ef1db68b_20240118070145.png" alt /></p>
<p><img src="https://ds0xrsm6llh5h.cloudfront.net/blogs/image_10847827-7ab0-4113-998e-82e75753bc44_20240118070158.png" alt="GitHub Connection" class="image--center mx-auto" /></p>
<p><img src="https://ds0xrsm6llh5h.cloudfront.net/blogs/image_aa442aa8-f8d0-4580-b59c-f2c05ffe1375_20240118070209.png" alt="Code Pipeline- Source Configuration" class="image--center mx-auto" /></p>
<p>5. Next, in the <strong>Build Stage</strong>, select AWS CodeBuild as your build provider and choose the <em>“Create project”</em> option. Here, we will configure a CodeBuild project that compiles the source code, runs tests (if present), and produces a software package ready to deploy.</p>
<p><img src="https://ds0xrsm6llh5h.cloudfront.net/blogs/image_7b01a4d1-7323-4b80-a932-75b3946a80c8_20240118070226.png" alt="Code Pipeline- Build Stage Configuration" class="image--center mx-auto" /></p>
<ul>
<li><strong>(Build Stage Continued)</strong>: Enter the project name and choose a runtime environment for your build, such as Node.js, Java, or Python, and specify the build configurations. We will choose the following configurations.</li>
</ul>
<p><img src="https://ds0xrsm6llh5h.cloudfront.net/blogs/image_0116aa0a-e278-46cd-8280-0ce9a1f030be_20240118070237.png" alt="Code Pipeline- Build Stage Configuration" class="image--center mx-auto" /></p>
<ul>
<li><p><strong>(Build Stage Continued)</strong>: Remember the buildspec.yml we created earlier? It comes into use here. We will choose the <em>“Use a buildspec file”</em> option, which by default looks for buildspec.yml in the repository root. Make sure to specify the file name here if you named your build spec file differently.</p>
</li>
<li><p>Optionally, you can configure additional build options such as webhook triggers, CloudWatch Logs, and S3 Logs. Review your project configuration, add environment variables (if any), select “Build type” as Single build, and click "Continue to CodePipeline".</p>
</li>
</ul>
<p>6. In the <strong>Deploy Stage</strong>, select <em>AWS Elastic Beanstalk</em> as your deployment provider, and choose the environment we created in <strong>Step 1.</strong></p>
<p><img src="https://ds0xrsm6llh5h.cloudfront.net/blogs/image_865cc7a1-7003-45e9-bd29-7b8fa027e9ae_20240118070317.png" alt="Deploy Configuration" /></p>
<p>7. Finally, review your pipeline configuration and click the <em>"Create pipeline"</em> button to create your pipeline.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>Optional:</strong> We can create multiple environments in Beanstalk, for instance, separate production and dev environments. To quickly simulate this, go to Elastic Beanstalk&gt;Environment&gt;Cicddemo-env&gt;Actions&gt;Clone environment. We will name it Cicddemo-prod. After this, go to your pipeline, click ‘Edit’, and follow the instructions to add a stage. (This may differ per your project requirements, and it’s fine to use just one deployment environment.)</div>
</div>
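<p>Under the hood, the wizard produces a pipeline declaration with exactly these three stages. Below is a trimmed sketch of that structure, in the shape you would pass to <code>create_pipeline</code>; every name, ARN, bucket, and branch is a placeholder for your own values, and the declaration is only built, not sent to AWS.</p>

```python
# A trimmed, illustrative pipeline declaration mirroring the wizard's result.
# All names/ARNs/buckets below are placeholders, not real resources.
pipeline = {
    "name": "ci-cd-demo-pipeline",
    "roleArn": "arn:aws:iam::111122223333:role/service-role/codepipeline-demo-role",
    "artifactStore": {"type": "S3", "location": "codepipeline-us-east-1-demo-bucket"},
    "stages": [
        {
            "name": "Source",
            "actions": [{
                "name": "GitHubSource",
                "actionTypeId": {"category": "Source", "owner": "AWS",
                                 "provider": "CodeStarSourceConnection", "version": "1"},
                "outputArtifacts": [{"name": "SourceOutput"}],
                "configuration": {
                    "ConnectionArn": "arn:aws:codestar-connections:us-east-1:111122223333:connection/demo",
                    "FullRepositoryId": "adexltd/aws-ci-cd",
                    "BranchName": "main",  # assumption: your default branch
                },
            }],
        },
        {
            "name": "Build",
            "actions": [{
                "name": "CodeBuild",
                "actionTypeId": {"category": "Build", "owner": "AWS",
                                 "provider": "CodeBuild", "version": "1"},
                "inputArtifacts": [{"name": "SourceOutput"}],
                "outputArtifacts": [{"name": "BuildOutput"}],
                "configuration": {"ProjectName": "ci-cd-demo-build"},
            }],
        },
        {
            "name": "Deploy",
            "actions": [{
                "name": "ElasticBeanstalk",
                "actionTypeId": {"category": "Deploy", "owner": "AWS",
                                 "provider": "ElasticBeanstalk", "version": "1"},
                "inputArtifacts": [{"name": "BuildOutput"}],
                "configuration": {"ApplicationName": "ci-cd-demo",
                                  "EnvironmentName": "Cicddemo-env"},
            }],
        },
    ],
}
print([stage["name"] for stage in pipeline["stages"]])
```

<p>Notice how the stages hand artifacts to each other: the Source stage's <code>SourceOutput</code> feeds CodeBuild, and CodeBuild's <code>BuildOutput</code> feeds the Beanstalk deploy, exactly as in the architecture diagram earlier.</p>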

<p><strong>Step 4: Test the CI/CD pipeline</strong></p>
<p>Now that we have set up the entire CI/CD pipeline, it's time to test it by making changes to your application source code and triggering a pipeline run.</p>
<ol>
<li><p>Make changes to your application source code, such as fixing a bug, adding a new feature, or updating a configuration file. Here, I will just change the welcome message in <em>/src/main/webapp/index.jsp</em> to <strong>“Welcome to CI/CD!”</strong></p>
</li>
<li><p>Commit and push the changes to GitHub or your source repository. This will trigger the pipeline to run automatically.</p>
</li>
<li><p><img src="https://ds0xrsm6llh5h.cloudfront.net/blogs/image_d32c34f2-34c3-40d1-b711-7ee0676f4797_20240118070359.png" alt="CodePipeline - Process" class="image--center mx-auto" /></p>
<p> Open the AWS CodePipeline service, and navigate to your pipeline to see the status of different stages. Here, we have used two environments for deployment. You might only see one if you have not created a new Beanstalk environment and added it to the pipeline.</p>
</li>
<li><p>CodePipeline will automatically start the build process in CodeBuild, which will compile our source code, run tests, and produce a deployment package. Once the build is successful, CodePipeline will automatically deploy the application to Elastic Beanstalk according to the deployment settings.</p>
</li>
<li><p>Monitor the pipeline run in the CodePipeline console, and check the build details from CodeBuild or Elastic Beanstalk environment events and logs for any errors or issues.</p>
</li>
<li><p><img src="https://ds0xrsm6llh5h.cloudfront.net/blogs/image_975b4af5-6882-4e40-ae0e-a67370ef4cca_20240118070429.png" alt="Elastic Beanstalk Environment" /></p>
<p> Once the deployment is complete, we can access the application on the Elastic Beanstalk environment URL to verify that the changes have been successfully deployed.</p>
</li>
</ol>
<p><img src="https://ds0xrsm6llh5h.cloudfront.net/blogs/image_036642c1-0c82-44a1-8058-32abe07f7814_20240118070556.png" alt="Sample App Message" class="image--center mx-auto" /></p>
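<p>If you prefer to watch a run from a script rather than the console, CodePipeline's <code>get_pipeline_state</code> API reports per-stage status. The helper below only demonstrates summarizing such a response; the sample dict is hand-written to match the documented response shape rather than fetched from AWS, and the pipeline name is a placeholder.</p>

```python
def summarize_pipeline_state(state):
    """Map each stage name to the status of its latest execution."""
    return {
        stage["stageName"]: stage.get("latestExecution", {}).get("status", "Unknown")
        for stage in state["stageStates"]
    }

# Hand-written sample matching the get_pipeline_state response shape.
# With boto3 configured, you would fetch the real thing with:
#   state = boto3.client("codepipeline").get_pipeline_state(name="ci-cd-demo-pipeline")
sample_state = {
    "pipelineName": "ci-cd-demo-pipeline",
    "stageStates": [
        {"stageName": "Source", "latestExecution": {"status": "Succeeded"}},
        {"stageName": "Build", "latestExecution": {"status": "Succeeded"}},
        {"stageName": "Deploy", "latestExecution": {"status": "InProgress"}},
    ],
}
print(summarize_pipeline_state(sample_state))
```

<p>A loop around this helper gives you a minimal poll-until-done monitor, handy when a deploy to Beanstalk takes a few minutes.</p>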
<h3 id="heading-conclusion"><strong>Conclusion</strong></h3>
<p>Implementing a CI/CD pipeline is a crucial step in modern software development practices to ensure efficient and reliable application delivery. We explored AWS’s powerful tools, CodePipeline, CodeBuild, and Elastic Beanstalk, which can be easily integrated to set up a robust CI/CD pipeline on the cloud. By following these steps, you can automate the process of building, testing, and deploying your applications, saving time and ensuring consistent quality in your software releases.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Note: This article is authored by me and was originally published for Adex International. You can find the original blog <a target="_blank" href="https://adex.ltd/streamlining-ci-cd-process-with-aws-codepipeline">here</a></div>
</div>

<h3 id="heading-resources"><strong>Resources</strong></h3>
<p>AWS CodePipeline: <a target="_blank" href="https://aws.amazon.com/codepipeline/">https://aws.amazon.com/codepipeline/</a></p>
<p>CodeBuild buildspec specification: <a target="_blank" href="https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html">Build specification reference for CodeBuild - AWS CodeBuild</a></p>
<p>Official Hands-on: <a target="_blank" href="https://aws.amazon.com/getting-started/hands-on/continuous-deployment-pipeline/">https://aws.amazon.com/getting-started/hands-on/continuous-deployment-pipeline/</a></p>
]]></content:encoded></item><item><title><![CDATA[Scoring 961 out of 1000 on the AWS Certified Solutions Architect Associate Exam: My Experience]]></title><description><![CDATA[Introduction
If you've landed here, you're likely familiar with the AWS landscape and planning to add that certification to your portfolio! If that sounds like you, you're in the right place. To be honest, the AWS Certified Solutions Architect Associ...]]></description><link>https://sagaruprety.com.np/how-i-aced-my-aws-certified-solutions-architect-associate-exam</link><guid isPermaLink="true">https://sagaruprety.com.np/how-i-aced-my-aws-certified-solutions-architect-associate-exam</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS certification]]></category><category><![CDATA[AWS Solution Architect]]></category><category><![CDATA[AWS training]]></category><category><![CDATA[SAA-C03]]></category><category><![CDATA[AWS Certified Solutions Architect Associate]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[solutionarchitect]]></category><dc:creator><![CDATA[Sagar Uprety]]></dc:creator><pubDate>Wed, 10 Jan 2024 15:12:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1704889675400/8fe3b306-fe08-498f-8d83-64d03d400c49.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>If you've landed here, you're likely familiar with the AWS landscape and planning to add that certification to your portfolio! If that sounds like you, you're in the right place. To be honest, the AWS Certified Solutions Architect Associate is not a walk in the park unless you have 1-2 years of professional experience and can visualize AWS architecture in your head.</p>
<p>However, it's not overly difficult either. I like to think of it as both challenging and rewarding. As someone who scored 961 out of 1000 on the exam with just over 4 months of professional experience (at the time of the exam), I am eager to share my experiences, learning approach, resources I used, and insights to help others pursue this certification.</p>
<h3 id="heading-about-the-exam"><strong>About the Exam</strong></h3>
<p>The <a target="_blank" href="https://aws.amazon.com/certification/certified-solutions-architect-associate/">AWS Certified Solutions Architect Associate (SAA-C03)</a> exam is designed to assess your knowledge and skills in building scalable and reliable AWS solutions. It covers a range of topics, including designing and deploying systems, implementing cost-effective solutions, and understanding security best practices. They also test you heavily on the <a target="_blank" href="https://aws.amazon.com/architecture/well-architected/">AWS Well-Architected Framework</a>. So yes, you need to be aware of various AWS services and be able to decide the best combination for specific scenarios.</p>
<p>Instead of focusing deeply on code-level or infrastructure implementation and debugging (which is mostly the case with the <a target="_blank" href="https://aws.amazon.com/certification/certified-developer-associate/">Developer Associate</a> exam), the SAA-C03 exam focuses on how you approach a given problem, keeping in mind the six pillars of the AWS Well-Architected Framework. The four major domains for the exam are:</p>
<ul>
<li><p><strong>Domain 1: Design Secure Architectures</strong></p>
</li>
<li><p><strong>Domain 2: Design Resilient Architectures</strong></p>
</li>
<li><p><strong>Domain 3: Design High-Performing Architectures</strong></p>
</li>
<li><p><strong>Domain 4: Design Cost-Optimized Architectures</strong></p>
</li>
</ul>
<p>The format for the exam is 65 Multiple Choice Questions (MCQs) and you get 130 minutes to complete it. It will cost you about $150, and you can take it online at your home or from a verified testing center. Find more details about the exam on their <a target="_blank" href="https://aws.amazon.com/certification/certified-solutions-architect-associate/">official page</a>.</p>
<h3 id="heading-learning-approach"><strong>Learning Approach</strong></h3>
<p>Well, now you know about the exam and what it tests! Let's take a look at the learning approach I used. The preparation took me around 1.5 months, studying about 1-2 hours per day alongside my full-time job. It might take more if you are just getting started with AWS, and less if you are already a pro.</p>
<p><strong>1. Set Clear Goals:</strong></p>
<p>Before diving into study materials, define your goals and <strong>why you need the certification in the first place</strong>. Is it a part of your job, are you trying to get hired or improve your portfolio? You also need to make sure that you align yourself with the exam objectives. Maybe there's another certification that's more suited to you. This is what will motivate you to study.</p>
<p><strong>2. Structured Study Plan and Consistency:</strong></p>
<p>Develop a structured study plan. Allocate specific times each day or week to cover different topics. <strong>Identify the areas where you need to work more.</strong> For me, this was cost-optimization and serverless domain. Remember that consistency is key here.</p>
<p><strong>3. Hands-On Practice:</strong></p>
<p>Apply the theoretical knowledge you have gained through hands-on practice. <strong>This is super crucial.</strong> Unless you apply what you learn, you can't relate to the pain points and the problems you are solving. YES! I understand you can't use every AWS service that's out there and that the exam tests you on. But you need to be at least proficient with common services such as EC2, S3, VPC, Lambda, IAM, CloudFormation, ECS, RDS, etc., and be aware of the others. Note that AWS offers a <a target="_blank" href="https://aws.amazon.com/free/">free tier</a> that allows you to experiment with various services without incurring charges.</p>
<p><strong>4. Use Multiple Resources:</strong></p>
<p>Don't rely on a single source. Combine official AWS documentation, online courses, and practice exams to gain a well-rounded understanding and consume diversified content. <strong>This is the single most important tip for you all.</strong></p>
<p><strong>5. Join Forums and Communities:</strong></p>
<p>Engage with the AWS community. Participate in forums and discussion groups, and attend webinars. You can also watch content from <a target="_blank" href="https://www.youtube.com/@amazonwebservices/">AWS YouTube</a>, where they post really interesting videos on various AWS services and architectural patterns. I found their <a target="_blank" href="https://youtube.com/playlist?list=PLhr1KZpdzukdeX8mQ2qO73bg6UKQHYsHb&amp;si=32_kMlBhzP4ZUGcM">This is My Architecture Playlist</a> very helpful.</p>
<h3 id="heading-resources-official"><strong>Resources - Official</strong></h3>
<p>Alright! Now you know the approach to ace your exam. But hey, can you also tell me the resources you used? Sure, here's the list of the official resources I used. These are also the free ones.</p>
<ol>
<li><p><a target="_blank" href="https://explore.skillbuilder.aws/learn/course/external/view/elearning/1851/aws-technical-essentials?saa=sec&amp;sec=prep">AWS Technical Essentials</a>: For clearing basic concepts on all the major topics (Recommended if you are getting started)</p>
</li>
<li><p><a target="_blank" href="https://d1.awsstatic.com/training-and-certification/docs-sa-assoc/AWS-Certified-Solutions-Architect-Associate_Exam-Guide.pdf">AWS Certified Solutions Architect - Associate Exam Guide</a>: Has clear documentation on the EXAM domains, the specific skills they are looking for, and the relevant services you need to focus on.</p>
</li>
<li><p><a target="_blank" href="https://d1.awsstatic.com/training-and-certification/docs-sa-assoc/AWS-Certified-Solutions-Architect-Associate_Sample-Questions.pdf">AWS Certified Solutions Architect - Associate Sample Questions</a><strong>:</strong> 10 questions to give you an idea of the questions and their complexity.</p>
</li>
<li><p><a target="_blank" href="https://explore.skillbuilder.aws/learn/course/external/view/elearning/13266/aws-certified-solutions-architect-associate-official-practice-question-set-saa-c03-english?saa=sec&amp;sec=prep">AWS Certified Solutions Architect - Associate Official Practice Question Set</a>: 20 additional questions that supplement the sample questions</p>
</li>
<li><p><a target="_blank" href="https://explore.skillbuilder.aws/learn/course/external/view/elearning/14760/exam-prep-aws-certified-solutions-architect-associate-saa-c03">Exam Prep: AWS Certified Solutions Architect - Associate</a>: 3-hour free course on SkillBuilder directly from AWS themselves.</p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/pdfs/wellarchitected/latest/framework/wellarchitected-framework.pdf#welcome">Well-Architected Framework Whitepaper:</a> Take your time with this one. It is more than 800 pages long, but you do not need to learn everything. Just focus on understanding the six pillars and how you can incorporate them.</p>
<p> (Below are the recommended resources though I did not use them personally for my exam)</p>
</li>
<li><p><a target="_blank" href="https://pages.awscloud.com/traincert-twitch-power-hour-architecting.html?saa=sec&amp;sec=prep">AWS Power Hour: Architecting on-demand</a> - Twitch Recordings with six episodes that cover various domains from the exam with great use cases</p>
</li>
<li><p>AWS Service FAQs - Whenever you use any AWS Service, try to also read the FAQs such as the <a target="_blank" href="https://aws.amazon.com/ec2/faqs/">EC2 FAQ</a>. I found some questions on the exam that were answered in the FAQ Section.</p>
</li>
</ol>
<h3 id="heading-resources-others"><strong>Resources - Others</strong></h3>
<p>While official resources are the source of truth and are highly recommended, many of us prefer a guided approach with dedicated courses, practice sets, and a community to discuss and share learnings. I am no exception. Here's the list of third-party resources I used:</p>
<ol>
<li><p><a target="_blank" href="https://www.udemy.com/course/aws-certified-solutions-architect-associate-saa-c03/"><strong>Stephane Maarek Udemy Course</strong></a>: Simply the best and my go-to resource for learning the exam contents. His way of explaining things in simple terms is phenomenal. If there's one course you want to purchase, this would be the one! He also has five full-length <a target="_blank" href="https://www.udemy.com/course/practice-exams-aws-certified-solutions-architect-associate/"><strong>Practice Sets</strong></a>, though I found them a bit tougher than what's tested on the exam. But they will prepare you well.</p>
</li>
<li><p><a target="_blank" href="https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-associate-saa-c03/"><strong>Examtopics Question Bank</strong></a>: They have an arsenal of exam-like questions, and almost half of them are available for free. Highly recommended to work through those. I found the actual exam questions to be very similar to the ones here.</p>
</li>
<li><p><a target="_blank" href="https://www.whizlabs.com/aws-solutions-architect-associate/"><strong>WhizLabs Practice Sets</strong></a><strong>:</strong> Again, this is a great source for practice sets. If you already have a WhizLabs subscription, do try out this. They have several full-length sets where you can test your skills. Take this as an optional resource if you don't want to buy it.</p>
</li>
<li><p><a target="_blank" href="https://www.reddit.com/r/AWSCertifications/"><strong>r/AWSCertifications</strong></a><strong>:</strong> A great community for interacting and engaging with other test-takers and experts. They do joke around a lot, but sometimes even the jokes teach you a lot (chuckles).</p>
</li>
</ol>
<p>Below are some additional resources worth mentioning, as recommended by my colleagues and the community, though I did not personally purchase them:</p>
<ol>
<li><p><a target="_blank" href="https://www.udemy.com/course/aws-certified-solutions-architect-associate-amazon-practice-exams-saa-c03/"><strong>Tutorials Dojo Udemy Course</strong></a></p>
</li>
<li><p><a target="_blank" href="https://www.udemy.com/course/aws-certified-solutions-architect-associate-amazon-practice-exams-saa-c03/"><strong>Neal Davis Udemy Course</strong></a></p>
</li>
</ol>
<h3 id="heading-exam-day-experience-and-tips"><strong>Exam Day Experience and Tips</strong></h3>
<p>I took my exam on 7th August 2023 and you can see my results below.</p>
<p><img src="https://media.licdn.com/dms/image/D5622AQEhojhqpwlydw/feedshare-shrink_1280/0/1691409289075?e=1707955200&amp;v=beta&amp;t=qnUUFVGR8Woet-BdAr6T1iAn1wvts7K9MIYYXiZ5z04" alt="graphical user interface, text, application, table, email" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704897898142/b59b84bc-0fcb-417f-8af1-8c8592d4e72a.png" alt class="image--center mx-auto" /></p>
<p>To be honest, I was not expecting to score that high, but was happy when I got those scores back in about 8 hours. I took my exam at a testing center and it was scheduled at around 11 AM. Overall, it was a pleasant experience as the center was not very crowded and there were no distractions at all. Here are my tips for D-Day.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">PRO TIP: Give your exam at a dedicated center if there's one nearby. Though rare, AWS sometimes cancels your exam if you have a network connection issue while taking it at home</div>
</div>

<p><strong>1. Request an additional 30 minutes</strong></p>
<p>If you are from a country where English is not the native language, you can request an extra 30 minutes from AWS for your exam during registration. DO NOT miss out on this opportunity if you are <a target="_blank" href="https://aws.amazon.com/certification/policies/before-testing/">eligible (ESL +30)</a>.</p>
<p><strong>2. Reach your test center early</strong></p>
<p>I reached my center about 25 minutes before the exam. My center was pretty calm, with not much of a crowd. Also, I was the only test taker in that time slot, so the ID checks and verification process went smoothly. But this might not always be the case for you, so try to reach your test center at least 30 minutes early to give yourself ample time for a smooth check-in.</p>
<p><strong>3. Flag and Review</strong></p>
<p>Manage your time wisely during the exam. Some questions are intentionally more difficult than others and may require more thought, so don't spend too much time on a single question. Just choose the answer your instincts suggest, mark it for review (yes, you can flag questions!), and come back to it later.</p>
<p><strong>4. Read Carefully:</strong></p>
<p>Pay close attention to each question. Most questions are scenario-based, and missing a crucial detail such as cost in the phrase "clients want the most <strong>cost</strong>-effective solution" can lead to incorrect answers. For example, running EC2 instances across multiple Regions provides high availability, but it is not a cost-effective solution.</p>
<p><strong>5. Stay Calm:</strong></p>
<p>Stress can hinder your performance. I was certainly stressed about the 8-10 questions I had doubts about, but not spending too much time on them, taking deep breaths at times, and analyzing them later with a fresh perspective helped.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>Thank you for reading! Achieving AWS Certified Solutions Architect Associate status is both challenging and rewarding. A structured study plan, diverse resources, and hands-on experience are the keys to acing your exam. Remember, the journey doesn't stop here. Continuous learning and active engagement with the AWS community will further enrich your experience and knowledge. As we conclude, I'm curious to know your favorite resources for the SAA-C03 Exam. Share them in the comments below.</p>
<p>Best of luck on your path to AWS Certification Success! Stay tuned for more insights and guides by following my blog and subscribing to my <a target="_blank" href="https://sagaruprety.com.np/newsletter">newsletter</a>. Don't forget to check out my <a target="_blank" href="https://www.youtube.com/watch?v=jCJAiOSuOkg&amp;t=1s">YouTube</a> channel for informative videos on DevOps and Cloud!</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=jCJAiOSuOkg&amp;t=1s">https://www.youtube.com/watch?v=jCJAiOSuOkg&amp;t=1s</a></div>
]]></content:encoded></item><item><title><![CDATA[How to exec into your containers running on Amazon ECS]]></title><description><![CDATA[Introduction
Amazon Elastic Container Service (ECS) is a powerful container orchestration service that enables you to run containers in a highly scalable and cost-effective manner.
ECS Exec is a feature provided by AWS Systems Manager that leverages ...]]></description><link>https://sagaruprety.com.np/how-to-exec-into-your-containers-running-on-amazon-ecs</link><guid isPermaLink="true">https://sagaruprety.com.np/how-to-exec-into-your-containers-running-on-amazon-ecs</guid><category><![CDATA[AWS]]></category><category><![CDATA[ECS]]></category><category><![CDATA[containers]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Sagar Uprety]]></dc:creator><pubDate>Wed, 29 Nov 2023 18:15:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706434586758/c74c302d-6094-435d-83f6-663a0ff43864.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction"><strong>Introduction</strong></h2>
<p><strong>Amazon</strong> Elastic Container Service (ECS) is a powerful container orchestration service that enables you to run containers in a highly scalable and cost-effective manner.</p>
<p><strong>ECS Exec</strong> is an Amazon ECS feature, built on AWS Systems Manager (SSM), that leverages the AWS Identity and Access Management (IAM) role associated with your ECS task to grant you secure access to your containers.</p>
<p>In this blog post, we will explore how to leverage ECS Exec to access containers running on both Fargate and EC2-backed ECS tasks.</p>
<h3 id="heading-the-need">The Need</h3>
<p><strong>Picture this:</strong> You are a cloud engineer overseeing a fleet of containers running smoothly on Amazon's Elastic Container Service (ECS). Everything seems under control until one of your containerized applications starts throwing errors during peak traffic hours. You have checked everything in your monitoring solution, such as CloudWatch, but it does not provide information of substance. Or, perhaps, you need to run some commands and interact with various processes on your containers.</p>
<p>Either way, you need a way to quickly exec into the container. One solution is to SSH into the EC2 host running the containers and then run <code>docker exec</code> against the container. But wait, Fargate does not even allow direct SSH. So, what now? That's where ECS Exec comes into play: a relatively new capability, released by AWS in March 2021, that allows users to either run an interactive shell or a single command directly against a container.</p>
<p>For ECS running on EC2, this also removes the need for SSH or direct access to the host. This capability simplifies container management and makes it easier to diagnose issues, gather logs, and perform other necessary tasks.</p>
<h3 id="heading-pre-requisites"><strong>Pre-requisites</strong></h3>
<p>Before you can start using ECS Exec, you need to ensure you have the following in place:</p>
<ol>
<li><p><strong>An AWS ECS Cluster</strong>: You should already have an ECS cluster set up with one or more tasks running, either on Fargate or EC2 instances.</p>
</li>
<li><p><strong>IAM Permissions</strong>: Ensure that the IAM task role associated with your ECS tasks has the necessary SSM permissions (<code>ssmmessages:CreateControlChannel</code>, <code>ssmmessages:CreateDataChannel</code>, <code>ssmmessages:OpenControlChannel</code>, and <code>ssmmessages:OpenDataChannel</code>), and that the IAM principal running the command is allowed the <code>ecs:ExecuteCommand</code> action.</p>
</li>
<li><p><strong>Systems Manager Agent (SSM Agent)</strong>: SSM Agent should be installed and running on your container instances. It comes pre-installed on most Amazon Machine Images (AMIs) provided by AWS.</p>
</li>
<li><p><strong>AWS Systems Manager Session Manager Plugin</strong>: You should have the AWS Systems Manager Session Manager plugin installed on your local machine. This plugin enables you to initiate ECS Exec sessions from your terminal.</p>
</li>
<li><p>Note: ECS Exec is not currently supported using the AWS Management Console.</p>
</li>
</ol>
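<p>One prerequisite that is easy to miss: ECS Exec must also be explicitly enabled on the task or service, as it is off by default. As a sketch, assuming a hypothetical cluster named <code>my-cluster</code> and a service named <code>my-service</code> (replace these with your own), you could enable and verify it like this:</p>

```shell
# Placeholder names -- replace with your own cluster and service.
CLUSTER=my-cluster
SERVICE=my-service

# Enable ECS Exec on an existing service.
# This forces a new deployment so that fresh tasks pick up the setting.
aws ecs update-service \
    --cluster "$CLUSTER" \
    --service "$SERVICE" \
    --enable-execute-command \
    --force-new-deployment

# Confirm that a running task has the feature enabled (should print "true").
TASK=$(aws ecs list-tasks --cluster "$CLUSTER" --service-name "$SERVICE" \
    --query 'taskArns[0]' --output text)
aws ecs describe-tasks --cluster "$CLUSTER" --tasks "$TASK" \
    --query 'tasks[0].enableExecuteCommand'
```

<p>Tasks that were launched before the update will not have the agent running, which is why the forced redeployment matters.</p>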
<h2 id="heading-steps"><strong>Steps</strong></h2>
<p>Now, let's walk through the steps to access your containers using ECS Exec:</p>
<h3 id="heading-step-1-set-up-aws-systems-manager-session-manager-plugin"><strong>Step 1: Set Up AWS Systems Manager Session Manager Plugin</strong></h3>
<p>If you haven't already installed the AWS Systems Manager Session Manager plugin on your local machine, you can follow the official AWS documentation to do so. The plugin is available for various operating systems, and installation is straightforward.</p>
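<p>Once installed, you can quickly verify that the plugin is on your <code>PATH</code> by running it with no arguments:</p>

```shell
# Verify the Session Manager plugin installation.
session-manager-plugin
# Expected output if correctly installed:
#   The Session Manager plugin was installed successfully.
#   Use the AWS CLI to start a session.
```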
<h3 id="heading-step-2-access-your-container"><strong>Step 2: Access Your Container</strong></h3>
<p>Once the Session Manager plugin is installed, you can access your container using the <code>aws ecs execute-command</code> command. Here's a basic example:</p>
<pre><code class="lang-bash">aws ecs execute-command  \
    --region <span class="hljs-variable">$AWS_REGION</span> \
    --cluster &lt;cluster-name&gt; \
    --task &lt;task-id&gt; \
    --container &lt;container-name&gt; \
    --<span class="hljs-built_in">command</span> <span class="hljs-string">"/bin/bash"</span> \
    --interactive
</code></pre>
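<p>Instead of opening a full shell, you can also run a single command against the container. Note that ECS Exec currently supports interactive mode only, so the <code>--interactive</code> flag is still required. A sketch with hypothetical names (<code>my-cluster</code>, <code>my-task-id</code>, <code>my-container</code> are placeholders):</p>

```shell
# Run a one-off diagnostic command instead of an interactive shell.
# ECS Exec only supports interactive mode, so --interactive stays.
aws ecs execute-command \
    --region "$AWS_REGION" \
    --cluster my-cluster \
    --task my-task-id \
    --container my-container \
    --command "ls -al /var/log" \
    --interactive
```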
<h3 id="heading-step-3-execute-commands"><strong>Step 3: Execute Commands</strong></h3>
<p>After initiating the session, you'll be presented with a shell prompt that allows you to execute commands inside your container. You can run diagnostics, check logs, or make configuration changes as needed.</p>
<h3 id="heading-step-4-exit-the-session"><strong>Step 4: Exit the Session</strong></h3>
<p>When you're done, simply type <code>exit</code> to exit the session.</p>
<h3 id="heading-step-5-clean-up-optional"><strong>Step 5: Clean Up (Optional)</strong></h3>
<p>It's a good practice to clean up unused sessions and resources. You can view and terminate active sessions from the Session Manager section of the AWS Systems Manager console.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>ECS Exec, powered by AWS Systems Manager, is a valuable tool for container management and troubleshooting on Amazon ECS. It simplifies the process of accessing your containers running on Fargate and EC2, eliminating the need for SSH or direct host access. By following the prerequisites and steps outlined in this blog post, you can take advantage of ECS Exec to efficiently diagnose issues, collect logs, and perform maintenance tasks within your containers. This capability not only streamlines your container orchestration but also enhances your overall operational efficiency when working with ECS.</p>
<p>As you continue to leverage ECS Exec, you'll find it to be an essential part of your container management toolkit, improving your ability to maintain and troubleshoot containerized applications with ease.</p>
]]></content:encoded></item><item><title><![CDATA[What's all about AWS Default VPC?]]></title><description><![CDATA[Introduction
Did you know that your AWS account, if it was created after 4th December 2013 has a default VPC in each AWS Region? In this blog, we will talk about the various characteristics of the default VPC, discuss its use cases, and see how you c...]]></description><link>https://sagaruprety.com.np/whats-all-about-aws-default-vpc</link><guid isPermaLink="true">https://sagaruprety.com.np/whats-all-about-aws-default-vpc</guid><category><![CDATA[aws default vpc]]></category><category><![CDATA[AWS]]></category><category><![CDATA[AWS VPC]]></category><category><![CDATA[aws networking]]></category><dc:creator><![CDATA[Sagar Uprety]]></dc:creator><pubDate>Tue, 14 Nov 2023 18:15:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1704860219280/5a3448f5-8437-42fb-adab-bc442ffa4e9f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction"><strong>Introduction</strong></h3>
<p>Did you know that your AWS account, if it was created after <strong>4th December 2013</strong>, has a default VPC in each AWS Region? In this blog, we will talk about the various characteristics of the default VPC, discuss its use cases, and see how you can create one from the AWS Console, CLI, and Terraform. We will also run through scenarios when you would not want to use the default VPC. So, let's get started 🚀</p>
<h3 id="heading-why-do-we-need-aws-default-vpc"><strong>Why do we need AWS Default VPC?</strong></h3>
<p>First things first: most, if not all, AWS resources require you to define or choose a Virtual Private Cloud (VPC) to run in. A VPC is the underlying foundation that creates a logically isolated virtual network for your resources. That's where the AWS Default VPC comes into play. So, instead of throwing a bunch of errors (such as missing subnets, no internet connection, etc.) when you try to launch an EC2 instance or any other AWS service, AWS can use the default VPC and let you get that aah-hah deployment success message!</p>
<p>YES! You could create your own VPC and launch your resources there (we will cover this later in the blog)! But, what if you don't want to dive into understanding subnetting, setting up an Internet Gateway, Route Tables, etc? If this sounds scary enough, and you want to quickly get started, the default VPC comes to your rescue. You can immediately start launching your Amazon EC2 instances into a default VPC. You can also use services such as Elastic Load Balancing, Amazon RDS, Amazon ECS, Amazon EMR, and others in your default VPC.</p>
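<p>To see this convenience in action: if you launch an instance without specifying any subnet or security group, AWS places it in the Region's default VPC. A minimal sketch, using a placeholder AMI ID:</p>

```shell
# No --subnet-id or --security-group-ids given: the instance lands in the
# Region's default VPC, in one of its default subnets.
# (ami-0abcdef1234567890 is a placeholder -- substitute a real AMI ID.)
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --count 1
```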
<h3 id="heading-characteristics-of-aws-default-vpc"><strong>Characteristics of AWS Default VPC</strong></h3>
<p>Alright! I got that. But what exactly makes up the Default VPC you may ask? Here's your answer:</p>
<p><img src="https://docs.aws.amazon.com/images/vpc/latest/userguide/images/default-vpc.png" alt="                We create a default VPC in each Region, with a default subnet in each Availability Zone.            " class="image--center mx-auto" /></p>
<ol>
<li><p><strong>IPv4 Address Space</strong>: IPv4 CIDR block (<code>172.31.0.0/16</code>). Notice the <strong>size /16</strong> here. This provides a vast pool of <strong>65,536 private IPv4 addresses</strong>, essential for scalability and resource allocation. Notice the IPv4 address space in the image below.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704814652979/5fe3503d-ac78-4814-b46d-5f18ead72a9b.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Subnets</strong>: Each Availability Zone (AZ) gets a dedicated /20-size public subnet, allowing for <strong>4,096 IP addresses</strong> per subnet. Note that, all the EC2 Instances that you launch in a default subnet get both a public IPv4 address and a private IPv4 address, as well as both public and private DNS hostnames.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704813601691/d2ce3dac-44bd-4f09-b784-cf3ce27d8731.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Gateway to the Internet</strong>: An <strong>Internet Gateway (IGW)</strong> that allows seamless connection between your VPC and the Internet, enabling secure and efficient communication. You also get a <strong>route</strong> in the main route table that sends all external traffic (<code>0.0.0.0/0</code>) to the internet gateway, while internal traffic stays within the VPC itself (local).</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704813511095/35bc824b-8297-4c47-873e-deb7c5952c32.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Security Measures</strong>: A default <a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/default-security-group.html"><strong>Security Group(SG)</strong></a> and a default <a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#default-network-acl"><strong>Network Access Control List (NACL)</strong></a> that ensure a secure perimeter for your resources.</p>
</li>
</ol>
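<p>The numbers above follow directly from the prefix lengths, and you can sanity-check them with plain shell arithmetic (no AWS calls involved):</p>

```shell
# CIDR math behind the default VPC sizing.
vpc_prefix=16       # 172.31.0.0/16
subnet_prefix=20    # each default subnet is a /20

vpc_ips=$((2 ** (32 - vpc_prefix)))        # addresses in the VPC block
subnet_ips=$((2 ** (32 - subnet_prefix)))  # addresses per subnet
max_subnets=$((vpc_ips / subnet_ips))      # how many /20s fit in a /16

echo "VPC addresses:    $vpc_ips"      # 65536
echo "Subnet addresses: $subnet_ips"   # 4096
echo "Max /20 subnets:  $max_subnets"  # 16
```

<p>Note that AWS reserves the first four and the last IP address in every subnet, so the usable count per subnet is slightly lower than the raw arithmetic suggests.</p>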
<h3 id="heading-customize-your-default-vpc"><strong>Customize your default VPC</strong></h3>
<p>If required, you can customize your default VPC to align with your unique requirements, which is often the case:</p>
<ul>
<li><p>Add additional IPv4 CIDR blocks (if you need more IP address space)</p>
</li>
<li><p>Add nondefault subnets</p>
</li>
<li><p>Modify Route Tables and add/remove routes</p>
</li>
<li><p>Update rules in the default security group and/or add additional security groups, providing granular control over inbound and outbound traffic.</p>
</li>
<li><p>Configure for AWS Site-to-Site VPN and Direct Connect Gateway</p>
</li>
<li><p>Associate an IPv6 CIDR block</p>
</li>
</ul>
<h3 id="heading-access-your-default-vpc-and-subnets"><strong>Access your default VPC and subnets</strong></h3>
<p>Well, thanks for that! Now I know what the default VPC is, why it is needed, and its various components. But, hey there 👋 how and where can I see it? I got you!</p>
<ol>
<li><p><strong>Navigate to the</strong> <a target="_blank" href="https://console.aws.amazon.com/vpc/"><strong>AWS VPC Console</strong></a>.</p>
</li>
<li><p><strong>Access "Your VPCs" in the sidebar</strong>.</p>
</li>
<li><p><strong>Check the "Default VPC" column for a Yes value</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704814452161/4b9ba743-126b-4e0d-abb0-7b19c6106d32.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click on YOUR-VPC-ID and open <strong>"Resource Map"</strong> to check for the underlying <strong>default subnets, route tables, and IGW.</strong> Alternatively, you can navigate to Subnets and identify them from the "Default Subnet" column.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704814931881/40a378d5-6655-48bb-a3f5-40943a5773be.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
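<p>If you prefer the CLI, the same information is available with a couple of <code>describe</code> calls. The <code>isDefault</code> and <code>default-for-az</code> filters single out the default VPC and its subnets:</p>

```shell
# Find the default VPC in the current Region.
aws ec2 describe-vpcs \
    --filters Name=isDefault,Values=true \
    --query 'Vpcs[0].{VpcId:VpcId,Cidr:CidrBlock}'

# List its default subnets (one per Availability Zone).
aws ec2 describe-subnets \
    --filters Name=default-for-az,Values=true \
    --query 'Subnets[].{SubnetId:SubnetId,Az:AvailabilityZone,Cidr:CidrBlock}'
```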
<h3 id="heading-creating-default-vpc"><strong>Creating Default VPC</strong></h3>
<p>Whoo! This looks interesting. So, can I create one if I don't have one already? Well, certainly. Let's look at three different ways of achieving this.</p>
<ol>
<li><p><strong>AWS Console</strong>: Navigate to "<strong>Your VPCs</strong>," open the "<strong>Actions</strong>" menu, and select <strong>"Create Default VPC.</strong>" You are all set to go!</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704815462149/8ef16ba7-a9b5-46da-a2af-68080b06cf19.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>AWS CLI</strong>: If you are more of a CLI wizard, as you should be as a DevOps Engineer, you can use the following command to create a new default VPC.</p>
<pre><code class="lang-bash"> aws ec2 create-default-vpc
</code></pre>
 <div data-node-type="callout">
 <div data-node-type="callout-emoji">💡</div>
 <div data-node-type="callout-text">Note that you would need to <a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html">install AWS CLI</a> and authenticate to your account to run the command. Read the configuration guide <a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html">here</a></div>
 </div>


</li>
</ol>
<p><strong>3. Terraform:</strong> If you do everything as IaC (Infrastructure as Code), you can create the same with the following <a target="_blank" href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/default_vpc">Terraform Resource Block</a>.</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_default_vpc"</span> <span class="hljs-string">"default"</span> {
  tags = {
    Name = <span class="hljs-string">"Default VPC"</span>
  }
}
</code></pre>
<h3 id="heading-can-i-delete-a-default-vpc"><strong>Can I delete a default VPC?</strong></h3>
<p>The short answer is <strong>YES!</strong> But be careful, and only do so if you have a custom VPC for your use cases. Otherwise, you might not be able to use many AWS services.</p>
<h3 id="heading-can-i-make-an-existing-vpc-the-default-vpc-or-restore-a-deleted-default-vpc-in-amazon-vpc"><strong>Can I make an existing VPC the default VPC or restore a deleted default VPC in Amazon VPC?</strong></h3>
<p>Here, the answer is <strong>NO!</strong> You cannot convert your existing non-default VPC to your default VPC. You also cannot restore a previous default VPC that you deleted.</p>
<h3 id="heading-should-i-always-use-the-default-vpc">Should I always use the default VPC?</h3>
<p>Now this is the question you should be asking! While it's very easy to get started by creating resources in the default VPC, once you have multiple resources for different projects, especially in production, it is generally advised not to use the default VPC.</p>
<p>This is because you can isolate your workloads in multiple dedicated VPCs; for instance, your containers and your RDS database can run in separate custom VPCs. This creates a logical separation and adds an extra layer of security. You also get much more control over IP addressing and subnetting in custom VPCs. On top of that, most security audits will flag default VPCs, SGs, etc. Check out the <strong>tfsec</strong> flag details <a target="_blank" href="https://aquasecurity.github.io/tfsec/v1.8.0/checks/aws/vpc/no-default-vpc/">here</a>.</p>
<h3 id="heading-conclusion"><strong>Conclusion</strong></h3>
<p>The AWS Default VPC comes with a predefined IPv4 address range, subnets, an IGW, and other components to help you quickly get started with AWS services. It abstracts away the creation of a custom VPC and provides you with a ready-to-use one. However, for production use cases, it's advised to create your own custom VPC for specific workloads and additional security.</p>
<p>If you like my blog, do follow it and check out my YouTube channel, where I publish videos on DevOps and Cloud. Here's one about the Docker Bridge Network that explains how containers communicate with each other and with the internet:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=jCJAiOSuOkg">https://www.youtube.com/watch?v=jCJAiOSuOkg</a></div>
<p><strong>Resources</strong></p>
<ol>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html">https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/default-security-group.html">https://docs.aws.amazon.com/vpc/latest/userguide/default-security-group.html</a></p>
</li>
<li><p><a target="_blank" href="https://aquasecurity.github.io/tfsec/v1.8.0/checks/aws/vpc/no-default-vpc/">https://aquasecurity.github.io/tfsec/v1.8.0/checks/aws/vpc/no-default-vpc/</a></p>
</li>
<li><p><a target="_blank" href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/default_vpc">https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/default_vpc</a></p>
</li>
</ol>
]]></content:encoded></item></channel></rss>