AWS Scenario-Based Interview Questions


AWS Cloud Architect Interview Series | SCENARIO based Questions | Part – 1

AWS Architect Interview Questions and Answers for 2022


AWS vs. OpenStack: a service-by-service comparison

| Services | AWS | OpenStack |
| --- | --- | --- |
| User Interface | GUI: Console; API: EC2 API; CLI: Available | GUI: Console; API: EC2 API; CLI: Available |
| Computation | EC2 | Nova |
| File Storage | S3 | Swift |
| Block Storage | EBS | Cinder |
| Networking | IP addressing, egress, load balancing, firewall (DNS), VPC | IP addressing, load balancing, firewall (DNS) |
| Big Data | Elastic MapReduce | - |


  • What type of performance can you expect from Elastic Block Storage? How do you back it up and enhance its performance?
  • The performance of Elastic Block Storage varies: it can rise above the SLA performance level and then drop below it. The SLA only guarantees an average disk I/O rate, which can frustrate performance experts who want reliable, consistent disk throughput on a server; virtual AWS instances do not behave that way. EBS volumes can be backed up through a graphical user interface such as Elasticfox, or by using the snapshot facility through an API call. Performance can also be improved by using Linux software RAID and striping across four volumes.

  • Imagine that you have an AWS application that requires 24×7 availability and can be down for a maximum of only 15 minutes. How will you ensure that the database hosted on your EBS volume is backed up?
  • Automated backups are the key here, as they run in the background without requiring any manual intervention. Whenever data needs to be backed up, the AWS API and AWS CLI play a vital role in automating the process through scripts. The best approach is to schedule timely backups of the EBS volume attached to the EC2 instance. The EBS snapshots are stored on Amazon S3 and can be used to recover the database instance in case of any failure or downtime. A minimal snapshot sketch is shown below.
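    As a rough illustration, the snippet below creates a snapshot of one EBS volume with boto3; the volume ID and region are placeholders, and the scheduling (cron, EventBridge, etc.) is left to your environment:

```python
import boto3

# Assumption: placeholder region and EBS volume ID; substitute your own.
ec2 = boto3.client("ec2", region_name="us-east-1")

def backup_ebs_volume(volume_id: str) -> str:
    """Create a point-in-time snapshot of an EBS volume (stored durably in S3)."""
    response = ec2.create_snapshot(
        VolumeId=volume_id,
        Description=f"Automated backup of {volume_id}",
        TagSpecifications=[{
            "ResourceType": "snapshot",
            "Tags": [{"Key": "backup", "Value": "automated"}],
        }],
    )
    return response["SnapshotId"]

if __name__ == "__main__":
    print(backup_ebs_volume("vol-0123456789abcdef0"))
```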

  • You create a Route 53 latency record set from your domain to a system in Singapore and a similar record to a machine in Oregon. When a user located in India visits your domain, to which location will they be routed?
  • Assuming the application is hosted on Amazon EC2 instances and multiple instances of the application are deployed in different EC2 regions, the request is most likely to go to Singapore, because Amazon Route 53 latency-based routing sends each request to the location that is likely to give the fastest response.

  • Differentiate between on-demand instance and spot instance.
  • Spot Instances are spare, unused EC2 capacity that one can bid for. Once the bid exceeds the current spot price (which changes in real time based on demand and supply), the spot instance is launched. If the spot price rises above the bid price, the instance can be taken away at any time and is terminated with a two-minute notice. The best way to decide on the optimal bid price for a spot instance is to check the price history of the last 90 days, which is available on the AWS console. The advantage of spot instances is that they are cost-effective; the drawback is that they can be terminated at any time. Spot instances are ideal to use when –

  • There are optional, nice-to-have tasks.
  • You have flexible workloads which can be run when there is enough spare compute capacity.
  • Tasks require extra computing capacity to improve performance.
  • On-demand instances are made available whenever you require them, and you pay for the time you use them on an hourly basis. These instances can be released when they are no longer required and do not require any upfront commitment. The availability of these instances is guaranteed by AWS, unlike spot instances.

    The best practice is to launch a couple of on-demand instances, which maintain a guaranteed minimum level of compute resources for the application, and add a few spot instances whenever the opportunity arises.

  • How will you access the data on EBS in AWS?
  • Elastic Block Storage, as the name indicates, provides persistent, highly available, high-performance block-level storage that can be attached to a running EC2 instance. The storage can be formatted and mounted as a file system, or the raw block device can be accessed directly. A minimal attach-and-mount sketch is shown below.
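    For illustration, a minimal boto3 sketch that attaches an EBS volume to a running instance (volume and instance IDs are placeholders); formatting and mounting the device then happens inside the instance's operating system:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Placeholders: substitute a real volume ID and instance ID.
VOLUME_ID = "vol-0123456789abcdef0"
INSTANCE_ID = "i-0123456789abcdef0"

# Attach the EBS volume to the running instance as a block device.
ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=INSTANCE_ID, Device="/dev/sdf")

# Wait until the volume is reported as in use before touching it from the OS.
ec2.get_waiter("volume_in_use").wait(VolumeIds=[VOLUME_ID])

# From inside the instance, the raw device can then be formatted and mounted,
# e.g. `sudo mkfs -t xfs /dev/xvdf` followed by `sudo mount /dev/xvdf /data`.
```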

  • What is the boot time for an instance store-backed instance?
  • The boot time for an Amazon instance store-backed AMI is usually less than 5 minutes.


  • Is it possible to vertically scale an Amazon instance? If yes, how?
  • The following are the steps to scale an Amazon instance vertically (a minimal sketch follows the list) –

  • Spin up a larger Amazon instance than the existing one.
  • Stop the live running instance and detach its root EBS volume.
  • Make a note of the unique device name and attach that root volume to the new server.
  • Start the instance again.
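    A minimal boto3 sketch of these steps, assuming placeholder instance and volume IDs and that the new, larger instance has already been launched and stopped so the root volume can be attached as its boot device:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

OLD_INSTANCE = "i-0aaaaaaaaaaaaaaaa"   # existing, smaller instance (placeholder)
NEW_INSTANCE = "i-0bbbbbbbbbbbbbbbb"   # larger instance, already spun up (placeholder)
ROOT_VOLUME = "vol-0123456789abcdef0"  # root EBS volume of the old instance (placeholder)

# 1. Stop the existing instance so its root volume can be detached safely.
ec2.stop_instances(InstanceIds=[OLD_INSTANCE])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[OLD_INSTANCE])

# 2. Detach the root volume and wait for it to become available.
ec2.detach_volume(VolumeId=ROOT_VOLUME, InstanceId=OLD_INSTANCE)
ec2.get_waiter("volume_available").wait(VolumeIds=[ROOT_VOLUME])

# 3. Attach it to the new, larger instance using the noted device name.
ec2.attach_volume(VolumeId=ROOT_VOLUME, InstanceId=NEW_INSTANCE, Device="/dev/xvda")

# 4. Start the new instance again.
ec2.start_instances(InstanceIds=[NEW_INSTANCE])
```
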
  • Differentiate between vertical and horizontal scaling in AWS.
  • The main difference between vertical and horizontal scaling is the way in which you add compute resources to your infrastructure. In vertical scaling, more power is added to the existing machine, while in horizontal scaling additional resources are added to the system by adding more machines to the network, so that the workload and processing are shared among multiple devices. The best way to understand the difference is to imagine that you are retiring your Toyota and buying a Ferrari because you need more horsepower. This is vertical scaling. Another way to get that added horsepower is not to ditch the Toyota for the Ferrari but to buy another car. This relates to horizontal scaling, where you drive several cars all at once.

    When the users number up to 100, a single EC2 instance is enough to run the entire web application or the database until the traffic ramps up. When the traffic does ramp up, it is better to scale vertically by increasing the capacity of the EC2 instance to meet the increasing demands of the application. AWS supports instances with up to 128 virtual cores or 488 GB of RAM.

    When the users of your application grow to 1,000 or more, vertical scaling cannot handle all the requests, and horizontal scaling is needed, which is achieved through a distributed file system, clustering, and load balancing.


  • What is the total number of buckets that can be created in AWS by default?
  • By default, 100 buckets can be created in each AWS account. If additional buckets are required, the bucket limit can be raised by submitting a service limit increase request.

  • Differentiate between Amazon RDS, Redshift and DynamoDB.

| Features | Amazon RDS | Redshift | DynamoDB |
| --- | --- | --- | --- |
| Computing Resources | Instances with 64 vCPU and 244 GB RAM | Nodes with vCPU and 244 GB RAM | Not specified; SaaS (Software as a Service) |
| Maintenance Window | 30 minutes every week | 30 minutes every week | No impact |
| Database Engine | MySQL, Oracle DB, SQL Server, Amazon Aurora, PostgreSQL | Redshift | NoSQL |
| Primary Usage Feature | Conventional databases | Data warehouse | Database for dynamically modified data |
| Multi-AZ Replication | Additional service | Manual | In-built |


  • An organization wants to deploy a two-tier web application on AWS. The application requires complex query processing and table joins. However, the company has limited resources and requires high availability. Which is the best configuration the company can opt for based on these requirements?
  • DynamoDB addresses the core problems of database scalability, management, reliability, and performance, but it does not have the functionality of an RDBMS. DynamoDB does not support complex joins, complex query processing, or complex transactions. For this kind of functionality, you can run a relational engine on Amazon RDS or EC2.

  • If you have half of the workload on the public cloud while the other half is on local storage, what kind of architecture will you use for this?
  • A hybrid cloud architecture is the right choice here, since the workload is split between the public cloud and on-premises (local) infrastructure.
  • Is it possible to cast off S3 with EC2 instances? If yes, how?
  • Yes, it is possible to use S3 with EC2 instances whose root devices are backed by local instance storage.

  • How will you configure an instance with the application and its dependencies, and make it ready to serve traffic?
  • You can achieve this with the use of lifecycle hooks. They are powerful because they let you pause the creation or termination of an instance so that you can step in and perform custom actions such as configuring the instance, downloading the required files, and any other steps needed to make the instance ready. Every Auto Scaling group can have multiple lifecycle hooks. A minimal sketch is shown below.
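    A minimal boto3 sketch of a launch lifecycle hook, assuming a hypothetical Auto Scaling group name (web-asg); the instance stays in a wait state until your bootstrap logic signals that configuration is complete:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # assumed region

# Pause newly launched instances so they can be configured before serving traffic.
autoscaling.put_lifecycle_hook(
    AutoScalingGroupName="web-asg",                    # hypothetical ASG name
    LifecycleHookName="configure-before-traffic",
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=600,        # seconds allowed for custom configuration
    DefaultResult="ABANDON",     # terminate the instance if setup never completes
)

# After your bootstrap script installs the app and its dependencies, it signals:
def mark_instance_ready(instance_id: str) -> None:
    autoscaling.complete_lifecycle_action(
        AutoScalingGroupName="web-asg",
        LifecycleHookName="configure-before-traffic",
        LifecycleActionResult="CONTINUE",   # put the instance into service
        InstanceId=instance_id,
    )
```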

  • How can you safeguard EC2 instances running in a VPC?
  • AWS security groups associated with EC2 instances help safeguard EC2 instances running in a VPC by providing security at the protocol and port access level. You can configure both inbound and outbound rules to enable secured access to the EC2 instance. AWS security groups work much like a firewall: they contain a set of rules that filter the traffic coming into and out of an EC2 instance and deny any kind of unauthorized access. A minimal sketch is shown below.
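    As a sketch, the snippet below creates a security group in a VPC (placeholder VPC ID) and allows only inbound HTTPS; real rules should be narrowed to your own ports and CIDR ranges:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Create a security group inside the VPC (placeholder VPC ID).
sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow HTTPS only",
    VpcId="vpc-0123456789abcdef0",
)
sg_id = sg["GroupId"]

# INBOUND: allow HTTPS (TCP 443) from anywhere; tighten the CIDR in practice.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
# OUTBOUND traffic is allowed by default; add authorize_security_group_egress
# rules only if you need something beyond the default allow-all egress rule.
```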

  • How many EC2 instances can be used in a VPC?
  • There is a limit of running up to 20 on-demand instances across the instance family; in addition, you can purchase 20 reserved instances and request spot instances up to your dynamic spot limit per region.

  • What are some of the key best practices for security in Amazon EC2?
  • Create individual IAM (Identity and Access Management) users to control access to your AWS resources. Creating separate IAM users provides separate credentials for every user, making it possible to assign different permissions to each user based on their access requirements.
  • Secure the AWS Root account and its access keys.
  • Harden EC2 instances by disabling unnecessary services and applications by installing only necessary software and tools on EC2 instances.
  • Grant least privilege by opening up only the permissions required to perform a specific task and nothing more. Additional permissions can be granted as required.
  • Define and review the security group rules on a regular basis.
  • Have a well-defined strong password policy for all the users.
  • Deploy anti-virus software on the AWS network to protect it from Trojans, Viruses, etc.
  • What should the instance's tenancy attribute be for running it on single-tenant hardware?
  • The instance tenancy attribute must be set to 'dedicated'; other values are not appropriate for this operation.

  • There is a distributed application that processes huge amounts of data across various EC2 instances. The application is designed so that it can recover gracefully from EC2 instance failures. How will you accomplish this in a cost-effective manner?
  • An on-demand or reserved instance is not ideal in this case, as the task is not continuous. Moreover, it does not make sense to launch an on-demand instance whenever work comes up, because on-demand instances are expensive. In this case, the ideal choice is a spot instance, owing to its cost-effectiveness and lack of long-term commitment.

  • What are the important features of a classic load balancer in EC2 ?
  • The high availability feature ensures that the traffic is distributed among EC2 instances in single or multiple availability zones. This ensures a high scale of availability for incoming traffic.
  • Classic load balancer can decide whether to route the traffic or not based on the results of health check.
  • You can implement secure load balancing within a network by creating security groups in a VPC.
  • Classic load balancer supports sticky sessions which ensure that the traffic from a user is always routed to the same instance for a seamless experience.
  • What parameters will you take into consideration when choosing an availability zone?
  • Performance, pricing, latency, and response time are some of the factors to consider when selecting an availability zone.

  • Which instance will you use for deploying a 4-node Hadoop cluster in AWS?
  • We can use a c4.8xlarge instance or an i2 instance for this, but using a c4.8xlarge will require a better configuration on the PC.

  • Will you use encryption for S3?
  • It is better to consider encryption for sensitive data on S3, since such data is often proprietary.

  • How can you send requests to Amazon S3?
  • Using the REST API or the AWS SDK wrapper libraries, which wrap the underlying Amazon S3 REST API. A minimal SDK sketch is shown below.
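    A minimal sketch using boto3, the AWS SDK for Python, against a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")  # credentials and region resolved from the environment

BUCKET = "example-bucket"  # hypothetical bucket name
KEY = "reports/summary.txt"

# Upload (PUT) an object; the SDK wraps the underlying S3 REST API call.
s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"hello from the SDK")

# Download (GET) the same object and read its contents.
obj = s3.get_object(Bucket=BUCKET, Key=KEY)
print(obj["Body"].read().decode())
```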

  • How will you bind the user session with a specific instance in ELB (Elastic Load Balancer) ?
  • This can be achieved by enabling sticky sessions on the load balancer; a minimal sketch is shown below.
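    A rough boto3 sketch of enabling duration-based sticky sessions on a Classic Load Balancer (the load balancer name is a placeholder); for an Application Load Balancer, stickiness is enabled on the target group attributes instead:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")  # Classic ELB API, assumed region

LB_NAME = "my-classic-elb"  # placeholder load balancer name

# Create a duration-based cookie stickiness policy (1 hour).
elb.create_lb_cookie_stickiness_policy(
    LoadBalancerName=LB_NAME,
    PolicyName="sticky-1h",
    CookieExpirationPeriod=3600,
)

# Bind the policy to the listener on port 80 so sessions stick to one instance.
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName=LB_NAME,
    LoadBalancerPort=80,
    PolicyNames=["sticky-1h"],
)
```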

  • What are the possible connection issues you encounter when connecting to an EC2 instance ?
  • Unprotected private key file
  • Server refused key
  • Connection timed out
  • No supported authentication method available
  • Host key not found, permission denied.
  • User key not recognized by the server, permission denied.
  • What is the difference between Amazon S3 and EBS?

| Features | Amazon S3 | EBS |
| --- | --- | --- |
| Paradigm | Object store | Filesystem |
| Security | Private key or public key | Visible only to your EC2 |
| Redundancy | Across data centers | Within the data center |
| Performance | Fast | Superfast |


  • Can you run multiple websites on an EC2 server using a single IP address?
  • To run multiple websites on an EC2 server, more than one Elastic IP address is required.

  • What happens when you reboot an EC2 instance?
  • Rebooting an instance is similar to rebooting a PC. The instance does not return to its image's original state; the contents of its hard disk are the same as before the reboot.

  • A content management system running on an EC2 instance is approaching 100% CPU utilization. How will you reduce the load on the EC2 instance?
  • This can be done by attaching a load balancer to an Auto Scaling group so that the load is efficiently distributed among all instances; a minimal sketch is shown below.
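    For illustration, a boto3 sketch that attaches an Application Load Balancer target group to an existing Auto Scaling group (the group name and target group ARN are placeholders), so that traffic is shared across all instances:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # assumed region

# Placeholders: your Auto Scaling group and ALB target group.
ASG_NAME = "cms-asg"
TARGET_GROUP_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/cms-tg/0123456789abcdef"
)

# Register the Auto Scaling group's instances with the load balancer's target group.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName=ASG_NAME,
    TargetGroupARNs=[TARGET_GROUP_ARN],
)
```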

  • What happens when you launch instances in Amazon VPC ?
  • Each instance has a default IP address when the instance is launched in Amazon VPC. This approach is considered ideal when you need to connect cloud resources with the data centers.

  • Can you modify the private IP address of an EC2 instance while it is running in a VPC ?
  • It is not possible to change the primary private IP addresses. However, secondary IP addresses can be assigned, unassigned or moved between instances at any given point.

  • You are launching an instance under the free usage tier from an AMI having a snapshot size of 50 GB. How will you launch the instance under the free usage tier?
  • It is not possible to launch this instance under the free usage tier, because the free tier only covers EBS volumes of up to 30 GB.

  • Which load balancer will you use to make routing decisions at the application layer or transport layer that supports either VPC or EC2?
  • The Classic Load Balancer: it can route at both the transport layer (TCP) and the application layer (HTTP/HTTPS) and supports both EC2-Classic and VPC.

    36) Mention the native AWS security logging capabilities.

    AWS CloudTrail:

    AWS CloudTrail facilitates security analysis, compliance auditing, and resource change tracking of an AWS environment. It can also provide a history of AWS API calls for a particular account. CloudTrail is an essential service for understanding AWS usage and should be enabled in all AWS regions for all AWS accounts, irrespective of where the services are deployed. CloudTrail delivers log files, with optional log file integrity validation, to a designated Amazon S3 (Amazon Simple Storage Service) bucket approximately every five minutes. AWS CloudTrail may be configured to send messages using Amazon Simple Notification Service (Amazon SNS) when new logs have been delivered. It can also integrate with AWS CloudWatch Logs and AWS Lambda for processing purposes. A minimal sketch of enabling a trail is shown below.
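    A minimal boto3 sketch of enabling a multi-region trail; the trail and bucket names are placeholders, and the S3 bucket must already exist with a bucket policy that allows CloudTrail to write to it:

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")  # assumed region

# Placeholders: trail name and pre-existing destination bucket.
TRAIL_NAME = "org-audit-trail"
LOG_BUCKET = "my-cloudtrail-logs"

# Create a trail that records API calls from all regions into one S3 bucket.
cloudtrail.create_trail(
    Name=TRAIL_NAME,
    S3BucketName=LOG_BUCKET,
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,   # optional log file integrity validation
)

# Trails do not record anything until logging is explicitly started.
cloudtrail.start_logging(Name=TRAIL_NAME)
```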

    AWS Config:

    AWS Config can create an AWS resource inventory, send notifications for any changes in configuration, and maintain relationships among AWS resources. It provides a timeline of changes in resource configuration for specific services. If multiple changes occur over a short time interval, only the cumulative changes get recorded. Snapshots of changes are stored in a configured Amazon S3 bucket, and Amazon SNS notifications can be sent when resource changes are detected in AWS. Apart from tracking resource changes, AWS Config should be enabled to troubleshoot or perform security analysis and demonstrate compliance over a period of time or at a specific point in time.

    Detailed Billing Reports:

    Detailed billing reports show the cost breakdown by the hour, day, or month, by a particular product or product resource, by each account in a company, or by customer-defined tags. Billing reports indicate how AWS resources are being consumed and can be used to audit a company's consumption of AWS services. AWS publishes detailed billing reports to a specified S3 bucket in CSV format several times a day.


    Amazon S3 Access Logs:

    S3 access logs record information about individual requests made to Amazon S3 buckets and can be used to analyze traffic patterns, perform troubleshooting, and carry out security and access auditing. The access logs are delivered to designated target S3 buckets on a best-effort basis. They can help users learn about the customer base, define access policies, and set lifecycle policies.

    Elastic Load Balancing Access Logs

    Elastic Load Balancing access logs record the individual requests made to a particular load balancer. They can also be used to analyze traffic patterns, perform troubleshooting, and manage security and access auditing. The logs give information about request processing durations. This data can improve the user experience by uncovering user-facing errors generated by the load balancer and by helping debug errors in communication between the load balancers and back-end web servers. Elastic Load Balancing access logs are delivered to a configured target S3 bucket, based on user requirements, at five- or sixty-minute intervals.

    Amazon CloudFront Access Logs:

    Amazon CloudFront access logs record individual requests made to CloudFront distributions. Like the previous two access logs, CloudFront access logs can be used to analyze traffic patterns, perform any troubleshooting required, and carry out security and access auditing. Users can use these access logs to gather insight about the customer base, define access policies, and set lifecycle policies. CloudFront access logs are delivered to a configured S3 bucket on a best-effort basis.

    Amazon Redshift Logs:

    Amazon Redshift logs collect and record information concerning database connections, any changes to user definitions, and user activity. The logs can be used for security monitoring and for troubleshooting any database-related issues. Redshift logs are delivered to a designated S3 bucket.

    Amazon Relational Database Service (RDS) Logs

    RDS logs record information about database access, errors, performance, and operation. They make it possible to analyze the security, performance, and operation of the AWS-managed databases. RDS logs can be viewed or downloaded using the Amazon RDS console, the AWS Command Line Interface, or the Amazon RDS API. The log files may also be queried from DB engine-specific database tables.

    Amazon VPC Flow Logs:

    Amazon VPC Flow Logs collect information about the IP traffic going to and from network interfaces in an Amazon Virtual Private Cloud (Amazon VPC). They can be applied, as required, at the VPC, subnet, or individual Elastic Network Interface level. VPC Flow Log data is stored using Amazon CloudWatch Logs, and it can be exported from Amazon CloudWatch for additional processing or analysis. It is recommended to enable Amazon VPC Flow Logs for debugging or for monitoring policies that require capturing and visualizing network flow data. A minimal sketch of enabling them is shown below.
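    A boto3 sketch of enabling flow logs for a VPC into CloudWatch Logs; the VPC ID, log group name, and IAM role ARN are placeholders, and the role must permit VPC Flow Logs to publish to CloudWatch Logs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Placeholders: VPC, destination log group, and an IAM role that can write to it.
VPC_ID = "vpc-0123456789abcdef0"
LOG_GROUP = "vpc-flow-logs"
DELIVERY_ROLE_ARN = "arn:aws:iam::123456789012:role/flow-logs-role"

# Capture accepted and rejected IP traffic for the whole VPC.
ec2.create_flow_logs(
    ResourceIds=[VPC_ID],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName=LOG_GROUP,
    DeliverLogsPermissionArn=DELIVERY_ROLE_ARN,
)
```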

    Various options are available in AWS for centrally managing log data. Most of the AWS audit and access logs data are delivered to Amazon S3 buckets, which users can configure.

    Consolidation of all the S3-based logs into a centralized, secure bucket makes it easier to organize, manage and work with the data for further analysis and processing. The Amazon CloudWatch logs provide a centralized service where log data can be aggregated.

    37) What is a DDoS attack, and how can you handle it?

    A Denial of Service (DoS) attack occurs when there is a malicious attempt to affect the availability of a particular system, such as an application or a website, for end users. A Distributed Denial of Service (DDoS) attack occurs when the attacker uses multiple sources to generate the attack. DDoS attacks are generally classified by the layer of the Open Systems Interconnection (OSI) model that they target. The most common DDoS attacks occur at the network, transport, presentation, and application layers, corresponding to layers 3, 4, 6, and 7, respectively. On AWS, services such as AWS Shield, AWS WAF, Amazon CloudFront, and Amazon Route 53 can be used to absorb and mitigate such attacks.

    38) What is RTO and RPO in AWS?

    The Disaster Recovery (DR) Strategy involves having backups for the data and redundant workload components. RTO and RPO are objectives used to restore the workload and define recovery objectives on downtime and data loss.

    Recovery Time Objective or RTO is the maximum acceptable delay in time between the interruption of a service and its restoration. It is used to determine an acceptable time window during which a service can remain unavailable.

    Recovery Point Objective or RPO is the maximum amount of time that can be allowed since the last data recovery point. It is used to determine what can be considered an acceptable loss of data from the last recovery point to the service interruption.

    RPO and RTO are set by the organization using AWS and have to be set based on business needs. The cost of recovery and the probability of disruption can help an organization determine the RPO and RTO.

    39) How is stopping an EC2 instance different from terminating it?

    Stopping an EC2 instance performs a normal shutdown, and the instance is moved to the stopped state. When an EC2 instance is terminated, however, it is moved to the terminated state, and any attached EBS volumes that are marked for deletion on termination (the root volume, by default) are deleted and cannot be recovered.

    40) How can you automate EC2 backup by using EBS?

    AWS EC2 instances can be backed up by creating snapshots of EBS volumes. The snapshots are stored with the help of Amazon S3. Snapshots can capture all the data contained in EBS volumes and create exact copies of this data. The snapshots can then be copied and transferred into another AWS region, ensuring safe and reliable storage of sensitive data.

    Before running AWS EC2 backup, it is recommended to stop the instance or detach the EBS volume that will be backed up. This ensures that any failures or errors that occur will not affect newly created snapshots.

    The following steps must be followed to back up an AWS EC2 instance:

  • Sign in to the AWS account, and launch the AWS console.
  • Launch the EC2 Management Console from the Services option.
  • From the list of running instances, select the instance that has to be backed up.
  • Find the Amazon EBS volumes that are attached locally to that particular instance.
  • Create a snapshot of each volume, list the existing snapshots, and specify a retention period for them.
  • Remember to remove snapshots that are older than the retention period (a minimal pruning sketch follows the list).
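    A minimal boto3 sketch of that retention step, deleting snapshots of a given volume that are older than a chosen retention period (the volume ID and retention period are placeholders):

```python
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

VOLUME_ID = "vol-0123456789abcdef0"  # placeholder volume
RETENTION_DAYS = 14                  # placeholder retention period

cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)

# List this account's snapshots for the volume and delete the expired ones.
snapshots = ec2.describe_snapshots(
    OwnerIds=["self"],
    Filters=[{"Name": "volume-id", "Values": [VOLUME_ID]}],
)["Snapshots"]

for snap in snapshots:
    if snap["StartTime"] < cutoff:
        ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
```
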
    41) How can one add an existing instance to a new Auto Scaling group?

    To add an existing instance to a new Auto Scaling group:

  • Open the EC2 console.
  • From the list of instances, select the instance that is to be added.
  • Go to Actions -> Instance Settings -> Attach to Auto Scaling Group.
  • Select a new Auto Scaling group and attach it to the instance (a minimal sketch follows the list).
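    The same console steps can be scripted; a minimal boto3 sketch with a placeholder instance ID and group name:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # assumed region

# Attach a running instance to an existing Auto Scaling group (placeholders).
autoscaling.attach_instances(
    InstanceIds=["i-0123456789abcdef0"],
    AutoScalingGroupName="my-new-asg",
)
# The group's desired capacity increases by one to account for the attached instance.
```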

    AWS Interview Questions For S3

    S3 stands for Simple Storage Service. AWS S3 can be used to store and retrieve any amount of data at any time and, best of all, from anywhere on the web. The payment model for S3 is pay-as-you-go.

    You can send requests by utilizing the AWS SDK or the REST API wrapper libraries.

    S3 Standard (frequently accessed) is the default storage class in S3.

    The storage classes available in Amazon S3 are:

  • Amazon S3 Standard
  • Amazon S3 Standard-Infrequent Access
  • Amazon S3 Reduced Redundancy Storage
  • Amazon Glacier
  • Three different methods let you encrypt data in S3 (a minimal sketch of each follows the list):

  • Server-side encryption with customer-provided keys (SSE-C)
  • Server-side encryption with Amazon S3-managed keys (SSE-S3)
  • Server-side encryption with AWS KMS-managed keys (SSE-KMS)
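    A boto3 sketch of the three server-side encryption options on a PUT request; the bucket name, KMS key alias, and customer key are placeholders:

```python
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"  # placeholder bucket

# SSE-S3: Amazon S3 manages the encryption keys.
s3.put_object(Bucket=BUCKET, Key="sse-s3.txt", Body=b"data",
              ServerSideEncryption="AES256")

# SSE-KMS: encryption with an AWS KMS key (placeholder key alias).
s3.put_object(Bucket=BUCKET, Key="sse-kms.txt", Body=b"data",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="alias/my-app-key")

# SSE-C: you supply (and must later re-supply) your own 256-bit key.
customer_key = os.urandom(32)
s3.put_object(Bucket=BUCKET, Key="sse-c.txt", Body=b"data",
              SSECustomerAlgorithm="AES256",
              SSECustomerKey=customer_key)
```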

    The following factors are taken into consideration when deciding S3 pricing:

  • Transfer of data
  • Storage that is utilized
  • Number of requests made
  • Transfer acceleration
  • Storage management
  • The various types of routing policies available in Route 53 are: simple, weighted, latency-based, failover, geolocation, geoproximity, and multivalue answer routing.

    The maximum size of an individual object stored in an S3 bucket is five terabytes; the bucket itself has no size limit.

    Yes, definitely. Amazon S3 is a global service. Its main objective is to provide object storage through a web interface, and it uses Amazon's scalable storage infrastructure to run its global network.

    Here we have listed down some of the essential differences between EBS and S3:

  • S3 is highly scalable, whereas an EBS volume is limited to its provisioned size.
  • EBS is block storage; S3, on the other hand, is object storage.
  • EBS works faster than S3.
  • EBS can be accessed only through the EC2 instance it is attached to, whereas S3 can be accessed directly over the internet, subject to access policies.
  • EBS supports the file system interface, whereas S3 supports the web interface.
  • With the help of the following steps, one can upgrade or downgrade a system with near-zero downtime:

  • Open the EC2 console.
  • Choose the AMI with the required operating system.
  • Launch an instance with the new instance type.
  • Install all the updates.
  • Install the applications.
  • Test the instance to check whether it is working.
  • If it is working, deploy the new instance and replace the older one.
  • Once it is deployed, the system can be upgraded or downgraded with near-zero downtime.
  • AMI includes the following:

  • A template for the root volume for the instance
  • Launch permissions that control which AWS accounts can use the AMI.
  • A block device mapping that determines the volumes to attach to the instance when it is launched.
  • With the help of the resources below, you can check whether the amount you are paying for a resource is accurate:

  • Check the Top Services table: You will find this on the dashboard of the Cost Management console; it displays the five most-used services and shows how much you are spending on the resources in question.
  • Cost Explorer: With Cost Explorer, you can view and analyze your usage costs for the last 13 months and forecast the costs for the next three months.
  • AWS Budgets: This lets you plan your budget efficiently.
  • Cost allocation tags: These let you identify the resources that have cost you the most in a particular month and help you organize and track your resources.
  • The following tools can be used to log in to and work with the AWS environment:

  • AWS CLI for Linux
  • Putty
  • AWS CLI for Windows
  • AWS CLI for Windows CMD
  • AWS SDK
  • Eclipse
  • EIP stands for Elastic IP address. It is a static IPv4 address provided by AWS that is designed for dynamic cloud computing.



    For a detailed discussion on this topic, please refer to our Cloud Computing blog. The following is a comparison between two of the most popular cloud service providers:

    Amazon Web Services Vs Microsoft Azure

| Parameters | AWS | Azure |
| --- | --- | --- |
| Initiation | 2006 | 2010 |
| Market Share | 4x | x |
| Implementation | Fewer options | More experimentation possible |
| Features | Widest range of options | Good range of options |
| App Hosting | AWS is not as good as Azure | Azure is better |
| Development | Varied & great features | Varied & great features |
| IaaS Offerings | Good market hold | Better offerings than AWS |

    Define and explain the three basic types of cloud services and the AWS products that are built based on them?

    The three basic types of cloud services are Computing, Storage, and Networking.

    Here are some of the AWS products that are built based on the three cloud service types:

    Computing – These include EC2, Elastic Beanstalk, Lambda, Auto Scaling, and Lightsail.

    Storage – These include S3, Glacier, Elastic Block Storage, Elastic File System.

    Networking – These include VPC, Amazon CloudFront, and Route 53.

    FAQ

    Can S3 be cast off with EC2 instances?

    Basic AWS Interview Questions
    • Define and explain the three basic types of cloud services and the AWS products that are built based on them? …
    • What is the relation between the Availability Zone and Region? …
    • What is auto-scaling? …
    • What is geo-targeting in CloudFront? …
    • What are the steps involved in a CloudFormation Solution?

    What are the interview questions for AWS Solution Architect?

    Can S3 be cast off with EC2 instances? If yes, specify how. Answer: Yes, S3 can be used with EC2 instances whose root devices are backed by local instance storage.
