
AWS Certified DevOps Engineer - Professional - Exam Simulator

DOP-C02

Boost your readiness for the AWS Certified DevOps Engineer - Professional (DOP-C02) exam with our practice exam simulator. Featuring realistic questions and detailed explanations, it helps you identify knowledge gaps and improve your skills.

Questions update: Aug 08 2024

Questions count: 2775


Domains: 6

Tasks: 19

Services: 101

Difficulty

The AWS Certified DevOps Engineer - Professional (DOP-C02) certification is designed for individuals with extensive experience in managing AWS environments. It assesses your ability to implement and manage continuous delivery systems and methodologies on AWS. Before attempting this certification, it is recommended to have a solid understanding of modern development and operations processes and methodologies, as well as experience in developing code in at least one high-level programming language and managing automated infrastructures.


The exam consists of 75 questions in multiple-choice and multiple-response formats. You have 180 minutes to complete the exam, which can be taken at a Pearson VUE testing center or online with a proctor. The cost of the exam is $300, and it is available in several languages, including English, Japanese, Korean, and Simplified Chinese. To pass, you need a score of 750 out of 1000.

This certification focuses on six main areas: automating the software development lifecycle (SDLC), managing infrastructure as code (IaC), creating resilient cloud solutions, monitoring and logging, incident and event response, and ensuring security and compliance. Key AWS services you need to be familiar with include AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, AWS CloudFormation, Amazon CloudWatch, AWS Lambda, and AWS IAM.


Preparing for this certification involves reviewing the official exam guide, which details the domains covered and their respective weightages. Practical experience is crucial, so setting up a lab environment to practice using AWS services is highly recommended. Additionally, enrolling in relevant training courses, such as those offered by experts like Stephane Maarek and Adrian Cantrill, can provide valuable insights and hands-on experience. Practice exams, like those by Jon Bonso, can help you get accustomed to the exam format and identify areas where you need to focus more.


Overall, achieving the AWS Certified DevOps Engineer - Professional certification demonstrates your proficiency in managing and automating AWS environments using DevOps principles, making it a valuable credential for advancing your career in cloud computing and DevOps.

How AWS Exam Simulator works

The Simulator generates unique, on-demand practice exam question sets fully compatible with the selected official AWS certification exam.

The exam structure, difficulty requirements, domains, and tasks are all included.

Rich features not only provide the same environment as your real online exam but also help you learn and pass the AWS Certified DevOps Engineer - Professional - DOP-C02 exam with ease, without lengthy courses and video lectures.

See all features in the detailed AWS Exam Simulator description.

| Feature | Exam Mode | Practice Mode |
| --- | --- | --- |
| Questions count | 75 | 1 - 75 |
| Limited exam time | Yes | An option |
| Time limit | 180 minutes | 10 - 200 minutes |
| Exam scope | 6 domains with appropriate question ratio | Specify domains with appropriate question ratio |
| Correct answers | After exam submission | After exam submission or after each answer |
| Question types | Mix of single and multiple correct answers | Single, multiple, or both |
| Question tip | Never | An option |
| Reveal question domain | After exam submission | After exam submission or during the exam |
| Scoring | 15 of the 75 questions do not count towards the result | Official AWS method or mathematical mean |

Exam Scope

The Practice Exam Simulator question sets are fully compatible with the official exam scope and cover all concepts, services, domains, and tasks specified in the official exam guide.

AWS Certified DevOps Engineer - Professional - DOP-C02 - official exam guide

For the AWS Certified DevOps Engineer - Professional - DOP-C02 exam, questions are categorized into one of six domains: SDLC Automation; Configuration Management and IaC; Resilient Cloud Solutions; Monitoring and Logging; Incident and Event Response; and Security and Compliance. These domains are further divided into 19 tasks.

AWS structures the questions in this way to help learners better understand exam requirements and focus more effectively on domains and tasks they find challenging.

This approach aids in learning and validating preparedness before the actual exam. With the Simulator, you can customize the exam scope by concentrating on specific domains.

Exam Domains and Tasks - example questions

Explore the domains and tasks of the AWS Certified DevOps Engineer - Professional - DOP-C02 exam, along with example question sets.

Question

Task 1.1 Implement CI/CD pipelines

You are a DevOps Engineer at a company that has a microservices architecture built using AWS Lambda. You have been tasked with setting up a Continuous Integration/Continuous Deployment (CI/CD) pipeline that automates the deployment of these Lambda functions. The deployment must ensure zero downtime and should integrate seamlessly with AWS CodeDeploy. Which deployment strategy should you choose?

select single answer

Explanation

All-at-once Deployment deploys the new version of the application to all instances simultaneously, which can cause downtime if any issues arise during deployment.

Explanation

Rolling Deployment can make a phased update to instances, but it is more applicable to EC2 or container-based applications rather than strictly serverless architectures like AWS Lambda.

Explanation

While Blue/Green Deployment can ensure zero downtime, it typically involves a more complex setup with two separate environments, which may not integrate as seamlessly with AWS Lambda as a Canary Deployment would.

Explanation

Canary Deployment gradually shifts traffic to the new Lambda function version in small increments, so any issues can be detected and rolled back quickly, enabling zero-downtime releases.
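For Lambda, CodeDeploy expresses canary shifts as predefined deployment configurations such as `CodeDeployDefault.LambdaCanary10Percent5Minutes` (10% of traffic for 5 minutes, then full cutover). A minimal sketch of that idea, assuming nothing beyond the percentages — the helper function is illustrative, not a CodeDeploy API:

```python
# Illustrative sketch of how a canary deployment shifts Lambda traffic.
# The percentages mirror CodeDeploy's "Canary10Percent5Minutes"-style
# configurations; the function itself is hypothetical, not an AWS API.

def canary_schedule(initial_percent: int, bake_minutes: int) -> list[tuple[int, int]]:
    """Return (minute, percent-of-traffic-on-new-version) checkpoints.

    Traffic moves in two steps: a small canary slice, then, if no
    alarms fire during the bake time, the remaining 100%.
    """
    return [
        (0, initial_percent),   # canary slice receives live traffic
        (bake_minutes, 100),    # full cutover after the bake period
    ]

if __name__ == "__main__":
    for minute, pct in canary_schedule(10, 5):
        print(f"t+{minute} min: {pct}% of traffic on new version")
```

If a CloudWatch alarm fires during the bake window, CodeDeploy rolls traffic back to the previous version instead of completing the cutover.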

Question

Task 1.2 Integrate automated testing into CI/CD pipelines

Your team is responsible for maintaining an application's CI/CD pipeline. Recently, there have been frequent bugs discovered in production. To improve the quality of code being deployed, you decide to integrate automated testing into your CI/CD pipeline. Your pipeline currently uses AWS CodePipeline, and deploys the application using AWS CodeDeploy. Which of the following is the MOST appropriate method to ensure automated testing is performed before the deployment stage with AWS CodeDeploy?

select single answer

Explanation

Testing after deployment means the code has already been deployed, potentially introducing bugs to the production environment.

Explanation

Running tests after deployment does not prevent faulty code from reaching production, which could lead to discovering issues only after they have potentially affected users.

Explanation

Manual testing is not in alignment with the goal of CI/CD automation. It leads to slower cycles and increased chances of human error.

Explanation

Integrating a testing stage in AWS CodePipeline ensures that all automated tests are executed before the deployment stage, thus ensuring only tested and stable code is deployed using AWS CodeDeploy.
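The ordering is the whole point: a dedicated Test stage sits between Build and Deploy, so a failed test action halts the pipeline. A sketch of that structure, with hypothetical stage and resource names, following the declarative stages-and-actions shape CodePipeline uses:

```python
# Sketch of a CodePipeline definition with a Test stage placed before
# Deploy, so CodeBuild-run tests gate the CodeDeploy action. Names are
# hypothetical; the nesting mirrors CodePipeline's stage/action model.

pipeline = {
    "name": "app-pipeline",
    "stages": [
        {"name": "Source", "actions": [{"provider": "CodeCommit"}]},
        {"name": "Build",  "actions": [{"provider": "CodeBuild"}]},
        # Automated tests run here; a failing action stops the pipeline,
        # so untested code never reaches the Deploy stage.
        {"name": "Test",   "actions": [{"provider": "CodeBuild"}]},
        {"name": "Deploy", "actions": [{"provider": "CodeDeploy"}]},
    ],
}

stage_order = [s["name"] for s in pipeline["stages"]]
assert stage_order.index("Test") < stage_order.index("Deploy")
```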

Question

Task 1.3 Build and manage artifacts

As a DevOps engineer at a rapidly growing tech company, you are tasked with automating the build and deployment of EC2 instances and container images. Your company uses AWS CloudFormation to manage its infrastructure as code. Recently, the team decided to utilize EC2 Image Builder to streamline the creation and update of both EC2 and container images. You need to ensure that the image creation process is automated and integrates seamlessly with your existing CloudFormation setup. Which approach would you use to achieve this goal?

select single answer

Explanation

While EC2 User Data scripts can automate some tasks during instance boot, they are not suitable for integrating EC2 Image Builder and do not leverage CloudFormation for defining infrastructure.

Explanation

Manually triggering EC2 Image Builder pipelines from the AWS Management Console would not be an automated approach and would not integrate with CloudFormation, which contradicts the requirement to automate the process.

Explanation

AWS Elastic Beanstalk is used for deploying and managing applications and doesn't provide the necessary integration for automating EC2 Image Builder pipelines through CloudFormation.

Explanation

AWS CloudFormation supports the definition of EC2 Image Builder components, recipes, and pipelines. By defining these resources in CloudFormation, you can automate the creation and update of EC2 instances and container images seamlessly within your existing infrastructure as code (IaC) setup.
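A trimmed sketch of what that looks like in practice — the `AWS::ImageBuilder::*` resource types are real, but the property values below are placeholders and several required properties (components, parent image, IAM instance profile) are omitted for brevity:

```python
# Trimmed CloudFormation template defining an EC2 Image Builder
# pipeline as code. Values are placeholders; this is not deployable
# as-is, it only shows how the resources reference one another.

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "BaseRecipe": {
            "Type": "AWS::ImageBuilder::ImageRecipe",
            "Properties": {"Name": "base-recipe", "Version": "1.0.0"},
        },
        "BuildInfra": {
            "Type": "AWS::ImageBuilder::InfrastructureConfiguration",
            "Properties": {"Name": "build-infra"},
        },
        "NightlyPipeline": {
            "Type": "AWS::ImageBuilder::ImagePipeline",
            "Properties": {
                "Name": "nightly-ami-pipeline",
                "ImageRecipeArn": {"Ref": "BaseRecipe"},
                "InfrastructureConfigurationArn": {"Ref": "BuildInfra"},
                # A cron schedule makes image creation fully automated.
                "Schedule": {"ScheduleExpression": "cron(0 2 * * ? *)"},
            },
        },
    },
}
```

Because the pipeline, recipe, and infrastructure configuration are all stack resources, image updates roll out through the same change-set workflow as the rest of the infrastructure.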

Question

Task 1.4 Implement deployment strategies for instance, container, and serverless environments

You are a DevOps engineer at a company that is deploying a new microservices architecture on Amazon EKS. The team decided to use a blue/green deployment strategy to release a new version of a critical microservice in order to minimize risks and downtime. Which method should you employ to ensure traffic is gradually shifted from the existing version (blue) to the new version (green) within Amazon EKS?

select single answer

Explanation

Using an ingress controller with weighted routing, such as an AWS Application Load Balancer (ALB) Ingress Controller, allows you to gradually shift traffic between the existing and new versions of a microservice. This ensures that you can monitor the new version's performance before fully switching over.

Explanation

A rolling update strategy updates instances of the application one at a time, but it doesn't fit the blue/green deployment paradigm where both versions run in parallel to ensure minimal risk.

Explanation

Switching DNS records might lead to DNS propagation delays and could result in some users hitting the old version while others hit the new one. This method doesn't facilitate a gradual traffic shift.

Explanation

Manually updating the Kubernetes service selector doesn't provide a controlled and gradual traffic shift between versions. It swaps traffic instantaneously, which can introduce risks.
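With the AWS Load Balancer Controller, the gradual shift is usually expressed as a weighted forward action on the Ingress. A sketch under those assumptions — the Service names are hypothetical, and the annotation follows the controller's `actions.<name>` convention:

```python
import json

# Sketch of a weighted-forward action for the AWS Load Balancer
# Controller on EKS: one Ingress rule splits traffic between the blue
# and green Services by weight. Service names are illustrative.

def weighted_action(blue_weight: int, green_weight: int) -> str:
    return json.dumps({
        "type": "forward",
        "forwardConfig": {
            "targetGroups": [
                {"serviceName": "checkout-blue", "servicePort": "80",
                 "weight": blue_weight},
                {"serviceName": "checkout-green", "servicePort": "80",
                 "weight": green_weight},
            ]
        },
    })

# Start at 90/10, then re-apply with heavier green weights as
# monitoring stays healthy, until green carries 100% of traffic.
annotation = {
    "alb.ingress.kubernetes.io/actions.blue-green": weighted_action(90, 10)
}
```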

Question

Task 2.1 Define cloud infrastructure and reusable components to provision and manage systems throughout their lifecycle

You are a DevOps engineer at a mid-sized tech company. Your team is in the process of implementing a new infrastructure as code (IaC) strategy using AWS CloudFormation to manage your cloud resources. You have created several reusable CloudFormation templates to standardize the deployment of various AWS services, including EC2 instances, RDS databases, and VPC configurations. However, you recently discovered that some of your templates grant overly permissive IAM roles, which has raised security concerns. Now, you need to enforce more stringent IAM policies within your CloudFormation templates and ensure that only needed permissions are granted to various resources. Which solution will help you achieve this objective most effectively?

select single answer

Explanation

While AWS Service Catalog helps with standardizing and deploying CloudFormation templates, it does not directly address the need for more granular IAM policy conditions within the templates.

Explanation

This approach is not practical for maintaining standardization and could lead to human error. It defeats the purpose of using IaC to automate infrastructure management.

Explanation

This approach is insecure and goes against AWS best practices by providing excessive permissions. The goal should be to implement least privilege access, not to grant administrator access unnecessarily.

Explanation

IAM policy conditions allow you to define explicit rules about when a policy effect is allowed or denied. Using conditions makes your IAM policies more secure and customized to specific scenarios.
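A small example of the kind of statement such a template would emit — the condition keys `aws:RequestedRegion` and `aws:RequestTag/<key>` are real IAM context keys, while the tag name and region here are illustrative:

```python
# Sketch of an IAM policy statement that uses a Condition block to
# scope ec2:RunInstances to one region and to resources tagged at
# creation. Tag key/value and region are placeholders.

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:RunInstances",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    # Deny-by-default everywhere except this region.
                    "aws:RequestedRegion": "eu-west-1",
                    # Instances must be tagged for this team at launch.
                    "aws:RequestTag/team": "platform",
                },
            },
        }
    ],
}
```

Embedding statements like this in the CloudFormation-managed roles keeps the least-privilege rules version-controlled alongside the rest of the infrastructure.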

Question

Task 2.2 Deploy automation to create, onboard, and secure AWS accounts in a multi-account or multi-Region environment

You are a DevOps engineer working for a company that has recently decided to scale its cloud infrastructure using AWS. You have been tasked with setting up a governance framework for multiple AWS accounts across different regions. One of the requirements is to automate the provisioning of AWS resources to ensure they adhere to best practices and security policies. You want to use a service that allows you to easily create and manage catalogs of approved AWS resources that can be deployed across multiple accounts and regions. Which service should you choose to meet these requirements?

select single answer

Explanation

AWS Security Hub provides comprehensive security checks and compliance auditing across AWS accounts but doesn't offer cataloging and provisioning capabilities for AWS resources.

Explanation

AWS Control Tower is used for setting up and governing a secure, multi-account AWS environment based on AWS best practices. While it helps with governance, it doesn't specifically create and manage catalogs of approved resources for deployment.

Explanation

AWS Service Catalog allows you to create and manage catalogs of IT services that are approved for use on AWS. It enables you to centrally manage deployed IT services and achieve consistent governance, compliance, and security.

Explanation

AWS Config is primarily used for assessing, auditing, and evaluating the configurations of your AWS resources. It does not offer a catalog of approved resources for deployment.

Question

Task 2.3 Design and build automated solutions for complex tasks and large-scale environments

You are working for a large e-commerce company that needs to ensure its software environment is compliant with regulatory standards across multiple AWS accounts. Your team uses AWS Config to maintain this compliance by checking the state of AWS resources against predefined rules. Recently, you have been tasked with designing and building an automated solution to handle this process for hundreds of AWS accounts and thousands of resources. Which approach would best achieve this objective?

select single answer

Explanation

AWS Config Aggregator allows you to collect and view compliance data across multiple accounts, and AWS Systems Manager Automation can be leveraged to automatically handle remediation actions, thereby achieving large-scale automated compliance management.

Explanation

AWS Trusted Advisor is more focused on best practice checks rather than compliance with regulatory standards, and AWS Config is best suited for ongoing configuration monitoring and compliance assessment.

Explanation

While CloudWatch and Lambda can provide remediation capabilities, this approach lacks the centralized compliance data management and scaling features offered by AWS Config Aggregator for large-scale environments.

Explanation

This approach is not feasible for large-scale environments with hundreds of AWS accounts and thousands of resources, as it requires too much manual effort and is prone to human error.
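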

Question

Task 3.1 Implement highly available solutions to meet resilience and business requirements

Your company runs a fleet of web applications across multiple AWS regions to ensure high availability and low latency for users around the globe. The architecture leverages Amazon EC2 instances running behind Elastic Load Balancers (ELBs) and uses Amazon Route 53 for DNS routing. The business requirements dictate that even in the event of a regional outage, your users should experience minimal downtime and uninterrupted service. How can you meet these requirements with the least amount of manual intervention while also improving the performance of your globally distributed applications?

select single answer

Explanation

While EC2 Auto Scaling across multiple AZs increases resilience within a region, it does not provide a solution for cross-region high availability or performance optimization based on global user traffic.

Explanation

AWS Global Accelerator provides a single entry point to your application and automatically routes user traffic to the optimal endpoint based on network latency, ensuring low latency and high availability. In case a region becomes unhealthy, Global Accelerator will route traffic to the next best region.

Explanation

While Route 53 failover routing can manage regional outages, it doesn't provide optimal performance based on user location and can introduce additional latency. It also requires manual configuration and might have a slower failover time compared to Global Accelerator.

Explanation

An ALB with cross-zone load balancing ensures availability within a single region but does not address cross-region traffic management or failover. It also does not enhance performance based on user's location.

Question

Task 3.2 Implement solutions that are scalable to meet business requirements

You are a DevOps engineer working for a financial services company that is expanding its services to global markets. The company aims to provide a highly available and scalable architecture to handle a large influx of API requests for processing transactions. You’ve been tasked with designing a serverless architecture that ensures each step in the transaction process, such as validation, authorization, and transaction logging, is carried out reliably and is highly available. Which of the following architectures best meets these requirements using AWS services?

select single answer

Explanation

AWS Step Functions allow you to orchestrate various AWS services, ensuring that your transaction workflow is reliably executed. Each step can call a Lambda function for specific tasks, and an Amazon API Gateway can manage the API requests efficiently, ensuring high availability and scalability.

Explanation

An RDS-based monolithic application would be neither as scalable nor as highly available as a serverless architecture using AWS Step Functions and Lambda. RDS is not well suited to orchestrating multiple serverless functions or to managing a high volume of API requests efficiently.

Explanation

While EC2 instances can handle the tasks and Auto Scaling can help with scalability, this approach is not inherently serverless and can introduce unnecessary complexities compared to AWS Step Functions and Lambda for orchestrating and scaling the tasks.

Explanation

AWS Fargate is a good option for container management and provides some serverless features, but it is not as seamless for orchestrating multi-step workflows as AWS Step Functions. Using Fargate alone would also require additional configurations to handle high availability and reliability at the level AWS Step Functions provides.
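The orchestration itself is written in the Amazon States Language. A sketch of the transaction workflow described above — the Lambda ARNs are placeholders, while the state fields (`Type`, `Resource`, `Next`, `Retry`, `End`) are standard ASL:

```python
# Sketch of an Amazon States Language definition orchestrating the
# transaction steps invoked behind API Gateway. ARNs are placeholders.

state_machine = {
    "StartAt": "Validate",
    "States": {
        "Validate": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:validate",
            "Next": "Authorize",
        },
        "Authorize": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:authorize",
            # Built-in retry gives the step reliability without custom code.
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "Next": "LogTransaction",
        },
        "LogTransaction": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:log",
            "End": True,
        },
    },
}
```

Each state invokes one Lambda function, and Step Functions tracks execution state durably, so a failure mid-workflow never leaves a transaction half-processed silently.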

Question

Task 3.3 Implement automated recovery processes to meet RTO and RPO requirements

You are working as a DevOps engineer for a company that runs a web application using Amazon RDS with a Multi-AZ deployment for its database backend. The application is distributed across multiple EC2 instances behind an Application Load Balancer (ALB). Recently, you discovered that when a primary instance in your RDS Multi-AZ deployment fails, there is a brief period during which the application experiences downtime until the failover to the standby instance completes. The company's Service Level Agreement (SLA) requires minimal downtime to meet its Recovery Time Objective (RTO) and Recovery Point Objective (RPO) mandates. What steps should you take to configure the load balancer to ensure seamless recovery and maintain high availability in case of a backend failure?

select single answer

Explanation

By configuring health checks, the ALB can periodically determine the health of the backend EC2 instances, including their ability to reach the database, and automatically reroute traffic to healthy instances, ensuring minimal disruption and helping meet RTO and RPO requirements.

Explanation

While deploying across multiple regions can enhance availability, it introduces complexity and does not directly address the configuration of the load balancer for automated recovery processes within a region.

Explanation

This approach involves manual intervention, which can result in longer downtimes, not meeting the SLA requirements for automated recovery processes to achieve the desired RTO and RPO.

Explanation

Increasing the instance size of the primary RDS instance does not directly address failover handling or optimize the load balancer's ability to reroute traffic seamlessly during backend failures.
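The relevant knobs are the target group's health check settings (parameter names as in the ELBv2 API). The `/health` path is an assumption: the idea is that the application's health endpoint also verifies its database connection, so instances that cannot reach RDS during a failover are taken out of rotation quickly:

```python
# Sketch of ALB target-group health check settings. Parameter names
# match the ELBv2 API; the values and the /health path are illustrative.

health_check = {
    "HealthCheckPath": "/health",          # endpoint should also test DB access
    "HealthCheckIntervalSeconds": 10,
    "HealthCheckTimeoutSeconds": 5,
    "HealthyThresholdCount": 2,
    "UnhealthyThresholdCount": 2,
}

# Worst-case time before an instance is marked unhealthy and traffic
# stops flowing to it: interval * consecutive-failure threshold.
detection_seconds = (health_check["HealthCheckIntervalSeconds"]
                     * health_check["UnhealthyThresholdCount"])
```

Tightening the interval and thresholds trades a little extra health-check traffic for faster failover detection, which is usually the right trade under a strict RTO.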

Question

Task 4.1 Configure the collection, aggregation, and storage of logs and metrics

A healthcare company uses AWS to host its applications, and it is critical to ensure the privacy and security of patient information. The DevOps team has been tasked with configuring Amazon CloudWatch to monitor application logs and store them in an encrypted format to comply with stringent security policies and industry regulations like HIPAA. After setting up CloudWatch, they must ensure that the log data collected is encrypted at rest. Which of the following actions should the DevOps team take to properly configure the encryption of log data using AWS KMS while ensuring the log metrics remain available for alerting and analysis?

select single answer

Explanation

This approach is cumbersome and counterintuitive, as CloudWatch Logs already provides built-in integration with AWS KMS for encryption. Manually handling encryption would add unnecessary complexity and potential for error.

Explanation

SSL/TLS encrypts data in transit, but it does not provide encryption at rest. The question focuses on encrypting log data while stored, which SSL/TLS does not address.

Explanation

CloudWatch Logs natively supports the use of AWS KMS Customer Master Keys (CMKs) for encrypting log data at rest. By creating a CMK and configuring CloudWatch Logs to use it, the logs will be encrypted according to the specified key, ensuring compliance with security policies.

Explanation

While S3 server-side encryption is a valid method for encrypting data stored in S3, it does not apply to the native storage of CloudWatch Logs. CloudWatch Logs does not use S3 directly for log storage, and this approach would not address the encryption requirement.
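In infrastructure-as-code terms, the association is a single property on the log group. A sketch — `KmsKeyId` on `AWS::Logs::LogGroup` is the real property, but the names are placeholders and the KMS key policy granting the CloudWatch Logs service principal access is omitted:

```python
# Sketch of a CloudFormation snippet that encrypts a CloudWatch Logs
# log group at rest with a customer managed KMS key. The key policy
# (which must allow the logs service principal) is not shown.

resources = {
    "LogsKey": {
        "Type": "AWS::KMS::Key",
        "Properties": {},  # key policy omitted for brevity
    },
    "AppLogGroup": {
        "Type": "AWS::Logs::LogGroup",
        "Properties": {
            "LogGroupName": "/app/patient-portal",
            "RetentionInDays": 365,
            # Associating the CMK encrypts all ingested log data at rest.
            "KmsKeyId": {"Fn::GetAtt": ["LogsKey", "Arn"]},
        },
    },
}
```

Metric filters and alarms keep working unchanged, since CloudWatch decrypts transparently for authorized principals.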

Question

Task 4.2 Audit, monitor, and analyze logs and metrics to detect issues

A mid-sized e-commerce company uses AWS services to handle its backend operations. They store web server access logs in an Amazon S3 bucket and want to analyze these logs to identify potential security issues such as unauthorized access attempts and unusual traffic patterns. The company wants to use AWS services to perform these analyses efficiently. Which combination of AWS services should they use to achieve their goal?

select single answer

Explanation

Amazon Athena allows querying of data directly in S3 using SQL, making it easy to analyze large sets of log data. Amazon CloudWatch Logs Insights can then be used for in-depth analysis and visualization of the logs, helping to detect and understand security issues.

Explanation

Amazon Redshift is a data warehouse solution and not optimized for log storage or real-time analysis. Amazon QuickSight can visualize data but requires the data to first be in a suitable database like Redshift, making the process more complex compared to using Athena and CloudWatch Logs Insights.

Explanation

AWS CloudTrail tracks API calls, and AWS Config monitors configuration changes, but neither is specifically designed for the purpose of querying and analyzing logs stored in S3. They serve different monitoring and compliance purposes rather than log data analysis.

Explanation

While AWS Glue can transform data and AWS Lambda can process log files, this combination is not primarily designed for querying and analyzing logs. It would also require custom scripts and potentially more management overhead.
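Once an external table is defined over the S3 bucket, the analysis reduces to SQL. A sketch of the kind of query involved — the table and column names assume a previously created table for the log format, so treat them as placeholders:

```python
# Sketch of an Athena query over access logs stored in S3. The table
# name "access_logs" and its columns are assumptions about a table
# defined elsewhere; the SQL itself is standard Athena syntax.

query = """
SELECT client_ip, COUNT(*) AS requests
FROM access_logs
WHERE status = 403
GROUP BY client_ip
HAVING COUNT(*) > 100
ORDER BY requests DESC
"""
# Repeated 403 responses from a single client are a common signal of
# unauthorized access attempts; the threshold of 100 is illustrative.
```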

Question

Task 4.3 Automate monitoring and event management of complex environments

You are managing a large-scale web application deployed on AWS. The application uses an Application Load Balancer (ALB) and Amazon Route 53 to distribute traffic across multiple EC2 instances. To ensure high availability and fault tolerance, you have configured health checks for both the ALB and Route 53. You want to automate monitoring and event management such that if any of the health checks fail, an alert is triggered, and specific remediation steps are automatically initiated. Which of the following solutions best accomplishes this task using AWS Config?

select single answer

Explanation

While CloudWatch Alarms can monitor health check statuses, this solution lacks automation for the remediation steps. The goal is to automate both monitoring and event management, which this approach doesn’t fully satisfy.

Explanation

AWS Elastic Beanstalk simplifies application deployment and management but does not provide the specific level of automation and integration that AWS Config offers for monitoring health checks and initiating predefined remediation steps.

Explanation

AWS Trusted Advisor offers insights and recommendations to optimize AWS infrastructure but is not designed for real-time monitoring or automated event management.

Explanation

Using AWS Config rules allows you to continually evaluate the configuration settings of your AWS resources. By integrating with Amazon SNS and AWS Lambda, you can automate alerts and remediation steps effectively when specific conditions are met, such as the failure of health checks.

Question

Task 5.1 Manage event sources to process, notify, and take action in response to events

Your team has built a financial application that stores transactional data in an Amazon DynamoDB table. To ensure the consistency and reliability of your system, you need to update a secondary system whenever there is an insertion or update in your DynamoDB table. The requirements are to process these changes efficiently, ensure persistence, and allow retry mechanisms in case of failures. What AWS service combination would be the most suitable for this task?

select single answer

Explanation

DynamoDB Streams can capture changes to the table. AWS Lambda can process those changes and Amazon SQS can provide the required durability and retry logic for eventual consistency.

Explanation

Amazon SNS can send notifications, but it doesn't support retries and persistence in the same way Amazon SQS does.

Explanation

AWS Step Functions are excellent for orchestrating complex workflows, but they are not designed to natively capture changes in DynamoDB.

Explanation

Amazon Kinesis is highly scalable and can process large streams of data, but it is more complex and may be overkill for simple change processing. Also, Amazon Kinesis Data Firehose and SNS don't provide built-in retries and persistence like SQS.
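The glue code in that pattern is small: the Lambda handler receives DynamoDB Streams events and forwards the changed items to SQS. A runnable sketch of just the transform step — the actual `sqs.send_message(...)` boto3 call is left as a comment so the logic stays executable offline, and the record shape follows the DynamoDB Streams event format:

```python
import json

# Transform step of a Lambda wired to a DynamoDB stream: each INSERT
# or MODIFY record becomes an SQS message body. Sending (boto3
# sqs.send_message) is omitted so this sketch runs without AWS access.

def to_sqs_messages(event: dict) -> list[str]:
    messages = []
    for record in event.get("Records", []):
        if record["eventName"] in ("INSERT", "MODIFY"):
            # NewImage holds the item after the change, in DynamoDB's
            # attribute-value encoding (e.g. {"S": "..."}).
            messages.append(json.dumps(record["dynamodb"]["NewImage"]))
    return messages

sample = {"Records": [
    {"eventName": "INSERT",
     "dynamodb": {"NewImage": {"txId": {"S": "tx-1"}}}},
    {"eventName": "REMOVE", "dynamodb": {}},
]}
print(to_sqs_messages(sample))  # one message, for the INSERT only
```

Failed sends can rely on SQS redrive policies and a dead-letter queue, which is exactly the persistence-and-retry property the correct answer calls out.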

Question

Task 5.2 Implement configuration changes in response to events

You are working as an AWS DevOps Engineer and you recently implemented an instance monitoring solution using AWS EventBridge. Your EventBridge rule is triggering an AWS Lambda function whenever a specific EC2 instance enters a 'Stopped' state unexpectedly. The Lambda function should automatically remediate this non-desired state by starting the instance again. During a system audit, you noticed the instance remained in a 'Stopped' state and was never started as expected. Which action will most likely resolve this issue?

select single answer

Explanation

While AWS Systems Manager Run Command can be used to manage EC2 instances, the core issue is the permissions for the Lambda function. Changing the method of execution will not resolve the permission issue.

Explanation

If the Lambda function does not have the appropriate permissions to start the EC2 instance, it will not be able to execute the necessary API call to change the state from 'Stopped' to 'Running'.

Explanation

Updating the frequency of the EventBridge rule does not address the issue of Lambda not having the necessary permissions to start the EC2 instance.

Explanation

Increased memory and timeout settings can help with performance but will not resolve the core issue of the Lambda function lacking the necessary permissions to execute 'StartInstances' API calls.
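The fix is a statement on the Lambda execution role. A sketch — the instance ARN is a placeholder, and scoping `Resource` to the one instance being remediated follows least privilege:

```python
# Sketch of the permission the remediation Lambda's execution role
# needs to call ec2:StartInstances. The ARN is a placeholder; scoping
# it to the specific instance follows least-privilege practice.

statement = {
    "Effect": "Allow",
    "Action": ["ec2:StartInstances"],
    "Resource": "arn:aws:ec2:REGION:ACCOUNT:instance/i-0123456789abcdef0",
}
```

Without this, the EventBridge rule still fires and the function still runs, but the `StartInstances` call fails with an authorization error, which matches the audit finding that the instance never left the 'Stopped' state.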

Question

Task 5.3 Troubleshoot system and application failures

Your company has deployed a set of microservices on Amazon ECS using the Fargate launch type for a new application. Recently, some of the containers have started failing intermittently. Upon investigation, you’ve noticed that tasks are being killed unexpectedly. To ensure the stability of the application, you need to troubleshoot and resolve this issue promptly. What action should you take to troubleshoot the cause of the failed ECS tasks?

select single answer

Explanation

Restarting the ECS service might temporarily resolve the problem, but it does not address the underlying cause of the task failures. Proper troubleshooting using logs and other diagnostic tools is crucial to identify and fix the root cause.

Explanation

CloudWatch Logs provide detailed information about the application running inside the containers. Reviewing this information can help you identify any error messages, resource constraints, or other issues that may be causing the tasks to fail.

Explanation

While resource constraints can cause tasks to fail, simply increasing the CPU and memory values to the maximum without understanding the root cause can lead to resource wastage and potential other issues. The appropriate action is to first diagnose the issue using the logs.

Explanation

Disabling auto-scaling does not resolve the issue of tasks being killed. Auto-scaling is designed to manage resource allocation according to demand, and turning it off could lead to under-provisioning or over-provisioning of resources. The focus should be on identifying the root cause of the failures.

Question

Task 6.1 Implement techniques for identity and access management at scale

You are designing a new internal application for an enterprise that is rapidly scaling its AWS infrastructure. To ensure secure access management, you need to implement permissions boundaries to restrict the maximum permissions that IAM and machine identities can receive. You also need to ensure that only certain trusted administrators can grant IAM roles with elevated permissions. What combination of approaches should you use to achieve this?

A) Apply permissions boundaries to all IAM roles and users, and grant administrators the necessary IAM permissions to make changes.

B) Use AWS Organizations Service Control Policies (SCPs) to manage permissions across accounts, and use IAM roles with MFA to restrict access further.

C) Apply permissions boundaries to IAM roles and users, use AWS STS for temporary elevated permissions when required, and implement MFA for administrative actions.

D) Use IAM access advisor combined with VPC endpoint policies to restrict permissions and access.

select single answer

Explanation

While applying permissions boundaries is correct, simply granting administrators necessary IAM permissions doesn't address temporary elevated permissions for specific tasks or include MFA, reducing overall security.

Explanation

IAM access advisor helps in analyzing and reviewing permissions but does not provide enforcement or define boundaries. VPC endpoint policies manage access to VPC endpoints and do not directly relate to permissions management for roles and users.

Explanation

This approach ensures that IAM roles and users have restricted permissions through permissions boundaries. AWS STS can provide temporary credentials for elevated permissions, and MFA adds a layer of security for administrative actions, aligning with best practices for identity and access management at scale.
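The three elements of this answer can be sketched together. This is an illustrative outline, not a prescribed implementation: all role names, account IDs, and ARNs are hypothetical, and the dicts mirror boto3 request parameters but are built locally so the sketch runs without AWS credentials.

```python
import json

# 1) A permissions boundary caps what an identity can ever do, even if its
#    identity-based policies grant more. (Policy content is illustrative.)
boundary_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "logs:PutLogEvents"],
            "Resource": "*",
        }
    ],
}

# Parameters in the shape of iam.put_role_permissions_boundary,
# applied per role (names/ARNs are hypothetical).
put_boundary_params = {
    "RoleName": "app-worker",
    "PermissionsBoundary": "arn:aws:iam::123456789012:policy/DevBoundary",
}

# 2) Temporary elevation via STS, with 3) MFA required for the session
#    (shape of sts.assume_role).
assume_role_params = {
    "RoleArn": "arn:aws:iam::123456789012:role/ElevatedAdmin",
    "RoleSessionName": "change-window-session",
    "DurationSeconds": 3600,
    "SerialNumber": "arn:aws:iam::123456789012:mfa/trusted-admin",
    "TokenCode": "123456",  # one-time code from the admin's MFA device
}

print(json.dumps(boundary_policy))
```

The boundary limits every identity's ceiling, while the STS session keeps elevated access short-lived and MFA-gated for trusted administrators only.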

Explanation

SCPs help manage permissions across accounts, but they operate at the account or organizational-unit level rather than restricting individual roles and users. Although MFA enhances security, SCPs alone cannot provide the fine-grained control that permissions boundaries offer.

Question

Task 6.2 Apply automation for security controls and data protection

You are designing a secure application that stores sensitive configuration data, such as database credentials, API keys, and authentication information. Your goal is to ensure that this data is both encrypted at rest and securely accessible by your application during runtime without exposing the secrets in plain text. To achieve this, you decide to use AWS Secrets Manager in conjunction with AWS Key Management Service (KMS). Which of the following approaches will help you automate the security controls and data protection for this application?

select single answer

Explanation

While server-side encryption with Amazon S3 protects data at rest, manually updating application configuration files introduces the risk of human error and leaves the process partly manual, failing to meet the requirement for automated security controls.

Explanation

Storing secrets in DynamoDB with IAM policies for access control protects the data, but manually scripting secret rotation adds complexity and potential security risks, and it lacks the full automation and management features that AWS Secrets Manager provides.

Explanation

Using AWS Secrets Manager with AWS KMS ensures that the sensitive data is encrypted at rest. Automating secret rotation with a Lambda function helps keep the secrets updated and secure without manual intervention, aligning with the task of applying automation for security controls and data protection.
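This combination can be sketched as follows. Secret names, key aliases, and ARNs are hypothetical; the dicts mirror the shape of boto3 Secrets Manager request parameters, and the handler skeleton follows the four-step rotation contract that Secrets Manager invokes a rotation Lambda with, but no AWS call is made.

```python
import json

# Store the secret encrypted at rest under a KMS key
# (shape of secretsmanager.create_secret).
create_secret_params = {
    "Name": "prod/app/db-credentials",
    "KmsKeyId": "alias/app-secrets",  # hypothetical customer-managed key alias
    "SecretString": json.dumps({"username": "app", "password": "REPLACE_ME"}),
}

# Enable automatic rotation driven by a Lambda function every 30 days
# (shape of secretsmanager.rotate_secret).
rotate_secret_params = {
    "SecretId": "prod/app/db-credentials",
    "RotationLambdaARN": (
        "arn:aws:lambda:us-east-1:123456789012:function:rotate-db-secret"
    ),
    "RotationRules": {"AutomaticallyAfterDays": 30},
}

# Secrets Manager invokes the rotation Lambda once per step of the contract.
ROTATION_STEPS = ["createSecret", "setSecret", "testSecret", "finishSecret"]

def lambda_handler(event, context):
    """Skeleton dispatcher for the Secrets Manager rotation contract."""
    step = event["Step"]
    if step not in ROTATION_STEPS:
        raise ValueError(f"Unexpected rotation step: {step}")
    # Real handlers would create, stage, test, and finalize the new secret
    # version here; this skeleton only acknowledges the step.
    return f"handled {step} for {event['SecretId']}"

print(lambda_handler(
    {"Step": "createSecret", "SecretId": "prod/app/db-credentials"}, None
))
```

The application then reads the secret at runtime from Secrets Manager instead of holding plain-text credentials in its own configuration.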

Explanation

AWS Systems Manager Parameter Store can store sensitive data and supports encryption with KMS, but it does not provide automated secret rotation without additional configuration, such as custom Lambda functions on a schedule. That gap in automation does not align with the requirement for continuous, automated data protection.

Question

Task 6.3 Implement security monitoring and auditing solutions

Your company operates several microservices that are critical to business operations. You've recently noticed some suspicious activity in the logs and need to ensure that your AWS environment remains compliant with security policies. Your team uses AWS Config for this purpose. Which of the following actions would allow you to continuously monitor and analyze the logs, metrics, and security findings to ensure compliance with your security policies?

select single answer

Explanation

AWS Glue is an ETL (Extract, Transform, Load) service that is not directly related to continuous monitoring and compliance of security policies. It's used for preparing data for analytics, not for real-time security compliance.

Explanation

AWS Config rules continuously check the compliance of AWS resources against the conditions you specify. Integrating with Amazon CloudWatch and AWS CloudTrail lets you monitor and analyze logs and metrics in near real time, so suspicious activity is detected and appropriate action can be taken.
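A minimal sketch of this setup: the rule name below is hypothetical, while `CLOUD_TRAIL_ENABLED` is an AWS-managed rule identifier. The dicts mirror the shape of the Config `put_config_rule` input and an EventBridge event pattern, built locally so the sketch runs without AWS credentials.

```python
import json

# An AWS Config managed rule that checks CloudTrail is enabled in the account
# (shape of config.put_config_rule input).
config_rule = {
    "ConfigRuleName": "cloudtrail-enabled-check",
    "Source": {"Owner": "AWS", "SourceIdentifier": "CLOUD_TRAIL_ENABLED"},
}

# An EventBridge pattern matching Config compliance-change events, so that
# NON_COMPLIANT findings can raise an alert (for example via SNS or a
# CloudWatch alarm).
noncompliance_pattern = {
    "source": ["aws.config"],
    "detail-type": ["Config Rules Compliance Change"],
    "detail": {
        "newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]},
    },
}

print(json.dumps(noncompliance_pattern))
```

Config evaluates the rule continuously, CloudTrail supplies the API-activity record, and the event pattern turns non-compliance into an actionable signal.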

Explanation

AWS Elastic Beanstalk is a service for deploying and scaling web applications and services; it does not provide tools specifically for monitoring and compliance of security policies.

Explanation

While Amazon QuickSight can help you visualize data, it does not provide real-time monitoring or compliance checks necessary for security and compliance tasks.

© 2024 BlowStack - AWS App Development and Interactive E-Learning