A company has an organization in AWS Organizations for its multi-account environment. A DevOps engineer is developing an AWS CodeArtifact-based strategy for application package management across the organization. Each application team at the company has its own account in the organization. Each application team also has limited access to a centralized shared services account.
Each application team needs full access to download, publish, and grant access to its own packages. Some common library packages that the application teams use must also be shared with the entire organization.
Which combination of steps will meet these requirements with the LEAST administrative overhead? (Select THREE.)
Step 1: Creating a Centralized Domain in the Shared Services Account
To manage application package dependencies across multiple accounts, the most efficient solution is to create a centralized domain in the shared services account. This allows all application teams to access and manage package repositories within the same domain, ensuring consistency and centralization.
Action: Create a domain in the shared services account.
Why: A single, centralized domain reduces the need for redundant management in each application team's account.
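To make the sharing requirement concrete, here is a minimal sketch of a CodeArtifact domain resource policy that grants read access to every account in the organization via the aws:PrincipalOrgID condition key. The organization ID is a placeholder, and the action list is illustrative; the policy would be applied with the put-domain-permissions-policy API.

```python
import json

ORG_ID = "o-exampleorgid"  # placeholder; substitute your organization's ID

# Sketch of a domain resource policy: any principal in the organization may
# fetch an auth token and read shared packages. Apply it with:
#   aws codeartifact put-domain-permissions-policy --domain <domain> \
#       --policy-document file://policy.json
domain_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "OrgWideRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "codeartifact:GetAuthorizationToken",
                "codeartifact:ReadFromRepository",
            ],
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": ORG_ID}},
        }
    ],
}

print(json.dumps(domain_policy, indent=2))
```

Scoping the policy by organization ID avoids listing individual account IDs, which keeps administrative overhead low as accounts are added to the organization.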
A company uses an AWS CodeCommit repository to store its source code and corresponding unit tests. The company has configured an AWS CodePipeline pipeline that includes an AWS CodeBuild project that runs when code is merged to the main branch of the repository.
The company wants the CodeBuild project to run the unit tests. If the unit tests pass, the CodeBuild project must tag the most recent commit.
How should the company configure the CodeBuild project to meet these requirements?
Step 1: Using Native Git in CodeBuild
To meet the requirement of running unit tests and tagging the most recent commit if the tests pass, configure the CodeBuild project to clone the CodeCommit repository with native Git (a full clone rather than the default source download). A full Git clone preserves the repository's history and metadata, which the build needs in order to create and push tags.
Action: Configure the CodeBuild project to use native Git to clone the repository and run the tests.
Why: Using native Git provides flexibility for managing tags and other repository operations after the tests are successfully executed.
Step 2: Tagging the Most Recent Commit
Once the unit tests pass, the CodeBuild project can use Git to tag the most recent commit and push the tag back to the repository, so the tag marks exactly the commit that passed the tests.
Action: Configure the project to use native Git to create and push a tag to the repository if the tests pass.
Why: This ensures the correct commit is tagged automatically, streamlining the workflow.
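The two steps above might be sketched in a buildspec like the following. This assumes the project's CodeCommit source is configured with full clone depth, and the test command (pytest here) is a stand-in for the team's actual unit test runner:

```yaml
version: 0.2

env:
  git-credential-helper: yes        # lets the build push to CodeCommit over HTTPS

phases:
  build:
    commands:
      - python -m pytest tests/     # illustrative; substitute your test command
  post_build:
    commands:
      # CODEBUILD_BUILD_SUCCEEDING is "1" only if all prior commands succeeded,
      # so the tag is created and pushed only when the tests pass.
      - |
        if [ "$CODEBUILD_BUILD_SUCCEEDING" = "1" ]; then
          git tag "tests-passed-$CODEBUILD_RESOLVED_SOURCE_VERSION"
          git push origin --tags
        fi
```

The tag name shown is a hypothetical convention; any naming scheme works as long as the push runs only on a passing build.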
A company has developed a static website hosted on an Amazon S3 bucket. The website is deployed using AWS CloudFormation. The CloudFormation template defines an S3 bucket and a custom resource that copies content into the bucket from a source location.
The company has decided that it needs to move the website to a new location, so the existing CloudFormation stack must be deleted and re-created. However, CloudFormation reports that the stack could not be deleted cleanly.
What is the MOST likely cause and how can the DevOps engineer mitigate this problem for this and future versions of the website?
Step 1: Understanding the Deletion Failure
The most likely reason why the CloudFormation stack failed to delete is that the S3 bucket was not empty. AWS CloudFormation cannot delete an S3 bucket that contains objects, so if the website files are still in the bucket, the deletion will fail.
Issue: The S3 bucket is not empty during deletion, preventing the stack from being deleted.
Step 2: Modifying the Custom Resource to Handle Deletion
To mitigate this issue, you can modify the Lambda function associated with the custom resource to automatically empty the S3 bucket when the stack is being deleted. By adding logic to handle the RequestType: Delete event, the function can recursively delete all objects in the bucket before allowing the stack to be deleted.
Action: Modify the Lambda function to recursively delete the objects in the S3 bucket when RequestType is set to Delete.
Why: This ensures that the S3 bucket is empty before CloudFormation tries to delete it, preventing the stack deletion failure.
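A minimal sketch of such a handler, assuming the bucket name is passed to the custom resource as a hypothetical BucketName property. A production handler must also signal success or failure back to CloudFormation (via the cfn-response module or an HTTP PUT to the event's ResponseURL), which is noted but omitted here:

```python
def empty_bucket(bucket_name):
    """Delete every object (and object version) in the bucket."""
    import boto3  # imported lazily so the routing logic below tests offline
    bucket = boto3.resource("s3").Bucket(bucket_name)
    # object_versions covers both versioned and unversioned buckets.
    bucket.object_versions.delete()

def handler(event, context, empty=empty_bucket):
    request_type = event["RequestType"]
    bucket_name = event["ResourceProperties"]["BucketName"]
    if request_type == "Delete":
        # Empty the bucket first so CloudFormation can then delete it cleanly.
        empty(bucket_name)
    # A real handler must also report the result back to CloudFormation,
    # e.g. with cfnresponse.send(event, context, cfnresponse.SUCCESS, {}).
    return {"Status": "SUCCESS", "RequestType": request_type}
```

Keeping the emptying logic inside the same custom resource that copies the content in means future versions of the website inherit the fix automatically.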
A company recently migrated its application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that uses Amazon EC2 instances. The company configured the application to automatically scale based on CPU utilization.
The application produces memory errors when it experiences heavy loads. The application also does not scale out enough to handle the increased load. The company needs to collect and analyze memory metrics for the application over time.
Which combination of steps will meet these requirements? (Select THREE.)
Step 1: Attaching the CloudWatchAgentServerPolicy to the IAM Role
The CloudWatch agent needs permissions to collect and send metrics, including memory metrics, to Amazon CloudWatch. You can attach the CloudWatchAgentServerPolicy managed IAM policy to the IAM instance profile or service account role to grant these permissions.
Action: Attach the CloudWatchAgentServerPolicy managed IAM policy to the IAM instance profile that the EKS cluster uses.
Why: This ensures the CloudWatch agent has the necessary permissions to collect memory metrics.
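As a hedged sketch, the attachment can be scripted with boto3. The role name below is a hypothetical stand-in for the EKS node instance role; the policy ARN is the AWS-managed CloudWatchAgentServerPolicy:

```python
CW_AGENT_POLICY_ARN = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"

def agent_policy_attachment(role_name):
    """Build the attach_role_policy parameters for the CloudWatch agent policy."""
    return {"RoleName": role_name, "PolicyArn": CW_AGENT_POLICY_ARN}

def attach_agent_policy(role_name):
    """Attach the managed policy to the role (caller needs iam:AttachRolePolicy)."""
    import boto3  # imported here so the parameter-building logic tests offline
    boto3.client("iam").attach_role_policy(**agent_policy_attachment(role_name))
```

For example, attach_agent_policy("eks-node-instance-role") would grant the agent on those nodes permission to publish memory metrics to CloudWatch.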
A company uses AWS Organizations to manage its AWS accounts. A DevOps engineer must ensure that all users who access the AWS Management Console are authenticated through the company's corporate identity provider (IdP).
Which combination of steps will meet these requirements? (Select TWO.)
Step 1: Using AWS IAM Identity Center for SAML-based Identity Federation
To ensure that all users accessing the AWS Management Console are authenticated via the corporate identity provider (IdP), the best approach is to set up identity federation with AWS IAM Identity Center (formerly AWS SSO) using SAML 2.0.
Action: Use AWS IAM Identity Center to configure identity federation with the corporate IdP that supports SAML 2.0.
Why: SAML 2.0 integration enables single sign-on (SSO) for users, allowing them to authenticate through the corporate IdP and gain access to AWS resources.