Managing access to multiple AWS Accounts with OpenID

Many organisations adopt a multi-account strategy with Amazon Web Services (AWS) to provide administrative isolation between workloads, limit the visibility and discoverability of workloads, minimise blast radius, manage AWS limits, and categorise costs. However, this comes at a significant complexity cost, particularly around Identity and Access Management (IAM).

Starting off with a single AWS account, and using a handful of IAM users and groups for access management, is usually the norm. As an organisation grows, it starts to see a need for separate staging, production, and developer tooling accounts. Managing access to these can quickly become a mess. Do you create a unique IAM user in each account and provide your employees with each account’s unique sign-on URL? Do you create a single IAM user per employee and use AssumeRole to generate credentials and enable jumping between accounts? How do employees use the AWS Application Programming Interface (API) or the Command Line Interface (CLI); are long-lived access keys generated? How is an employee’s access revoked should they leave the organisation?

User per account approach

All users in a single account, using STS AssumeRole to access other accounts

Design

Reuse employees’ existing identities

In most organisations, an employee will already have an identity, normally used for accessing e-mail. These identities are normally stored in Active Directory (AD), Google Suite (GSuite) or Office 365. In an ideal world, these identities could be reused and would grant access to AWS. This means employees would only need to remember one set of credentials and their access could be revoked from a single place.

Expose an OpenID compatible interface for authentication

OpenID provides applications with a way to verify a user’s identity using JSON Web Tokens (JWTs). Additionally, it provides profile information about the end user such as first name, last name, email address, and group membership. This profile information can be used to store the AWS accounts and AWS roles the user has access to.

By placing an OpenID-compatible interface on top of an organisation’s identity store, users can easily generate JWTs which can later be used by services to authenticate them.
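As an illustration, the decoded payload of such a JWT might look like the sketch below; the claim names, and in particular the claim holding the AWS role ARNs, are assumptions that depend entirely on how the provider is configured:

# Illustrative decoded JWT payload; claim names are assumptions, not a fixed schema.
decoded_jwt_payload = {
    "iss": "https://id.example.com/auth/realms/master",  # hypothetical issuer
    "aud": "aws-credentials-issuer",
    "sub": "demo",
    "exp": 1516239022,
    "given_name": "Demo",
    "family_name": "User",
    "email": "demo@example.com",
    # Roles the user is allowed to assume, stored as AWS role ARNs.
    "roles": ["arn:aws:iam::123456789012:role/developer"],
}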

Trading JWTs for AWS API Credentials

In order to trade JWTs for AWS API credentials, a service can be created that runs on AWS with a role that has access to AssumeRole. This service would be responsible for validating a user’s JWT, ensuring the JWT contains the requested role, and executing STS AssumeRole to generate the AWS API credentials.

Additionally, the service would generate an Amazon Federated Sign-On URL, enabling users to access the AWS Web Console using their JWT.
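A minimal sketch of both steps is shown below, assuming Python with boto3 and PyJWT; the function names, the roles claim, and the public-key handling are illustrative rather than taken from the example implementation that follows:

import json
import urllib.parse
import urllib.request

import boto3
import jwt  # PyJWT; assumes the provider signs tokens with RS256


def issue_credentials(token, role_arn, public_key, audience="aws-credentials-issuer"):
    """Validate the JWT, check the requested role, and exchange it via STS AssumeRole."""
    claims = jwt.decode(token, public_key, algorithms=["RS256"], audience=audience)

    # The "roles" claim is an assumption; it must match whatever mapper the
    # OpenID provider uses to expose the permitted AWS role ARNs.
    if role_arn not in claims.get("roles", []):
        raise PermissionError("JWT does not grant access to the requested role")

    sts = boto3.client("sts")
    response = sts.assume_role(RoleArn=role_arn, RoleSessionName=claims["sub"])
    return response["Credentials"]


def federated_signin_url(credentials, issuer="aws-credentials-issuer"):
    """Build an Amazon federated sign-in URL from the temporary credentials."""
    session = json.dumps({
        "sessionId": credentials["AccessKeyId"],
        "sessionKey": credentials["SecretAccessKey"],
        "sessionToken": credentials["SessionToken"],
    })
    # Exchange the temporary credentials for a sign-in token.
    token_url = ("https://signin.aws.amazon.com/federation?Action=getSigninToken"
                 "&Session=" + urllib.parse.quote_plus(session))
    signin_token = json.loads(urllib.request.urlopen(token_url).read())["SigninToken"]

    return ("https://signin.aws.amazon.com/federation?Action=login"
            "&Issuer=" + urllib.parse.quote_plus(issuer)
            + "&Destination=" + urllib.parse.quote_plus("https://console.aws.amazon.com/")
            + "&SigninToken=" + signin_token)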

Example Implementation

Provided below is an example implementation of the above design. One user with username “demo” and password “demo” exists. Please do not use this demo in a production environment without HTTPS.

To follow along, clone or download the code at https://github.com/imduffy15/aws-credentials-issuer.

OpenID Provider

Keycloak is an identity and access management solution that provides OpenID Connect (OIDC) and Security Assertion Markup Language (SAML) interfaces. Additionally, it supports federation to Active Directory (AD) and Lightweight Directory Access Protocol (LDAP) servers. Keycloak enables organisations to centrally manage employees’ access to many different services.

The provided code contains a docker-compose file; executing docker-compose up will bring up a Keycloak server with its administrative interface accessible at http://localhost:8080/auth/admin/ using username “admin” and password “password”.

On the “clients” screen, a client named “aws-credentials-issuer” is present; users will use this client to generate their JWTs. The client is pre-configured to support the Authorization Code Grant, used by command line interfaces to generate tokens, and the Implicit Grant, used by a frontend application.

Under the “aws-credentials-issuer” client, additional roles can be added. These roles must exist on AWS, and they must have a trust relationship with the account that will be running the “aws-credentials-issuer”.

Additionally, these roles must be placed into the user’s JWT; this is pre-configured under “mappers”.

Finally, the role must be assigned to a user. This can be done by navigating to users -> demo -> role mappings and moving the desired role from “available roles” to “assigned roles” for the client “aws-credentials-issuer”.

AWS Credentials Issuer Service

Backend

The provided code supplies a Lambda function which takes care of validating the user’s JWT and exchanging it, via AssumeRole, for AWS credentials.

This code can be deployed to an AWS account using the Serverless Framework and the supplied definition, which creates the Lambda functions and the API Gateway endpoints used to invoke them.

With the AWS CLI configured with credentials for the account the service will run in, execute sls deploy; this will deploy the Lambda functions and return the URLs for invoking them.

Frontend

The provided code supplies a frontend which will provide users with a graphical experience for accessing the AWS Web Console or generating AWS API Credentials.

The frontend can be deployed to an S3 bucket using the Serverless Framework. Before deploying, some variables must be modified: in the serverless definition (serverless.yml), replace “ianduffy-aws-credentials-issuer” with a desired S3 bucket name, and modify ui/.env to contain your Keycloak and backend URLs. The deployment can be executed with sls client deploy.

On completion, a URL in the format of http://<bucket-name>.s3-website.<region>.amazonaws.com will be returned. This needs to be supplied to Keycloak as a redirect URI for the “aws-credentials-issuer” client.

Usage

Browser

By navigating to the URL of the S3 bucket a user can get access to the AWS Web Console or get API credentials which they can use to manually configure an application.

Command line

To interact with the “aws-credentials-issuer”, the user must have a valid JWT. This can be obtained by executing the Authorization Code Grant against Keycloak.

token-cli can be used to execute the Authorization Code Grant and generate a JWT; it can be downloaded from the project’s releases page. Alternatively, on macOS it can be installed with Homebrew: brew install imduffy15/tap/token-cli.

Once token-cli is installed, it must be configured to point at the Keycloak server.

Finally, a token can be generated with token-cli token get aws-credentials-issuer -p 9000. On first run, the user’s browser will be opened and they will be required to log in; on subsequent runs the token will be cached or refreshed automatically.

This token can be used against the “aws-credentials-issuer” to get AWS API credentials:

curl https://<API-GATEWAY>/dev/api/credentials?role=<ROLE-ARN> \
-H "Authorization: bearer $(token-cli token get aws-credentials-issuer)"

Alternatively, an Amazon Federated Sign-On URL can be generated:

curl https://<API-GATEWAY>/dev/api/login?role=<ROLE-ARN> \
-H "Authorization: bearer $(token-cli token get aws-credentials-issuer)"

Scala and AWS managed ElasticSearch

AWS offers a managed ElasticSearch service. It exposes an HTTP endpoint for interacting with ElasticSearch and authenticates requests via AWS Identity and Access Management (IAM).

Elastic4s offers a neat DSL and Scala client for ElasticSearch. This post details how to use it with AWS’s managed ElasticSearch service.

Creating a request signer

Using the aws-signing-request-interceptor library, it’s easy to create an HttpRequestInterceptor which can later be added to the HttpClient used by Elastic4s for making calls to ElasticSearch.

import java.time.{LocalDateTime, ZoneId}
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain
import com.google.common.base.Supplier
import com.typesafe.config.Config
import vc.inreach.aws.request.AWSSigner

private def createAwsSigner(config: Config): AWSSigner = {
  // Converts the Scala lambda below into the Guava Supplier the signer expects.
  import com.gilt.gfc.guava.GuavaConversions._

  val awsCredentialsProvider = new DefaultAWSCredentialsProviderChain
  val service = config.getString("service")
  val region = config.getString("region")
  val clock: Supplier[LocalDateTime] = () => LocalDateTime.now(ZoneId.of("UTC"))
  new AWSSigner(awsCredentialsProvider, region, service, clock)
}

Creating an HTTP Client and intercepting the requests

The ElasticSearch RestClientBuilder allows registering a callback to customise the HttpAsyncClientBuilder, which makes it possible to register an interceptor that signs the requests.

The callback can be created by implementing the HttpClientConfigCallback interface as follows:

private val esConfig = config.getConfig("elasticsearch")

private class AWSSignerInterceptor extends HttpClientConfigCallback {
  override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = {
    // Sign every outgoing request with AWS Signature Version 4.
    httpClientBuilder.addInterceptorLast(new AWSSigningRequestInterceptor(createAwsSigner(esConfig)))
  }
}

Finally, an Elastic4s client can be created with the interceptor registered:

private def createEsHttpClient(config: Config): HttpClient = {
  val hosts = ElasticsearchClientUri(config.getString("uri")).hosts.map {
    case (host, port) =>
      new HttpHost(host, port, "http")
  }

  log.info(s"Creating HTTP client on ${hosts.mkString(",")}")

  val client = RestClient.builder(hosts: _*)
    .setHttpClientConfigCallback(new AWSSignerInterceptor)
    .build()
  HttpClient.fromRestClient(client)
}

Full Example on GitHub

Azure bug bounty: Root to storage account administrator

In my previous blog post, Azure bug bounty: Pwning Red Hat Enterprise Linux, I detailed how it was possible to get administrative access to the Red Hat Update Infrastructure consumed by Red Hat Enterprise Linux virtual machines booted from the Microsoft Azure Marketplace image. In theory, if exploited, one could have gained root access to all virtual machines consuming the repositories by releasing an updated version of a common package and waiting for the virtual machines to execute yum update.

This would have granted an attacker access to every piece of data on the compromised virtual machines. Sadly, the attack vector is actually much more widespread than this. Due to a poor implementation within the mandatory Microsoft Azure Linux Agent (WaLinuxAgent), one is able to obtain the administrator API keys to the storage account used by the virtual machine for shipping debug logs; at the time of research, this storage account defaulted to one shared by multiple virtual machines.

At the time of research, the Red Hat Enterprise Linux image available on the Microsoft Azure Marketplace came with WaLinuxAgent 2.0.16. When a virtual machine was created with the “Linux diagnostic extension” enabled, the API key for the specified storage account was written to /var/lib/waagent/Microsoft.OSTCExtensions.LinuxDiagnostic-2.3.9007/xmlCfg.xml.

Once acquired, one can simply use the Azure Xplat-CLI to interact with the storage account:

export AZURE_STORAGE_ACCOUNT="storage_account_name_as_per_xmlcfg"
export AZURE_STORAGE_ACCESS_KEY="storage_account_access_key_as_per_xmlcfg"
azure storage container list # acquire some container name
azure storage blob list # provide the container name
# Copy, download, upload, delete any blobs available across any containers you can access.

If the storage account was used by multiple virtual machines there is potential to download their virtual hard disks.

Azure bug bounty: Pwning Red Hat Enterprise Linux

TL;DR: Acquired administrator-level access to all of the Microsoft Azure managed Red Hat Update Infrastructure that supplies the packages for all Red Hat Enterprise Linux instances booted from the Azure Marketplace.

I was tasked with creating a machine image of Red Hat Enterprise Linux that was compliant with the Security Technical Implementation Guide defined by the Department of Defense.

This machine image was to be used on both Amazon Web Services and Microsoft Azure, both of which offer marketplace images with a metered billing pricing model [1][2]. Ideally, I wanted my custom image to be billed under the same mechanism; as such, the virtual machines would consume software updates from a local Red Hat Enterprise Linux repository owned and managed by the cloud provider.

Both Amazon Web Services and Microsoft Azure utilise a deployment of Red Hat Update Infrastructure for supplying this functionality.

This setup requires two main parts: the Red Hat Update Appliance and the content delivery servers.

Red Hat Update Appliance

There is only one Red Hat Update Appliance per Red Hat Update Infrastructure installation; however, both Amazon Web Services and Microsoft Azure create one per region.

The Red Hat Update Appliance is responsible for synchronising content from Red Hat and distributing it to the content delivery servers. It does not need to be exposed to the repository clients.

Content Delivery server

The content delivery server(s) provide the yum repositories that clients connect to for updated packages.

Achieving metered billing

Both Amazon Web Services and Microsoft Azure use SSL certificates for authentication against the repositories.

However, these are the same SSL certificates for every instance.

On Amazon Web Services, having the SSL certificates is not enough: you must have booted your instance from an AMI with an associated billing code. It is this billing code that ensures you pay the extra premium for running Red Hat Enterprise Linux.

On Azure, it is unclear how billing is tracked. At the time of research, it was possible to copy the SSL certificates from one instance to another and successfully authenticate. Additionally, if you duplicated a Red Hat Enterprise Linux virtual hard disk and created a new instance from it, all billing association seemed to be lost but repository access was still available.

Where Azure Failed

To set up repository connectivity on Azure, an RPM with the necessary configuration is provided; in the older version of the agent, the agent itself was responsible for this task [3]. The installation script it references comes from an archive which, when expanded, contains the client configuration for each region.

By browsing the metadata of the RPMs we can discover some interesting information:

$ rpm -qip RHEL6-2.0-1.noarch.rpm
Name        : RHEL6                        Relocations: (not relocatable)
Version     : 2.0                               Vendor: (none)
Release     : 1                             Build Date: Sun 14 Feb 2016 06:40:54 AM UTC
Install Date: (not installed)               Build Host: westeurope-rhua.cloudapp.net
Group       : Applications/Internet         Source RPM: RHEL6-2.0-1.src.rpm
Size        : 20833                            License: GPLv2
Signature   : (none)
URL         : http://redhat.com
Summary     : Custom configuration for a cloud client instance
Description :
Configurations for a client to connect to the RHUI infrastructure

As you can see, the build host enables us to discover all of the Red Hat Update Appliances:

$ host westeurope-rhua.cloudapp.net
westeurope-rhua.cloudapp.net has address 104.40.209.83

$ host eastus2-rhua.cloudapp.net
eastus2-rhua.cloudapp.net has address 13.68.20.161

$ host southcentralus-rhua.cloudapp.net
southcentralus-rhua.cloudapp.net has address 23.101.178.51

$ host southeastasia-rhua.cloudapp.net
southeastasia-rhua.cloudapp.net has address 137.116.129.134

At the time of research, all of these servers were exposing their REST APIs over HTTPS.

The URL of the archive containing these RPMs was discovered in a package labelled PrepareRHUI, available on any Red Hat Enterprise Linux box running on Microsoft Azure.

$ yumdownloader PrepareRHUI
$ rpm -qip PrepareRHUI-1.0.0-1.noarch.rpm
Name        : PrepareRHUI                  Relocations: (not relocatable)
Version     : 1.0.0                             Vendor: Microsoft Corporation
Release     : 1                             Build Date: Mon 16 Nov 2015 06:13:21 AM UTC
Install Date: (not installed)               Build Host: rhui-monitor.cloudapp.net
Group       : Unspecified                   Source RPM: PrepareRHUI-1.0.0-1.src.rpm
Size        : 770                              License: GPL
Signature   : (none)
Packager    : Microsoft Corporation <xiazhang@microsoft.com>
Summary     : Prepare RHUI installation for Redhat client
Description :
PrepareRHUI is used to prepare RHUI installation for before making a Redhat image.

The build host, rhui-monitor.cloudapp.net, is also interesting: at the time of research, a port scan revealed an application running on port 8080.

Microsoft Azure RHUI Monitoring tool

Despite the application requiring username and password authentication, it was possible to execute a run of its “backend log collector” against a specified content delivery server. When the collector run completed, the application supplied URLs to archives containing multiple logs and configuration files from the servers.

Included within these archives was an SSL certificate that would grant full administrative access to the Red Hat Update Appliances [4].

Pulp admin keys for Microsoft Azure's RHUI

At the time of research all Red Hat Enterprise Linux virtual machines booted from the Azure Marketplace image had the following additional repository configured:

[rhui-PA]
name=Packages for Azure
mirrorlist=https://eastus2-cds1.cloudapp.net/pulp/mirror/PA
enabled=1
gpgcheck=0
sslverify=1
sslcacert=/etc/pki/rhui/ca.crt

Given that gpgcheck is disabled, with full administrative access to the Red Hat Update Appliance REST API one could have uploaded packages that would be acquired by client virtual machines on their next yum update.

The issue was reported in accordance with the Microsoft Online Services Bug Bounty terms. Microsoft agreed it was a vulnerability in their systems. Immediate action was taken to prevent public access to rhui-monitor.cloudapp.net. Additionally, they eventually prevented public access to the Red Hat Update Appliances, and they claim to have rotated all secrets.

[1] https://azure.microsoft.com/en-in/pricing/details/virtual-machines/red-hat/

[2] https://aws.amazon.com/partners/redhat/

[3] https://github.com/Azure/azure-linux-extensions/blob/master/Common/WALinuxAgent-2.0.16/waagent#L2891

[4] https://fedorahosted.org/pulp/wiki/Certificates

Hello World

resource "null_resource" "hello_world" {
    provisioner "local-exec" {
        command = "echo 'Hello World'"
    }
}

Interested in automation and the HashiCorp suite of tools? If so, you’ll love this blog. Across different posts we will explore many different automation tasks, utilising both public and private cloud with the HashiCorp toolset.

Thanks, Ian.