AWS SFTP using Azure AD Tutorial
  • Posted October 29th, 2021

Using Azure AD as an Identity Provider for AWS Transfer (SFTP) V2

AWS Transfer for SFTP enables you to move file transfer workloads that use the Secure Shell (SSH) File Transfer Protocol (SFTP) to AWS without needing to modify your applications or manage any SFTP servers. Out of the box you can easily allow SSH key-based authentication; however, tying into another validation mechanism can be a bit cumbersome. AWS SFTP now supports custom identity providers, but it is up to you to create the backend logic for authentication and policy creation.

The high level concept

We are going to use AWS Transfer for SFTP with custom authentication configured to allow uploading to S3 via SFTP using Azure Active Directory credentials. In order to gain access and use this service, a user must be granted access rights on the Azure AD application. Later, we will break down some working scenarios that assign access rights based on group membership.

[Diagram: AWS Transfer for SFTP using Azure AD]

We have the following steps involved:

  1. The user initiates an SFTP transfer
  2. AWS Transfer for SFTP sends the login request to the AWS API Gateway
  3. The AWS Lambda function receives the UserName and Password from the API Gateway invocation
  4. The AWS Lambda function calls the Azure AD API to validate the user login, and then validates that the user is a member of the specified security group
  5. If both the authentication and group membership checks return true, the function continues on to build our custom IAM policy for the specified user
  6. The AWS Lambda function returns the specified IAM Role as well as a custom scoped-down IAM policy
  7. The end user now has access to upload and download files and create new directories in the specified directory: MY_BUCKET/USER_NAME
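Concretely, the identity provider behind the API Gateway must hand back a JSON document naming the role, home directory, and (optionally) a scoped-down policy, while an empty object means access denied. A minimal sketch of the two possible responses (the bucket name and role ARN below are placeholders, not values from this tutorial):

```javascript
// Successful authentication: AWS Transfer assumes `Role`, restricted by the
// scoped-down `Policy`, and drops the user into `HomeDirectory`.
var allowed = {
    Role: 'arn:aws:iam::123456789012:role/MY_SFTP_ROLE',   // placeholder ARN
    HomeDirectory: '/MY_BUCKET/jsmith',
    Policy: JSON.stringify({ Version: '2012-10-17', Statement: [] })  // scoped-down policy
};

// Failed authentication: an empty object tells AWS Transfer to reject the login.
var denied = {};
```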

Setting up the Azure AD Application

  1. Sign in to the Azure portal at https://portal.azure.com and navigate to Azure Active Directory
  2. In the left navigation pane, select App registrations
  3. On the top menu bar, select New registration
  4. Fill in the information related to your application, then press the Register button at the bottom of the page

     [Screenshot: Azure AD Register Application]

  5. In the left navigation pane, select API permissions
  6. On the top menu, select Grant admin consent and ensure that the User.Read permission status is set to Granted

     [Screenshot: Azure AD grant consent]

  7. In the left navigation pane, select Authentication
  8. Under Advanced settings, set Allow public client flows to Yes, then save

     [Screenshot: Azure application allow public flows]

  9. On the left navigation menu, go to Overview and, at the bottom of the page, choose Go to Enterprise applications

     [Screenshot: Azure AD grant consent]

  10. On the left navigation menu, go to Properties
  11. Under Properties, set Enabled for users to sign-in and Assignment required to Yes

     [Screenshot: Assignment required]

  12. On the left navigation menu, select Users and groups
  13. Use the wizard to grant access to any user or group that should be able to use the SFTP service
  14. On the left navigation menu, select Overview, then copy the Application ID; we will use it when setting up the SFTP server

Setting up the IAM Role for S3 Access

  1. Sign in to the AWS console and navigate to IAM: https://console.aws.amazon.com/iam
  2. Go to Policies in the navigation pane, select Create policy, then select the JSON tab
  3. Paste the following JSON into the editor window and select Review policy:
                                    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListBuckets",
                "Effect": "Allow",
                "Action": [
                    "s3:ListAllMyBuckets",
                    "s3:GetBucketLocation"
                ],
                "Resource": "*"
            },
            {
                "Sid": "ListSftpBucket",
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket"
                ],
                "Resource": [
                    "arn:aws:s3:::MY_BUCKET_NAME"
                ]
            },
            {
                "Sid": "AllowFtpWriteOptions",
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:DeleteObjectVersion",
                    "s3:DeleteObject",
                    "s3:GetObjectVersion"
                ],
                "Resource": [
                    "arn:aws:s3:::MY_BUCKET_NAME/*/*"
                ]
            }
        ]
    }
                                    
                                

  4. Name your policy and select Create policy
  5. Go to Roles in the navigation pane and select Create role

     [Screenshot: AWS IAM Role creation for AWS SFTP]

  6. Select AWS service, then select S3, and press the Next: Permissions button. Filter for the policy that you created in the previous step and attach it

     [Screenshot: Attach IAM Policy for AWS SFTP IAM Role]

  7. Select Next: Tags and then Next: Review. Create a unique name for your role and select Create role

     [Screenshot: AWS IAM Role creation for AWS Transfer for SFTP]

  8. Search for the role that you have just created and select it. Once inside the role, copy the Role ARN; we will use it when deploying our CloudFormation template. Select Trust relationships and paste in the following JSON document:
                                    
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "transfer.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
                                    
                                
    and select Update Trust Policy.

We will set more restrictive permissions in our Scoped Down IAM Policy which will be specific to each user that is logging in.
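To make the scoped-down idea concrete: AWS Transfer substitutes the ${transfer:UserName}, ${transfer:HomeBucket}, and ${transfer:HomeDirectory} placeholders in the returned policy when the session is established. The sketch below is our own illustration of what that substitution amounts to, not AWS's actual implementation:

```javascript
// Illustration only: expand the transfer:* placeholders the way the service
// does when it evaluates the scoped-down policy for a session.
function expandTransferVariables(policyJson, session) {
    return policyJson
        .replace(/\$\{transfer:UserName\}/g, session.userName)
        .replace(/\$\{transfer:HomeBucket\}/g, session.homeBucket)
        .replace(/\$\{transfer:HomeDirectory\}/g, session.homeDirectory);
}

// expandTransferVariables('arn:aws:s3:::${transfer:HomeDirectory}/*',
//     { userName: 'jsmith', homeBucket: 'MY_BUCKET', homeDirectory: 'MY_BUCKET/jsmith' })
// -> 'arn:aws:s3:::MY_BUCKET/jsmith/*'
```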

Setting up the API Gateway

  1. Create an AWS CloudFormation stack from the sample template found at the following Amazon S3 URL:
    https://ldaptive-pubic-downloads.s3.amazonaws.com/s3-sftp-gateway-setupv2.template
    This template generates a basic API Gateway and AWS Lambda function.
    Deploying this stack is the easiest way to integrate a custom identity provider into the AWS SFTP workflow. The stack uses the AWS Lambda function to support a gateway based on API Gateway. You can then use this gateway as a custom identity provider in AWS SFTP. By default, this Lambda function just validates that a username and password are being passed to the function, and it returns a response similar to what the final Lambda function will produce.
  2. Fill in the parameters for the CloudFormation template. This will create all needed IAM roles and policies, as well as the API Gateway and Lambda function, and will set the environment variables used by our final version of the Lambda function.

     [Screenshot: AWS CloudFormation template for AWS SFTP]

  3. Check that the API gateway is working and is invoking the lambda function as expected.
    To do this, open the API Gateway console at https://console.aws.amazon.com/apigateway and view the Transfer Custom Identity Provider basic template API that the AWS CloudFormation template generated. Navigate to Resources in the navigation pane, then select the GET method listed there. The following screenshot shows the correct configuration as well as how to test this function. Select the TEST option on this dialog page.

    [Screenshot: Testing AWS API Gateway for SFTP Azure Identity Provider]

    The following screenshot shows a valid test and that the data was successfully passed to the Lambda function.

    [Screenshot: Testing response from AWS API Gateway for SFTP Azure Identity Provider]
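One detail worth noting while testing: AWS Transfer passes the username inside the invocation URL path, so an '@' in a UPN arrives URL-encoded, which is why the Lambda function later checks for %40. A quick sketch of the round trip:

```javascript
// An '@' in a UPN is percent-encoded when it travels through the API Gateway
// path, and the Lambda decodes it back before calling Azure AD.
var upn = 'jsmith@mydomain.com';
var encoded = encodeURIComponent(upn);        // 'jsmith%40mydomain.com'
var decoded = decodeURIComponent(encoded);    // 'jsmith@mydomain.com'
```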

Setting up the AWS Transfer for SFTP Server

  1. Navigate to AWS Transfer for SFTP: https://console.aws.amazon.com/transfer and select Create server.
  2. Under Endpoint configuration select Public. Optional: we will create a Custom hostname here leveraging an Amazon Route 53 DNS alias, since our domain is hosted with Route 53.

     [Screenshot: AWS Transfer for SFTP endpoint configuration]

  3. Under Identity provider select Custom, and in the Custom provider field paste the prod stage invocation URL that was created under the steps in Setting up the API Gateway. For the Invocation role, select the role that was created via the CloudFormation template: Azure-AD-IdentityValidator-SFTP-TransferRole.

     [Screenshot: AWS Transfer for SFTP custom identity provider]

  4. Under Logging role select the role that was created via the CloudFormation template: Azure-AD-IdentityValidator-Logs-Role. Then select Create server.

     [Screenshot: IAM role for logging AWS Transfer for SFTP user access]

Understanding and setting up the Lambda Function

  • Here is the entire Lambda function. One thing to note: this version (compared to our previous version) does not have any external dependencies, so the function that was deployed via the CloudFormation template will work as expected.
                                    
    const AWS = require('aws-sdk');
    const https = require('https');
    const queryString = require('querystring');
    
    /**
     * @param {object} event            event passed from sftp server
     * @param {string} event.username   username of sftp user
     * @param {string} event.password   password of sftp user
     * @returns                         access response
     */
    exports.handler = async (event) => {
        const tenantId = await decryptVariable(process.env.tenantId);
        const clientId = await decryptVariable(process.env.AzureClientId);
        const bucket = await decryptVariable(process.env.bucket);
        const s3Role = await decryptVariable(process.env.S3RoleArn);
    
        //if using tenantId (the guid id) set this to your domain name example: mydomain.com
        const domain = tenantId;
    
        var userName = event.username;
    
        if (userName.includes('%40')) {
            userName = decodeURIComponent(userName);
        } else {
            userName = `${userName}@${domain}`;
        };
    
        var credentials = {
            client_id: clientId,
            response_type: 'token',
            scope: 'https://graph.microsoft.com/User.Read',
            grant_type: 'password',
            username: userName,
            password: event.password
        };
    
        var postData = queryString.stringify(credentials);
        var options = {
            method: 'POST',
            host: 'login.microsoftonline.com',
            path: `/${tenantId}/oauth2/v2.0/token`,
            headers: {
                "Accept": "application/json",
                "Content-Type": "application/x-www-form-urlencoded",
                "Content-Length": postData.length
            }
        };
    
        var token = await webRequest(options, postData);
    
        if (!token.access_token) {
            if (token.error) {
                console.log({ status: 'Failure', user: userName, error: token.error, errorUri: token.error_uri });
            };
            return {};
        } else {
            console.log({ status: 'Success', user: userName, scope: token.scope });
    
            /**
         * Add additional logic here if required
             */
    
            var response = {
                Role: s3Role,
                HomeBucket: bucket,
                HomeDirectory: "/" + bucket + '/' + userName.toLowerCase(),
                Policy: JSON.stringify(scopedPolicy)
            };
            return response;
        };
    };
    
    /**
     * @param {object} options          https options
     * @param {string} options.host     https domain or root url
     * @param {string} options.path     https url endpoint to hit
     * @param {string} options.port     https port to use - defaults to 443
     * @param {string} options.method   https method POST | GET | PUT | DELETE
     * @param {object} options.headers  Header data that needs to be passed the call
     * @param {object} postData         data that should be sent in a post body
     * @returns 
     */
    var webRequest = (options, postData) => new Promise((resolve, reject) => {
        const req = https.request(options, res => {
            var chunk = '';
            res.on('data', d => {
                chunk += d;
            }).on('end', () => {
                var response = JSON.parse(chunk.toString());
                response.statusCode = res.statusCode;
                resolve(response);
            });
        });
        req.on('error', error => {
            console.error('error', error);
            reject(error); // fail the promise instead of leaving the invocation hanging
        });
        if (postData) { req.write(postData); };
        req.end();
    });
    
    /**
     * @param {string} variable         environment variable encrypted by KMS
     * @returns                         decrypted variable 
     */
    var decryptVariable = (variable) => new Promise((resolve, reject) => {
        // values encrypted with KMS are base64 encoded and start with this prefix
        if (!variable.startsWith('AQICA')) { return resolve(variable) };
        var aws = new AWS.KMS().decrypt({
            CiphertextBlob: Buffer.from(variable, 'base64'),
            EncryptionContext: { LambdaFunctionName: process.env.AWS_LAMBDA_FUNCTION_NAME }
        });
        aws.on('success', r => {
            resolve(r.data.Plaintext.toString('ascii'));
        }).on('error', e => {
            console.log('error decrypting key', e.message);
            reject(e); // fail fast instead of leaving the invocation hanging
        }).send();
    });
    
    // this is our scoped policy that will determine the access rights of the user
    var scopedPolicy = {
        Version: "2012-10-17",
        Statement: [
            {
                Sid: "allowFolderList",
                Action: [
                    "s3:ListBucket"
                ],
                Effect: "Allow",
                Resource: [
                    "arn:aws:s3:::${transfer:HomeBucket}"
                ],
                Condition: {
                    StringLike: {
                        "s3:prefix": [
                            "${transfer:UserName}/*"
                        ]
                    }
                }
            },
            {
                Sid: "allowListBuckets",
                Effect: "Allow",
                Action: [
                    "s3:ListAllMyBuckets",
                    "s3:GetBucketLocation"
                ],
                Resource: "*"
            },
            {
                Sid: "HomeDirectoryAccess",
                Effect: "Allow",
                Action: [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:DeleteObjectVersion",
                    "s3:DeleteObject",
                    "s3:GetObjectVersion"
                ],
                Resource: [
                    "arn:aws:s3:::${transfer:HomeDirectory}/*"
                ]
            },
            {
                Sid: "DenyDeletionOfHomeDirectory",
                Effect: "Deny",
                Action: [
                    "s3:DeleteObjectVersion",
                    "s3:DeleteObject"
                ],
                Resource: [
                    "arn:aws:s3:::${transfer:HomeDirectory}/"
                ]
            }
        ]
    };
                                    
                                
  • The top portion pulls in everything the function needs; all are native Node.js modules or the AWS SDK, which is included in the Lambda runtime
                                
    const AWS = require('aws-sdk');
    const https = require('https');
    const queryString = require('querystring');
                                    
                                
  • The next portion defines our lambda entry point. In the params, the function will receive an object with 2 properties that are passed from the SFTP server:
    • username: login with either username (jsmith) or UPN name (jsmith@mydomain.com)
    • password: Azure AD Password
    Once we are inside of the function, there are 4 environment variables that must be set in order for this function to work:
    • tenantId: This is your Azure tenant ID. You can specify your domain (mydomain.com) or your actual tenant ID (9818db02-8f26-45bc-a160-3af995cfe128)
    • AzureClientId: This is the Azure Application ID that was created in the first section of this article
    • bucket: Our S3 bucket that all data from the SFTP Server will be housed in
    • S3RoleArn: The IAM Role that was created above that will allow the SFTP server write access
  • You will notice that each of these variables also calls the function decryptVariable. If the function detects that any of the environment variables are encrypted, it will decrypt them. It is strongly recommended that you at least encrypt the AzureClientId for security. For directions on how to encrypt these, follow the directions outlined here: aws lambda sensitive environment variables
  •                             
    /**
     * @param {object} event            event passed from sftp server
     * @param {string} event.username   username of sftp user
     * @param {string} event.password   password of sftp user
     * @returns                         access response
     */
    exports.handler = async (event) => {
        const tenantId = await decryptVariable(process.env.tenantId);
        const clientId = await decryptVariable(process.env.AzureClientId);
        const bucket = await decryptVariable(process.env.bucket);
        const s3Role = await decryptVariable(process.env.S3RoleArn);
    
        //if using tenantId (the guid id) set this to your domain name example: mydomain.com
        const domain = tenantId;
    
        var userName = event.username;
    
        if (userName.includes('%40')) {
            userName = decodeURIComponent(userName);
        } else {
            userName = `${userName}@${domain}`;
        };
    
        var credentials = {
            client_id: clientId,
            response_type: 'token',
            scope: 'https://graph.microsoft.com/User.Read',
            grant_type: 'password',
            username: userName,
            password: event.password
        };
    
        var postData = queryString.stringify(credentials);
        var options = {
            method: 'POST',
            host: 'login.microsoftonline.com',
            path: `/${tenantId}/oauth2/v2.0/token`,
            headers: {
                "Accept": "application/json",
                "Content-Type": "application/x-www-form-urlencoded",
                "Content-Length": postData.length
            }
        };
    
        var token = await webRequest(options, postData);
    
        if (!token.access_token) {
            if (token.error) {
                console.log({ status: 'Failure', user: userName, error: token.error, errorUri: token.error_uri });
            };
            return {};
        } else {
            console.log({ status: 'Success', user: userName, scope: token.scope });
    
            /**
             * Add Additional logic here if required
             */
    
            var response = {
                Role: s3Role,
                HomeBucket: bucket,
                HomeDirectory: "/" + bucket + '/' + event.username.toLowerCase(),
                Policy: JSON.stringify(scopedPolicy)
            };
            return response;
        };
    };
                                    
                                
  • The function will reach out to Azure AD and attempt to authenticate the user against your company's tenant. If you plan on making your SFTP server available in a multi-tenant scenario, change the path option to the following URL: /organizations/oauth2/v2.0/token. If you do use a multi-tenant scenario, then you MUST log in with the UPN of the user (jsmith@domain.com)
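  • For reference, the only change in the multi-tenant case is the token endpoint path; the host and headers stay exactly as in the handler above (sketch only, headers trimmed for brevity):

```javascript
// Multi-tenant variant: authenticate against the /organizations endpoint
// instead of a specific tenant. Everything else in `options` is unchanged.
var options = {
    method: 'POST',
    host: 'login.microsoftonline.com',
    path: '/organizations/oauth2/v2.0/token',   // was: `/${tenantId}/oauth2/v2.0/token`
    headers: {
        "Accept": "application/json",
        "Content-Type": "application/x-www-form-urlencoded"
    }
};
```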
  • You can further extend the logic by adding additional API calls and data enrichment leveraging the me endpoint of the Graph API
  • Let's say we do not want to use the username as the user's home directory, but would rather use a department-name folder. We can get the department from the me endpoint by specifying that we want the department property
                                
    ............
    } else {
        console.log({status: 'Success', user: userName, scope: token.scope});
        
        var options = {
            method: 'GET',
            host: 'graph.microsoft.com',
            path: `/v1.0/me?$select=department`,
            headers: {
                "Accept": "application/json",
                "Authorization": `Bearer ${token.access_token}`
            }
        };
    
        var userInfo = await webRequest(options);
    
        var response = {
            Role: s3Role,
            HomeBucket: bucket,
            HomeDirectory: "/" + bucket + '/' + userInfo.department.toLowerCase(),
            Policy: JSON.stringify(scopedPolicy)
        };
        return response;
    };
    .............
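  • One caveat with the snippet above: if the user's department attribute is empty in Azure AD, userInfo.department will be undefined and .toLowerCase() will throw. A defensive variant (a sketch; the helper name departmentFolder is ours) falls back to the username:

```javascript
// Build the home directory from the Graph /me response, falling back to the
// username when the department attribute is not populated in Azure AD.
function departmentFolder(bucket, userInfo, userName) {
    var folder = (userInfo && userInfo.department) ? userInfo.department : userName;
    return '/' + bucket + '/' + folder.toLowerCase();
}

// departmentFolder('my-bucket', { department: 'Finance' }, 'jsmith') -> '/my-bucket/finance'
// departmentFolder('my-bucket', {}, 'JSmith')                        -> '/my-bucket/jsmith'
```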
                                    
                                
  • Perhaps we have a requirement involving Azure AD group membership, and we want to grant access to a specific path if the user is a member of a specific group. We can again leverage the memberOf endpoint to check whether the user is in that group, and use the result to determine the path or the scoped policy that we return.
                                
    ............
    } else {
        console.log({status: 'Success', user: userName, scope: token.scope});
        
        var groupId = "fc069ccd-e570-4648-b826-705ac1b230fe";
        var groupSearch = encodeURIComponent(`id eq '${groupId}'`);
    
        var options = {
            method: 'GET',
            host: 'graph.microsoft.com',
            path: `/v1.0/me/memberof?$filter=${groupSearch}`,
            headers: {
                "Accept": "application/json",
                "Authorization": `Bearer ${token.access_token}`
            }
        };
        
        var homeDirectory = "/" + bucket + '/' + userName.toLowerCase()
        var userInfo = await webRequest(options);
    
        if(userInfo.statusCode === 200){
            homeDirectory = "/" + bucket + '/my_group_membership_folder'
        } else {
            //not member of group no change
        }
    
        var response = {
            Role: s3Role,
            HomeBucket: bucket,
            HomeDirectory: homeDirectory,
            Policy: JSON.stringify(scopedPolicy)
        };
        return response;
    };
    .............
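  • The branch above can be factored into a small pure helper, which also makes the decision easy to unit test (a sketch; the name chooseHomeDirectory and the 200-status convention are ours, mirroring the snippet above):

```javascript
// Decide the home directory based on the result of the Graph memberOf lookup.
// A 200 status from the filtered membership query means the user is in the
// group; anything else falls back to the per-user folder.
function chooseHomeDirectory(bucket, userName, membershipStatusCode, groupFolder) {
    if (membershipStatusCode === 200) {
        return '/' + bucket + '/' + groupFolder;
    }
    return '/' + bucket + '/' + userName.toLowerCase();
}

// chooseHomeDirectory('my-bucket', 'JSmith', 200, 'finance')  -> '/my-bucket/finance'
// chooseHomeDirectory('my-bucket', 'JSmith', 404, 'finance')  -> '/my-bucket/jsmith'
```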
                                    
                                

Conclusion

This is just an example of how to set up authentication with Azure AD and leverage group membership validation. There are many different use cases, whether leveraging read-only or write access, or better directing who has access to which files. Our use case for creating this involved several hundred users needing to access data specific to them; provisioning local access keys was not a viable solution, nor did it pass rigorous audit checks due to the nature of the data. We also gained insight into what users are doing with the data by logging all file access to CloudWatch, and thus did not require a separate CloudTrail trail for this bucket.

An added benefit with this version: access to objects in Azure AD is limited to the user's own profile. Our previous version required read-only access to the entire directory in order to understand who is a member of which group. Not having any external library dependencies also allows for a smaller code footprint and less chance of security vulnerability creep or breaking changes.

Source code and examples

The source code for this function, as well as other examples, can be accessed via our GitHub repo at the following location: https://github.com/ldaptive/aws-s3-sftp-azure-IdentityProvider-V2
