

Amazon Transfer Family FAQs

General


A: Amazon Transfer Family offers fully managed support for the transfer of files over SFTP, AS2, FTPS, and FTP directly into and out of Amazon S3 or Amazon EFS. You can seamlessly migrate, automate, and monitor your file transfer workflows by maintaining existing client-side configurations for authentication, access, and firewalls — so nothing changes for your customers, partners, and internal teams, or their applications.

A: SFTP stands for Secure Shell (SSH) File Transfer Protocol, a network protocol used for secure transfer of data over the internet. The protocol supports the full security and authentication functionality of SSH, and is widely used to exchange data between business partners in a variety of industries including financial services, healthcare, media and entertainment, retail, advertising, and more.

A: FTP stands for File Transfer Protocol, a network protocol used for the transfer of data. FTP uses separate channels for control and data transfers: the control channel stays open until it is terminated or times out due to inactivity, while the data channel is active only for the duration of the transfer. FTP transmits data in cleartext and does not support encryption of traffic.

A: FTPS stands for File Transfer Protocol over SSL, and is an extension to FTP. It uses the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) cryptographic protocols to encrypt traffic. FTPS allows encryption of both the control and data channel connections, either concurrently or independently.

A: AS2 stands for Applicability Statement 2, a network protocol used for the secure and reliable transfer of business-to-business data over the public internet over HTTP/HTTPS (or any TCP/IP network).

A: Amazon Transfer Family connectors are used to connect to externally hosted servers and transfer files directly to or from Amazon storage services. Customers can use SFTP connectors to connect to external SFTP servers, or AS2 connectors to connect to external AS2 servers.

A: Today, if you are using file transfer protocols such as SFTP, AS2, FTPS, or FTP to exchange data with third parties such as vendors, business partners, or customers, and want to manage that data for processing, analytics, and archival, you have to host and manage your own file transfer service. This requires you to invest in operating and managing infrastructure, patching servers, monitoring for uptime and availability, and building one-off mechanisms to provision users and audit their activity. The Amazon Transfer Family solves these challenges by providing fully managed support for SFTP, AS2, FTPS, and FTP that can reduce your operational burden while preserving your existing transfer workflows for your end users. The service stores transferred data as objects in your Amazon S3 bucket or as files in your Amazon EFS file system, so you can extract value from it in your data lake, use it in your Customer Relationship Management (CRM) or Enterprise Resource Planning (ERP) workflows, or archive it.

A: The Amazon Transfer Family provides you with a fully managed, highly available file transfer service with auto-scaling capabilities, eliminating the need for you to manage file transfer related infrastructure. Your end users’ workflows remain unchanged, while data uploaded and downloaded over the chosen protocols is stored in your Amazon S3 bucket or Amazon EFS file system. With the data in Amazon Web Services, you can now easily use it with the broad array of Amazon Web Services services for data processing, content management, analytics, machine learning, and archival, in an environment that can meet your compliance requirements.

A: In three simple steps, you get an always-on server endpoint enabled for SFTP, FTPS, and/or FTP. First, you select the protocol(s) you want your end users to use when connecting to your endpoint. Next, you set up your users using the service’s built-in authentication (service managed) or by integrating an existing identity provider like Microsoft Active Directory or LDAP (“BYO” authentication). Finally, you choose whether the server will access S3 buckets or EFS file systems. Once the protocol(s), identity provider, and access to file systems are enabled, your users can continue to use their existing SFTP, FTPS, or FTP clients and configurations, while the data they access is stored in the chosen file systems.
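The three setup steps above can be sketched as the parameters you would pass to the service’s CreateServer API. This is a minimal illustration, not a definitive configuration: the parameter names follow boto3 conventions and the chosen values are assumptions.

```python
# Sketch of the three setup steps expressed as CreateServer parameters
# (boto3-style names; the values below are illustrative assumptions).
create_server_params = {
    # Step 1: choose the protocol(s) your end users will connect with.
    "Protocols": ["SFTP", "FTPS"],
    # Step 2: pick built-in users or bring your own identity provider.
    "IdentityProviderType": "SERVICE_MANAGED",
    # Step 3: choose the backing storage (S3 buckets or EFS file systems).
    "Domain": "S3",
    "EndpointType": "PUBLIC",
}

# With boto3, these would be passed as:
#   boto3.client("transfer").create_server(**create_server_params)
print(sorted(create_server_params))
```

Swapping `"IdentityProviderType"` to a custom value is how the “BYO” authentication option described above would be selected.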

A: You can start using AS2 to exchange messages with your trading partners in three simple steps. First, import your certificates and private keys and your trading partners’ certificates and certificate chains. Next, create profiles using your and your partners’ AS2 IDs. Finally, pair up your own and your partner’s profile information using an agreement for receiving data and a connector for sending data. At this point you are ready to exchange messages with your trading partner’s AS2 server.

A: You can start using SFTP connectors to copy files between remote SFTP servers and Amazon S3 in two simple steps. First, create a connector by supplying the connection configuration, such as the address of the remote server and credentials for authentication. Next, invoke a file transfer operation by providing the source and destination file paths when calling the StartFileTransfer API to copy a file to or from the remote server using the connector. To learn more, visit the SFTP connector documentation.
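The two steps above can be sketched as boto3-style request shapes. The ARNs, URL, paths, and connector ID below are placeholders for illustration, not working values.

```python
# Step 1: configuration you would supply when creating an SFTP connector
# (boto3-style names; ARNs, URL, and IDs are placeholders).
create_connector_params = {
    "Url": "sftp://sftp.example.com",  # address of the remote server
    "AccessRole": "arn:aws:iam::111122223333:role/connector-access",
    "SftpConfig": {
        # Secret holding the username plus SSH private key and/or password.
        "UserSecretId": "arn:aws:secretsmanager:us-east-1:111122223333:secret:sftp-creds",
    },
}

# Step 2: request shape for copying a file from S3 to the remote server
# via StartFileTransfer.
start_transfer_params = {
    "ConnectorId": "c-1234567890abcdef0",                 # placeholder connector ID
    "SendFilePaths": ["/my-bucket/outbound/report.csv"],  # source object in S3
    "RemoteDirectoryPath": "/uploads",                    # destination on the remote server
}
print(sorted(start_transfer_params))
```

With boto3, these dicts would be passed to `create_connector(**create_connector_params)` and `start_file_transfer(**start_transfer_params)` on the `transfer` client.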

A: FTPS and SFTP can both be used for secure transfers. Since they are different protocols, they use different clients and technologies to offer a secure tunnel for transmission of commands and data. SFTP is a newer protocol and uses a single channel for commands and data, requiring fewer port openings than FTPS.

A: SFTP, FTPS, and AS2 can all be used for secure transfers. Since they are different protocols, they use different clients and technologies to offer secure transmission of data. In addition to support for encrypted and signed messages, AS2’s built-in mechanism for Message Disposition Notifications (MDNs) alerts the sender that the message has been successfully received and decrypted by the recipient. This provides proof to the sender that their message was delivered without being tampered with in transit. Use of AS2 is prevalent in retail, e-commerce, payments, and supply chain workflows for interacting with business partners who can also use AS2 to exchange messages, so that messages are securely transmitted and delivered. AS2 provides you with options to ensure the identity of the sender and receiver, the integrity of the message, and confirmation of whether the message was successfully delivered and decrypted by the receiver.

A: Yes, any existing file transfer client application will continue to work as long as you have enabled your endpoint for the chosen protocols. Examples of commonly used clients include WinSCP, FileZilla, CyberDuck, lftp, and OpenSSH clients.

A: Yes, you can deploy CloudFormation templates to automate creation of your servers and users or for integrating an identity provider, as well as to automate creation of your connectors. Refer to the usage guide for using Transfer resources in CloudFormation templates.

A: No, your users will need to use SFTP, FTPS, FTP, or AS2 to transfer files. Most file transfer clients offer one of these protocols as an option that will need to be selected during authentication. Please let us know via Support or through your account team about any specific protocols you would like to see supported.

Server endpoint options


A: Yes. If you already have a domain name, you can use Amazon Route 53 or any DNS service to route your users’ traffic from your registered domain to the server endpoint. Refer to the documentation on how the Amazon Transfer Family uses Amazon Route 53 for custom domain names (applicable to internet facing endpoints only).

A: Yes, if you don’t have a domain name, your users can access your endpoint using the hostname provided by the service. Alternatively, you can register a new domain using the Amazon Route 53 console or API, and route traffic from this domain to the service supplied endpoint hostname.

A: Yes, you will need to CNAME the domain to the service supplied endpoint hostname.

A: Yes. When you create a server or update an existing one, you have the option to specify whether you want the endpoint to be accessible over the public internet or hosted within your VPC. By using a VPC hosted endpoint for your server, you can restrict it to be accessible only to clients within the same VPC, other VPCs you specify, or in on-premises environments using networking technologies that extend your VPC such as Direct Connect, VPN, or VPC peering. You can further restrict access to resources in specific subnets within your VPC using subnet Network Access Control Lists (NACLs) or Security Groups. Refer to the documentation on creating your server endpoint inside your VPC using PrivateLink for details.

A: No, when you enable FTP, you will only be able to use the VPC hosted endpoint’s internal access option. If traffic needs to traverse the public network, secure protocols such as SFTP or FTPS should be used.

A: The service doesn’t allow you to use FTP over public networks: when you create a server enabled for FTP, the server endpoint is only accessible to resources within your VPC. If you need to use FTP for exchanging data over the public internet, you can front your server’s VPC endpoint with an internet-facing Network Load Balancer (NLB).

A: No, a VPC is required to host FTP server endpoints. Please refer to the documentation for CloudFormation templates that automate creation of VPC resources to host the endpoint during server creation.

A: Yes. You can enable fixed IPs for your server endpoint by selecting the VPC hosted endpoint for your server and choosing the internet-facing option. This will allow you to attach Elastic IPs (including BYO IPs) directly to the endpoint, which are assigned as the endpoint’s IP addresses. Refer to the section on creating an internet facing endpoint in the documentation: Creating your server endpoint inside your VPC.

A: Yes. You can attach Security Groups to your server’s VPC endpoint which will control inbound traffic to your server. If you are using API Gateway to integrate your identity management system, you can also use WAF to allow, block, or rate limit access by your end users’ Source IP address.

A: Yes. You can deploy your server endpoint with shared VPC environments typically used when segmenting your environment using tools such as Landing Zone for security, cost monitoring, and scalability. 

A: Yes. Based on your security and compliance requirements, you can select one of our available service managed security policies to control the cryptographic algorithms that will be advertised by your server endpoints. When your end users’ file transfer clients attempt to connect to your server, only the algorithms specified in the policy can be used to negotiate the connection. Refer to the documentation on pre-defined security policies.

A: Yes. Amazon Transfer Family supports quantum-safe public-key exchange for SFTP file transfers. You can associate one of the pre-defined hybrid PQ security policies with your SFTP server enabling quantum-safe key exchange with clients that support quantum-safe cryptography.

A: No. Fixed IP addresses, which are usually used for firewall allow listing purposes, are currently not supported on the PUBLIC endpoint type. Use VPC hosted endpoints to assign static IP addresses to your endpoint.

A: If you are using the PUBLIC endpoint type, your users will need to allow list the IP address ranges. Refer to the documentation for details on staying up to date with IP Address Ranges.

A: No. The server’s host key that is assigned when you create the server remains the same, unless you add a new host key and manually delete the original.

A: RSA, ED25519, and ECDSA key types are supported for SFTP server host keys.

A: Yes. You can import a host key when creating a server or import multiple host keys when updating a server. Refer to the documentation on managing host keys for your SFTP-enabled server.

A: You can associate up to 10 host keys per SFTP server. However, only one host key per key type can be used by your end users’ clients to verify the authenticity of your SFTP server in a single session.

A: Multiple host keys can be identified using descriptions and tags, which can be added or edited when creating or updating a host key. Each host key also has a unique host key ID as well as an Amazon Resource Name (ARN) that can be used to identify and track the host key.

A: Yes. The oldest host key of each key type can be used to verify the authenticity of an SFTP server. By adding RSA, ED25519, and ECDSA host keys, 3 separate host keys can be used to identify your SFTP server.

A: The oldest host key of each key type is used to verify authenticity of your SFTP server.

A: Yes. You can rotate your SFTP server host keys at any time by adding and removing host keys. Refer to the documentation on managing host keys for your SFTP-enabled server.
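The add-then-remove rotation described above can be sketched as a sequence of API calls. The operation names follow boto3 conventions for the Transfer Family client; the server and host key IDs are placeholders.

```python
# Hedged sketch of an SFTP host key rotation as two boto3-style calls
# (operation names per the Transfer Family API; IDs are placeholders).
rotation_steps = [
    # 1. Add the new host key to the server.
    ("import_host_key", {
        "ServerId": "s-1234567890abcdef0",
        "HostKeyBody": "<new private key material>",  # placeholder key body
    }),
    # 2. Once clients trust the new key, remove the old one.
    ("delete_host_key", {
        "ServerId": "s-1234567890abcdef0",
        "HostKeyId": "hostkey-old-placeholder",
    }),
]
for operation, params in rotation_steps:
    print(operation, sorted(params))
```

Performing the import before the delete keeps at least one valid host key per key type available to clients throughout the rotation.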

A: When you enable FTPS access, you will need to supply a certificate from Amazon Certificate Manager (ACM). This certificate is used by your end user clients to verify the identity of your FTPS server. Refer to the ACM documentation on requesting new certificates or importing existing certificates into ACM.

A: We only support passive mode, which allows your end users’ clients to initiate connections with your server. Passive mode requires fewer port openings on the client side, making your server endpoint more compatible with end users behind protected firewalls.

A: We only support explicit FTPS mode.

SFTP connectors


A: You can authenticate connections to remote servers using SSH key pairs, passwords, or both, depending on the remote server’s requirements. Store the username and the SSH private key and/or password used to log in to the remote servers in Amazon Secrets Manager. To learn more about storing and managing your connector’s authentication credentials, visit the documentation.
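A minimal sketch of the secret an SFTP connector reads its credentials from. The key names follow the documented secret format; the values are placeholders, and you would include `"Password"`, `"PrivateKey"`, or both depending on what the remote server requires.

```python
import json

# Shape of the Secrets Manager secret holding an SFTP connector's
# credentials (key names per the documented format; values are placeholders).
secret_value = {
    "Username": "transfer-user",
    # Include "Password", "PrivateKey", or both, per the remote server's
    # authentication requirements.
    "Password": "example-password",
    "PrivateKey": "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----",
}
# Stored as the secret's string value:
secret_string = json.dumps(secret_value)
print(sorted(secret_value))
```

The connector’s `UserSecretId` configuration would then point at the ARN of this secret.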

A: We support RSA and ECDSA keys with key sizes of 2048 and 4096 bits. Please let us know via Amazon Web Services Support about any specific key algorithms you would like to see supported.

A: You can use OpenSSH key pairs that have a .pem file extension and do not have a passphrase. If you have an SSH key pair in PuTTY format (.ppk), first convert it into PEM format. If your key pair is in OpenSSH’s proprietary format, it should be converted to the legacy PEM format before use with an SFTP connector. Please refer to the documentation for steps to convert key formats.

A: You can transfer files between Amazon S3 and remote SFTP servers using SFTP connectors.

A: Yes. You can provision your SFTP connector resources and your storage service in different Amazon Web Services accounts.

A: You can use the remote server’s SSH key fingerprint to validate its identity. Upload the remote server’s SSH key fingerprint to your SFTP connector’s configuration; each time the connector establishes a connection, the identity of the remote server will be validated using the fingerprint. If the fingerprint presented by the remote server does not match the one uploaded to the SFTP connector configuration, the connection will fail and the error details will be logged in CloudWatch. To re-establish the connection, you can manually edit the connector configuration to specify the updated SSH fingerprint of the server.

A: No, currently you cannot create connectors with a static IP address. Please let us know via Amazon Web Services Support or through your account team if you have a use-case that relies on SFTP connectors with static IP addresses.

A: You can use the Amazon Web Services Management Console or the TestConnection API command to check whether the connection to the remote server was successfully created before initiating file transfers. To learn more, visit the SFTP connectors documentation.

A: SFTP connectors can be used to list files stored in a directory on a remote SFTP server, retrieve files from a remote SFTP server to Amazon S3, send files from Amazon S3 to a remote SFTP server, and delete, rename, or move files stored on the remote SFTP server. To learn more about using SFTP connectors, visit the SFTP connectors documentation.
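Two of the operations above, sketched as boto3-style request shapes: listing a remote directory and retrieving files into S3. The IDs and paths are placeholders, and the parameter names are an assumption based on boto3 conventions for these APIs.

```python
# Connector operations sketched as boto3-style request shapes
# (IDs and paths are placeholders).
operations = {
    # List files in a directory on the remote SFTP server; the listing
    # output is written to the given S3 location.
    "start_directory_listing": {
        "ConnectorId": "c-1234567890abcdef0",
        "RemoteDirectoryPath": "/outbound",
        "OutputDirectoryPath": "/my-bucket/listings",
    },
    # Retrieve files from the remote server into S3.
    "start_file_transfer": {
        "ConnectorId": "c-1234567890abcdef0",
        "RetrieveFilePaths": ["/outbound/invoice.csv"],
        "LocalDirectoryPath": "/my-bucket/inbound",
    },
}
print(sorted(operations))
```

Sending in the other direction swaps `RetrieveFilePaths`/`LocalDirectoryPath` for `SendFilePaths`/`RemoteDirectoryPath`, as shown in the getting-started answer above.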

A: You can monitor the current status of your file transfer operations using the ListFileTransferResults API command. In addition, SFTP connectors emit detailed logs in Amazon CloudWatch, including status of your file transfers, operation type, timestamp, file path, and error description (if any) to help you maintain data lineage.

A: Yes. Static IP addresses are associated with your connectors by default and can be used to allow-list connections on your business partner’s firewall. You can identify the static IP addresses associated with your connectors by navigating to the connector details page in the Amazon Transfer Family Console, or by using the DescribeConnector API/CLI/CDK command.

A: Yes. All SFTP connectors in an account and Region share a set of static IP addresses, and all AS2 connectors in an account and Region share a set of static IP addresses. Sharing static IP addresses between connectors of a given type reduces the amount of allow-list documentation as well as the onboarding communication needed with your external partners.

A: You can use the Amazon Web Services Management Console or the TestConnection CLI command to check whether the connection to the remote server was successfully created before initiating file transfers. Make sure that the static IP addresses associated with your connectors are allow-listed on the remote server’s firewall if needed. To learn more, visit the SFTP connectors documentation.

A: Yes. You can schedule file transfers using Amazon EventBridge Scheduler. Create a schedule that meets your business’s needs using EventBridge Scheduler and specify Amazon Transfer Family’s StartFileTransfer API as the universal target for your schedule.
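A sketch of such a schedule, using EventBridge Scheduler’s universal target ARN format for the StartFileTransfer API. The role ARN, connector ID, and paths are placeholders; the values are assumptions for illustration.

```python
import json

# Sketch of an EventBridge Scheduler schedule invoking Transfer Family's
# StartFileTransfer as a universal target (ARNs and IDs are placeholders).
schedule = {
    "Name": "nightly-sftp-push",
    "ScheduleExpression": "cron(0 2 * * ? *)",  # 02:00 UTC daily
    "FlexibleTimeWindow": {"Mode": "OFF"},
    "Target": {
        # Universal target ARN format for the Transfer Family API call.
        "Arn": "arn:aws:scheduler:::aws-sdk:transfer:startFileTransfer",
        "RoleArn": "arn:aws:iam::111122223333:role/scheduler-invoke-transfer",
        # The Input payload carries the StartFileTransfer parameters.
        "Input": json.dumps({
            "ConnectorId": "c-1234567890abcdef0",
            "SendFilePaths": ["/my-bucket/outbound/report.csv"],
        }),
    },
}
print(schedule["Name"])
```

With boto3, this dict would be passed to the `scheduler` client’s `create_schedule(**schedule)` call.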

A: Yes. Amazon Step Functions integrates with various Amazon Web Services services, including Amazon Transfer Family, enabling you to invoke the SFTP connector’s StartFileTransfer action directly from your state machine. Once you have created your SFTP connector with Amazon Transfer Family, leverage Step Functions’ SDK integrations to call the StartFileTransfer API. If your use case relies on monitoring the state of your file transfers to provide feedback to your state machine, you can create a subscription filter on the connector’s CloudWatch logs to monitor the log entry for the file transfer completion event.

A: You can list all files from a directory on a remote SFTP server using SFTP connectors, and build custom logic to filter the file list based on your wildcard criteria for filename patterns. You can then use the StartFileTransfer API operation to transfer the matching files using SFTP connectors.
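The custom filtering logic can be as simple as a wildcard match over the directory listing. This sketch uses Python’s standard `fnmatch` module; the directory name, filenames, and pattern are illustrative.

```python
import fnmatch

def matching_retrieve_paths(listing, pattern, remote_dir="/outbound"):
    """Return remote file paths whose filenames match a wildcard pattern."""
    return [f"{remote_dir}/{name}" for name in listing
            if fnmatch.fnmatch(name, pattern)]

# Example listing as it might come back from a connector directory listing.
listing = ["invoice_2024.csv", "invoice_2025.csv", "readme.txt"]
paths = matching_retrieve_paths(listing, "invoice_*.csv")
# These paths would feed StartFileTransfer's RetrieveFilePaths parameter.
print(paths)  # → ['/outbound/invoice_2024.csv', '/outbound/invoice_2025.csv']
```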

A: No. Currently, SFTP connectors can only be used to connect with servers that offer an internet accessible endpoint. If you need to connect to servers that are only accessible via a private network, please let us know via Amazon Web Services Support or through your Amazon Web Services account team.

Multi-protocol access


A: Yes. During setup, you can select the protocol(s) you want to enable for clients to connect to your endpoint. The server hostname, IP address, and identity provider are shared across the selected protocols. Similarly, you can also enable additional protocol support to existing Amazon Transfer Family endpoints, as long as the endpoint configuration meets the requirements for all the protocols you intend to use.

A: When you need to use FTP (only supported for access within a VPC) and also need to support SFTP, AS2, or FTPS over the internet, you will need a separate server endpoint for FTP. You can use the same endpoint for multiple protocols when you want clients connecting over those protocols to use the same endpoint hostname and IP address. Additionally, if you want to share the same credentials for SFTP and FTPS, you can set up and use a single identity provider for authenticating clients connecting over either protocol.

A: Yes, you can provide the same user access over multiple protocols, as long as the credentials specific to each protocol have been set up in your identity provider. If you have enabled FTP, we recommend maintaining separate credentials for FTP. Refer to the documentation for setting up separate credentials for FTP.

A: Unlike SFTP and FTPS, FTP transmits credentials in cleartext. We recommend isolating FTP credentials from those used for SFTP or FTPS so that, if FTP credentials are inadvertently shared or exposed, your workloads using SFTP or FTPS remain secure.

Identity Provider options for server endpoints


A: The service supports two modes of authentication: Service Managed, where you store user identities within the service, and Custom (BYO), which enables you to integrate an identity provider of your choice. Service Managed authentication is supported for server endpoints that are enabled for SFTP only.

A: You can use Service Managed authentication to authenticate your SFTP users using SSH keys.

A: You can upload up to 10 SSH keys per user. RSA, ED25519, and ECDSA keys are supported.

A: Yes. Refer to the documentation for details on how to set up key rotation for your SFTP users.

A: No, storing passwords within the service for authentication is currently not supported.

A: When you create your server, you select a directory in Amazon Managed Microsoft AD, your on-premises environment, or self-managed AD in Amazon EC2 as your identity provider. You will then need to specify the AD Groups you want to enable for access using a Security Identifier (SID). Once you associate your AD group with access control information such as IAM Role, scope down policy (S3 only), POSIX Profile (EFS only), home directory location, and logical directory mappings, members of the group can use their AD credentials to authenticate and transfer files over the enabled protocols (SFTP, FTPS, FTP).

A: When you set up your users, you supply a scope down policy that is evaluated at run time based on your users’ information, such as their username. You can use the same scope down policy for all your users to provide access to unique prefixes in your bucket based on their username. Additionally, a username can also be used to evaluate logical directory mappings by providing a standardized template of how your S3 bucket or EFS file system contents are made visible to your users. Visit the documentation to Grant Access to AD Groups.
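A single shared scope down policy of this kind can be sketched as an IAM policy document that uses the `${transfer:UserName}` policy variable, so each user is confined to their own prefix. The bucket name and prefix layout below are placeholder assumptions.

```python
import json

# Sketch of a shared scope down (session) policy using the
# ${transfer:UserName} variable (bucket name is a placeholder).
scope_down_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListingUserPrefix",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::my-transfer-bucket"],
            # Restrict listing to the authenticated user's own prefix.
            "Condition": {"StringLike": {"s3:prefix": ["home/${transfer:UserName}/*"]}},
        },
        {
            "Sid": "AllowUserObjectAccess",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            # Object access is likewise scoped to the user's prefix.
            "Resource": ["arn:aws:s3:::my-transfer-bucket/home/${transfer:UserName}/*"],
        },
    ],
}
print(json.dumps(scope_down_policy)[:40])
```

Because the variable is resolved per session, one policy document serves every user without per-user edits.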

A: Yes, you can use Microsoft AD to authenticate users for access over SFTP, FTPS, and FTP.

A: Yes, you can revoke file transfer access for individual AD Groups. Once revoked, members of the AD groups will not be able to transfer files using their AD credentials.

A: No, we only support setting access by AD Groups.

A: No, Amazon Transfer Family support for Microsoft AD can only be used for password-based authentication. To use a mix of authentication modes, use the Custom authorizer option.

A: The Custom authentication mode (“BYO” authentication) enables you to leverage an existing identity provider to manage your end users for all protocol types (SFTP, FTPS, and FTP), enabling easy and seamless migration of your users. Credentials can be stored in your corporate directory or an in-house identity datastore, and you can integrate it for end user authentication purposes. Examples of identity providers include Microsoft Azure AD, or any custom identity provider that you may be using as a part of an overall provisioning portal.

A: To integrate your identity provider with an Amazon Transfer Family server, you can use an Amazon Lambda function, or an Amazon API Gateway endpoint. Use Amazon API Gateway if you need a RESTful API to connect to an identity provider or want to leverage Amazon WAF for its geo-blocking and rate limiting capabilities. Visit the documentation to learn more about integrating common identity providers such as Amazon Cognito, Okta, and Amazon Secrets Manager.
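A minimal sketch of the Lambda option: the handler receives the connecting user’s details and returns the access configuration, with an empty response denying authentication. The event fields and response keys follow the custom identity provider contract; the credential check, role ARN, and bucket path are placeholder assumptions.

```python
# Minimal Lambda-backed custom identity provider sketch.
# The credential lookup here is a hard-coded placeholder; a real handler
# would query your directory or secrets store instead.
EXPECTED_PASSWORD = "example-password"

def lambda_handler(event, context):
    username = event.get("username", "")
    password = event.get("password", "")
    if password != EXPECTED_PASSWORD:
        return {}  # empty response denies authentication
    return {
        # IAM role granting the user's storage access (placeholder ARN).
        "Role": "arn:aws:iam::111122223333:role/transfer-user-access",
        # Lock the user into their own home folder (placeholder bucket).
        "HomeDirectory": f"/my-transfer-bucket/home/{username}",
    }

# Example invocation with a fake event:
resp = lambda_handler({"username": "alice", "password": EXPECTED_PASSWORD}, None)
print(resp["HomeDirectory"])
```

For SSH key authentication, the response would instead return the user’s public keys for the service to verify against.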

A: To get started, you can use the Amazon CloudFormation template in the usage guide and supply the necessary information for user authentication and access. Visit the website on custom identity providers to learn more.

A: No, anonymous users are currently not supported for any of the protocols.

A: Your user will need to provide a username and password (or SSH key) which will be used to authenticate, and access to your bucket is determined by the Amazon IAM Role supplied by the API Gateway and Lambda used to query your identity provider. You will also need to provide home directory information, and it is recommended that you lock them down to the designated home folder for an additional layer of security and usability. 

AS2 trading partners


A: Your trading partner is uniquely identified using their AS2 Identifier (AS2 ID). Similarly, your trading partners identify your messages using your AS2 ID.

A: You can use Amazon Transfer Family’s existing support for Amazon S3, networking features (VPC endpoints, Security Groups, and Elastic IPs), and access controls (Amazon IAM) for AS2, as you could for SFTP, FTPS, and FTP. User authentication, logical directories, custom banners, and Amazon EFS as a storage backend are not supported for AS2.

A: Non-repudiation, unique to AS2, validates that messages are successfully exchanged between two parties. Non-repudiation in AS2 is achieved using Message Disposition Notifications (MDNs). When an MDN is requested in a transaction, it ensures that the sender sent the message, the receiver successfully received it, and the message sent by the sender was the same message received by the receiver.

A: There are two aspects to message transmission: the sender’s side and the receiver’s. Once the sender has determined what message to send, the message is signed (using the sender’s private key), encrypted (using the receiver’s certificate), and the message integrity is calculated using a hash. This signed and encrypted message is transmitted over the wire to the receiver. Once the message is received, it is decrypted (using the receiver’s private key), validated (using the sender’s public key), and processed, and a signed Message Disposition Notification (MDN), if requested, is sent back to the sender to acknowledge successful delivery of the message. Refer to the documentation on how AS2 handles message transmission.

A: The possible combinations of options are driven from a sender’s standpoint. The sender can choose to only encrypt, only sign, or both encrypt and sign the data, and can choose to request a Message Disposition Notification (MDN). If the sender chooses to request an MDN, they can request a signed or unsigned MDN. The receiver is expected to honor these options.

A: To receive messages from your trading partner, create an AS2 agreement that is associated with your AS2-enabled Transfer Family server. To send AS2 messages to your trading partner, create an AS2 connector that will be used to send messages to your trading partner’s AS2 server. Once created, you will be able to send messages with your connector by using the StartFileTransfer API/CLI/CDK command.
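The receive/send pairing above can be sketched as the two boto3-style request shapes involved: an agreement bound to your AS2-enabled server for receiving, and a connector pointing at your partner’s endpoint for sending. All IDs, ARNs, and the URL are placeholders.

```python
# Receiving side: an agreement ties your server to the partner profiles
# (boto3-style names; IDs and ARNs are placeholders).
as2_agreement_params = {
    "ServerId": "s-1234567890abcdef0",
    "LocalProfileId": "p-local1234567890",
    "PartnerProfileId": "p-partner12345678",
    "BaseDirectory": "/my-bucket/as2-inbound",   # where received files land
    "AccessRole": "arn:aws:iam::111122223333:role/as2-access",
}

# Sending side: a connector targets the partner's AS2 server endpoint.
as2_connector_params = {
    "Url": "https://as2.partner.example.com/exchange",  # placeholder URL
    "As2Config": {
        "LocalProfileId": "p-local1234567890",
        "PartnerProfileId": "p-partner12345678",
        "MdnResponse": "SYNC",   # request a synchronous MDN
    },
    "AccessRole": "arn:aws:iam::111122223333:role/as2-access",
}
print(sorted(as2_connector_params))
```

With boto3, these would be passed to `create_agreement(**as2_agreement_params)` and `create_connector(**as2_connector_params)`, after which StartFileTransfer sends messages through the connector.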

A: Yes, the sender can choose to request an MDN, choose to request a signed or unsigned MDN, as well as select the signing algorithms that should be used to sign the MDN.

A: Currently we only support synchronous MDNs. Since a synchronous MDN is sent over the same connection channel as the message, it is much simpler and hence the recommended option. Asynchronous MDNs, which are preferable when the receiver needs more time to process the message before responding, are not currently supported.

A: Amazon Transfer Family extracts key AS2 information from payloads and MDNs exchanged and stores them as JSON files in your Amazon S3 bucket. You can query these JSON files using Amazon S3 Select or Amazon Athena, or index the files using Amazon OpenSearch or Amazon DocumentDB for analytics.

A: Yes, once you receive an MDN from your trading partner, the service validates the MDN using your certificate and stores the message in your Amazon S3 bucket. You can choose to archive the message by leveraging S3 Lifecycle policies.

A: Once your data is ready for delivery, you will need to invoke the service-provided API with the connector associated with your trading partner, notifying the service that the data is ready to be delivered and providing the recipient’s information. The service will then send the message to your trading partner’s endpoint. Refer to the documentation on using connectors to send messages to your trading partner over AS2.

A: Yes, when you set up your trading partner’s profile you can use different folders for each of them.

A: Yes, you can import your partner’s existing keys and certificates and manage renewals and rotations. Refer to the documentation on importing certificates.

A: Using the Amazon Transfer Family console, you can view a dashboard of certificates sorted by their expiry dates, as well as the expiry status of each certificate. You can also check the expiry dates and status of your certificates using the DescribeCertificate API/CLI/CDK command.

A: Yes, Amazon Transfer Family support for AS2 has received the official Drummond Group AS2 Cloud Certification Seal. The Amazon Transfer Family AS2 capabilities have been thoroughly vetted for security and message exchange compatibility with fourteen other third-party AS2 solutions. Visit the official announcement to learn more. 

A: No.

A: Yes. Static IP addresses are associated with your connectors by default and can be used to allow-list connections on your trading partner’s AS2 server. You can identify the static IPs associated with your connectors by navigating to the connector details page in the Amazon Transfer Family Console, or by using the DescribeConnector API/CLI/CDK command.

A: Yes. Your AS2 asynchronous MDN responses will use static IP addresses. You can identify the static IP addresses used for sending your asynchronous MDN responses by navigating to the server details page in the Amazon Transfer Family Management Console, or by using the DescribeServer API/CLI/CDK command.

Managed Workflows for Post Upload Processing


A: Amazon Transfer Family managed workflows make it easier for you to create, run, and monitor post-upload processing for file transfers over SFTP, FTPS, and FTP. Using this feature, you can save time with low-code automation that coordinates all the necessary tasks such as copying, tagging, and decrypting files. You can also add custom steps to scan for PII, viruses/malware, or other errors such as an incorrect file format or type, enabling you to quickly detect anomalies and meet your compliance requirements.

A: If you need to process files that you exchange with your business partners using Amazon Transfer Family, you need to set up infrastructure to run custom code, continuously monitor for runtime errors and anomalies, and make sure all changes and transformations to the data are audited and logged. Additionally, you need to account for error scenarios, both technical and business, while ensuring failsafe modes are properly triggered. If you have requirements for traceability, you need to track the lineage of the data as it passes through the different components of your system. Maintaining separate components of a file-processing workflow takes time away from focusing on differentiating work you could be doing for your business. Managed workflows remove the complexities of managing multiple tasks, and provide a standardized file-processing solution that can be replicated across your organization, with built-in exception handling and file traceability for each step to help you meet your business requirements.

A: Managed workflows allow you to easily preprocess data before it is consumed by your downstream applications by orchestrating file-processing tasks such as moving files to user-specific folders, encrypting files in-transit, malware scanning, and tagging. You can deploy workflows using Infrastructure as Code (IaC), enabling you to quickly replicate and standardize common post-upload file processing tasks spanning multiple business units in your organization. Managed workflows are only triggered on fully uploaded files, ensuring the data quality is maintained. Built-in exception handling allows you to quickly react to file-processing outcomes helping you maintain your business and technical SLAs, while offering you control on how to handle failures. Lastly, each workflow step produces detailed logs, which can be audited to trace the data lineage.

A: First, set up your workflow to contain a sequence of actions, such as copying and tagging, and optionally include your own custom step, based on your requirements. Next, map the workflow to a server, so that on file arrival, the actions specified in the workflow are evaluated and triggered in real time. To learn more, visit the documentation.
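The two steps above can be sketched with the boto3 Transfer Family client. This is a minimal sketch, not a complete deployment: the bucket name, Lambda ARN, server id, and role ARN are placeholders, and the live API calls are left commented out.

```python
# Sketch of a workflow definition for the CreateWorkflow API, then attaching
# it to a server. All names and ARNs below are placeholders.
workflow_steps = [
    {   # 1. Copy the uploaded file to a processing prefix
        "Type": "COPY",
        "CopyStepDetails": {
            "Name": "copy-to-processing",
            "DestinationFileLocation": {
                "S3FileLocation": {"Bucket": "my-processing-bucket",  # placeholder
                                   "Key": "incoming/"}
            },
            "OverwriteExisting": "FALSE",
            "SourceFileLocation": "${original.file}",  # the file as uploaded
        },
    },
    {   # 2. Tag the copy so downstream services can find it
        "Type": "TAG",
        "TagStepDetails": {
            "Name": "tag-for-indexing",
            "Tags": [{"Key": "status", "Value": "scanned"}],
            "SourceFileLocation": "${previous.file}",  # output of step 1
        },
    },
    {   # 3. Run your own validation or scanning logic
        "Type": "CUSTOM",
        "CustomStepDetails": {
            "Name": "custom-scan",
            "Target": "arn:aws:lambda:us-east-1:111122223333:function:scan",  # placeholder
            "TimeoutSeconds": 60,
        },
    },
]

# In a real deployment (requires credentials):
# import boto3
# transfer = boto3.client("transfer")
# workflow = transfer.create_workflow(
#     Description="post-upload processing", Steps=workflow_steps)
# transfer.update_server(
#     ServerId="s-1234567890abcdef0",
#     WorkflowDetails={"OnUpload": [{
#         "WorkflowId": workflow["WorkflowId"],
#         "ExecutionRole": "arn:aws:iam::111122223333:role/workflow-role"}]})
```

Once attached via `WorkflowDetails`, the steps run in order on each file arrival, with each step able to reference either the original file or the previous step's output.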

A: Yes. The same workflow can be assigned to multiple servers so it is easier for you to maintain and standardize configurations.

A: The following common actions are available once a transfer server has received a file from the client:

  • Decrypt a file using PGP keys.
  • Move or copy data from where it arrives to where it needs to be consumed.
  • Delete the original file after archiving or copying it to a new location.
  • Tag the file based on its contents so it can be indexed and searched by downstream services (S3 only).
  • Run any custom file-processing logic by supplying your own Lambda function as a custom step in your workflow. For example, check file-type compatibility, scan files for malware, detect Personally Identifiable Information (PII), or extract metadata before ingesting files into your data analytics pipeline.
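For the custom-step action above, a Lambda function reports its outcome back to the workflow. The sketch below assumes the invocation event shape documented for custom steps (a callback `token` plus execution details under `serviceMetadata`); verify the field names against the managed workflows documentation, and note the live callback is commented out.

```python
# Sketch of a Lambda handler for a managed-workflow custom step.
# Event field names are assumptions based on the documented custom-step
# invocation payload.

def lambda_handler(event, context):
    # The custom-step invocation includes a callback token and execution details.
    token = event["token"]
    details = event["serviceMetadata"]["executionDetails"]
    workflow_id = details["workflowId"]
    execution_id = details["executionId"]

    # ... your file-processing logic here (file-type check, PII scan, etc.) ...
    status = "SUCCESS"  # or "FAILURE" if validation fails

    # Report the outcome so the workflow continues or invokes the exception
    # handler (uncomment in a real deployment; requires credentials):
    # import boto3
    # boto3.client("transfer").send_workflow_step_state(
    #     WorkflowId=workflow_id, ExecutionId=execution_id,
    #     Token=token, Status=status)
    return {"workflowId": workflow_id, "executionId": execution_id,
            "status": status}
```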

A: Yes. You can use a pre-built, fully managed workflow step for PGP decryption of files. For more information, refer to the managed workflows documentation.

A: Yes. You can configure a workflow step to process either the originally uploaded file or the output file from the previous workflow step. This allows you to easily automate moving and renaming of your files after they are uploaded to Amazon S3. For example, to move a file to a different location for archival or retention, configure two steps in your workflow: the first step copies the file to a different Amazon S3 location, and the second step deletes the originally uploaded file. Read the documentation for more details on selecting a file location for workflow steps.

A: Yes. Using workflows, you can create multiple copies of the original file while preserving the original file for records retention.

A: Yes. You can utilize username as a variable in workflows copy steps, enabling you to dynamically route files to user-specific folders in Amazon S3. This removes the need to hardcode destination folder location when copying files and automates creation of user-specific folders in Amazon S3, allowing you to scale your file automation workflows. Read the documentation to learn more.
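As an illustration of the username variable described above, a copy step's destination key can embed `${transfer:UserName}`, which the service resolves per user at run time. The bucket name here is a placeholder.

```python
# Illustrative copy-step fragment routing each user's files into their own
# prefix via the ${transfer:UserName} variable (bucket name is a placeholder).
copy_step = {
    "Type": "COPY",
    "CopyStepDetails": {
        "Name": "route-to-user-folder",
        "DestinationFileLocation": {
            "S3FileLocation": {
                "Bucket": "my-landing-bucket",        # placeholder
                "Key": "home/${transfer:UserName}/",  # resolved per user at run time
            }
        },
        "SourceFileLocation": "${original.file}",
    },
}
```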

A: Refer to the Monitoring section for details on the supported features for logging your managed workflows activity.

A: Amazon Step Functions is a serverless orchestration service that lets you combine Amazon Lambda with other services to define the execution of business applications in simple steps. To perform file-processing steps using Amazon Step Functions, you use Amazon Lambda functions with Amazon S3’s event triggers to assemble your own workflows. Managed workflows provide a framework to easily orchestrate a linear sequence of processing, and differentiate from existing solutions in the following ways: 1) you can granularly define workflows to be executed on full file uploads as well as on partial file uploads, 2) workflows can be triggered automatically for Amazon S3 as well as Amazon EFS (which doesn’t offer post-upload events), and 3) customers get end-to-end visibility into their file transfers and processing in Amazon CloudWatch logs.

A: No, you cannot currently use managed workflows with AS2.

A: No. Processing can be invoked only on file arrival using the inbound endpoint.

A: No. Workflows currently process one file per execution.

A: Yes. You can define workflows to be triggered on both full and partial file uploads.

Amazon S3 Access

Open all

A: Amazon IAM is used to determine the level of access you want to provide your users. This includes the operations you want to enable on their client and which Amazon S3 buckets they have access to – whether it’s the entire bucket or portions of it.

A: The home directory you set up for your user determines their login directory. This would be the directory path that your user’s client will place them in as soon as they are successfully authenticated into the server. You will need to ensure that the IAM Role supplied provides user access to the home directory.

A: Yes. You can assign a single IAM Role for all your users and use logical directory mappings that specify which absolute Amazon S3 bucket paths you want to make visible to your end users and how these paths are presented to them by their clients.
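A logical directory mapping can be sketched as a CreateUser request payload: `Entry` is the path the client sees, `Target` is the absolute S3 path behind it. The server id, role ARN, and bucket paths below are placeholders, and the live call is commented out.

```python
# Sketch of a CreateUser request using logical directory mappings
# (all ids, ARNs, and bucket paths are placeholders).
user_request = {
    "ServerId": "s-1234567890abcdef0",                              # placeholder
    "UserName": "alice",
    "Role": "arn:aws:iam::111122223333:role/shared-transfer-role",  # one role for all users
    "HomeDirectoryType": "LOGICAL",
    "HomeDirectoryMappings": [
        # Client sees "/reports"; data actually lives under this bucket path.
        {"Entry": "/reports", "Target": "/my-bucket/teams/alice/reports"},
        # Mapping Entry "/" instead would "chroot" the user into a single tree:
        # {"Entry": "/", "Target": "/my-bucket/teams/${transfer:UserName}"},
    ],
}

# In a real deployment (requires credentials):
# import boto3
# boto3.client("transfer").create_user(**user_request,
#                                      SshPublicKeyBody="ssh-rsa ...")
```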

A: Files transferred over the supported protocols are stored as objects in your Amazon S3 bucket, and there is a one-to-one mapping between files and objects, enabling native access to these objects using Amazon Web Services services for processing or analytics.

A: After successful authentication, based on your users’ credentials, the service presents Amazon S3 objects and folders as files and directories to your users’ transfer applications.

A: Common commands to create, read, update, and delete files and directories are supported. Files are stored as individual objects in your Amazon S3 bucket. Directories are managed as folder objects in S3, using the same syntax as the S3 console.

Directory rename operations, append operations, changing ownerships, permissions and timestamps, and use of symbolic and hard links are currently not supported.

A: Yes, you can enable or disable file operations using the Amazon IAM role you have mapped to the username. Refer to the documentation on creating IAM policies and roles to control your end users’ access.

A: Yes. The bucket(s) your user can access are determined by the IAM Role and the optional scope-down policy you assign for that user. You can only use a single bucket as the home directory for the user.
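A scope-down (session) policy like the one mentioned above can use Transfer Family policy variables such as `${transfer:UserName}` to restrict each user to their own prefix in a shared bucket. This is a sketch; the bucket name is a placeholder.

```python
import json

# Sketch of a session ("scope-down") policy restricting each user to their own
# prefix in a shared bucket (bucket name is a placeholder).
session_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # List only the user's own prefix
            "Sid": "AllowListingOwnFolder",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-bucket",
            "Condition": {"StringLike": {"s3:prefix": [
                "${transfer:UserName}/*", "${transfer:UserName}"]}},
        },
        {   # Read/write objects only under that prefix
            "Sid": "AllowReadWriteOwnFolder",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::my-bucket/${transfer:UserName}/*",
        },
    ],
}
# Pass json.dumps(session_policy) as the Policy parameter when creating the user.
policy_json = json.dumps(session_policy)
```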

A: Yes. You can use the CLI and API to set up cross account access between your server and the buckets you want to use for storing files transferred over the supported protocols. The Console drop down will only list buckets in Account A. Additionally, you’d need to make sure the role being assigned to the user belongs to Account A.

A: Yes, you can use Amazon S3 events to automate post-upload processing using a broad array of Amazon Web Services services for querying, analysis, machine learning, and more. Visit the documentation to learn more about common examples of post-upload processing using Lambda with Amazon S3.
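A minimal post-upload handler wired to an S3 event notification can look like the sketch below; bucket/key extraction follows the standard S3 event record format, and the processing itself is a placeholder.

```python
# Minimal sketch of a post-upload Lambda triggered by an Amazon S3 event
# notification. The processing body is a placeholder.

def lambda_handler(event, context):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # ... e.g. start an analytics or ML job on s3://bucket/key ...
        processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}
```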

A: Yes. When your user uploads a file, the username and the server id of the server used for the upload are stored as part of the associated S3 object’s metadata. You can use this information for post-upload processing. Refer to the documentation on information you can use for post-upload processing.

Amazon EFS Access

Open all

A: Prior to setting up Amazon Transfer Family to work with an Amazon EFS file system, you will need to set up ownership of files and folders using the same POSIX identities (user id/group id) you plan to assign to your Amazon Transfer Family users. Additionally, if you are accessing file systems in a different account, resource policies must also be configured on your file system to enable cross account access.

A: Amazon EFS uses POSIX IDs, which consist of an operating system user id, group id, and secondary group ids, to control access to a file system. When setting up your user in the Amazon Transfer Family console/CLI/API, you will need to specify the username, the user’s POSIX configuration, and an IAM role to access the EFS file system. You will also need to specify an EFS file system id and optionally a directory within that file system as your user’s landing directory. When your Amazon Transfer Family user authenticates successfully using their file transfer client, they will be placed directly within the specified home directory, or the root of the specified EFS file system. Their operating system POSIX id will be applied to all requests made through their file transfer clients. As an EFS administrator, you will need to make sure the files and directories you want your Amazon Transfer Family users to access are owned by their corresponding POSIX ids in your EFS file system. Refer to the documentation to learn more about configuring ownership of sub-directories in EFS. Note that Transfer Family does not support access points if you are using Amazon EFS for storage.
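The user setup described above can be sketched as a CreateUser payload for an EFS-backed server: the `PosixProfile` carries the uid/gid applied to all file operations, and `HomeDirectory` combines the file system id and a path. All ids and ARNs are placeholders, and the live call is commented out.

```python
# Sketch of a CreateUser request for an EFS-backed Transfer Family server
# (ids and ARNs are placeholders).
efs_user_request = {
    "ServerId": "s-1234567890abcdef0",                         # placeholder
    "UserName": "bob",
    "Role": "arn:aws:iam::111122223333:role/efs-transfer-role",
    "HomeDirectory": "/fs-01234567/home/bob",                  # file system id + path
    "PosixProfile": {
        "Uid": 1001,             # must own the files/directories the user accesses
        "Gid": 1001,
        "SecondaryGids": [2000],
    },
}

# In a real deployment (requires credentials):
# import boto3
# boto3.client("transfer").create_user(**efs_user_request,
#                                      SshPublicKeyBody="ssh-rsa ...")
```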

A: Files transferred over the enabled protocols are directly stored in your Amazon EFS file systems and will be accessible via a standard file system interface or from Amazon Web Services services that can access Amazon EFS file systems.

A: SFTP/FTPS/FTP commands to create, read, update, and delete files, directories, and symbolic links are supported. Refer to the table below on supported commands for EFS as well as S3.

Command          Amazon S3        Amazon EFS
cd               Supported        Supported
ls/dir           Supported        Supported
pwd              Supported        Supported
put              Supported        Supported
get              Supported        Supported (including resolving symlinks)
rename           Supported [1]    Supported [1]
chown            Not supported    Supported [2]
chmod            Not supported    Supported [2]
chgrp            Not supported    Supported [3]
ln -s/symlink    Not supported    Supported
mkdir            Supported        Supported
rm/delete        Supported        Supported
rmdir            Supported [4]    Supported
chmtime          Not supported    Supported

[1] Only file renames are supported. Directory renames and renaming a file to overwrite an existing file are not supported.

[2] Only root (i.e., users with uid=0) can change ownership and permissions of files and directories.

[3] Supported for root (uid=0), or for the file’s owner, who can only change a file’s group to one of their secondary groups.

[4] Supported for non-empty folders only.

A: The IAM policy you supply for your Amazon Transfer Family user determines if they have read-only, read-write, or root access to your file system. Additionally, as a file system administrator, you can set up ownership of, and grant access to, files and directories within your file system using their user id and group id. This applies to users whether they are stored within the service (service managed) or within your identity management system (“BYO Auth”).

A: Yes, when you set up your user, you can specify different file systems and directories for each of your users. On successful authentication, EFS will enforce this directory for every file system request made using the enabled protocols.

A: Yes, using Amazon Transfer Family’s logical directory mappings, you can restrict your end users’ view of directories in your file systems by mapping absolute paths to end user visible path names. This also includes being able to “chroot” your user to their designated home directory.

A: Yes. When you set up an Amazon Transfer Family user, you can grant access to multiple file systems by specifying one or more file systems in the IAM policy you supply as part of your user setup.

A: You can use clients and applications built for Microsoft Windows, Linux, macOS, or any operating system that supports SFTP/FTPS/FTP to upload and access files stored in your EFS file systems. Simply configure the server and user with the appropriate permissions to the EFS file system to access the file system across all operating systems.

A: For new files, the POSIX user id associated with the user uploading the file will be set as the owner of the file in your EFS file system. Additionally, you can use Amazon CloudWatch to track your users’ activity for file creation, update, delete, and read operations. Visit the documentation to learn more on how to enable Amazon CloudWatch logging.

A: Yes. You can use the CLI and API to set up cross account access between your Amazon Transfer Family resources and EFS file systems. The Amazon Transfer Family console will only list file systems in the same account. Additionally, you’d need to make sure the IAM role assigned to the user to access the file system belongs to Account A.

A: If you set up an Amazon Transfer Family server to access a cross account EFS file system not enabled for cross account access, your SFTP/FTP/FTPS users will be denied access to the file system. If you have CloudWatch logging enabled on your server, cross account access errors will be logged to your CloudWatch Logs.

A: No, you can use Amazon Transfer Family to access EFS file systems in the same Amazon Web Services Region only.

A: Yes. You can use Amazon Transfer to write files into EFS and configure EFS Lifecycle Management to migrate files that have not been accessed for a set period of time to the Infrequent Access (IA) storage class.
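The lifecycle configuration described above can be sketched with the EFS `PutLifecycleConfiguration` API; the file system id is a placeholder and the live call is commented out.

```python
# Sketch of enabling EFS lifecycle management so files untouched for 30 days
# transition to the Infrequent Access (IA) storage class
# (file system id is a placeholder).
lifecycle_request = {
    "FileSystemId": "fs-01234567",                        # placeholder
    "LifecyclePolicies": [{"TransitionToIA": "AFTER_30_DAYS"}],
}

# In a real deployment (requires credentials):
# import boto3
# boto3.client("efs").put_lifecycle_configuration(**lifecycle_request)
```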

A: Yes, Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of NFS/SFTP/FTPS/FTP clients.

A: Yes. Accessing your EFS file systems using your Amazon Transfer Family servers will consume your EFS burst credits regardless of the throughput mode. Refer to the documentation on available performance and throughput modes and view some useful performance tips.

Security and compliance

Open all

A: Either SFTP or FTPS should be used for secure transfers over public networks. Because these protocols are built on SSH and TLS cryptographic algorithms, data and commands are transferred through a secure, encrypted channel.

A: You can choose to encrypt files stored in your bucket using Amazon S3 Server-Side Encryption (SSE-S3) or Amazon KMS (SSE-KMS). For files stored in EFS, you can choose an Amazon managed or customer managed CMK for encryption of files at rest. Refer to the documentation for more details on options for at-rest encryption of file data and metadata using Amazon EFS.

A: Amazon Transfer Family is PCI-DSS compliant. The service is also SOC 1, 2, and 3 compliant. Learn more about services in scope by compliance programs .

A: You can use Amazon CloudWatch Metrics to monitor and track data uploaded and downloaded by your users over the chosen protocols. Visit the documentation to learn more on using Amazon CloudWatch metrics.

A: You can use Amazon Transfer Family managed workflows to automatically decrypt files uploaded to your Amazon Transfer Family resource using PGP keys. For more information, refer to the managed workflows documentation. If you are looking for PGP encryption support, reach out to us via Amazon Web Services Support or through your Amazon Web Services account team.

Billing

Open all

A: You are billed on an hourly basis for each of the protocols enabled, from the time you create and configure your server endpoint until the time you delete it. You are also billed based on the amount of data uploaded and downloaded over SFTP, FTPS, or FTP, the number of messages exchanged over AS2, and the amount of data processed using the Decrypt workflow step. When using SFTP connectors, you are billed for the amount of data transferred and the number of connector calls you make. Refer to the pricing page for more details.

A: No, you are billed on an hourly basis for each of the protocols you have enabled and for the amount of data transferred through each of the protocols, regardless of whether the same endpoint is enabled for multiple protocols or you are using different endpoints for each protocol.

A: Yes. Stopping the server, whether by using the console or by running the “stop-server” CLI command or the “StopServer” API command, does not impact billing. You are billed on an hourly basis from the time you create your server endpoint and configure access to it over one or more protocols until the time you delete it.

A: You are billed for the Decrypt workflow step based on the amount of data you decrypt using PGP keys. There is no other charge for using managed workflows. Depending on your workflow configuration, you are also billed for use of Amazon S3, Amazon EFS, Amazon Secrets Manager, and Amazon Lambda.


Monitoring

Open all

A: Amazon Transfer Family supports Amazon CloudWatch and Amazon CloudTrail as log destinations. Activity related to user access, file transfers, and workflow executions is delivered to Amazon CloudWatch, while activity related to API control plane operations is delivered to Amazon CloudTrail. 

A: You can monitor your end users’ activity using Amazon CloudWatch and Amazon CloudTrail logs. You can also access Amazon CloudWatch graphs for metrics such as number of files and bytes transferred in the Amazon Transfer Family Management Console, giving you a single pane of glass to monitor file transfers using a centralized dashboard. Use Amazon CloudTrail logs to access a record of all API operations invoked by your server to service your end users’ data requests. Visit the documentation to learn more.

A: You can use Amazon CloudWatch to track your users’ activity for file creation, update, delete, and read operations. Visit the documentation to learn more on how to enable Amazon CloudWatch logging.

A: Yes, metrics for data uploaded and downloaded using your server are published to Amazon CloudWatch within the Amazon Transfer Family namespace. Visit the documentation to view the available metrics for tracking and monitoring.
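The per-server metrics mentioned above (such as BytesIn and BytesOut in the Amazon Transfer Family namespace) can be queried with the CloudWatch `GetMetricStatistics` API. This sketch builds the request; the server id is a placeholder and the live call is commented out.

```python
from datetime import datetime, timedelta

# Sketch of a GetMetricStatistics request for bytes uploaded to a server over
# the last day, in hourly data points (server id is a placeholder).
metric_request = {
    "Namespace": "AWS/Transfer",
    "MetricName": "BytesIn",     # bytes uploaded to the server
    "Dimensions": [{"Name": "ServerId", "Value": "s-1234567890abcdef0"}],
    "StartTime": datetime.utcnow() - timedelta(days=1),
    "EndTime": datetime.utcnow(),
    "Period": 3600,              # one data point per hour
    "Statistics": ["Sum"],
}

# In a real deployment (requires credentials):
# import boto3
# response = boto3.client("cloudwatch").get_metric_statistics(**metric_request)
```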

A: Amazon Transfer Family delivers logs in a structured JSON format across all resources – including servers, connectors, and workflows – and all protocols – including SFTP, FTPS, FTP, and AS2. The structured JSON log format allows you to easily parse and query your logs using Amazon CloudWatch Log Insights, which automatically discovers JSON formatted fields. You’ll also benefit from improved monitoring with support for Amazon CloudWatch Contributor Insights, which requires a structured log format to track top users, total number of unique users, and their ongoing usage.
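As an illustration of querying the structured JSON logs, a CloudWatch Logs Insights query might rank users by transfer count. The field names below (user, operation, path) are assumptions for illustration; check the discovered fields in your own log entries, and note the log group name is a placeholder.

```python
# Illustrative Logs Insights query over structured JSON transfer logs.
# Field names are assumptions -- verify against your own log entries.
insights_query = """
fields @timestamp, user, operation, path
| filter operation = "OPEN"
| stats count(*) as transfers by user
| sort transfers desc
| limit 10
"""

# In a real deployment (requires credentials):
# import boto3
# boto3.client("logs").start_query(
#     logGroupName="/aws/transfer/s-1234567890abcdef0",  # placeholder
#     startTime=0, endTime=0, queryString=insights_query)
```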

A: Amazon CloudWatch log groups are groups of log streams that share the same retention, monitoring, and access control settings. When creating or updating a server, you can specify a new or existing log group. The ability to share log groups across Amazon Transfer Family resources allows you to combine log streams from multiple servers into a single log group, making it easier to manage your monitoring configurations and log retention settings. Once a log group is created, it can also be used to create custom metrics and visualizations that can be added to Amazon CloudWatch dashboards.

A: Workflow executions can be monitored using Amazon CloudWatch metrics such as the total number of workflow executions, successful executions, and failed executions. Using the Amazon Web Services Management Console, you can also search and view the real-time status of in-progress workflow executions. Use Amazon CloudWatch logs to get detailed logging of workflow executions.

A: You can use the custom processing step to trigger notifications to Amazon EventBridge or Amazon Simple Notification Service (SNS) and get notified when file processing is complete. Additionally, you can use Amazon CloudWatch logs from Amazon Lambda executions to get notifications.

A: Yes. If a file validation check fails against preconfigured workflow validation steps, you can use the exception handler to invoke your monitoring system or team members via Amazon Simple Notification Service (SNS) topics.
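An exception handler like the one described above can be sketched as the `OnExceptionSteps` portion of a workflow definition: on any step failure, a custom Lambda (placeholder ARN) can publish to an SNS topic to alert your team. The SNS publish shown in comments is a hedged example, not a prescribed implementation.

```python
# Sketch of exception-handler steps for a workflow definition
# (Lambda and SNS ARNs are placeholders).
on_exception_steps = [
    {
        "Type": "CUSTOM",
        "CustomStepDetails": {
            "Name": "notify-on-failure",
            "Target": "arn:aws:lambda:us-east-1:111122223333:function:notify",  # placeholder
            "TimeoutSeconds": 30,
        },
    }
]

# The notify Lambda body might simply publish to an SNS topic:
# import boto3
# boto3.client("sns").publish(
#     TopicArn="arn:aws:sns:us-east-1:111122223333:workflow-failures",  # placeholder
#     Message="Workflow validation step failed")
#
# Pass on_exception_steps as the OnExceptionSteps parameter of CreateWorkflow.
```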