Amazon DataSync FAQs

General

A: Amazon DataSync is an online data movement service that simplifies and accelerates data migrations to Amazon Web Services, as well as data transfers to and from on-premises storage, other cloud providers, and Amazon Web Services Storage services.

For online data transfers, Amazon DataSync simplifies, automates, and accelerates copying large amounts of data between on-premises storage, edge locations, or other clouds, and Amazon Web Services Storage services, as well as between Amazon Web Services Storage services. DataSync can copy data between Network File System (NFS) shares, Server Message Block (SMB) shares, Hadoop Distributed File Systems (HDFS), self-managed object storage, Azure Files, Azure Blob Storage including Azure Data Lake Storage Gen2, Amazon Simple Storage Service (Amazon S3), Amazon Elastic File System (Amazon EFS) file systems, and Amazon FSx file systems.

A: Amazon DataSync enables you to move your data securely and quickly. You can use DataSync to copy large datasets with millions of files without having to build custom solutions with open-source tools or license and manage expensive commercial network acceleration software. You can use DataSync to migrate active data to Amazon Web Services, archive data to free up on-premises storage capacity, replicate data to Amazon Web Services for business continuity, or transfer data to the cloud for analysis and processing.

A: Amazon DataSync reduces the complexity and cost of online data transfer, making it simple to transfer datasets to and from on-premises storage, edge locations, other cloud providers, and Amazon Storage services. DataSync connects to existing storage systems and data sources with standard storage protocols (NFS, SMB), as an HDFS client, using the Amazon S3 API, or using other cloud storage APIs. It uses a purpose-built network protocol and scale-out architecture to accelerate data transfer between storage systems and Amazon Web Services. DataSync handles moving files and objects, scheduling data transfers, monitoring the progress of transfers, encryption, verification of data transfers, and notifying you of any issues.

Data movement

A: DataSync supports the following storage location types: Network File System (NFS) shares, Server Message Block (SMB) shares, Hadoop Distributed File Systems (HDFS), self-managed object storage, Azure Files, Azure Blob Storage including Azure Data Lake Storage Gen2, Amazon Simple Storage Service (Amazon S3), Amazon Elastic File System (Amazon EFS) file systems, and Amazon FSx file systems.

A: You can use Amazon DataSync to migrate data located on premises, at the edge, or in other clouds to Amazon S3, Amazon EFS, and Amazon FSx. Configure DataSync to make an initial copy of your entire dataset, and schedule subsequent incremental transfers of changing data until the final cutover from on-premises to Amazon Storage services. DataSync includes encryption and integrity validation to help make sure your data arrives securely, intact, and ready to use. To minimize impact on workloads that rely on your network connection, you can schedule your migration to run during off-hours, or limit the amount of network bandwidth that DataSync uses by configuring the built-in bandwidth throttle. DataSync preserves metadata between storage systems that have similar metadata structures, enabling a smooth transition of end users and applications to using your target Amazon Web Services Storage service.

A: You can use Amazon DataSync to move cold data from on-premises storage systems directly to durable and secure long-term storage, such as Amazon S3 Glacier Flexible Retrieval (formerly S3 Glacier) or Amazon S3 Glacier Deep Archive. Use DataSync's exclude filters to avoid copying temporary files and folders, or use include filters or manifests to copy only a subset of files from your source location. You can select the most cost-effective storage service for your needs: transfer data to any S3 storage class, or use DataSync with EFS Lifecycle Management to store data in the Amazon EFS Infrequent Access storage class (EFS IA). Use the built-in task scheduling functionality to regularly archive data that should be retained for compliance or auditing purposes, such as logs, raw footage, or electronic medical records.
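
As an illustration, here is a minimal boto3 sketch of this archiving pattern. The bucket name, role ARN, source location ARN, and exclude pattern are placeholder assumptions, not values from this FAQ:

```python
import boto3

datasync = boto3.client("datasync")

# Destination location that stores objects directly in S3 Glacier Deep Archive
# (bucket and role ARNs are placeholders).
archive_location = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-archive-bucket",
    S3StorageClass="DEEP_ARCHIVE",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

# Task that skips temporary folders on a previously created source location.
task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-0123456789abcdef0",
    DestinationLocationArn=archive_location["LocationArn"],
    Name="archive-to-deep-archive",
    Excludes=[{"FilterType": "SIMPLE_PATTERN", "Value": "*/tmp|*/temp"}],
)
```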

A: With Amazon DataSync, you can periodically replicate files into any Amazon S3 storage class, or send the data to Amazon EFS or Amazon FSx to maintain a standby file system. Use the built-in task scheduling functionality to ensure that changes to your dataset are regularly copied to your destination storage.

A: You can use Amazon DataSync for ongoing transfers from on-premises systems into or out of Amazon Storage services for processing. DataSync can help speed up your critical hybrid cloud storage workflows in industries that need to move active files into Amazon Storage quickly. This includes machine learning in life sciences, video production in media and entertainment, big data analytics in financial services, and seismic research in oil and gas. DataSync provides timely delivery to ensure dependent processes are not delayed. You can use include and exclude filters or manifests to specify which files or objects should be transferred each time your task runs.

A: Yes. Using Amazon DataSync, you can copy data from Azure Files using the SMB protocol or from Azure Blob Storage including Azure Data Lake Storage Gen2. When using Enhanced mode tasks, no agent is required to connect to your cloud storage. Otherwise, if using Basic mode, deploy the DataSync agent in your cloud environment or on Amazon EC2, create your source and destination locations, and then start your task to begin copying data.

A: You can use DataSync to transfer files or objects between Amazon S3, Amazon EFS, and Amazon FSx within the same Amazon Web Services account, either across Amazon Web Services Regions or within the same Region. This does not require deploying a DataSync agent, and can be configured end to end using the Amazon DataSync console, Command Line Interface (CLI), or Software Development Kit (SDK).

Usage

A: You can transfer data using Amazon DataSync with a few clicks in the Amazon Web Services Management Console or through the Amazon Command Line Interface (CLI). To get started, follow these three steps:

1. Deploy an agent - To transfer data between on-premises and Amazon Storage services, deploy an agent and associate it with your Amazon Web Services account via the Management Console or API. The agent will be used to access your NFS server, SMB file share, Hadoop cluster, or self-managed or cloud object storage to read data from it or write data to it. Deploying an agent is not required to transfer data between Amazon Storage services within the same Amazon Web Services account.

2. Create a data transfer task - Create a task by specifying the location of your data source and destination, and any options you want to use to configure the transfer, such as scheduling the task and enabling task reports.

3. Start the transfer - Start the task, monitor data movement in the console or with Amazon CloudWatch , and audit transfer tasks using task reports.
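
Putting these steps together, here is a minimal sketch using the boto3 SDK; the hostname, path, ARNs, and names are placeholder assumptions:

```python
import boto3

datasync = boto3.client("datasync")

# 1. Source: an on-premises NFS share, reached through a previously
#    activated DataSync agent (hostname and ARNs are placeholders).
source = datasync.create_location_nfs(
    ServerHostname="nfs.example.internal",
    Subdirectory="/export/data",
    OnPremConfig={
        "AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"]
    },
)

# 2. Destination: an S3 bucket, accessed through an IAM role.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-destination-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

# 3. Create the transfer task, start it, and check on its progress.
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="nfs-to-s3-migration",
)
execution = datasync.start_task_execution(TaskArn=task["TaskArn"])
status = datasync.describe_task_execution(TaskExecutionArn=execution["TaskExecutionArn"])
print(status["Status"])
```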

A: You deploy an Amazon DataSync agent to your on-premises hypervisor or in Amazon EC2. To copy data to or from an on-premises file server, you download the agent virtual machine image from the Amazon Web Services Console and deploy it to your on-premises VMware ESXi, Linux Kernel-based Virtual Machine (KVM), or Microsoft Hyper-V hypervisor. When a DataSync agent is used, it must be deployed so that it can access your file server using the NFS or SMB protocol, access the NameNodes and DataNodes in your Hadoop cluster, or access your self-managed object storage using the Amazon S3 API. Deploying an agent is not required to transfer data between Amazon Web Services Storage services within the same Amazon Web Services account.

A: Amazon DataSync copies data when you initiate a task via the Amazon Web Services Management Console or Amazon Command Line Interface (CLI). Each time a task runs, it scans the source and destination for changes, and copies any data and metadata differences from the source to the destination. You can configure which characteristics of the source are used to determine what changed, define include and exclude filters or manifests to transfer specific file and object data, and control whether files or objects in the destination should be overwritten when changed in the source or deleted when not found in the source.
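
The overwrite and delete behaviors are controlled through a task's Options; a short hedged sketch (the task ARN is a placeholder):

```python
import boto3

datasync = boto3.client("datasync")

# Keep destination files that no longer exist on the source, and never
# overwrite destination files, even when the source copy has changed.
datasync.update_task(
    TaskArn="arn:aws:datasync:us-east-1:111122223333:task/task-0123456789abcdef0",
    Options={
        "OverwriteMode": "NEVER",           # don't replace files in the destination
        "PreserveDeletedFiles": "PRESERVE", # don't delete files missing from the source
    },
)
```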

A: Basic mode tasks are subject to quotas on the number of files and objects in a dataset. Basic mode sequentially prepares, transfers, and verifies files and objects in a dataset, making it slower than Enhanced mode for most workloads. With Enhanced mode, you can transfer datasets with virtually unlimited numbers of objects at higher levels of performance than Basic mode. Enhanced mode tasks optimize and streamline the data transfer process by listing, preparing, transferring, and verifying data in parallel. You also get enhanced metrics and reporting capabilities, making it easier to track and manage large data transfers. Enhanced mode is currently available for transfers between Amazon S3 buckets and between storage services in other clouds and Amazon S3. Basic mode supports all DataSync location types available today. See the DataSync documentation for a detailed list of differences between task modes. See the DataSync pricing page for differences in pricing between task modes.

A: As Amazon DataSync transfers and stores data, it performs integrity checks to ensure the data written to the destination matches the data read from the source. Additionally, an optional verification check can be performed to compare source and destination at the end of the transfer. DataSync will calculate and compare full-file checksums of the data stored in the source and in the destination. You can check either the entire dataset or just the files or objects that DataSync transferred.
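The verification behavior is likewise a task option; a minimal sketch under the same placeholder-ARN assumption:

```python
import boto3

datasync = boto3.client("datasync")

# Verify only the files DataSync actually transferred, rather than
# comparing full-file checksums across the entire dataset.
datasync.update_task(
    TaskArn="arn:aws:datasync:us-east-1:111122223333:task/task-0123456789abcdef0",
    Options={"VerifyMode": "ONLY_FILES_TRANSFERRED"},
)
```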

A: You can use task reports to audit your data transfer processes by verifying the transfer operations across all of your task executions. Using task reports, you can get a summary report along with detailed reports for all files transferred, skipped, verified, and deleted, for each task execution. Task reports give you the total number of files and bytes transferred, and include file attributes such as size, path, timestamps, file checksums, and object version IDs where applicable. You can also leverage Amazon Glue and Amazon Athena to automatically catalog and query task reports to gain critical insights into your data transfer processes.
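As a sketch, task reports can be enabled on a task through its report configuration; the bucket, prefix, and role ARN below are placeholder assumptions:

```python
import boto3

datasync = boto3.client("datasync")

# Write a summary report plus detailed reports of transferred, skipped,
# verified, and deleted files to an S3 prefix after each task execution.
datasync.update_task(
    TaskArn="arn:aws:datasync:us-east-1:111122223333:task/task-0123456789abcdef0",
    TaskReportConfig={
        "Destination": {
            "S3": {
                "S3BucketArn": "arn:aws:s3:::example-report-bucket",
                "Subdirectory": "datasync-reports/",
                "BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncReportRole",
            }
        },
        "OutputType": "STANDARD",
        "ReportLevel": "SUCCESSES_AND_ERRORS",
    },
)
```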

You can use the Amazon Web Services Management Console or CLI to monitor the status and progress of data being transferred. Using Amazon CloudWatch Metrics, you can see the number of files and the amount of data that have been copied. You can also enable logging of individual files to CloudWatch Logs to identify what was transferred at a given time, as well as the results of the content integrity verification performed by DataSync.

These solutions together simplify auditing, monitoring, reporting, and troubleshooting, and enable you to provide timely updates to stakeholders. 

A: Yes. You can specify an exclude filter, an include filter, or both to limit which files, folders, or objects are transferred each time a task runs. Alternatively, you can use manifests to specify a subset of files or objects that should be transferred from your source location.

Include filters specify the file and folder paths or object keys that should be included when the task runs and limit the scope of what is scanned by DataSync on the source and destination. Exclude filters specify the file and folder paths or object keys that should be excluded from being copied. When creating or updating a task, you can configure both exclude and include filters. When starting a task, you can override and update the filters configured on the task. Read this Amazon Web Services Storage blog to learn more about using common filters with DataSync.

A manifest is a CSV-formatted file that lists the file paths or object keys that should be included when the task runs and limits the scope of what is scanned by DataSync on the source and destination. When creating or updating a task, you can provide a manifest file with millions of source files or objects, and DataSync will only compare and transfer the files listed in the manifest. When starting a task, you can override and update the manifest file. When copying data from Amazon S3, you can also specify an optional S3 version ID of each object to transfer. Read this blog for more details.

Note that filters and manifests cannot be used together.
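
To make the two approaches concrete, here is a hedged boto3 sketch showing filters on one task run and a manifest on a separate run (all ARNs and paths are placeholders; the two cannot be combined in a single execution):

```python
import boto3

datasync = boto3.client("datasync")
task_arn = "arn:aws:datasync:us-east-1:111122223333:task/task-0123456789abcdef0"

# Filters: copy only the /projects tree, but skip temp folders under it.
datasync.start_task_execution(
    TaskArn=task_arn,
    Includes=[{"FilterType": "SIMPLE_PATTERN", "Value": "/projects*"}],
    Excludes=[{"FilterType": "SIMPLE_PATTERN", "Value": "*/temp"}],
)

# Manifest (instead of filters): transfer only the files or objects listed
# in a CSV file stored in S3.
datasync.start_task_execution(
    TaskArn=task_arn,
    ManifestConfig={
        "Action": "TRANSFER",
        "Format": "CSV",
        "Source": {
            "S3": {
                "ManifestObjectPath": "manifests/batch-01.csv",
                "S3BucketArn": "arn:aws:s3:::example-manifest-bucket",
                "BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncManifestRole",
            }
        },
    },
)
```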

A: Whereas a manifest is an explicit list of files or objects to be transferred from the source location, an include filter is a string specifying patterns of files and folders to be transferred from the source. Only files and folders that match the patterns in the filter are copied. A pattern can be an entire file or folder path, or a prefix ending with a wildcard (*) character, indicating that all files or objects that match the prefix should be copied. Include filters are ideal for customers that only want to copy a small set of files or objects, or a few specific folders. Customers with well-known datasets, such as those moved as part of an automated workflow, can use manifests to avoid scanning their entire file or object storage systems to determine changes. Using a manifest file, customers can specify millions of source files or objects to be transferred, and DataSync will only compare the files listed in the manifest. Customers can also use manifests to copy specific versions of objects from their Amazon S3 bucket.

A: Yes. You can schedule your tasks using the Amazon DataSync Console or Amazon Web Services Command Line Interface (CLI), without needing to write and run scripts to manage repeated transfers. Task scheduling automatically runs tasks on the schedule you configure, with hourly, daily, or weekly options provided directly in the Console. This enables you to ensure that changes to your dataset are automatically detected and copied to your destination storage.
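
A task schedule can also be set through the API; a minimal sketch, assuming a placeholder task ARN:

```python
import boto3

datasync = boto3.client("datasync")

# Run the task every day at 02:00 UTC using a cron-style schedule expression.
datasync.update_task(
    TaskArn="arn:aws:datasync:us-east-1:111122223333:task/task-0123456789abcdef0",
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},
)
```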

A: Yes. When transferring files, Amazon DataSync creates the same directory structure on the destination as on the source location.

A: If a task is interrupted, for instance, if the network connection goes down or the Amazon DataSync agent is restarted, the next run of the task will transfer missing files, and the data will be complete and consistent at the end of this run. Each time a task is started it performs an incremental copy, transferring only the changes from the source to the destination.

A: You can use Amazon DataSync with your Direct Connect link to access public service endpoints or private VPC endpoints. When using VPC endpoints, data transferred between the DataSync agent and Amazon Web Services does not traverse the public internet or need public IP addresses, increasing the security of data as it is copied over the network. 

A: To use VPC endpoints with Amazon DataSync, you create an Amazon PrivateLink interface VPC endpoint for the DataSync service in your chosen VPC, and then choose this endpoint elastic network interface (ENI) when creating your DataSync agent. Your agent will connect to this ENI to activate, and subsequently all data transferred by the agent will remain within your configured VPC. You can use the Amazon DataSync Console, Amazon Command Line Interface (CLI), or Amazon SDK to configure VPC endpoints. To learn more, see Using Amazon DataSync in a Virtual Private Cloud.
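
For reference, a hedged sketch of creating the interface endpoint with boto3; the VPC, subnet, and security group IDs are placeholders, and the service name follows the usual com.amazonaws.&lt;region&gt;.datasync PrivateLink convention:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface VPC endpoint for the DataSync service; the resulting ENI is
# what you select when creating the DataSync agent.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.datasync",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```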

Moving to and from Amazon Storage

A: Amazon DataSync supports moving data to, from, or between Amazon Simple Storage Service (Amazon S3), Amazon Elastic File System (Amazon EFS), and Amazon FSx.

Amazon S3

A: Yes. When configuring an S3 bucket for use with Amazon DataSync, you can select the S3 storage class that DataSync uses to store objects. DataSync supports storing data directly into S3 Standard, S3 Intelligent-Tiering, S3 Standard-Infrequent Access (S3 Standard-IA), S3 One Zone-Infrequent Access (S3 One Zone-IA), Amazon S3 Glacier Flexible Retrieval, and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive). More information on Amazon S3 storage classes can be found in the Amazon Simple Storage Service Developer Guide.

Objects smaller than the minimum charge capacity per object will be stored in S3 Standard. For example, folder objects, which are zero bytes in size and hold only metadata, will be stored in S3 Standard. Read about considerations when working with Amazon S3 storage classes in our documentation, and for more information on minimum charge capacities see Amazon S3 Pricing.

A: Yes. When using S3 as the source location for an Amazon DataSync task, the service will retrieve all objects from the bucket which need to be copied to the destination. Retrieving objects from S3 Standard-IA and S3 One Zone-IA storage will incur a retrieval fee based on the size of the objects. Read about considerations when working with Amazon S3 storage classes in our documentation.

A: When using S3 as the source location for an Amazon DataSync task, the service will attempt to retrieve all objects from the bucket which need to be copied to the destination. Retrieving objects which are archived in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class results in an error. Any errors retrieving archived objects will be logged by DataSync and will result in a failed task completion status. Read about considerations when working with Amazon S3 storage classes in our documentation.

A: Amazon DataSync assumes an IAM role that you provide. The policy you attach to the role determines which actions the role can perform. DataSync can auto-generate this role on your behalf, or you can manually configure a role.

A: When files or folders are copied to Amazon S3, there is a one-to-one relationship between a file or folder and an object. File and folder timestamps and POSIX permissions, including user ID, group ID, and permissions, are stored in S3 user metadata. For NFS shares, file metadata stored in S3 user metadata is fully interoperable with File Gateway, providing on-premises file-based access to data stored in Amazon S3 by Amazon DataSync.

When DataSync copies objects that contain this user metadata back to an NFS server, the file metadata is restored. Symbolic links and hard links are also restored when copying back from S3 to the NFS server.

When copying from an SMB file share, default POSIX permissions are stored in S3 user metadata. When copying back to an SMB file share, ownership is set based on the user that was configured in DataSync to access that file share, and default permissions are assigned.

When copying from HDFS, file and folder timestamps, user and group ownership, and POSIX permissions are stored in S3 user metadata. When copying from Amazon S3 back to HDFS, file and folder metadata are restored.

Learn more about how DataSync stores files and metadata in our documentation.

A: When transferring objects between self-managed object storage or Azure Blob Storage and Amazon S3, DataSync copies objects together with object metadata and tags.

A: When transferring objects between Amazon S3 buckets, DataSync copies objects together with object metadata and tags. DataSync does not copy other object information such as object ACLs or prior object versions.

A: Some S3 storage classes have behaviors that can affect your cost, such as data retrieval, minimum storage capacities, and minimum storage durations. DataSync automates management of data to address these factors, and provides settings to minimize data retrieval.

To avoid the minimum capacity charge per object, Amazon DataSync automatically stores small objects in S3 Standard. To minimize data retrieval fees, you can configure DataSync to verify only files that were transferred by a given task. To avoid minimum storage duration charges, DataSync has controls for overwriting and deleting objects. Read about considerations when working with Amazon S3 storage classes in our documentation.

Amazon EFS

A: Amazon DataSync accesses your Amazon EFS file system using the NFS protocol. The DataSync service mounts your file system from within your VPC using Elastic Network Interfaces (ENIs) managed by the DataSync service. DataSync fully manages the creation, use, and deletion of these ENIs on your behalf. You can choose to mount your EFS file system using a mount target or an EFS Access Point.

A: Yes. You can use Amazon DataSync to copy files into Amazon EFS and configure EFS Lifecycle Management to migrate files that have not been accessed for a set period of time to the Infrequent Access (IA) storage class.

A: You can use both IAM identity policies and resource policies to control client access to Amazon EFS resources in a way that is scalable and optimized for cloud environments. When you create a DataSync location for your EFS file system, you can specify an IAM role that DataSync will assume when accessing EFS. You can then use EFS file system policies to configure access for the IAM role. Because DataSync mounts EFS file systems as the root user, your IAM policy must allow the following action: elasticfilesystem:ClientRootAccess.
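
As an illustration, a hedged boto3 sketch of creating an EFS location that assumes such a role (all ARNs are placeholders; TLS in transit is required when a file system access role is used):

```python
import boto3

datasync = boto3.client("datasync")

# EFS location mounted through a mount target in your VPC, with access
# scoped by an IAM role and the EFS file system policy.
efs_location = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0123456789abcdef0",
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0",
        "SecurityGroupArns": [
            "arn:aws:ec2:us-east-1:111122223333:security-group/sg-0123456789abcdef0"
        ],
    },
    FileSystemAccessRoleArn="arn:aws:iam::111122223333:role/DataSyncEfsAccessRole",
    InTransitEncryption="TLS1_2",
)
```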

A: Yes. In addition to the built-in replication provided by Amazon EFS, you can also use Amazon DataSync to schedule periodic replication of your Amazon EFS file system to a second Amazon EFS file system within the same Amazon Web Services account. This capability is available for both same-region and cross-region deployments, and does not require using a DataSync agent.

A: Amazon DataSync copies file and folder timestamps and POSIX permissions, including user ID, group ID, and permissions. You can learn more and see the complete list of copied metadata in our documentation.

A: Amazon DataSync copies file and folder timestamps and POSIX permissions and applies default values for user ID and group ID. You can learn more and see the complete list of copied metadata in our documentation.

Amazon FSx for Windows File Server

A: Amazon DataSync accesses your Amazon FSx for Windows File Server file system using the SMB protocol, authenticating with the username and password you configure in the Amazon Web Services Console or CLI. The DataSync service mounts your file system from within your VPC using Elastic Network Interfaces (ENIs) managed by the DataSync service. DataSync fully manages the creation, use, and deletion of these ENIs on your behalf.
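
A minimal boto3 sketch of registering such a file system as a location, assuming placeholder ARNs and credentials:

```python
import boto3

datasync = boto3.client("datasync")

# FSx for Windows File Server location, authenticated over SMB with a
# user that has permission to read and write the data being transferred.
fsx_location = datasync.create_location_fsx_windows(
    FsxFilesystemArn="arn:aws:fsx:us-east-1:111122223333:file-system/fs-0123456789abcdef0",
    SecurityGroupArns=[
        "arn:aws:ec2:us-east-1:111122223333:security-group/sg-0123456789abcdef0"
    ],
    User="DataSyncUser",
    Domain="corp.example.internal",
    Password="placeholder-password",  # store real credentials in a secrets manager
)
```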

A: Amazon DataSync copies Windows metadata, including file timestamps, file owner, standard file attributes, NTFS discretionary access lists (DACLs), and NTFS system access control lists (SACLs). You can learn more and see the complete list of copied metadata in our documentation.

A: Yes. You can use Amazon DataSync to schedule periodic replication of your Amazon FSx for Windows File Server file system to a second file system within the same Amazon Web Services account. This capability is available for both same-region and cross-region deployments, and does not require using a DataSync agent.

Amazon FSx for Lustre

A: When you create a DataSync task to copy to or from your FSx for Lustre file system, the DataSync service will create Elastic Network Interfaces (ENIs) in the same VPC and subnet where your file system is located. DataSync uses these ENIs to access your FSx for Lustre file system using the Lustre protocol as the root user. When you create a DataSync location resource for your FSx for Lustre file system, you can specify up to five security groups to apply to the ENIs and configure outbound access from the DataSync service. The security groups must be configured to allow outbound traffic on the network ports required by FSx for Lustre. The security groups on your FSx for Lustre file system should be configured to allow inbound access from the security groups you assigned to the DataSync location resource for your FSx for Lustre file system.
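
Accordingly, a hedged sketch of creating the location with its security groups (ARNs are placeholders):

```python
import boto3

datasync = boto3.client("datasync")

# FSx for Lustre location; the security groups are applied to the ENIs
# DataSync creates in the file system's VPC and subnet.
lustre_location = datasync.create_location_fsx_lustre(
    FsxFilesystemArn="arn:aws:fsx:us-east-1:111122223333:file-system/fs-0123456789abcdef0",
    SecurityGroupArns=[
        "arn:aws:ec2:us-east-1:111122223333:security-group/sg-0123456789abcdef0"
    ],
)
```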

A: Amazon DataSync copies file and folder timestamps and POSIX permissions, including user ID, group ID, and permissions. You can learn more and see the complete list of copied metadata in our documentation.

A: Yes. You can use Amazon DataSync to copy from your FSx for Lustre file system to a second file system within the same Amazon Web Services account. This capability is available for both same-region and cross-region deployments, and does not require using a DataSync agent.

A: Yes. You can use Amazon DataSync to schedule periodic replication of your Amazon FSx for Lustre file system to a second file system within the same Amazon Web Services account. This capability is available for both same-region and cross-region deployments, and does not require using a DataSync agent.

A: No. Files are written using the file layout and striping configuration of the destination file system.

Performance

A: The rate at which Amazon DataSync can copy a given dataset is a function of the amount of data, the I/O bandwidth achievable from the source and destination storage, the available network bandwidth, and network conditions. For data transfer between on premises and Amazon Web Services Storage services, a single DataSync task is capable of fully utilizing a 10 Gbps network link.

A: Yes. You can control the amount of network bandwidth that Amazon DataSync will use by configuring the built-in bandwidth throttle. You can increase or decrease this limit while your data transfer task is running. This enables you to minimize impact on other users or applications who rely on the same network connection.
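
The limit can be changed while an execution is in flight; a short sketch, assuming a placeholder execution ARN:

```python
import boto3

datasync = boto3.client("datasync")

# Throttle a running task execution to 10 MiB/s; pass -1 to remove the limit.
datasync.update_task_execution(
    TaskExecutionArn="arn:aws:datasync:us-east-1:111122223333:task/task-0123456789abcdef0/execution/exec-0123456789abcdef0",
    Options={"BytesPerSecond": 10 * 1024 * 1024},
)
```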

A: Amazon DataSync generates Amazon CloudWatch Metrics to provide granular visibility into the transfer process. Using these metrics, you can see the number of files and amount of data which has been copied, as well as file discovery and verification progress. You can see CloudWatch Graphs with these metrics directly in the DataSync Console.
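
These metrics live under the AWS/DataSync CloudWatch namespace and can be queried programmatically; a hedged sketch (the metric name, TaskId dimension, and task ID are assumptions for illustration):

```python
import datetime

import boto3

cloudwatch = boto3.client("cloudwatch")

# Sum of bytes transferred by one task over the last hour, in 5-minute buckets.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/DataSync",
    MetricName="BytesTransferred",
    Dimensions=[{"Name": "TaskId", "Value": "task-0123456789abcdef0"}],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    EndTime=datetime.datetime.utcnow(),
    Period=300,
    Statistics=["Sum"],
)
for point in stats["Datapoints"]:
    print(point["Timestamp"], point["Sum"])
```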

A: Depending on the capacity of your on-premises file store, and the quantity and size of files to be transferred, Amazon DataSync may affect the response time of other clients when accessing the same source data store, because the agent reads or writes data from that storage system. Configuring a bandwidth limit for a task will reduce this impact by limiting the I/O against your storage system.

Security and compliance

A: Yes. All data transferred between the source and destination is encrypted via Transport Layer Security (TLS), which replaced Secure Sockets Layer (SSL). Data is never persisted in Amazon DataSync itself. The service supports using default encryption for S3 buckets, Amazon EFS file system encryption of data at rest, and Amazon FSx encryption at rest and in transit.

A: Amazon DataSync uses an agent that you deploy into your IT environment or into Amazon EC2 to access your files through the NFS or SMB protocol. This agent connects to DataSync service endpoints within Amazon Web Services, and is securely managed from the Amazon Web Services Management Console or CLI.

A: Amazon DataSync uses an agent that you deploy into your IT environment or into Amazon EC2 to access your Hadoop cluster. The DataSync agent acts as an HDFS client and communicates with the NameNodes and DataNodes in your clusters. When you start a task, DataSync queries the primary NameNode to determine the locations of files and folders on the cluster. DataSync then communicates with the DataNodes in the cluster to copy files and folders to, or from, HDFS.

A: Amazon DataSync uses the Amazon S3 API to access your S3-compatible object storage systems. To access your on-premises object storage, DataSync uses an agent that you deploy into your data center. When using Basic mode tasks for cross-cloud transfers, DataSync uses an agent you deploy in your public cloud environment, or into Amazon EC2 to access your storage in other clouds. This agent connects to DataSync service endpoints within Amazon Web Services, and is securely managed from the Amazon Web Services Management Console or CLI. When using Enhanced mode tasks, no agent is required to connect to storage in other clouds.

A: When using Basic mode tasks, Amazon DataSync uses an agent that you deploy into your Azure environment or into Amazon EC2 to access objects in your Azure Blob Storage containers. The agent connects to DataSync service endpoints within Amazon Web Services, and is securely managed from the Amazon Web Services Management Console or CLI. When using Enhanced mode tasks, no agent is required to connect to your Azure Blob Storage. DataSync authenticates to your Azure container using a SAS token that you specify when creating a DataSync Azure Blob location.
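
For Basic mode, a hedged boto3 sketch of creating the Azure Blob location (the container URL, SAS token, and agent ARN are placeholders):

```python
import boto3

datasync = boto3.client("datasync")

# Azure Blob Storage location, authenticated with a SAS token and reached
# through an agent deployed in the Azure environment or on Amazon EC2.
azure_location = datasync.create_location_azure_blob(
    ContainerUrl="https://exampleaccount.blob.core.windows.net/example-container",
    AuthenticationType="SAS",
    SasConfiguration={"Token": "sv=placeholder-sas-token"},
    AgentArns=["arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"],
)
```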

A: No. When copying data to or from your premises, there is no need to set up a VPN or tunnel or to allow inbound connections. Your Amazon DataSync agent can be configured to route through a firewall using standard network ports. You can also deploy DataSync within your Amazon Virtual Private Cloud (Amazon VPC) using VPC endpoints. When using VPC endpoints, data transferred between the DataSync agent and Amazon Web Services does not need to traverse the public internet or need public IP addresses.

A: Updates to the agent VM, including both the underlying operating system and the Amazon DataSync software packages, are automatically applied by DataSync once the agent is activated. Updates are applied non-disruptively when the agent is idle and not executing a data transfer task.

A: Yes. Amazon DataSync supports IPv6 for storage resources through dual-stack (IPv4 and IPv6) functionality. You can use DataSync to connect to storage resources located on premises, using IPv4 or IPv6 addresses.

When to choose Amazon DataSync

A: Amazon DataSync fully automates and accelerates moving large active datasets to Amazon Storage services. It is natively integrated with Amazon S3, Amazon EFS, Amazon FSx, Amazon CloudWatch, and Amazon CloudTrail, providing seamless and secure access to your storage services, as well as detailed monitoring of the transfer.

DataSync uses a purpose-built network protocol and scale-out architecture to transfer data. For data transfer between on premises and Amazon Web Services Storage services, a single DataSync task is capable of fully utilizing a 10 Gbps network link.

DataSync fully automates the data transfer. It comes with retry and network resiliency mechanisms, network optimizations, built-in task scheduling, auditing via task reports, monitoring via the DataSync API and Console, and CloudWatch metrics, events, and logs that provide granular visibility into the transfer process. DataSync performs data integrity verification both during the transfer and at the end of the transfer.

DataSync provides end-to-end security, and integrates directly with Amazon Web Services Storage services. All data transferred between the source and destination is encrypted via TLS, and access to your Amazon Web Services Storage is enabled via built-in Amazon Web Services security mechanisms such as IAM roles. DataSync can be used with VPC endpoints to ensure that data transferred between an organization and Amazon Web Services does not traverse the public internet, further increasing the security of data as it is copied over the network.

A: Amazon Web Services provides multiple tools to copy objects between your buckets.

Use Amazon DataSync for ongoing data distribution, data pipelines, and data lake ingest, as well as for consolidating or splitting data between multiple buckets.

Use S3 Replication for continuous replication of data to a specific destination bucket.

Use S3 Batch Operations for large-scale batch operations on S3 objects, such as to copy objects, set object tags or access control lists (ACLs), initiate object restores from Amazon S3 Glacier Flexible Retrieval (formerly S3 Glacier), invoke an Amazon Lambda function to perform custom actions using your objects, manage S3 Object Lock legal hold, or manage S3 Object Lock retention dates.

A: Amazon DataSync is ideal for online data transfers. You can use DataSync to migrate active data to Amazon Web Services Storage services, transfer data to the cloud for analysis and processing, archive data to free up on-premises storage capacity, or replicate data to Amazon Storage services for business continuity.

Amazon Snowball is ideal for offline data transfers, for customers who are bandwidth constrained, or transferring data from remote, disconnected, or austere environments. 

A: If you currently use SFTP to exchange data with third parties, Amazon Transfer Family provides fully managed SFTP, FTPS, and FTP transfers directly into and out of Amazon S3, while reducing your operational burden.

If you want an accelerated and automated data transfer between NFS servers, SMB file shares, Hadoop clusters, self-managed or cloud object storage, Amazon S3, Amazon EFS, and Amazon FSx, you can use Amazon DataSync. DataSync is ideal for customers who need online migrations for active data sets, timely transfers for continuously generated data, or replication for business continuity.