Amazon Glue

Simple, flexible, and cost-effective ETL

Amazon Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. You can create and run an ETL job with a few clicks in the Amazon Web Services Management Console. You simply point Amazon Glue to your data stored on Amazon Web Services, and Amazon Glue discovers your data and stores the associated metadata (e.g., table definition and schema) in the Amazon Glue Data Catalog. Once cataloged, your data is immediately searchable, queryable, and available for ETL.

Benefits

Less hassle

Amazon Glue is integrated across a wide range of Amazon Web Services services, meaning less hassle for you when onboarding. Amazon Glue natively supports data stored in Amazon Aurora and all other Amazon RDS engines, Amazon Redshift, and Amazon S3, as well as common database engines and databases in your Virtual Private Cloud (Amazon VPC) running on Amazon EC2.

Cost effective

Amazon Glue is serverless. There is no infrastructure to provision or manage. Amazon Glue handles provisioning, configuration, and scaling of the resources required to run your ETL jobs on a fully managed, scale-out Apache Spark environment. You pay only for the resources used while your jobs are running.

More power

Amazon Glue automates much of the effort in building, maintaining, and running ETL jobs. Amazon Glue crawls your data sources, identifies data formats, and suggests schemas and transformations. Amazon Glue automatically generates the code to execute your data transformations and loading processes.

How it works

Amazon Glue works in three steps: build your Data Catalog, generate and edit your transformations, and schedule and run your jobs.

Step 1: Build your Data Catalog

First, use the Amazon Web Services Management Console to register your data sources. Amazon Glue will crawl your data sources and construct your Data Catalog using pre-built classifiers for many popular source formats and data types, including JSON, CSV, Parquet, and more.
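If you prefer to script this step rather than use the console, the same crawler can be created through the Amazon Glue API. The sketch below uses boto3 in Python; the crawler name, IAM role, database, and S3 path are hypothetical placeholders, not values from this page.

```python
import boto3

glue = boto3.client("glue")

# Register an S3 path as a crawler target. The role ARN, database name,
# and bucket path below are hypothetical placeholders.
glue.create_crawler(
    Name="sales-data-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="sales_db",
    Targets={"S3Targets": [{"Path": "s3://example-bucket/sales/"}]},
)

# Run the crawler; it infers the schema and writes table definitions
# into the Amazon Glue Data Catalog.
glue.start_crawler(Name="sales-data-crawler")
```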

Step 2: Generate and Edit Transformations

Next, select a data source and data target. Amazon Glue will generate ETL code in Scala or Python to extract data from the source, transform the data to match the target schema, and load it into the target. You can edit, debug, and test this code via the Console, in your favorite IDE, or in any notebook.
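To give a rough idea of what the generated Python (PySpark) code looks like, here is a minimal extract-transform-load sketch. The database, table, column mappings, and output path are placeholders; a script generated for your own sources and targets will differ in detail.

```python
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext.getOrCreate())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Extract: read the table the crawler registered in the Data Catalog.
source = glueContext.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="sales"  # placeholder names
)

# Transform: rename and cast columns to match the target schema.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
    ],
)

# Load: write the result to the target location as Parquet.
glueContext.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/sales/"},
    format="parquet",
)

job.commit()
```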

Step 3: Schedule and Run Your Jobs

Amazon Glue makes it easy to schedule recurring ETL jobs, chain multiple jobs together, or invoke jobs on demand from other services like Amazon Lambda. Amazon Glue manages the dependencies between your jobs, automatically scales underlying resources, and retries jobs if they fail.
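As an example of scheduling and on-demand invocation through the API, the boto3 sketch below creates a nightly time-based trigger for a job and then starts the same job directly; the trigger and job names are placeholders.

```python
import boto3

glue = boto3.client("glue")

# A time-based trigger that runs a placeholder job every night at 02:00 UTC.
glue.create_trigger(
    Name="nightly-sales-etl",
    Type="SCHEDULED",
    Schedule="cron(0 2 * * ? *)",
    Actions=[{"JobName": "sales-etl-job"}],
    StartOnCreation=True,
)

# The same job can also be invoked on demand.
glue.start_job_run(JobName="sales-etl-job")
```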

Visit the Amazon Glue features page, or refer to our product documentation to learn more.

Use cases

Queries Against an Amazon S3 Data Lake

Data lakes are an increasingly popular way to store and analyze both structured and unstructured data. If you want to build your own custom Amazon S3 data lake, Amazon Glue can make all your data immediately available for analytics without moving the data.


Analyze Log Data in Your Data Warehouse

Prepare your clickstream or process log data for analytics by cleaning, normalizing, and enriching your data sets using Amazon Glue. Amazon Glue generates the schema for your semi-structured data, creates ETL code to transform, flatten, and enrich your data, and loads your data warehouse on a recurring basis.
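A minimal sketch of the loading step, assuming the crawled log table and a pre-defined Glue connection to Amazon Redshift already exist (the database, table, connection name, and temporary S3 path are placeholders, and the cleaning and enrichment transforms are omitted):

```python
from pyspark.context import SparkContext
from awsglue.context import GlueContext

glueContext = GlueContext(SparkContext.getOrCreate())

# Read the crawled clickstream logs from the Data Catalog.
logs = glueContext.create_dynamic_frame.from_catalog(
    database="weblogs_db", table_name="clickstream"  # placeholder names
)

# Load the records into the warehouse through a pre-defined Glue
# connection to Amazon Redshift.
glueContext.write_dynamic_frame.from_jdbc_conf(
    frame=logs,
    catalog_connection="redshift-connection",  # placeholder connection
    connection_options={"dbtable": "clickstream", "database": "analytics"},
    redshift_tmp_dir="s3://example-bucket/temp/",
)
```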


Unified View of Your Data Across Multiple Data Stores

You can use the Amazon Glue Data Catalog to quickly discover and search across multiple Amazon Web Services data sets without moving the data. Once the data is cataloged, it is immediately available for search and query using Amazon EMR and Amazon Redshift Spectrum.


Event-driven ETL Pipelines

Amazon Glue can run your ETL jobs based on an event, such as the arrival of a new data set. For example, you can use an Amazon Lambda function to trigger your ETL jobs to run as soon as new data becomes available in Amazon S3. You can also register this new data set in the Amazon Glue Data Catalog as part of your ETL jobs.
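A minimal sketch of such a trigger, written as an Amazon Lambda handler wired to an S3 object-created notification; the job name and argument key are placeholders for illustration:

```python
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    # For each newly created S3 object in the notification, start the
    # ETL job and pass the object location as a job argument.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="sales-etl-job",  # placeholder job name
            Arguments={"--input_path": f"s3://{bucket}/{key}"},
        )
```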
