Amazon Data Firehose features

Amazon Data Firehose is the easiest way to capture, transform, and load streaming data into data stores and analytics tools. Firehose is a fully managed service that makes it easy to capture, transform, and load massive volumes of streaming data from hundreds of thousands of sources into Amazon S3, Amazon Redshift, Amazon OpenSearch Service (successor to Amazon Elasticsearch Service), Amazon Kinesis Data Analytics, generic HTTP endpoints, and service providers such as Datadog, New Relic, MongoDB, and Splunk, enabling near-real-time analytics and insights.

Key features

Easy to use

You can launch Amazon Data Firehose and create a Firehose stream that loads data into Amazon S3, Amazon Redshift, Amazon OpenSearch Service, HTTP endpoints, Datadog, New Relic, MongoDB, or Splunk with just a few clicks in the Amazon Web Services Management Console. You can send data to the Firehose stream by writing to the Firehose API, or configure the stream to consume from a Kinesis data stream. Firehose then continuously loads the data into the destination you configured.
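
As a minimal sketch, here is how a record can be written to an existing Firehose stream with the Python SDK (boto3); the stream name and payload are hypothetical:

    import boto3

    firehose = boto3.client("firehose")

    # Write a single record to the stream; "my-firehose-stream" is a
    # hypothetical name for an existing Firehose stream.
    firehose.put_record(
        DeliveryStreamName="my-firehose-stream",
        Record={"Data": b'{"event": "click", "user_id": "1234"}\n'},
    )

For higher throughput, the PutRecordBatch operation can send up to 500 records in a single call.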

Pay-as-you-go pricing

You only pay for the volume of data you transmit through Firehose. There are no minimum fees or upfront commitments.

Secure

Amazon Data Firehose always encrypts data in transit using HTTPS, supports encryption at rest, and gives you the option to have your data automatically encrypted after it is delivered to the destination.
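
As a sketch, encryption at rest can be enabled on an existing DirectPut stream through the boto3 API (the stream name is hypothetical):

    import boto3

    firehose = boto3.client("firehose")

    # Enable server-side encryption at rest with an AWS-owned key. A
    # customer-managed KMS key can be used instead by setting KeyType to
    # "CUSTOMER_MANAGED_CMK" and supplying a KeyARN.
    firehose.start_delivery_stream_encryption(
        DeliveryStreamName="my-firehose-stream",  # hypothetical stream name
        DeliveryStreamEncryptionConfigurationInput={"KeyType": "AWS_OWNED_CMK"},
    )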

Scales elastically

Based on your ingestion pattern, the Firehose service can proactively increase your limits when it observes sustained throttling on your Firehose stream.

Inline JSON to Parquet or ORC format conversion

Columnar data formats such as Apache Parquet and Apache ORC are optimized for cost-effective storage and analytics with services such as Amazon Athena, Amazon Redshift Spectrum, Amazon EMR, and other Hadoop-based tools. Amazon Data Firehose can convert incoming data from JSON to Parquet or ORC format before storing it in Amazon S3, so you can save storage and analytics costs.
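
A minimal sketch of enabling this conversion when creating a stream with boto3, assuming the target schema is registered as an AWS Glue table (all names and ARNs below are hypothetical):

    import boto3

    firehose = boto3.client("firehose")

    firehose.create_delivery_stream(
        DeliveryStreamName="json-to-parquet",  # hypothetical
        DeliveryStreamType="DirectPut",
        ExtendedS3DestinationConfiguration={
            "RoleARN": "arn:aws-cn:iam::111122223333:role/firehose-role",
            "BucketARN": "arn:aws-cn:s3:::my-analytics-bucket",
            "DataFormatConversionConfiguration": {
                "Enabled": True,
                # Firehose reads the target schema from an AWS Glue table.
                "SchemaConfiguration": {
                    "RoleARN": "arn:aws-cn:iam::111122223333:role/firehose-role",
                    "DatabaseName": "analytics_db",
                    "TableName": "events",
                },
                # Parse incoming JSON and write Parquet (use OrcSerDe for ORC).
                "InputFormatConfiguration": {"Deserializer": {"OpenXJsonSerDe": {}}},
                "OutputFormatConfiguration": {"Serializer": {"ParquetSerDe": {}}},
            },
        },
    )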

Dynamically partition data during delivery to Amazon S3

Firehose can continuously partition streaming data by keys within the data, such as "customer_id" or "transaction_id", and deliver the data grouped by these keys into corresponding Amazon S3 prefixes. This makes it easier to run high-performance, cost-efficient analytics on streaming data in Amazon S3 using Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum.
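
As an illustration, dynamic partitioning is configured on the S3 destination. The sketch below extracts a "customer_id" key with the built-in JQ engine and routes records into per-customer prefixes; the key names and prefixes are hypothetical, and note that dynamic partitioning requires an error output prefix:

    # Fragment of an ExtendedS3DestinationConfiguration, as in the
    # create_delivery_stream call sketched earlier.
    {
        "DynamicPartitioningConfiguration": {"Enabled": True},
        # Records are delivered under a prefix built from the extracted key.
        "Prefix": "customers/customer_id=!{partitionKeyFromQuery:customer_id}/",
        "ErrorOutputPrefix": "errors/",
        "ProcessingConfiguration": {
            "Enabled": True,
            "Processors": [{
                "Type": "MetadataExtraction",
                "Parameters": [
                    {"ParameterName": "MetadataExtractionQuery",
                     "ParameterValue": "{customer_id: .customer_id}"},
                    {"ParameterName": "JsonParsingEngine",
                     "ParameterValue": "JQ-1.6"},
                ],
            }],
        },
    }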

Load data in near real time

You can specify a batch size or batch interval to control how quickly data is uploaded to destinations. For example, you can set the batch interval to 60 seconds if you want to receive new data within 60 seconds of sending it to your Firehose stream. Additionally, you can specify whether data should be compressed. The service supports common compression algorithms, including GZIP and Snappy. Batching and compressing data before upload lets you control how quickly new data arrives at the destinations.
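
In the API, batching and compression are controlled by the destination's buffering hints and compression format. A sketch of the relevant fragment of an S3 destination configuration:

    # Fragment of an ExtendedS3DestinationConfiguration.
    {
        # Flush whichever comes first: 5 MB accumulated or 60 seconds elapsed.
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 60},
        # Compress objects before delivery to S3.
        "CompressionFormat": "GZIP",
    }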

Custom data transformations

You can configure Amazon Data Firehose to prepare your streaming data before it is loaded to data stores. Simply select an Amazon Lambda function from the Firehose stream configuration tab in the Amazon Web Services Management Console. Amazon Data Firehose automatically applies that function to every input data record and loads the transformed data to the destination. Amazon Data Firehose provides pre-built Lambda blueprints for converting common data sources, such as Apache logs and system logs, to JSON and CSV formats. You can use these pre-built blueprints as-is, customize them further, or write your own functions. You can also configure Amazon Data Firehose to automatically retry failed jobs and back up the raw streaming data.
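
The transformation Lambda follows a simple record contract: each input record carries a recordId and base64-encoded data, and the function returns each record with a result of Ok, Dropped, or ProcessingFailed. A minimal sketch in Python; the dropped "debug" field is a hypothetical example transformation:

    import base64
    import json

    def lambda_handler(event, context):
        output = []
        for record in event["records"]:
            payload = json.loads(base64.b64decode(record["data"]))
            # Hypothetical transformation: strip a verbose "debug" field.
            payload.pop("debug", None)
            output.append({
                "recordId": record["recordId"],  # must echo the input recordId
                "result": "Ok",                  # or "Dropped" / "ProcessingFailed"
                "data": base64.b64encode(
                    (json.dumps(payload) + "\n").encode()
                ).decode(),
            })
        return {"records": output}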

Supports multiple destinations

Amazon Data Firehose currently supports Amazon S3, Amazon Redshift, Amazon OpenSearch Service, HTTP endpoints, Datadog, New Relic, MongoDB, and Splunk as destinations. You specify the destination Amazon S3 bucket, Amazon Redshift cluster, Amazon OpenSearch Service domain, generic HTTP endpoint, or service provider where the data should be loaded.
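
For instance, a sketch of creating a stream that delivers to a generic HTTP endpoint with boto3 (the URL, ARNs, and names are hypothetical; an S3 configuration is required for backing up records that cannot be delivered):

    import boto3

    firehose = boto3.client("firehose")

    firehose.create_delivery_stream(
        DeliveryStreamName="to-http-endpoint",
        DeliveryStreamType="DirectPut",
        HttpEndpointDestinationConfiguration={
            "EndpointConfiguration": {
                "Url": "https://example.com/ingest",  # hypothetical endpoint
                "Name": "my-endpoint",
            },
            "RoleARN": "arn:aws-cn:iam::111122223333:role/firehose-role",
            # Records that cannot be delivered are backed up to S3.
            "S3BackupMode": "FailedDataOnly",
            "S3Configuration": {
                "RoleARN": "arn:aws-cn:iam::111122223333:role/firehose-role",
                "BucketARN": "arn:aws-cn:s3:::my-backup-bucket",
            },
        },
    )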
