Enable Logpush to Amazon S3
Cloudflare Logpush supports pushing logs directly to Amazon S3 via the Cloudflare dashboard or via the API. Customers who use AWS GovCloud locations should use our S3-compatible endpoint rather than the Amazon S3 endpoint.
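For the API route, job creation is a single POST to the Logpush jobs endpoint. The sketch below is illustrative rather than authoritative: the zone ID, token, bucket, and region are hypothetical placeholders, and the `ownership_challenge` value comes from the ownership handshake described later on this page. Check Cloudflare's API reference for the current request schema.

```python
# Minimal sketch: create a zone-scoped Logpush job that writes to S3.
# ZONE_ID, API_TOKEN, bucket, and region are hypothetical placeholders.
import requests

ZONE_ID = "0123456789abcdef0123456789abcdef"
API_TOKEN = "<api-token-with-logpush-edit-permission>"

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/logpush/jobs",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "name": "s3-http-requests",
        "dataset": "http_requests",
        # {DATE} organizes logs into daily subfolders; region is required
        "destination_conf": "s3://burritobot/logs/{DATE}?region=us-east-1",
        # token taken from the ownership challenge file (see below)
        "ownership_challenge": "<token-from-challenge-file>",
        "enabled": True,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["result"]["id"])  # the new job's ID
```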
Manage via the Cloudflare dashboard
Log in to the Cloudflare dashboard.
Select the Enterprise account or domain (also known as a zone) you want to use with Logpush. Depending on your choice, you have access to account-scoped or zone-scoped datasets, respectively.
Go to Analytics & Logs > Logpush.
Select Create a Logpush job.
In Select a destination, choose Amazon S3.
Enter or select the following destination information:
- Bucket - the S3 bucket name
- Path - the location within the bucket where logs will be written
- Organize logs into daily subfolders (recommended)
- Bucket region
- Whether your policy requires AWS SSE-S3 AES256 server-side encryption
- For Grant Cloudflare access to upload files to your bucket, make sure your bucket has the required policy in place (if you did not add it already):
- Copy the JSON policy, then go to your bucket in the Amazon S3 console, paste the policy under Permissions > Bucket Policy, and select Save.
When you are done entering the destination details, select Continue.
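If you later script the same job via the API, each of the destination fields above maps onto a piece of the job's `destination_conf` string. A sketch of that mapping with hypothetical values (the `sse=AES256` parameter is my reading of how the SSE-S3 option is expressed; verify against the API reference):

```python
# Hypothetical mapping from dashboard fields to a destination_conf string.
bucket = "burritobot"     # Bucket
path = "logs/{DATE}"      # Path; {DATE} adds the daily subfolders
region = "us-east-1"      # Bucket region
sse = "&sse=AES256"       # only if your policy requires SSE-S3 AES256

destination_conf = f"s3://{bucket}/{path}?region={region}{sse}"
# -> s3://burritobot/logs/{DATE}?region=us-east-1&sse=AES256
```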
To prove ownership, Cloudflare will send a file containing a token to your designated destination. To find the token, select Open in the Overview tab of the ownership challenge file, then copy it. Paste the token into the Ownership Token field in the Cloudflare dashboard to verify your access to the bucket, and select Continue.
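The same handshake exists in the API: one call asks Cloudflare to drop the challenge file into the bucket, and the token from that file is echoed back when the job is created. A rough sketch, with the endpoint paths as I understand the Logpush API and placeholder credentials:

```python
# Sketch of the ownership handshake via the API; values are placeholders.
import requests

ZONE_ID = "0123456789abcdef0123456789abcdef"
HEADERS = {"Authorization": "Bearer <api-token-with-logpush-edit-permission>"}
BASE = f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/logpush"
DEST = "s3://burritobot/logs/{DATE}?region=us-east-1"

# 1. Ask Cloudflare to write the challenge file into the bucket.
challenge = requests.post(f"{BASE}/ownership", headers=HEADERS,
                          json={"destination_conf": DEST}, timeout=30)
challenge.raise_for_status()
print(challenge.json())  # response names the file written to the bucket

# 2. Download that file from S3, read the token out of it, and (optionally)
#    pre-validate it before creating the job.
token = "<token copied from the challenge file>"
valid = requests.post(f"{BASE}/ownership/validate", headers=HEADERS,
                      json={"destination_conf": DEST,
                            "ownership_challenge": token}, timeout=30)
valid.raise_for_status()
```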
Select the dataset to push to the storage service.
In the next step, you need to configure your Logpush job:
- Enter the Job name.
- Under If logs match, you can select the events to include in or exclude from your logs. Refer to Filters for more information. Not all datasets have this option available.
- In Send the following fields, you can choose to either push all log fields to your storage destination or select which fields you want to push.
In Advanced Options, you can:
- Choose the format of timestamp fields in your logs (`RFC3339` (default), `Unix`, or `UnixNano`).
- Select a sampling rate for your logs or push a randomly-sampled percentage of logs.
- Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.
Select Submit once you are done configuring your Logpush job.
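For reference, the dashboard choices in this step correspond roughly to fields on the API's job object. The sketch below uses my reading of those field names (`filter` as a JSON-encoded string, and `output_options` carrying `field_names`, `timestamp_format`, `sample_rate`, and the `CVE-2021-44228` redaction flag); treat them as assumptions to verify against the API reference.

```python
# Hypothetical job-configuration fragment mirroring the dashboard options.
job_config = {
    "name": "s3-http-requests",  # Job name
    # "If logs match": keep only events satisfying this filter
    "filter": '{"where":{"key":"ClientRequestHost","operator":"eq","value":"example.com"}}',
    "output_options": {
        # "Send the following fields": push a subset instead of every field
        "field_names": ["ClientIP", "EdgeStartTimestamp", "RayID"],
        "timestamp_format": "rfc3339",  # or "unix" / "unixnano"
        "sample_rate": 0.1,             # push ~10% of matching logs
        "CVE-2021-44228": True,         # redact ${ as x{
    },
}
```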
Create and get access to an S3 bucket
Cloudflare uses Amazon Identity and Access Management (IAM) to gain access to your S3 bucket. The Cloudflare IAM user needs `PutObject` permission for the bucket.
Logs are written into that bucket as gzipped objects using the S3 Access Control List (ACL) `bucket-owner-full-control` permission.
For illustrative purposes, imagine that you want to store logs in the bucket `burritobot`, in the `logs` directory. The S3 URL would then be `s3://burritobot/logs`.
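Since objects arrive gzipped with one JSON event per line, reading a pushed log file back out of that bucket looks roughly like the sketch below (assumes boto3 with configured AWS credentials; the object key is hypothetical):

```python
# Sketch: fetch one pushed log object and decode its newline-delimited JSON.
import gzip
import json

import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="burritobot", Key="logs/20240101/example.log.gz")
for line in gzip.decompress(obj["Body"].read()).splitlines():
    event = json.loads(line)  # one log event per line
    print(event)
```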
Ensure Log Share permissions are enabled before attempting to read or configure a Logpush job. For more information, refer to the Roles section.
To enable Logpush to Amazon S3:
Create an S3 bucket. Refer to instructions from Amazon.
Edit and paste the policy below into S3 > Bucket > Permissions > Bucket Policy, replacing the `Resource` value with your own bucket path. The AWS `Principal` is owned by Cloudflare and should not be changed.
{ "Id": "Policy1506627184792", "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1506627150918", "Action": ["s3:PutObject"], "Effect": "Allow", "Resource": "arn:aws:s3:::burritobot/logs/*", "Principal": { "AWS": ["arn:aws:iam::391854517948:user/cloudflare-logpush"] } } ]
}
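If you script this step instead of using the console, the same policy can be attached with boto3. A minimal sketch, assuming the `burritobot` bucket in us-east-1 (outside us-east-1, `create_bucket` also needs a `CreateBucketConfiguration`):

```python
# Sketch: create the bucket and attach the Cloudflare Logpush policy.
import json

import boto3

s3 = boto3.client("s3", region_name="us-east-1")
s3.create_bucket(Bucket="burritobot")

policy = {
    "Id": "Policy1506627184792",
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Stmt1506627150918",
        "Action": ["s3:PutObject"],
        "Effect": "Allow",
        "Resource": "arn:aws:s3:::burritobot/logs/*",  # use your own bucket path
        "Principal": {"AWS": ["arn:aws:iam::391854517948:user/cloudflare-logpush"]},
    }],
}
s3.put_bucket_policy(Bucket="burritobot", Policy=json.dumps(policy))
```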