Log Explorer
Log Explorer enables you to store and explore your Cloudflare logs directly within the Cloudflare Dashboard or API, giving you visibility into your logs without the need to forward them to third parties. Logs are stored on Cloudflare’s global network using the R2 object storage platform and can be queried via the Dashboard or the SQL API.
Supported datasets
The following zone-level datasets are currently available with Log Explorer:
- HTTP requests (`FROM http_requests`)
- Firewall events (`FROM firewall_events`)
Authentication
In order to communicate with the API, you will need to configure the appropriate authentication headers.
- `X-Auth-Email` - the Cloudflare account email address associated with the domain
- `X-Auth-Key` - the Cloudflare API key
Alternatively, API tokens with Account and Zone level Logs Edit permissions can also be used for authentication:
```
Authorization: Bearer <API_TOKEN>
```
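For example, a request authenticated with the email and key headers might look like the following sketch, which calls the query endpoint described later on this page (the zone ID, email address, and key are placeholders; later examples use the Bearer token form instead):

```bash
curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/explorer/query/sql \
  --header 'X-Auth-Email: user@example.com' \
  --header 'X-Auth-Key: <API_KEY>' \
  --url-query query="SELECT clientRequestHost FROM http_requests LIMIT 1"
```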
Enable Log Explorer
You can use the dashboard or the API to enable the datasets you want to query with Log Explorer.
- Log in to the Cloudflare dashboard and select your account and domain.
- Go to Analytics & Logs > Log Explorer.
- Select Enable a dataset to select the datasets you want to query. You can enable more datasets later.
Alternatively, use the Log Explorer API to enable Log Explorer for each dataset you wish to store. It may take up to 30 minutes after a logstream is enabled before you can view the logs.
The following curl command is an example of enabling `http_requests`, along with the expected response when the command succeeds.
```bash
curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/explorer/datasets \
  --header 'Authorization: Bearer <API_TOKEN>' \
  --header 'Content-Type: application/json' \
  --data '{ "dataset": "http_requests" }'
```
{ "result": { "id": 1, "dataset": "http_requests", "created_at": "2023-09-25T22:12:31Z", "updated_at": "2023-09-25T22:12:31Z" }, "success": true, "errors": [], "messages": []
}
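Enabling the `firewall_events` dataset follows the same pattern; only the dataset value changes. A minimal sketch, assuming the same zone-level endpoint as above:

```bash
curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/explorer/datasets \
  --header 'Authorization: Bearer <API_TOKEN>' \
  --header 'Content-Type: application/json' \
  --data '{ "dataset": "firewall_events" }'
```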
Use Log Explorer
You can filter and view your logs via the Cloudflare Dashboard or the query API.
- Log in to the Cloudflare dashboard and select your account and domain.
- Go to Analytics & Logs > Log Explorer.
- From the dropdown, select the Dataset you want to use.
- Select a Limit. That is the maximum number of results to return, for example, 50.
- Select the Time period from which you want to query, for example, the previous 12 hours.
- Select Add filter to create your query. Select a Field, an Operator, and a Value.
- A query preview is displayed. Select Use custom SQL if you would like to change it (see the example after these steps).
- Select Run query when you are done. The results are displayed below within the Query results section.
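For reference, a query built from a single filter might translate into SQL along these lines. This is a hypothetical example; the exact SQL shown in the query preview may differ:

```sql
SELECT *
FROM http_requests
WHERE edgeResponseStatus = 500
  AND date = '2023-10-12'
LIMIT 50
```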
Log Explorer exposes a query endpoint that uses familiar SQL syntax for querying the logs generated by Cloudflare’s network.
For example, to find an HTTP request with a specific Ray ID, you can perform the following SQL query.
```bash
curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/explorer/query/sql \
  --header "Authorization: Bearer <API_TOKEN>" \
  --url-query query="SELECT clientRequestScheme, clientRequestHost, clientRequestMethod, edgeResponseStatus, clientRequestUserAgent FROM http_requests WHERE RayID = '806c30a3cec56817' LIMIT 1"
```
This returns the following HTTP request details:
{ "result": [ { "clientrequestscheme": "https", "clientrequesthost": "example.com", "clientrequestmethod": "GET", "clientrequestuseragent": "curl/7.88.1", "edgeresponsestatus": 200 } ], "success": true, "errors": [], "messages": []
}
Output formats
Log Explorer output can be presented in different formats besides JSON: JSON Lines (also known as NDJSON), CSV, and plain text. The plain text format uses ASCII tables similar to psql’s aligned output mode. Besides the convenience of not having to translate the format on the client side, the JSON Lines, CSV, and plain text formats have the advantage of being streamed from the API, so for large result sets you will start receiving a response earlier.
You can choose the output format with an HTTP `Accept` header, as shown in the table below:
| Output format | Content type | Streaming? |
|---|---|---|
| JSON | `application/json` | No |
| JSON Lines | `application/x-ndjson` | Yes |
| CSV | `text/csv` | Yes |
| Plain text | `text/plain` | Yes |
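For example, to stream query results as CSV, set the `Accept` header on the request to the query endpoint. A sketch reusing the endpoint and columns shown earlier on this page:

```bash
curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/explorer/query/sql \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Accept: text/csv" \
  --url-query query="SELECT clientRequestHost, edgeResponseStatus FROM http_requests WHERE date = '2023-10-12' LIMIT 500"
```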
Optimizing your queries
All the tables supported by Log Explorer contain a special column called `date`, which helps to narrow down the amount of data that is scanned to respond to your query, resulting in faster query response times. The value of `date` must be in the form of `YYYY-MM-DD`. For example, to query logs that occurred on October 12, 2023, add the following to your `WHERE` clause: `date = '2023-10-12'`. The column supports the standard operators of `<`, `>`, and `=`.
```bash
curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/explorer/query/sql \
  --header 'Authorization: Bearer <API_TOKEN>' \
  --url-query query="SELECT clientRequestMethod, clientRequestPath, clientRequestProtocol FROM http_requests WHERE date = '2023-10-12' LIMIT 500"
```
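A range of days can be selected with the `>` and `<` operators. A sketch, with placeholder dates:

```bash
curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/explorer/query/sql \
  --header 'Authorization: Bearer <API_TOKEN>' \
  --url-query query="SELECT clientRequestMethod, clientRequestPath, clientRequestProtocol FROM http_requests WHERE date > '2023-10-10' AND date < '2023-10-13' LIMIT 500"
```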
SQL queries supported
These are the SQL query clauses supported by Log Explorer.
SELECT
The `SELECT` clause specifies the columns that you want to retrieve from the database tables. It can include individual column names, expressions, or even wildcard characters to select all columns.
FROM
The `FROM` clause specifies the tables from which to retrieve data. It indicates the source of the data for the `SELECT` statement.
WHERE
The `WHERE` clause filters the rows returned by a query based on specified conditions. It allows you to specify conditions that must be met for a row to be included in the result set.
GROUP BY
The `GROUP BY` clause is used to group rows that have the same values into summary rows.
HAVING
The `HAVING` clause is similar to the `WHERE` clause but is used specifically with the `GROUP BY` clause. It filters groups of rows based on specified conditions after the `GROUP BY` operation has been performed.
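As an illustration, the following sketch counts requests per host for one day and keeps only hosts with more than 100 requests. It assumes the standard `COUNT(*)` aggregate is available, which is not explicitly documented above:

```sql
SELECT clientRequestHost, COUNT(*) AS request_count
FROM http_requests
WHERE date = '2023-10-12'
GROUP BY clientRequestHost
HAVING COUNT(*) > 100
LIMIT 100
```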
ORDER BY
The `ORDER BY` clause is used to sort the result set by one or more columns in ascending or descending order.
LIMIT
The `LIMIT` clause is used to constrain the number of rows returned by a query. It is often used in conjunction with the `ORDER BY` clause to retrieve the top N rows or to implement pagination.
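For example, sorting by response status and returning only the top rows might look like the following sketch, built from the columns used earlier on this page:

```sql
SELECT clientRequestHost, clientRequestMethod, edgeResponseStatus
FROM http_requests
WHERE date = '2023-10-12'
ORDER BY edgeResponseStatus DESC
LIMIT 10
```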
FAQs
Which fields (or columns) are available for querying?
All fields listed in the datasets' Log Fields are viewable in Log Explorer. For filtering, only fields with simple values, such as those of type `bool`, `int`, `float`, or `string`, are supported. Fields with key-value pairs are currently not supported. For example, you cannot use the fields `RequestHeaders` and `Cookies` from the HTTP requests dataset in a filter.
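For example, a filter on a simple string-valued field such as `clientRequestMethod` is supported. A sketch:

```sql
SELECT clientRequestHost, clientRequestUserAgent
FROM http_requests
WHERE date = '2023-10-12'
  AND clientRequestMethod = 'GET'
LIMIT 50
```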
Why does my query not complete or time out?
Log Explorer performs best when query parameters focus on narrower ranges of time. You may experience query timeouts when your query would return a large quantity of data. Consider refining your query to improve performance.
If your query times out with an HTTP status of 524 (Gateway Timeout), consider using one of the streaming output formats, such as `application/x-ndjson`.
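A sketch of the same kind of query request with a streaming format selected, which can help avoid the 524 timeout for large result sets:

```bash
curl https://api.cloudflare.com/client/v4/zones/{zone_id}/logs/explorer/query/sql \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Accept: application/x-ndjson" \
  --url-query query="SELECT clientRequestScheme, clientRequestHost, edgeResponseStatus FROM http_requests WHERE date = '2023-10-12' LIMIT 500"
```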
My query returned an error. How do I figure out what went wrong?
We are actively working on improving error codes. If you receive a generic error, check your SQL syntax (if you are using the custom SQL feature), make sure you have included a date and a limit, and check that the field you are filtering on is not a key-value pair. If the query still fails, it is likely timing out; try refining your filters.
Where is the data stored?
The data is stored in Cloudflare R2. Each Log Explorer dataset is stored on a per-customer level, similar to Cloudflare D1, ensuring that your data is kept separate from that of other customers. In the future, this single-tenant storage model will provide you with the flexibility to create your own retention policies and decide in which regions you want to store your data.
Does Log Explorer support Customer Metadata Boundary?
Customer Metadata Boundary is currently not supported for Log Explorer.