Logpush API configuration
Endpoints
The table below summarizes the job operations available. All the examples on this page are for zone-scoped datasets. Account-scoped datasets should use /accounts/<ACCOUNT_ID> instead of /zones/<ZONE_ID>. For more information, refer to the Log fields page.

Operation | Method and endpoint
Create a job | POST /zones/<ZONE_ID>/logpush/jobs
Retrieve a job | GET /zones/<ZONE_ID>/logpush/jobs/<JOB_ID>
Retrieve all jobs | GET /zones/<ZONE_ID>/logpush/jobs
Update a job | PUT /zones/<ZONE_ID>/logpush/jobs/<JOB_ID>
Delete a job | DELETE /zones/<ZONE_ID>/logpush/jobs/<JOB_ID>
Retrieve all available fields for a dataset | GET /zones/<ZONE_ID>/logpush/datasets/<DATASET>/fields
Get an ownership challenge | POST /zones/<ZONE_ID>/logpush/ownership
Validate an ownership challenge | POST /zones/<ZONE_ID>/logpush/ownership/validate
Check whether a destination is in use | POST /zones/<ZONE_ID>/logpush/validate/destination/exists
Validate log options | POST /zones/<ZONE_ID>/logpush/validate/origin
The <ZONE_ID> argument is the zone ID (hexadecimal string). The <ACCOUNT_ID> argument is the organization ID (hexadecimal string). These arguments can be found using the API's zones endpoint.

The <JOB_ID> argument is the numeric job ID. The <DATASET> argument indicates the log category (such as http_requests, spectrum_events, firewall_events, nel_reports, or dns_logs).
For concrete examples, see the tutorial Manage Logpush with cURL.
Connecting
The Logpush API requires credentials like any other Cloudflare API.
$ curl -s -H "X-Auth-Email: <EMAIL>" -H "X-Auth-Key: <API_KEY>" \
'https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs'
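The examples on this page authenticate with the legacy API key headers. The Cloudflare API also accepts scoped API tokens; a minimal sketch of the same request, assuming <API_TOKEN> is a token with permission to manage Logpush:

$ curl -s -H "Authorization: Bearer <API_TOKEN>" \
    'https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs'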
Ownership
Before creating a new job, ownership of the destination must be proven.
To issue an ownership challenge token to your destination:
$ curl -s -X POST https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/ownership \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" \
-H "Content-Type: application/json" \
--data '{"destination_conf":"s3://<BUCKET_PATH>?region=us-west-2"}' | jq .
A challenge file will be written to the destination, and its filename will be returned in the response (the filename may be expressed as a path, if appropriate for your destination):
{
  "errors": [],
  "messages": [],
  "result": {
    "valid": true,
    "message": "",
    "filename": "<path-to-challenge-file>.txt"
  },
  "success": true
}
You will need to provide the token contained in the file when creating a job.
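You can also confirm the token before creating the job by posting it back to the ownership validation endpoint. A sketch, where <OWNERSHIP_CHALLENGE> stands for the token read from the challenge file:

$ curl -s -X POST https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/ownership/validate \
    -H "X-Auth-Email: <EMAIL>" \
    -H "X-Auth-Key: <API_KEY>" \
    -H "Content-Type: application/json" \
    --data '{"destination_conf":"s3://<BUCKET_PATH>?region=us-west-2","ownership_challenge":"<OWNERSHIP_CHALLENGE>"}' | jq .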
Destination
You can specify your cloud service provider destination via the required destination_conf parameter.
- AWS S3: bucket + optional directory + region + optional encryption parameter (if required by your policy); for example:
  s3://bucket/[dir]?region=<REGION>[&sse=AES256]
- Datadog: Datadog endpoint URL + Datadog API key + optional parameters; for example:
  datadog://<DATADOG_ENDPOINT_URL>?header_DD-API-KEY=<DATADOG_API_KEY>&ddsource=cloudflare&service=<SERVICE>&host=<HOST>&ddtags=<TAGS>
- Google Cloud Storage: bucket + optional directory; for example:
  gs://bucket/[dir]
- Microsoft Azure: service-level SAS URL with https replaced by azure + optional directory added before the query string; for example:
  azure://<BlobContainerPath>/[dir]?<QueryString>
- New Relic: New Relic endpoint URL, which is https://log-api.newrelic.com/log/v1 for US or https://log-api.eu.newrelic.com/log/v1 for EU, + a license key + a format; for example:
  "https://log-api.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare" for US, or "https://log-api.eu.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare" for EU
- Splunk: Splunk endpoint URL + Splunk channel ID + insecure-skip-verify flag + Splunk sourcetype + Splunk authorization token; for example:
  splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>
- Sumo Logic: HTTP source address URL with https replaced by sumo; for example:
  sumo://<SumoEndpoint>/receiver/v1/http/<UniqueHTTPCollectorCode>
For S3, Google Cloud Storage, and Azure, logs can be separated into daily subdirectories by using the special string {DATE} in the URL path; for example: s3://mybucket/logs/{DATE}?region=us-east-1&sse=AES256 or azure://myblobcontainer/logs/{DATE}?[QueryString]. It will be substituted with the date in YYYYMMDD format, such as 20180523.
For more information on the value for your cloud storage provider, consult the following conventions:
- AWS S3 CLI (S3Uri path argument type)
- Google Cloud Storage CLI (Syntax for accessing resources)
- Microsoft Azure Shared Access Signature
- Sumo Logic HTTP Source
To check if a destination is already in use:
$ curl -s -XPOST https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/validate/destination/exists -d '{"destination_conf":"s3://foo"}' | jq .
Response
{
  "errors": [],
  "messages": [],
  "result": {
    "exists": false
  },
  "success": true
}
There can be only one job writing to each unique destination. For S3 and GCS, a destination is defined as bucket + path. This means two jobs can write to the same bucket, but must write to different subdirectories in that bucket.
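For example, two jobs could use the following destinations (hypothetical bucket and subdirectory names) at the same time, because the paths differ even though the bucket is shared:

s3://mybucket/http_requests?region=us-west-2
s3://mybucket/firewall_events?region=us-west-2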
Job object
Options
Logpush repeatedly pulls logs on your behalf and uploads them to your destination.
Log options, such as fields or sampling rate, are configured in the logpull_options job parameter (refer to Logpush job object definition). For example, the following query gets data from the Logpull API:
$ curl -sv \
    -H "X-Auth-Email: <EMAIL>" \
    -H "X-Auth-Key: <API_KEY>" \
    "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logs/received?start=2018-08-02T10:00:00Z&end=2018-08-02T10:01:00Z&fields=RayID,EdgeStartTimestamp"
In Logpush, the corresponding Logpull options would be: "logpull_options": "fields=RayID,EdgeStartTimestamp". Refer to Logpull API parameters for more information.

If you do not change any options, you will receive logs with default fields that are unsampled (i.e., sample=1).
The four options that you can customize are:
- Fields: Refer to Log fields for the currently available fields. The list of fields is also accessible directly from the API: https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/datasets/<DATASET>/fields. Default fields: https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/datasets/<DATASET>/fields/default.
- Sampling rate: Value can range from 0.001 to 1.0 (inclusive). sample=0.1 means return 10% (1 in 10) of all records.
- Timestamp format: The format in which timestamp fields will be returned. Value options: unixnano (default), unix, and rfc3339.
- Optional redaction for CVE-2021-44228: This option will replace every occurrence of ${ with x{. To enable it, set CVE-2021-44228=true.
Note: The CVE-2021-44228 parameter can only be set through the API at this time. Updating your Logpush job through the UI will set this option to false.
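Putting the pieces together, a job creation request might look like the following. This is a minimal sketch: <JOB_NAME> and the bucket path are placeholders, and the field list is illustrative.

$ curl -s -X POST https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs \
    -H "X-Auth-Email: <EMAIL>" \
    -H "X-Auth-Key: <API_KEY>" \
    -H "Content-Type: application/json" \
    --data '{
      "name": "<JOB_NAME>",
      "dataset": "http_requests",
      "destination_conf": "s3://<BUCKET_PATH>?region=us-west-2",
      "ownership_challenge": "<OWNERSHIP_CHALLENGE>",
      "logpull_options": "fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339"
    }' | jq .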
To check if logpull_options are valid:
$ curl -s -XPOST https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/validate/origin -d '{"logpull_options":"fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339&CVE-2021-44228=true","dataset": "http_requests"}' | jq .
Response
{
  "errors": [],
  "messages": [],
  "result": {
    "valid": true,
    "message": ""
  },
  "success": true
}
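Once the options validate, you can apply them to an existing job with an update request. A sketch, assuming <JOB_ID> is the job to modify:

$ curl -s -X PUT https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs/<JOB_ID> \
    -H "X-Auth-Email: <EMAIL>" \
    -H "X-Auth-Key: <API_KEY>" \
    -H "Content-Type: application/json" \
    --data '{"logpull_options":"fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339&CVE-2021-44228=true"}' | jq .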
Audit
The following actions are recorded in Cloudflare Audit Logs: create, update, and delete job.