
Configure audit streaming

Audit logs help you track and analyze your system and user events within your Harness account. You can stream these audit logs to external destinations and integrate them with Security Information and Event Management (SIEM) tools to:

  • Trigger alerts for specific events
  • Create custom views of audit data
  • Perform anomaly detection
  • Store audit data beyond Harness's 2-year retention limit
  • Maintain security compliance and regulatory requirements
warning

All audit event data is sent to your streaming destination and may include sensitive information such as user emails, account identifiers, project details, and resource information. Ensure you only configure trusted and secure destinations.

Add a streaming destination

You can only add streaming destinations at the Account scope. Follow these steps to create a new streaming destination:

Configure the streaming connector

Once a streaming destination is added, you're ready to configure the streaming connector. For object storage services (such as Amazon S3), you can choose to stream audit events in either JSON or NDJSON format.
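The two formats differ only in how a batch is laid out: JSON writes each batch as a single array of event objects, while NDJSON writes one event object per line. As a rough illustration, the Python sketch below parses a downloaded batch file in either layout; the file names are placeholders, not the actual naming convention (described later on this page).

import json


def load_audit_events(path: str, ndjson: bool) -> list[dict]:
    """Parse a downloaded audit batch file into a list of event dicts."""
    with open(path, encoding="utf-8") as f:
        if ndjson:
            # NDJSON: one JSON object per non-empty line.
            return [json.loads(line) for line in f if line.strip()]
        # JSON: a single array of event objects per batch.
        return json.load(f)


# Hypothetical local copies of two batch files, one per format.
json_events = load_audit_events("audit_batch.json", ndjson=False)
ndjson_events = load_audit_events("audit_batch.ndjson", ndjson=True)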

To configure the Amazon S3 streaming connector:

  1. Select Amazon S3.

  2. In Select Connector, select an existing AWS Cloud Provider connector or create a new one.

    You must use the Connect through a Harness Delegate connectivity mode option when you set up your AWS Cloud Provider connector. Audit streaming does not support the Connect through Harness Platform connector option.

    Go to Add an AWS connector for steps to create a new AWS Cloud Provider connector.

  3. Select Apply Selected.

  4. Select the Format of the data — either JSON or NDJSON.

note

NDJSON format is supported in Harness Delegate version 25.10.87100 or later.

  5. In Amazon S3 Bucket, enter the bucket name.

    Harness writes all the streaming records to this destination.

  6. Select Save and Continue.

  7. After the connection test is successful, select Finish.

    The streaming destination is configured and appears in the list of destinations under Audit Log Streaming. By default, the destination is Inactive.

Amazon S3 audit file details

Each audit stream file written to the Amazon S3 bucket contains a list of audit events in JSON format.

Key points about the audit stream file naming convention:

  • The file name includes three timestamps: <t1>_<t2>_<t3>.
  • <t1> and <t2> indicate the time range of the audit events in the file. This range is provided for reference only and may not always be accurate. Timestamps can also be out of range if there is a delay in capturing events.
  • <t3> represents the time when the file was written.
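A consumer that picks up new batch files can recover these timestamps from the object key. The sketch below is a minimal example using boto3; the bucket name is a placeholder for the bucket you configured above, and it assumes the three timestamps are underscore-separated epoch milliseconds, which may differ from what your files actually contain.

from datetime import datetime, timezone

import boto3  # pip install boto3


def ms_to_dt(ms: int) -> datetime:
    """Convert epoch milliseconds to an aware UTC datetime."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)


def parse_stream_file_name(key: str) -> tuple[datetime, datetime, datetime]:
    """Extract <t1>, <t2>, <t3> from an audit stream file name.

    Assumes the file name ends with <t1>_<t2>_<t3> in epoch milliseconds;
    adjust the parsing if your files use a different timestamp format.
    """
    stem = key.rsplit("/", 1)[-1].split(".")[0]
    t1, t2, t3 = (int(part) for part in stem.split("_")[-3:])
    return ms_to_dt(t1), ms_to_dt(t2), ms_to_dt(t3)


s3 = boto3.client("s3")
# "my-audit-bucket" is a placeholder for the configured destination bucket.
for obj in s3.list_objects_v2(Bucket="my-audit-bucket").get("Contents", []):
    start, end, written = parse_stream_file_name(obj["Key"])
    print(obj["Key"], start, end, written)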

Activate or deactivate streaming

  1. To start streaming to this destination, toggle the status to Active. Audit logs begin streaming once the destination is activated and are written every 30 minutes (see the consumer sketch after these steps).

  2. To pause audit streaming and prevent any new audit events from being streamed to the configured endpoint, set the status to Inactive.

    When you reactivate the streaming destination, Harness resumes streaming the audit logs from the point where streaming was paused.
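If you consume the streamed files directly from the bucket, for example to forward them to a SIEM, a simple poller that runs on roughly the same cadence is one option. The sketch below is an illustration, not a Harness feature: the bucket name, SIEM collector URL, and authentication are placeholders, and it assumes the batch files use the JSON (array) format.

import json
import time

import boto3     # pip install boto3
import requests  # pip install requests

BUCKET = "my-audit-bucket"                       # placeholder bucket name
SIEM_URL = "https://siem.example.com/collector"  # placeholder HTTP event collector
seen_keys: set[str] = set()                      # batch files already forwarded

s3 = boto3.client("s3")

while True:
    # list_objects_v2 returns at most 1000 keys per call; paginate for large buckets.
    for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
        key = obj["Key"]
        if key in seen_keys:
            continue
        body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        for event in json.loads(body):  # JSON format: one array of events per batch
            requests.post(SIEM_URL, json=event, timeout=10)
        seen_keys.add(key)
    time.sleep(30 * 60)  # new batches are written roughly every 30 minutes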

Update audit stream

You can change the audit stream configuration by selecting the three-dot menu (⋮) beside the stream destination. This opens a pop-up menu with the following options:

  • Edit: Select a different streaming destination or make changes to the existing destination.

  • Delete: Delete the audit stream destination. You must set the audit stream destination to inactive before you can delete it.

Payload schema

Streamed audit events have a predictable schema in the body of the response.

| Field | Description | Is required |
| --- | --- | --- |
| auditEventId | Unique ID for the audit event. | Required |
| auditEventAuthor | Principal attached to the audit event. | Required |
| auditModule | Module for which the audit event is generated. | Required |
| auditResource | Resource audited. | Required |
| auditResourceScope | Scope of the audited resource. | Required |
| auditAction | Action on the audited resource. | Required |
| auditEventTime | Date and time of the event. | Required |
| auditHttpRequestInfo | Details of the HTTP request. | Optional |

Example audit event

In JSON format, an array of objects is streamed per batch. Each object in the array represents an audit event. The following JSON Schema describes the structure of each event.

{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "properties": {
    "auditEventId": {
      "type": "string",
      "description": "Unique ID for each audit event"
    },
    "auditEventAuthor": {
      "type": "object",
      "properties": {
        "principal": {
          "type": "object",
          "properties": {
            "type": {
              "type": "string"
            },
            "identifier": {
              "type": "string"
            },
            "email": {
              "type": "string"
            }
          },
          "required": [
            "type",
            "identifier"
          ]
        }
      },
      "required": [
        "principal"
      ],
      "description": "Information about the author of the audit event"
    },
    "auditModule": {
      "type": "string",
      "description": "Module in which the audit event originated"
    },
    "auditResource": {
      "type": "object",
      "properties": {
        "type": {
          "type": "string"
        },
        "identifier": {
          "type": "string"
        }
      },
      "required": [
        "type",
        "identifier"
      ],
      "description": "Information about the resource for which the audit event was generated"
    },
    "auditResourceScope": {
      "type": "object",
      "properties": {
        "accountIdentifier": {
          "type": "string"
        },
        "orgIdentifier": {
          "type": "string"
        },
        "projectIdentifier": {
          "type": "string"
        }
      },
      "required": [
        "accountIdentifier"
      ],
      "description": "Information about the scope of the resource in Harness"
    },
    "auditAction": {
      "type": "string",
      "description": "Action on the audited resource, for example CREATE, UPDATE, DELETE, TRIGGERED, ABORTED, FAILED (not an exhaustive list)"
    },
    "auditHttpRequestInfo": {
      "type": "object",
      "properties": {
        "requestMethod": {
          "type": "string"
        },
        "clientIP": {
          "type": "string"
        }
      },
      "required": [
        "requestMethod",
        "clientIP"
      ],
      "description": "Information about the HTTP request"
    },
    "auditEventTime": {
      "type": "string",
      "description": "Time of the audit event in milliseconds"
    }
  },
  "required": [
    "auditEventId",
    "auditEventAuthor",
    "auditModule",
    "auditResource",
    "auditResourceScope",
    "auditAction",
    "auditEventTime"
  ]
}
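To sanity-check streamed events against this schema, you can use any JSON Schema validator. The sketch below uses the Python jsonschema package; the sample event and its identifier values are hypothetical placeholders, and it assumes the schema above has been saved locally as audit_event_schema.json.

import json

from jsonschema import ValidationError, validate  # pip install jsonschema

# Load the payload schema shown above (assumed to be saved locally).
with open("audit_event_schema.json", encoding="utf-8") as f:
    schema = json.load(f)

# A hypothetical audit event with placeholder identifiers.
event = {
    "auditEventId": "abc123",
    "auditEventAuthor": {
        "principal": {
            "type": "USER",
            "identifier": "jane.doe@example.com",
            "email": "jane.doe@example.com",
        }
    },
    "auditModule": "CORE",
    "auditResource": {"type": "CONNECTOR", "identifier": "my_aws_connector"},
    "auditResourceScope": {
        "accountIdentifier": "my_account_id",
        "orgIdentifier": "default",
        "projectIdentifier": "my_project",
    },
    "auditAction": "CREATE",
    "auditEventTime": "1718000000000",
    "auditHttpRequestInfo": {"requestMethod": "POST", "clientIP": "192.0.2.10"},
}

try:
    validate(instance=event, schema=schema)
    print("Event matches the payload schema")
except ValidationError as err:
    print(f"Event failed validation: {err.message}")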