# New Relic probe
The New Relic probe lets you run NRQL queries against New Relic metrics and compare the results with specified criteria.
## When to use
- Run NRQL queries to validate transaction durations, throughput, or error counts during chaos
- Use New Relic APM data as steady-state indicators for services under test
- Validate that alerting thresholds in New Relic are not breached during fault injection
## Prerequisites
- An active New Relic account
- Access to the New Relic NerdGraph API from the Kubernetes execution plane
- A New Relic User API key for authentication
- Proper configuration of your application to send metrics to New Relic
## Steps to configure
- Navigate to Project Settings > Chaos Probes and click + New Probe.
- Select APM Probe, provide a name, and select New Relic under APM Type.
- Under Variables, define any reusable values you want to reference in probe properties or run properties. For each variable, specify the type (String or Number), name, value (fixed or runtime input), and whether it is required at runtime.
- Under New Relic Connector, select an existing connector or click + New Connector to create one. Provide the NerdGraph API endpoint, Account ID, and a User API key (not a License key), configure the delegate, verify the connection, and click Finish. See the New Relic API keys documentation for details.
  > **Note:** Only the NerdGraph API is supported for New Relic integration. Use `https://api.newrelic.com/graphql` (US) or `https://api.eu.newrelic.com/graphql` (EU).
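For reference, the request the probe ultimately sends is a NerdGraph GraphQL query that wraps your NRQL. The sketch below builds such a request body in Python; the helper name and account ID are illustrative placeholders, though the `actor.account.nrql` shape is NerdGraph's documented NRQL interface.

```python
import json


def nerdgraph_nrql_payload(account_id: int, nrql: str) -> str:
    """Build the JSON body for a NerdGraph NRQL request.

    NerdGraph exposes NRQL under actor.account.nrql; a client POSTs this
    body to the GraphQL endpoint with the User API key in the request
    headers. Helper name and structure are illustrative, not Harness code.
    """
    # Escape embedded double quotes so the NRQL string stays valid GraphQL.
    escaped = nrql.replace('"', '\\"')
    query = (
        '{ actor { account(id: %d) { nrql(query: "%s") { results } } } }'
        % (account_id, escaped)
    )
    return json.dumps({"query": query})


# Example payload (account ID 1234567 is a placeholder):
body = nerdgraph_nrql_payload(
    1234567,
    "SELECT average(duration) FROM Transaction "
    "WHERE appName = 'your-app-name' SINCE 5 minutes ago",
)
```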
- Under Probe Properties, configure:

  | Field | Description |
  |---|---|
  | New Relic Query | NRQL query to retrieve the desired metrics. Example: `SELECT average(duration) FROM Transaction WHERE appName = 'your-app-name' SINCE 5 minutes ago`. See the NRQL documentation. |
  | New Relic Query Metric | The specific metric field to extract from the NRQL response. Example: `average.duration` (for a query using `SELECT average(duration)`). |

  Under New Relic Data Comparison, provide:

  | Field | Description |
  |---|---|
  | Type | Data type for comparison: `Float` or `Int` |
  | Comparison Criteria | Comparison operator: `>=`, `<=`, `==`, `!=`, `>`, `<`, `oneOf`, `between` |
  | Value | The expected value to compare against the metric result |
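To make the extraction and comparison settings concrete, here is a simplified Python sketch of the logic (function names and result handling are illustrative assumptions, not Harness internals): NRQL results come back as a list of records, the Query Metric names the key to read, and the comparison criteria is applied to that value.

```python
def extract_metric(results: list, metric: str) -> float:
    """Pull a metric field (e.g. 'average.duration') out of NRQL results.

    NRQL results arrive as a list of dicts; for SELECT average(duration)
    the relevant key is 'average.duration'. Simplified illustration.
    """
    return float(results[0][metric])


def compare(value: float, criteria: str, expected) -> bool:
    """Apply a comparison criteria to the extracted metric value."""
    ops = {
        ">=": lambda v, e: v >= e,
        "<=": lambda v, e: v <= e,
        "==": lambda v, e: v == e,
        "!=": lambda v, e: v != e,
        ">": lambda v, e: v > e,
        "<": lambda v, e: v < e,
        "oneOf": lambda v, e: v in e,
        "between": lambda v, e: e[0] <= v <= e[1],
    }
    return ops[criteria](value, expected)


# A probe asserting average transaction duration stays at or under 0.5 s:
results = [{"average.duration": 0.31}]
ok = compare(extract_metric(results, "average.duration"), "<=", 0.5)  # True
```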
- Provide the Run Properties:

  | Field | Description |
  |---|---|
  | Timeout | Maximum time for probe execution (e.g., `10s`) |
  | Interval | Time between successive executions (e.g., `2s`) |
  | Attempt | Number of retry attempts (e.g., `1`) |
  | Polling Interval | Time between retries (e.g., `30s`) |
  | Initial Delay | Delay before the first execution (e.g., `5s`) |
  | Verbosity | Log detail level |
  | Stop On Failure (optional) | Stop the experiment if the probe fails |
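A simplified sketch of how these run properties interact (illustrative only; the actual scheduling is internal to the chaos runner):

```python
import time


def run_probe(check, timeout=10.0, attempts=1, polling_interval=30.0,
              initial_delay=5.0):
    """Illustrate the run-property semantics with a generic check callable.

    Waits `initial_delay`, then tries `check` up to `attempts` times,
    sleeping `polling_interval` between retries. `timeout` bounds each
    execution (simplified here to a post-hoc elapsed-time check).
    """
    time.sleep(initial_delay)
    for attempt in range(attempts):
        start = time.monotonic()
        ok = check()
        if ok and time.monotonic() - start <= timeout:
            return True
        if attempt < attempts - 1:
            time.sleep(polling_interval)
    return False
```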
- Click Create Probe.