dynatrace-probe

The Dynatrace probe allows you to query Dynatrace metrics and compare the results against specified criteria.

When to use

  • Validate service response times or failure rates monitored by Dynatrace during chaos
  • Use Dynatrace entity-level metrics (e.g., per-service, per-host) as experiment pass/fail criteria
  • Confirm that Dynatrace-detected SLOs remain healthy under failure conditions

Prerequisites

  • An active Dynatrace account
  • Access to the Dynatrace API from the Kubernetes execution plane
  • A Dynatrace API token with metrics.read scope
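A quick way to confirm the prerequisites hold is to query the Dynatrace Metrics API v2 directly with the token. The sketch below builds such a request; the environment URL and token are placeholders, and the network call itself is left commented out.

```python
# Minimal sketch: build a Dynatrace Metrics API v2 query request to verify
# that the execution plane can reach Dynatrace and that the token carries the
# metrics.read scope. Environment URL and token below are placeholders.
import urllib.parse
import urllib.request

def build_metrics_query(env_url: str, api_token: str, metric_selector: str,
                        lookback_minutes: int = 5) -> urllib.request.Request:
    """Return a GET request for /api/v2/metrics/query over the lookback window."""
    params = urllib.parse.urlencode({
        "metricSelector": metric_selector,
        "from": f"now-{lookback_minutes}m",  # relative start of the lookback window
    })
    return urllib.request.Request(
        f"{env_url.rstrip('/')}/api/v2/metrics/query?{params}",
        headers={"Authorization": f"Api-Token {api_token}"},
    )

req = build_metrics_query("https://abc12345.live.dynatrace.com", "dt0c01.<token>",
                          "builtin:service.response.time:avg")
# urllib.request.urlopen(req) would return HTTP 200 with a JSON payload
# when the token is valid and the API is reachable from the execution plane.
```

A 401 response here typically indicates a bad token, while a 403 points at a missing metrics.read scope.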

Steps to configure

  1. Navigate to Project Settings > Chaos Probes and click + New Probe

  2. Select APM Probe, provide a name, and select Dynatrace under APM Type

  3. Under Variables, define any reusable values you want to reference in probe properties or run properties. For each variable, specify the type (String or Number), name, value (fixed or runtime input), and whether it's required at runtime.

  4. Under Dynatrace Connector, select an existing connector or click + New Connector to create one. Provide the Dynatrace environment URL and an API token with metrics.read scope, configure the delegate, verify the connection, and click Finish. See Dynatrace API tokens documentation for details.

  5. Under Probe Properties, configure:

    • Metrics Selector: Dynatrace metrics selector query. Example: builtin:service.response.time:avg:filter(eq("dt.entity.service","SERVICE-1234567890")). See the Dynatrace Metrics API docs.
    • Entity Selector: Filters the queried metrics to specific entities. Example: type("SERVICE"),tag("environment:production"). See the Entity Selector docs.
    • Lookback Window (in minutes): Time range over which metrics are queried, from the specified number of minutes ago to now.

    Under Dynatrace Data Comparison, provide:

    • Type: Data type used for the comparison: Float or Int
    • Comparison Criteria: Comparison operator: >=, <=, ==, !=, >, <, oneOf, between
    • Value: The expected value to compare the metric result against
  6. Under Run Properties, configure:

    • Timeout: Maximum time allowed for the probe execution (e.g., 10s)
    • Interval: Time between successive probe executions (e.g., 2s)
    • Attempt: Number of retry attempts (e.g., 1)
    • Polling Interval: Time between retries (e.g., 30s)
    • Initial Delay: Delay before the first probe execution (e.g., 5s)
    • Verbosity: Level of detail in the probe logs
    • Stop On Failure (optional): Stop the experiment if the probe fails
  7. Click Create Probe
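Before the comparison in step 5 can run, a numeric result has to be extracted from the Metrics API response. The sketch below uses a hand-written sample payload in the API's documented v2 shape, not real data.

```python
# Sketch of extracting a numeric result from a Metrics API v2 response so it
# can be compared against the probe's expected value. The payload is a
# hand-written sample following the documented response shape.
sample = {
    "result": [{
        "metricId": "builtin:service.response.time:avg",
        "data": [{
            "dimensions": ["SERVICE-1234567890"],
            "timestamps": [1700000000000, 1700000060000],
            "values": [245000.0, 251000.0],  # microseconds for response-time metrics
        }],
    }],
}

def latest_value(payload: dict) -> float:
    """Return the most recent non-null datapoint of the first series."""
    values = payload["result"][0]["data"][0]["values"]
    return next(v for v in reversed(values) if v is not None)

latest_value(sample)  # 251000.0
```

Note that Dynatrace reports service response times in microseconds, so the expected value in the probe should use the same unit.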
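The Dynatrace Data Comparison step from step 5 can be sketched as follows. This is an illustrative re-implementation of the listed operators, not the probe's actual source.

```python
# Illustrative sketch of the Dynatrace Data Comparison step: evaluate a metric
# result against an expected value using the documented operators.
def compare(metric_value, criteria: str, expected) -> bool:
    if criteria == ">=":
        return metric_value >= expected
    if criteria == "<=":
        return metric_value <= expected
    if criteria == "==":
        return metric_value == expected
    if criteria == "!=":
        return metric_value != expected
    if criteria == ">":
        return metric_value > expected
    if criteria == "<":
        return metric_value < expected
    if criteria == "oneOf":        # expected is a list of allowed values
        return metric_value in expected
    if criteria == "between":      # expected is a [low, high] range, inclusive
        low, high = expected
        return low <= metric_value <= high
    raise ValueError(f"unsupported criteria: {criteria}")

# e.g., a 250 ms average response time against a 300 ms ceiling:
compare(250.0, "<=", 300.0)                # True: the probe passes
compare(250.0, "between", [100.0, 200.0])  # False: the probe fails
```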
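How the run properties from step 6 interact can be sketched as a simple retry loop: an initial delay, then up to Attempt executions separated by the polling interval. This is a hedged illustration; the probe's real scheduler also enforces the timeout and interval, which are omitted here.

```python
# Hedged sketch of the run properties: Initial Delay before the first run,
# then up to `attempt` executions separated by Polling Interval. Each real
# execution is additionally bounded by Timeout (not modeled here).
import time

def run_probe(check, attempt: int, polling_interval: float,
              initial_delay: float = 0.0) -> bool:
    time.sleep(initial_delay)             # Initial Delay: wait before the first run
    for i in range(attempt):
        if check():                       # one probe execution
            return True                   # criteria met: probe passes
        if i < attempt - 1:
            time.sleep(polling_interval)  # Polling Interval: wait between retries
    return False                          # all attempts failed; with Stop On Failure
                                          # enabled, the experiment would stop here

results = iter([False, False, True])
run_probe(lambda: next(results), attempt=3, polling_interval=0)  # True on the third try
```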