Common pod fault tunables
Introduction
This section lists the common Kubernetes pod-level chaos faults and summarizes what each fault does and how it injects chaos. Most faults are tuned through environment variables in the chaos engine configuration.
Container kill
Container kill is a Kubernetes pod-level chaos fault that causes container failure on specific or random replicas of an application resource.
Disk fill
Disk fill is a Kubernetes pod-level chaos fault that applies disk stress by filling the pod's ephemeral storage on a node. This fault evicts the application pod if its ephemeral storage usage exceeds the pod's ephemeral storage limit.
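As a reference for how these tunables are set, below is a minimal ChaosEngine sketch for the disk fill fault. The engine name, namespace, application label, service account, and values are illustrative assumptions, not defaults.

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-disk-fill      # hypothetical engine name
  namespace: default         # assumed target namespace
spec:
  engineState: "active"
  appinfo:
    appns: "default"
    applabel: "app=nginx"    # assumed application label
    appkind: "deployment"
  chaosServiceAccount: litmus-admin
  experiments:
    - name: disk-fill
      spec:
        components:
          env:
            - name: FILL_PERCENTAGE        # percentage of the ephemeral storage limit to fill
              value: "80"
            - name: TOTAL_CHAOS_DURATION   # duration of the chaos, in seconds
              value: "60"
```

The later examples in this section show only the experiments section; each slots into a ChaosEngine like this one.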
FS fill
FS fill is a Kubernetes pod-level chaos fault that applies file system stress by filling the pod's ephemeral storage. This fault evicts the application pod if its ephemeral storage usage exceeds the pod's ephemeral storage limit.
Pod API block
Pod API block is a Kubernetes pod-level chaos fault that blocks API requests through path filtering. This is achieved by starting a proxy server and redirecting the traffic through it for the duration of the chaos.
Pod API latency
Pod API latency is a Kubernetes pod-level chaos fault that injects API request and response latency by starting a proxy server and redirecting the traffic through it.
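A sketch of the corresponding experiments section (see the disk fill example above for the surrounding ChaosEngine). The port, path filter, and latency values here are illustrative, and the PATH_FILTER tunable is an assumption for this fault.

```yaml
experiments:
  - name: pod-api-latency
    spec:
      components:
        env:
          - name: TARGET_SERVICE_PORT   # port of the service whose traffic is proxied
            value: "80"
          - name: PATH_FILTER           # assumed tunable: delay only requests matching this path
            value: "/api"
          - name: LATENCY               # latency to inject, in milliseconds
            value: "2000"
```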
Pod API modify body
Pod API modify body is a Kubernetes pod-level chaos fault that modifies the API request and response body by replacing any portions that match a specified regular expression with a provided value. This is achieved by starting a proxy server and redirecting the traffic through it.
Pod API modify header
Pod API modify header is a Kubernetes pod-level chaos fault that overrides the header values of API requests and responses with user-provided values for the given keys. This is achieved by starting a proxy server and redirecting the traffic through it.
Pod API status code
Pod API status code is a Kubernetes pod-level chaos fault that changes the API response status code and, optionally, the API response body through path filtering. This is achieved by starting a proxy server and redirecting the traffic through it.
Pod autoscaler
Pod autoscaler is a Kubernetes pod-level chaos fault that determines whether nodes can accommodate multiple replicas of a given application pod. This fault tests the node auto-scaling feature by checking whether the pods are successfully rescheduled within a specified time frame when the existing nodes are already running at their specified limits.
Pod CPU hog exec
Pod CPU hog exec is a Kubernetes pod-level chaos fault that consumes excess CPU resources of the application container. This fault applies stress on the target pods by simulating a lack of CPU for processes running on the Kubernetes application. This degrades the performance of the application.
Pod CPU hog
Pod CPU hog is a Kubernetes pod-level chaos fault that excessively consumes CPU resources, resulting in a significant increase in the CPU resource usage of a pod. This fault applies stress on the target pods by simulating a lack of CPU for processes running on the Kubernetes application. This degrades the performance of the application.
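A sketch of the experiments section for this fault (the core count and duration are illustrative values):

```yaml
experiments:
  - name: pod-cpu-hog
    spec:
      components:
        env:
          - name: CPU_CORES              # number of cores to stress in the target pod
            value: "1"
          - name: TOTAL_CHAOS_DURATION   # duration of the chaos, in seconds
            value: "60"
```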
Pod delete
Pod delete is a Kubernetes pod-level chaos fault that causes specific (or random) replicas of an application resource to fail forcibly (or gracefully).
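A sketch of the experiments section for this fault; the values are illustrative. The FORCE flag selects between forcible and graceful deletion.

```yaml
experiments:
  - name: pod-delete
    spec:
      components:
        env:
          - name: TOTAL_CHAOS_DURATION   # total chaos window, in seconds
            value: "30"
          - name: CHAOS_INTERVAL         # seconds between successive pod deletions
            value: "10"
          - name: FORCE                  # "true" deletes forcibly, "false" deletes gracefully
            value: "false"
```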
Pod DNS error
Pod DNS error is a Kubernetes pod-level chaos fault that injects chaos to disrupt DNS resolution in pods. It removes access to services by blocking the DNS resolution of host names or domains.
Pod DNS spoof
Pod DNS spoof is a Kubernetes pod-level chaos fault that injects chaos into pods to mimic DNS resolution. This fault resolves DNS target host names or domains to other IPs as specified in the SPOOF_MAP environment variable in the chaos engine configuration.
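A sketch of the experiments section for this fault. The hostnames in the SPOOF_MAP value are illustrative; the map pairs each target hostname with the address it should resolve to during chaos.

```yaml
experiments:
  - name: pod-dns-spoof
    spec:
      components:
        env:
          - name: SPOOF_MAP   # hostnames mapped to the addresses they should resolve to
            value: '{"example.com": "spoofed.example.com"}'
```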
Pod HTTP latency
Pod HTTP latency is a Kubernetes pod-level chaos fault that injects HTTP response latency by starting a proxy server and redirecting the traffic through it.
Pod HTTP modify body
Pod HTTP modify body is a Kubernetes pod-level chaos fault that injects chaos on the service whose port is provided in the TARGET_SERVICE_PORT environment variable. This is achieved by starting a proxy server and redirecting the traffic through it. Use this fault to overwrite the HTTP response body by providing the new body in the RESPONSE_BODY environment variable.
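A sketch of the experiments section for this fault; the port and replacement body are illustrative values.

```yaml
experiments:
  - name: pod-http-modify-body
    spec:
      components:
        env:
          - name: TARGET_SERVICE_PORT   # port of the target service
            value: "80"
          - name: RESPONSE_BODY         # body that replaces the original response body
            value: '{"status": "chaos-injected"}'
```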
Pod HTTP modify header
Pod HTTP modify header is a Kubernetes pod-level chaos fault that injects chaos on the service whose port is provided in the TARGET_SERVICE_PORT environment variable. This is achieved by starting a proxy server and redirecting the traffic through it. This fault modifies the headers of the requests and responses of the service.
Pod HTTP reset peer
Pod HTTP reset peer is a Kubernetes pod-level chaos fault that injects chaos on the service whose port is specified in the TARGET_SERVICE_PORT environment variable. It stops outgoing HTTP requests by resetting the TCP connection, simulating an abrupt loss of connection.
Pod HTTP status code
Pod HTTP status code is a Kubernetes pod-level fault that injects chaos inside the pod by modifying the status code of the response from the application server to the status code provided by the user. This is achieved by starting a proxy server and redirecting the traffic through it.
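A sketch of the experiments section for this fault; the port and status code are illustrative, and the MODIFY_RESPONSE_BODY flag is an assumption here.

```yaml
experiments:
  - name: pod-http-status-code
    spec:
      components:
        env:
          - name: TARGET_SERVICE_PORT    # port of the target service
            value: "80"
          - name: STATUS_CODE            # status code returned in place of the original
            value: "500"
          - name: MODIFY_RESPONSE_BODY   # assumed flag: also rewrite the body to match the new code
            value: "true"
```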
Pod IO attribute override
Pod IO attribute override modifies the properties of files located within the mounted volume of the pod.
Pod IO error
Pod IO error simulates errors that can occur during system calls on files located within the pod's mounted volume.
Pod IO latency
Pod IO latency simulates slow I/O operations by introducing delays in system calls on files located within the pod's mounted volume. This fault is used for testing the resilience, performance, and scalability of the pod.
Pod IO mistake
Pod IO mistake simulates a scenario where the file system in the mounted volume of the pod reads or writes incorrect values. This fault determines how the application handles data corruption or errors during file operations, ensuring robustness and stability under adverse conditions.
Pod IO stress
Pod I/O stress is a Kubernetes pod-level chaos fault that causes I/O stress on the application pod by increasing the number of input and output requests. Applying continuous and heavy I/O stress on the disk degrades the read and write performance of the microservices. Scratch space consumed on a node may also lead to a lack of memory for new containers to be scheduled. Testing under these conditions verifies the application's resilience to I/O stress.
Pod JVM CPU stress
Pod JVM CPU stress injects JVM CPU stress for a Java process executing in a Kubernetes pod by consuming excessive CPU threads of the JVM.
Pod JVM method exception
Pod JVM method exception injects chaos into a Java application executing in a Kubernetes pod by invoking an exception.
Pod JVM method latency
Pod JVM method latency slows down a Java application executing in a Kubernetes pod by introducing delays in executing the method calls.
Pod JVM modify return
Pod JVM modify return modifies the return value of a method in a Java application executing on a Kubernetes pod, for a specific duration.
Pod JVM trigger gc
Pod JVM trigger gc triggers the garbage collector for a Java process executing in a Kubernetes pod. This causes unused (or out of scope) objects and variables to be garbage collected and recycled, thereby freeing up memory space.
Pod memory hog exec
Pod memory hog exec is a Kubernetes pod-level chaos fault that consumes excessive memory resources on the application container. Since this fault stresses the target container, the primary process within the container may consume the available system memory on the node.
Pod memory hog
Pod memory hog is a Kubernetes pod-level chaos fault that consumes excessive memory resources on the application container. Since this fault stresses the target container, the primary process within the container may consume the available system memory on the node.
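A sketch of the experiments section for this fault (the memory amount and duration are illustrative values):

```yaml
experiments:
  - name: pod-memory-hog
    spec:
      components:
        env:
          - name: MEMORY_CONSUMPTION     # memory to consume, in MiB
            value: "500"
          - name: TOTAL_CHAOS_DURATION   # duration of the chaos, in seconds
            value: "60"
```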
Pod network corruption
Pod network corruption is a Kubernetes pod-level chaos fault that injects corrupted packets of data into the specified container. This is achieved by starting a traffic control (tc) process with netem rules to add egress packet corruption.
Pod network duplication
Pod network duplication is a Kubernetes pod-level chaos fault that injects chaos to disrupt the network connectivity to Kubernetes pods. This fault injects chaos on the specific container by starting a traffic control (tc) process with netem rules to add egress packet duplication.
Pod network latency
Pod network latency is a Kubernetes pod-level chaos fault that introduces latency (delay) to a specific container by starting a traffic control (tc) process with netem rules to add egress delays.
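A sketch of the experiments section for this fault; the delay and duration are illustrative values.

```yaml
experiments:
  - name: pod-network-latency
    spec:
      components:
        env:
          - name: NETWORK_LATENCY        # egress delay to add, in milliseconds
            value: "2000"
          - name: TOTAL_CHAOS_DURATION   # duration of the chaos, in seconds
            value: "60"
```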
Pod network loss
Pod network loss is a Kubernetes pod-level chaos fault that causes packet loss in a specific container by starting a traffic control (tc) process with netem rules to add egress (or ingress) loss.
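A sketch of the experiments section for this fault; the loss percentage and duration are illustrative values.

```yaml
experiments:
  - name: pod-network-loss
    spec:
      components:
        env:
          - name: NETWORK_PACKET_LOSS_PERCENTAGE   # percentage of egress packets to drop
            value: "100"
          - name: TOTAL_CHAOS_DURATION             # duration of the chaos, in seconds
            value: "60"
```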
Pod network partition
Pod network partition is a Kubernetes pod-level fault that blocks 100 percent ingress and egress traffic of the target application by creating a network policy.
Pod network rate limit
Pod network rate limit is a Kubernetes pod-level chaos fault that generates traffic control (tc) rules with a Token Bucket Filter (TBF) to assess Kubernetes pod resilience under limited network bandwidth conditions.
Redis cache expire
Redis cache expire expires a given key (or all keys) for a specified duration. During this period of chaos, you can't access the keys associated with the cache.
Redis cache limit
Redis cache limit fault limits the amount of memory used by a Redis cache and restores it after the chaos duration.
Redis cache penetration
Redis cache penetration fault continuously sends cache requests to the Redis database to find the value for a key that does not exist. These continuous requests degrade the performance of the application.
Time chaos
Time chaos is a Kubernetes pod-level fault that introduces controlled time offsets to disrupt the system time of the target pod.