Instrumented container reachability
Instrumented container reachability is an advanced OS package reachability mode that embeds a runtime sensor in your container image to record how the image is used in a real environment. Use it when your container relies on complex external services that cannot be exercised during a short local run, so that reachability results reflect actual usage.
Use instrumented reachability when:
- The container has complex external dependencies, such as databases, message queues, or third‑party services, that cannot be realistically exercised in a local ephemeral run.
- You want profiling to happen in a realistic environment, such as staging, that mirrors production traffic.
- You already have integration or end‑to‑end tests and want reachability to reflect those tests.
Instrumented container reachability is supported only through the `endorctl container scan` command. The `endorctl scan --container` command does not support container reachability.
Prerequisites
To perform instrumented container reachability analysis, ensure that:
- Container scanning is enabled using the `--os-reachability` flag.
- endorctl is installed and authenticated.
- The Docker daemon (`dockerd`) is installed on the host, runnable, and accessible to the current user without elevated privileges. For example, `docker images` should work without `sudo`.
- The negotiated Docker API version between the client and server is `1.48` or higher.
- You run the scan from either a Linux or a macOS host machine. Container reachability is supported for both amd64 and arm64 architectures.
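The API version requirement can be checked mechanically. The following is a minimal sketch; in a real check you would obtain the negotiated version with `docker version --format '{{.Server.APIVersion}}'`, but the sample value below is hard-coded so the comparison runs on its own.

```shell
#!/bin/sh
# Compare a Docker API version string against the 1.48 minimum.
# Real usage (assumes a running Docker daemon):
#   api_version=$(docker version --format '{{.Server.APIVersion}}')
api_version="1.48"   # hard-coded sample value for illustration

major=${api_version%%.*}   # part before the first dot
minor=${api_version#*.}    # part after the first dot

if [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 48 ]; }; then
  ok=1
  echo "Docker API version $api_version meets the 1.48 minimum"
else
  ok=0
  echo "Docker API version $api_version is below the 1.48 minimum" >&2
fi
```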
To run the instrumented container images, you need either Docker or Kubernetes, depending on your setup:

- Docker: The Docker daemon is available and you can run the instrumented image locally.
- Kubernetes: `kubectl` is configured with access to your cluster so you can deploy and run the instrumented image in a pod.
Determine instrumented container reachability
Follow these steps to scan the original image, collect runtime profiling data, and determine reachability for containers.
1. Run `endorctl container instrument` to create a new image with a lightweight sensor injected into the filesystem. The resulting image has `-instrumented` appended to the tag.

   ```
   endorctl container instrument \
     --image=<image_name-tag> \
     --app-stop-signal=QUIT \
     --load-instrumented-image=true
   ```

   - `--image`: Original container image to instrument.
   - `--app-stop-signal`: Signal used to stop the application. This is required so that the sensor can flush profiling data before the container exits.
   - `--load-instrumented-image`: Loads the instrumented image into your local Docker runtime so it can be referenced by Kubernetes.

2. Define how the instrumented image runs in a manifest. Create a manifest file such as `demo-manifest-file.yaml` to identify your workload. In the manifest, reference the instrumented image from step 1 and use a pod name and container name you can reuse later. You can also add `env`, `volumes`, and other options as needed for your application.

   ```yaml
   apiVersion: v1
   kind: Pod
   metadata:
     name: <pod-name>
   spec:
     restartPolicy: OnFailure
     containers:
       - name: <container-name>
         image: <instrumented-image>
         ports:
           - containerPort: <container-port>
             hostPort: <host-port>
         securityContext:
           privileged: true
   ```

   - Set `securityContext.privileged` to `true` so the profiling sensor can run.
   - Set `restartPolicy` to `OnFailure` so that the pod does not restart automatically after you stop the app to generate the report.

3. Deploy the instrumented image to Kubernetes. The application runs normally while the sensor observes file access and process activity during execution.

   ```
   kubectl apply -f <manifest-file>
   kubectl get pods <pod-name>
   ```

   Replace `<manifest-file>`, `<pod-name>`, and `<container-name>` with the values from your manifest.

   If you need to access the application locally, run:

   ```
   kubectl port-forward pod/<pod-name> <host-port>:<container-port>
   ```

   You can also run your tests or interact with the application normally. The profiling sensor captures runtime activity.

4. After you finish testing, send the `--app-stop-signal`, for example `QUIT`, to stop the application gracefully. This signal triggers the profiling sensor to generate the profiling data and write the `creport.json` file to a known artifacts directory inside the container.

   ```
   kubectl exec -it <pod-name> -c <container-name> -- sh -c "kill -QUIT 1"
   ```

5. Verify that the `creport.json` file exists in the container:

   ```
   kubectl exec -it <pod-name> -c <container-name> -- sh -c "ls -ls /opt/_instrumented/artifacts"
   ```

6. Create a local directory and copy the report to it.

   ```
   # Create the output directory
   mkdir -p collect_output

   # Copy the profiling data
   kubectl cp <pod-name>:/opt/_instrumented/artifacts/creport.json collect_output/creport.json -c <container-name>
   ```

   To verify that the file is copied, run:

   ```
   ls -la collect_output/
   ```

7. Alternatively, use `endorctl container collect` to stop the running application and retrieve the profiling report from the instrumented container into a local directory. Skip steps 4, 5, and 6 if you use this command.

   ```
   endorctl container collect \
     --dynamic-profiling-data=true \
     --output-dir=collect_output \
     --image=<instrumented-image>
   ```

   - Set `--dynamic-profiling-data` to `true` to collect profiling data from the instrumented container.
   - Set `--output-dir` to the local directory where the collected data is saved. A subdirectory `cluster/pod/container` is created under this path. Use that path for `--profiling-data-dir` in the next step.

8. Run `endorctl container scan` with OS reachability enabled and the path to the directory that contains the collected profiling data. Endor Labs loads the report, maps runtime files to OS packages, and marks the corresponding packages as reachable.

   ```
   endorctl container scan \
     --image=<original-image:tag> \
     --profiling-data-dir=collect_output \
     --project-name=<project-name> \
     --os-reachability
   ```

9. Remove the pod after completing the analysis:

   ```
   kubectl delete pod <pod-name>
   ```
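The graceful-stop step works only if the application running as PID 1 actually handles the configured stop signal. The following is a minimal, runnable sketch of such a handler in plain `sh`; the `FLUSHED` flag and `echo` stand in for the sensor's flush step, and the self-directed `kill` simulates the `kubectl exec ... kill -QUIT 1` command.

```shell
#!/bin/sh
# Stand-in for an app that traps the stop signal (QUIT here) so profiling
# data can be flushed before exit. FLUSHED simulates the flush step.
FLUSHED=0
trap 'echo "flushing profiling data"; FLUSHED=1' QUIT

# Simulate the external kill by signaling this shell itself.
kill -QUIT $$
sleep 1   # give the shell a chance to deliver the signal and run the trap
echo "shutdown complete (flushed=$FLUSHED)"
```

If your application ignores the signal you chose, the sensor never gets a chance to write `creport.json`, so pick a `--app-stop-signal` the app handles gracefully.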
Instrumented reachability options
You can run the `endorctl container instrument` command with the following options.

| Flag | Type | Description |
|---|---|---|
| `--app-stop-signal` | string | Signal sent to the app so the sensor can flush profiling data before the container exits, for example `QUIT` or `TERM`. Ensure the signal is compatible with your application. |
| `--app-stop-grace-period` | string | Grace period for app shutdown, for example `10s` or `1m`. Use when the app needs time to flush before exit. |
| `--entrypoint` | string | Override the image entrypoint (JSON array or shell string). Use when the image has a custom entrypoint. |
| `--cmd` | string | Override the image CMD (JSON array or shell string). Use when the image has a custom CMD. |
| `--load-instrumented-image` | boolean | Load the instrumented image into the local Docker daemon so Kubernetes or a registry can use it. Default: `false`. |
| `--output-image-tar` | string | Output tar path for the instrumented image. Default: `instrumented-image.tar`. |
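Since instrumenting appends `-instrumented` to the tag, you can predict the resulting image reference with ordinary shell parameter expansion. A quick sketch; the image name below is a hypothetical example.

```shell
#!/bin/sh
# Derive the instrumented image reference from an original image reference,
# given that `-instrumented` is appended to the tag.
image="registry.example.com/myapp:1.2.3"   # hypothetical example image

name=${image%:*}    # everything before the last colon
tag=${image##*:}    # everything after the last colon
instrumented="${name}:${tag}-instrumented"
echo "$instrumented"
```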
You can run the `endorctl container collect` command with the following options.

| Flag | Type | Description |
|---|---|---|
| `--output-dir` | string | Local directory where collected profiling data is saved. A subdirectory `cluster/pod/container` is created. Use that path for `--profiling-data-dir` in the scan step. |
| `--dynamic-profiling-data` | boolean (Default: `true`) | Collect dynamic profiling data from the instrumented container. |
| `--kubeconfig-path` | string | Path to the kubeconfig file for the target Kubernetes cluster. Use when not using the default kubeconfig. |
| `--kubeconfig-context` | string | Kubeconfig context to use for the target cluster. Use when you have multiple clusters. |
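Because `--output-dir` gains a `cluster/pod/container` subdirectory, it can help to locate the report programmatically before passing its directory to `--profiling-data-dir`. A sketch with hypothetical stand-in directory names, created here so the snippet runs on its own:

```shell
#!/bin/sh
# Stand-in layout mimicking what `endorctl container collect` writes under
# --output-dir; the cluster/pod/container names below are hypothetical.
mkdir -p collect_output/demo-cluster/demo-pod/demo-container
: > collect_output/demo-cluster/demo-pod/demo-container/creport.json

# Find the report and take its directory for --profiling-data-dir.
report=$(find collect_output -name creport.json | head -n 1)
profiling_dir=$(dirname "$report")
echo "pass --profiling-data-dir=$profiling_dir to endorctl container scan"
```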
Troubleshoot issues
Profiling data is not generated
- Ensure that the `QUIT` signal (or your configured stop signal) is sent correctly.
- Check that the container has `privileged: true` in its security context.
- Verify that `--app-stop-signal` matches a signal your application handles.
Can’t find the image in Kubernetes
- Run `kind load docker-image <image>` to load the image into kind.
- Alternatively, push the image to a container registry.
Permission denied errors
- Ensure that `securityContext.privileged: true` is set in the pod manifest.