I'm constantly receiving spurious "Cannot start: predefined port 10001 is already in use" messages (i.e. the port is not actually in use)
I've read and understand a previous issue at
viewtopic.php?f=3&t=4965, which describes how the profiler agent uses an "enhanced" port-in-use detection mechanism involving shared memory. At the time, that issue led to something of an impasse:
> Please let me ask my question once again: why reuse the PIDs? You are the first one who reports this problem. I guess your case is unusual.
But since that issue was raised, Docker and Kubernetes have become much more popular. Docker reuses PIDs constantly: you'll almost always be running your main application process as PID 1 (see the sketch below). To make matters worse, Kubernetes gives each container its own process namespace but shares the IPC namespace between the containers in a pod. (In Kubernetes terms, a pod is a closely related set of containers.)
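To illustrate the PID reuse point, here's a trivial Java snippet (using the standard `ProcessHandle` API, Java 9+); inside a typical container it prints 1 on every single restart, so anything keyed by PID will collide across restarts:

```java
public class ShowPid {
    public static void main(String[] args) {
        // Inside a typical Docker/Kubernetes container the JVM is the
        // container's init process, so this usually prints 1 on every restart.
        long pid = ProcessHandle.current().pid();
        System.out.println("Running as PID " + pid);
    }
}
```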
The upshot of this is:
- The first time the container starts within a fresh pod, YourKit starts happily
- But if the container ever restarts within the pod for any reason (OOM kill, SIGTERM to PID 1, etc.), the restarted container still sees the same shared memory entry recording the old PID and port
- So YourKit refuses to even attempt to bind the port, because it thinks the port is in use: the recorded PID is 1, and /proc/1 exists again in the new container (see the sketch after this list)
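To make the failure mode concrete, here is a rough Java sketch of how I understand the check behaves. This is my reconstruction from the earlier thread, not YourKit's actual code; `OwnerRecord` and `readSharedMemory` are made-up names standing in for whatever the agent really stores:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical reconstruction of the "enhanced" port-in-use check.
// The stale (pid, port) record survives the container restart because the
// pod's IPC namespace (and therefore the shared memory segment) is not recreated.
final class StalePortCheck {
    record OwnerRecord(long pid, int port) {}

    // Pretend this was read from the shared memory segment left behind by the
    // previous container instance: it claims PID 1 owns port 10001.
    static OwnerRecord readSharedMemory() {
        return new OwnerRecord(1, 10001);
    }

    static boolean portLooksInUse(int port) {
        OwnerRecord rec = readSharedMemory();
        if (rec == null || rec.port() != port) {
            return false;
        }
        // The liveness test: "is the recording process still alive?"
        // In the restarted container the new JVM is also PID 1, so /proc/1
        // exists and the stale record is mistaken for a live owner.
        return Files.exists(Path.of("/proc", Long.toString(rec.pid())));
    }

    public static void main(String[] args) {
        System.out.println("Port 10001 in use? " + portLooksInUse(10001)); // false positive
    }
}
```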
The workarounds provided so far are inadequate:
- Using `listen=localhost` will mean I can't access the profiler port (I want it exposed as a container port)
- Using a range of ports would mean I'd need to expose every one of them, and which one is actually used would depend on how many times the container has restarted (and besides, it would still fail after 10 restarts)
Ultimately, I don't think you should assume that the process and IPC namespaces are aligned. But in the meantime, there should be a direct way to disable the shared memory "enhanced" port-in-use detection, because at the moment it causes far too many false positives in Kubernetes deployments.
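With that detection disabled, something as simple as a plain bind attempt would already report the truth regardless of namespaces. A rough Java sketch of what I mean (just to show the idea, not a proposal for your exact implementation):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Sketch of the plain check I'd expect to be sufficient in a containerised
// environment: just try to bind and let the OS answer.
final class PlainBindCheck {
    static boolean portInUse(int port) {
        try (ServerSocket socket = new ServerSocket()) {
            socket.bind(new InetSocketAddress(port));
            return false; // bind succeeded, nothing else owns the port
        } catch (IOException e) {
            return true;  // EADDRINUSE or similar: the port is genuinely occupied
        }
    }

    public static void main(String[] args) {
        System.out.println("Port 10001 in use? " + portInUse(10001));
    }
}
```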