Skip to content

Comments

Serialize process spawning across threads with a lock#728

Open
veeceey wants to merge 2 commits intoMagicStack:masterfrom
veeceey:fix/issue-508-process-spawn-race
Open

Serialize process spawning across threads with a lock#728
veeceey wants to merge 2 commits intoMagicStack:masterfrom
veeceey:fix/issue-508-process-spawn-race

Conversation

@veeceey
Copy link

@veeceey veeceey commented Feb 23, 2026

When multiple threads each run their own event loop and try to spawn processes concurrently, they race on the global pthread_atfork handlers, which can only be active for one loop at a time. The current code detects this race and raises RuntimeError("Racing with another loop to spawn a process"), which isn't great — it means users have to build their own retry logic or serialization.

This replaces the error-raising approach with a threading.Lock that serializes process spawning across threads. Instead of failing, concurrent spawns just wait their turn. The lock is held only during the critical fork section (setting up pthread_atfork handlers, calling uv_spawn, and cleaning up), so it doesn't block any longer than necessary.

The lock is properly released in all error paths through a finally block that also cleans up the __forking state if something goes wrong mid-fork.

Repro from the issue now works without errors:

import asyncio
from threading import Thread
import uvloop

def create_processes(i):
    async def inner():
        processes = []
        for _ in range(100):
            p = await asyncio.create_subprocess_exec("true")
            processes.append(p)
        for p in processes:
            await p.wait()
    try:
        asyncio.run(inner())
        print(f"[{i}] Success.")
    except Exception as e:
        print(f"[{i}] Fail: {repr(e)}")

uvloop.install()
threads = [Thread(target=lambda: create_processes(i)) for i in range(10)]
for t in threads: t.start()
for t in threads: t.join()
# All threads now succeed instead of some failing with RuntimeError

Fixes #508

Replace the RuntimeError-raising check for concurrent process spawning
with a threading.Lock that serializes spawns across different event
loops running in separate threads. The old approach would fail with
"Racing with another loop to spawn a process" when multiple threads
tried to spawn processes concurrently, since the global pthread_atfork
handlers can only be active for one loop at a time.

Now instead of failing, concurrent spawns wait for the lock, allowing
them to proceed sequentially. The lock is properly released in all
error paths via a finally block.

Fixes MagicStack#508
@veeceey
Copy link
Author

veeceey commented Feb 23, 2026

Couldn't build locally (shallow clone, no libuv submodule), but the change is straightforward — replacing the RuntimeError with a threading.Lock that serializes the fork section.

The key points:

  • Lock is acquired before entering the critical section (setting up pthread_atfork handlers + uv_spawn)
  • Lock is released right after uv_spawn returns and the fork state is cleaned up
  • If anything goes wrong mid-fork, the finally block ensures the lock is released and fork state is reset
  • The active_process_handler check is kept for the same-loop case (which would be a programming error)

Looking forward to CI results to confirm this works end-to-end.

setuptools>=78 removed the bundled pkg_resources module, causing
ModuleNotFoundError on all CI jobs except Python 3.8-ubuntu (which
uses an older setuptools). Use packaging.requirements.Requirement
instead, which provides the same version-specifier matching and is
always available as a dependency of setuptools.
@veeceey
Copy link
Author

veeceey commented Feb 24, 2026

The CI failures were not caused by the process-spawning changes in this PR. All 14 failing jobs (everything except Python 3.8-ubuntu) hit the same error during pip install -e .[test,dev]:

ModuleNotFoundError: No module named 'pkg_resources'

This is because setuptools>=78 removed the bundled pkg_resources module, and setup.py line 111 does import pkg_resources to validate the Cython version. The Python 3.8-ubuntu job passed because it uses an older setuptools that still includes pkg_resources.

The 3.8-macos failure is a separate, pre-existing flaky test in test_context.py (AssertionError: True is not false), unrelated to this PR.

I pushed a fix that replaces pkg_resources.Requirement.parse() with packaging.requirements.Requirement, which provides the same PEP 440 version-specifier matching and is always available as a dependency of setuptools.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

"RuntimeError: Racing with another loop to spawn a process" when spawning many processes from multiple threads

1 participant