I'd be fascinated to see if there's a difference (hopefully a further improvement) running under my own ScalingPoolExecutor: https://github.com/marrow/util/blob/develop/marrow/util/futures.py?ts=4#L64 — this is a tweaked ThreadPoolExecutor that does something Less Dumb™ on each work-unit submission (the default thread pool attempts to spawn a new thread on every submission, regardless of whether idle threads exist), scales the pool size based on the size of the pending backlog, and encourages memory cleanup by limiting how much work any given thread completes before it is retired.
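To make the idea concrete, here is a minimal sketch of that behaviour, not the actual marrow implementation; all names (BacklogScalingExecutor, tasks_per_thread, backlog_per_thread) and the specific thresholds are hypothetical stand-ins for whatever the real ScalingPoolExecutor does:

```python
# Toy executor sketching the described behaviour: spawn a thread only when no
# idle worker exists and the backlog warrants it, and retire each worker after
# a fixed number of completed tasks so its memory can be reclaimed.
import queue
import threading
from concurrent.futures import Future


class BacklogScalingExecutor:
    def __init__(self, max_workers=8, tasks_per_thread=100, backlog_per_thread=4):
        self._work = queue.Queue()
        self._lock = threading.Lock()
        self._threads = set()
        self._idle = 0                      # workers currently waiting for work
        self._max_workers = max_workers
        self._tasks_per_thread = tasks_per_thread
        self._backlog_per_thread = backlog_per_thread

    def submit(self, fn, *args, **kwargs):
        future = Future()
        self._work.put((future, fn, args, kwargs))
        self._maybe_spawn()
        return future

    def _maybe_spawn(self):
        with self._lock:
            if self._idle:                              # an idle worker will pick it up
                return
            if len(self._threads) >= self._max_workers:  # hard ceiling reached
                return
            # Once at least one thread exists, only grow when the backlog is deep enough.
            if self._threads and self._work.qsize() < self._backlog_per_thread:
                return
            thread = threading.Thread(target=self._worker, daemon=True)
            self._threads.add(thread)
            thread.start()

    def _worker(self):
        completed = 0
        while completed < self._tasks_per_thread:        # retire after N tasks
            with self._lock:
                self._idle += 1
            try:
                future, fn, args, kwargs = self._work.get(timeout=5)
            except queue.Empty:
                with self._lock:
                    self._idle -= 1
                break                                     # idle timeout: let the thread die
            with self._lock:
                self._idle -= 1
            if not future.set_running_or_notify_cancel():
                continue
            try:
                future.set_result(fn(*args, **kwargs))
            except BaseException as exc:
                future.set_exception(exc)
            completed += 1
        with self._lock:
            self._threads.discard(threading.current_thread())
```

Usage mirrors concurrent.futures: `executor.submit(func, arg)` returns a Future, but the pool only grows when submissions outpace idle workers, and long-lived threads are periodically replaced rather than kept around forever.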