Parallel Processing advice

Hi,

I have a Python application that uses threading to connect to a range of TCP endpoints in parallel. The application imports a list of, say, 100 IP addresses, splits the list into chunks of 10, then starts a thread for each chunk to poll those sockets sequentially. That way it gets the job done in a fraction of the time it would take to work through the whole list one socket at a time.
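Roughly what it does at the moment (a simplified sketch; the real script reads the IPs from a file, and the port and poll logic here are just placeholders):

```python
import socket
import threading

def poll_chunk(addresses, port=502, timeout=5):
    # Each thread works through its own chunk of addresses one by one
    for ip in addresses:
        try:
            with socket.create_connection((ip, port), timeout=timeout) as s:
                # ... send the poll request and read the reply here ...
                pass
        except OSError:
            pass  # host down / unreachable, just move on to the next one

def poll_all(addresses, chunk_size=10):
    # Split the full list into chunks and give each chunk its own thread
    chunks = [addresses[i:i + chunk_size] for i in range(0, len(addresses), chunk_size)]
    threads = [threading.Thread(target=poll_chunk, args=(chunk,)) for chunk in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```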

The problem is when the list gets really big, like over 10,000 sockets. At 10 per chunk that means a thousand or more threads, and things start to break down.

What's the best way of approaching this? Should I split the main list into smaller chunks and feed them into multiple instances of the application, so there would be, say, 10 instances, each splitting its own portion into threads? Or do I need a beefier machine and just run thousands of threads at once in a single application? Appreciate any advice!