How to make a Python generator execution asynchronous?
I have a piece of code that looks like this:
def generator():
    while True:
        result = very_long_computation()
        yield result

def caller():
    g = generator()
    for i in range(n):
        element = next(g)
        another_very_long_computation()
Basically, I'd like to overlap the execution of very_long_computation() and another_very_long_computation() as much as possible.

Is there a simple way to make the generator asynchronous? I'd like the generator to start computing the next iteration of the while loop right after result has been yielded, so that (ideally) the next result is ready to be yielded before the successive next() call in caller().
python asynchronous generator yield
asked Nov 8 at 18:46 – miditower (173)
FWIW, there's a very interesting YouTube video tutorial by David Beazley titled A Curious Course on Coroutines and Concurrency from PyCon 2009 which you would probably find very enlightening. – martineau, Nov 8 at 19:02

I'll check it out, thanks! – miditower, Nov 9 at 13:06
1 Answer
There is no simple way, especially since you've got very_long_computation and another_very_long_computation instead of very_slow_io. Even if you moved generator into its own thread, you'd be limited by CPython's global interpreter lock, preventing any performance benefit.
You could move the work into a worker process, but the multiprocessing module isn't the drop-in replacement for threading it likes to pretend to be. It's full of weird copy semantics, unintuitive restrictions, and platform-dependent behavior, as well as just having a lot of communication overhead.
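If you do want to try the process route anyway, here is a rough sketch (my addition, not from the original answer) of what it could look like with concurrent.futures.ProcessPoolExecutor. It keeps exactly one very_long_computation() in flight while the caller runs another_very_long_computation(); it assumes the names from the question, that n is defined, that each call to very_long_computation() is independent (the generator in the question carries no state between iterations), and that the function and its results are picklable:

from concurrent.futures import ProcessPoolExecutor

def caller():
    with ProcessPoolExecutor(max_workers=1) as pool:
        # Start the first computation in the worker process.
        future = pool.submit(very_long_computation)
        for i in range(n):
            element = future.result()  # wait for the pending result
            if i + 1 < n:
                # Kick off the next computation before doing our own work,
                # so it overlaps with another_very_long_computation().
                future = pool.submit(very_long_computation)
            another_very_long_computation()

Whether this wins anything depends on how the communication overhead compares to the cost of the computations themselves.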
If you've got I/O along with your computation, it's fairly simple to shove the generator's work into its own thread to at least get some work done during the I/O:
from queue import Queue
import threading

def worker(queue, n):
    gen = generator()
    for i in range(n):
        # Compute results ahead of the consumer and hand them over.
        queue.put(next(gen))

def caller():
    queue = Queue()
    # n is assumed to be defined in the enclosing scope, as in the question.
    worker_thread = threading.Thread(target=worker, args=(queue, n))
    worker_thread.start()
    for i in range(n):
        element = queue.get()  # blocks only if the worker hasn't produced a result yet
        another_very_long_computation()
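One detail worth noting (my addition, not part of the original answer): with an unbounded Queue the worker can run arbitrarily far ahead of the consumer and buffer every result in memory. If that is a concern, a bounded queue keeps it at most one result ahead, which is exactly the "compute the next item while the caller uses the current one" behaviour the question asks for:

queue = Queue(maxsize=1)  # worker blocks in put() while an unconsumed result is still waiting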
answered Nov 8 at 18:54, edited Nov 10 at 5:41 – user2357112 (148k)
Actually very_long_computation() is about 50% computation and 50% I/O. Do you think it is possible to overlap at least the I/O part? – miditower, Nov 9 at 13:05

@miditower: See the edit. – user2357112, Nov 10 at 5:41