Make pytest unit tests more robust #16
It looks like we may need to add a cleanup sequence which ensures that kernel workers get stopped. The idea seems to be for them to shut down when the kernel does, but that is not yet implemented. Currently they just keep running, blocked on the queue (see jupyter_server_nbmodel/handlers.py, lines 259 to 284 at ef8181c),
but it seems like in the test teardown the read from the queue happens beforehand, resulting in:

```
=================== 1 failed, 4 passed, 1 warning in 14.57s ====================
Deleting active ExecutionStack. Be sure to call `await ExecutionStack.dispose()`.
/home/runner/work/jupyter-server-nbmodel/jupyter-server-nbmodel/jupyter_server_nbmodel/handlers.py:340: RuntimeWarning: coroutine 'ExecutionStack.dispose' was never awaited
  self.dispose()
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Could not destroy zmq context for <jupyter_client.asynchronous.client.AsyncKernelClient object at 0x7f063efd5df0>
Task was destroyed but it is pending!
task: <Task pending name='Task-124' coro=<_kernel_worker() running at /home/runner/work/jupyter-server-nbmodel/jupyter-server-nbmodel/jupyter_server_nbmodel/handlers.py:272> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Exception ignored in: <coroutine object _kernel_worker at 0x7f063ceb5fe0>
Traceback (most recent call last):
  File "/home/runner/work/jupyter-server-nbmodel/jupyter-server-nbmodel/jupyter_server_nbmodel/handlers.py", line 301, in _kernel_worker
    raise to_raise
  File "/home/runner/work/jupyter-server-nbmodel/jupyter-server-nbmodel/jupyter_server_nbmodel/handlers.py", line 272, in _kernel_worker
    uid, snippet, metadata = await queue.get()
                             ^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/asyncio/queues.py", line 160, in get
    getter.cancel()  # Just in case getter is not done yet.
    ^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/asyncio/base_events.py", line 794, in call_soon
    self._check_closed()
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/asyncio/base_events.py", line 540, in _check_closed
```

I guess we could implement something like putting a special value on the queue (or just cancelling the worker, but I could not get it to work) when the kernel is shut down?
Currently the tests are flaky: they pass when run individually, but running them all together consistently fails.
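For the cancellation route mentioned above, one pattern that avoids the "Task was destroyed but it is pending!" warning is to cancel the worker task and await it while the loop is still alive, consuming the `CancelledError` in the teardown itself. This is a sketch with stand-in names (`_kernel_worker` here is a stub, not the project's function):

```python
import asyncio

async def _kernel_worker(queue: asyncio.Queue) -> None:
    # Stub standing in for the real worker: blocks on the queue forever.
    while True:
        await queue.get()
        queue.task_done()

async def shutdown_demo() -> str:
    queue: asyncio.Queue = asyncio.Queue()
    task = asyncio.create_task(_kernel_worker(queue))
    await asyncio.sleep(0)  # let the worker start waiting on queue.get()
    # Cancel inside the running loop, then await the task so the
    # CancelledError is unwound here rather than at interpreter shutdown.
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        return "worker cancelled cleanly"
    return "worker exited"

print(asyncio.run(shutdown_demo()))  # → worker cancelled cleanly
```

If an `ExecutionStack.dispose()`-style coroutine did this for every worker, the test fixture could simply `await` it during teardown, which would also silence the "coroutine 'ExecutionStack.dispose' was never awaited" warning.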