# Testing JupyterHub and linting code
Unit tests help to validate that JupyterHub works the way we think it does, and continues to do so when changes occur. They also help communicate precisely what we expect our code to do.
## Running the tests
Make sure you have completed Setting up a development install. Once you are done, you should be able to run `jupyterhub` from the command line and access it from your web browser. This ensures that the dev environment is properly set up for tests to run.
You can run all tests in JupyterHub with:

```
pytest -v jupyterhub/tests
```
This should display progress as it runs all the tests, printing information about any test failures as they occur.
If you wish to confirm test coverage, run the tests with the `--cov` flag:

```
pytest -v --cov=jupyterhub jupyterhub/tests
```
You can also run tests in just a specific file:

```
pytest -v jupyterhub/tests/<test-file-name>
```
To run a specific test only, you can do:

```
pytest -v jupyterhub/tests/<test-file-name>::<test-name>
```

This runs the test with function name `<test-name>`. This is very useful when you are iteratively developing a single test.
For example, to run the test `test_shutdown` in the file `test_api.py`, you would run:

```
pytest -v jupyterhub/tests/test_api.py::test_shutdown
```
For more details, refer to the pytest usage documentation.
The tests live in `jupyterhub/tests` and are organized roughly into:

- `test_api.py` tests the REST API
- `test_pages.py` tests loading the HTML pages
and other collections of tests for different components. When writing a new test, there should usually be a test of similar functionality already written and related tests should be added nearby.
The fixtures live in `jupyterhub/tests/conftest.py`. There are fixtures that can be used for JupyterHub components, such as:

- `app`: an instance of JupyterHub with mocked parts
- `auth_state_enabled`: enables persisting auth_state (like authentication tokens)
- `db`: a sqlite in-memory DB session
- `io_loop`: a Tornado event loop
- `event_loop`: a new asyncio event loop
- `user`: creates a new temporary user
- `admin_user`: creates a new temporary admin user
- single user servers:
  - `cleanup_after`: allows cleanup of single user servers between tests
- mocked service:
  - `MockServiceSpawner`: a spawner that mocks services for testing with a short poll interval
  - `mockservice`: mocked service with no external service url
  - `mockservice_url`: mocked service with a url to test external services
And fixtures to add functionality or spawning behavior:

- `admin_access`: grants admin access
- `no_patience`: sets slow-spawning timeouts to zero
- `slow_spawn`: enables the SlowSpawner (a spawner that takes a few seconds to start)
- `never_spawn`: enables the NeverSpawner (a spawner that will never start)
- `bad_spawn`: enables the BadSpawner (a spawner that fails immediately)
- `slow_bad_spawn`: enables the SlowBadSpawner (a spawner that fails after a short delay)
Refer to the pytest fixtures documentation to learn how to use existing fixtures and to create new ones.
## The Pytest-Asyncio Plugin
When testing the various JupyterHub components and their implementations, it sometimes becomes necessary to have a running instance of JupyterHub to test against. The `app` fixture mocks a JupyterHub application for use in testing by:
- enabling ssl if internal certificates are available
- creating an instance of MockHub using any provided configurations as arguments
- initializing the mocked instance
- starting the mocked instance
- finally, a registered finalizer function performs a cleanup and stops the mocked instance
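The steps above can be sketched as a setup/teardown sequence. `FakeHub` below is a stand-in class so the example runs on its own; the real fixture works with MockHub, whose constructor arguments and internals live in the test suite.

```python
import asyncio

# FakeHub is a stand-in for MockHub so this sketch is self-contained;
# the real `app` fixture creates, initializes, and starts a MockHub.
class FakeHub:
    def __init__(self, config=None):
        self.config = config or {}
        self.running = False

    async def initialize(self):
        pass  # the real MockHub loads configuration here

    async def start(self):
        self.running = True

    async def stop(self):
        self.running = False

async def app_fixture_lifecycle():
    hub = FakeHub()
    await hub.initialize()  # initialize the mocked instance
    await hub.start()       # start it for the duration of the tests
    try:
        assert hub.running  # tests would run against the live instance here
    finally:
        await hub.stop()    # finalizer-style cleanup after the tests
    return hub.running

print(asyncio.run(app_fixture_lifecycle()))  # prints False after cleanup
```

The `try`/`finally` mirrors the registered finalizer: cleanup runs even if a test body fails.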
The JupyterHub test suite uses the pytest-asyncio plugin, which handles event-loop integration in Tornado applications. This allows for the use of top-level awaits when calling async functions or fixtures during testing. All test functions and fixtures labelled as `async` will run on the same event loop.
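A minimal, self-contained illustration of the style this enables; here `asyncio.run` stands in for the plugin's event-loop handling, and the test body is hypothetical:

```python
import asyncio

# With pytest-asyncio, a test declared `async def` can await directly in
# its body; the plugin runs it on the shared event loop. asyncio.run
# stands in for the plugin here so the sketch runs on its own.
async def test_fetch_status():
    await asyncio.sleep(0)  # stands in for awaiting a Hub API call
    return 200

print(asyncio.run(test_fetch_status()))  # prints 200
```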
With the introduction of top-level awaits, the use of the `io_loop` fixture of the pytest-tornado plugin is no longer necessary. It was initially used to call coroutines. With the upgrades made to pytest-asyncio, this usage is now deprecated. It is now only utilized within the JupyterHub test suite to ensure complete cleanup of resources used during testing, such as open file descriptors. This is demonstrated in this pull request.
More information is provided below.
One of the general goals of the JupyterHub Pytest Plugin project is to ensure that the MockHub cleanup fully closes and stops all resources utilized during testing, so that the use of the `io_loop` fixture for teardown is not necessary. This was highlighted in this issue.
For more information on asyncio and event-loops, here are some resources:
## Troubleshooting Test Failures
### All the tests are failing
Make sure you have completed all the steps in Setting up a development install successfully, and are able to access JupyterHub from your browser at http://localhost:8000 after starting `jupyterhub` in your command line.
## Code formatting and linting
JupyterHub automatically enforces code formatting. This means that pull requests with changes breaking this formatting will receive a commit from pre-commit.ci automatically.
To automatically format code locally, you can install pre-commit and register a git hook that checks formatting with pre-commit before each commit:

```
pip install pre-commit
pre-commit install --install-hooks
```
To run pre-commit manually you would do:

```
# check for changes to code not yet committed
pre-commit run

# check for changes also in already committed code
pre-commit run --all-files
```
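For reference, pre-commit is driven by a `.pre-commit-config.yaml` file at the repository root that lists hook repositories. The fragment below is only an illustrative shape using black as an example hook; JupyterHub's actual configuration defines its own set of hooks and pinned versions.

```
# illustrative .pre-commit-config.yaml fragment (not JupyterHub's actual config);
# `rev` must be pinned to a released tag of the hook repository
repos:
  - repo: https://github.com/psf/black
    rev: "24.4.2"  # example pin; use a current released black tag
    hooks:
      - id: black
```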
You may also install black integration into your text editor to format code automatically.