
test_util: Add test for download_awacy_gamelist #535

Merged
DavidoTek merged 4 commits into DavidoTek:main from sonic2kk:test-download-awacy-gamelist on Jul 25, 2025

Conversation

@sonic2kk (Contributor) commented Apr 12, 2025

Depends on #529, because we need to add pyfakefs and pytest-mock to tests/requirements.txt (or whatever solution we land on) so that CI has access to these PyTest plugins. Since this test also makes use of responses, which was introduced in that PR, we need access to it as well.

CI will fail on this PR because we don't have access to those plugins.

Overview

This PR adds a test for the download_awacy_gamelist utility function.

To do this, we mock the response from a GET request to our AWACY_GAME_LIST_URL constant. Then we mock out is_online to always return True. Finally, we ensure the text written into the file by download_awacy_gamelist() matches the response we get back from our mocked GET request.

The purpose of this test is to ensure that we create a real file at the LOCAL_AWACY_GAME_LIST constant path, and that its contents match the mocked response as we would expect.

This PR is opened as a draft because it depends on a PR that is not yet merged (whose outcome may impact this one), and because other test cases should be added before I take it out of draft. However, I wanted to get this PR up as a proof of concept for how our test suite can evolve and begin to cover more complex test cases and functions.

Implementation

Two new test dependencies were introduced to test this function: pyfakefs, which mocks calls to the filesystem, and pytest-mock, which allows us to stub out functions ourselves. pyfakefs is particularly cool, as it allows all filesystem operations (with some notable exceptions, such as Pathlib) to be done in an in-memory filesystem. This means no real filesystem operations are performed except where we explicitly tell pyfakefs to use the real filesystem.

In PyTest, we can use the fs fixture to access the pyfakefs fake filesystem, and the mocker fixture to access mocking operations from pytest-mock. Since these are fixtures, by default they are per-test, meaning any fake files created or any functions mocked only apply within the scope of the function being run. So if we mock a function in our test_download_awacy_gamelist, it should not interfere with any other tests that depend on the "real" function. Likewise with pyfakefs, any fake files created only exist within the scope of the function that uses the fs fixture (if we ever needed other functionality, pyfakefs provides fixtures with other scopes).
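
As a minimal sketch of that per-test scoping (the patched function and paths here are purely illustrative, not from the PR):

```python
import os

def test_fixture_scoping(fs, mocker):
    # 'fs' is the pyfakefs in-memory filesystem: this file never touches
    # the real disk and vanishes when the test ends.
    fs.create_file('/tmp/scoped_example.txt', contents='only exists here')
    assert os.path.exists('/tmp/scoped_example.txt')

    # 'mocker' patches are undone automatically after the test, so other
    # tests see the real function again.
    mocker.patch('os.path.expanduser', return_value='/home/testuser')
    assert os.path.expanduser('~') == '/home/testuser'
```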

A new JSON file was created at tests/fixtures/util/awacy_game_list.json. This is the first ten objects from AWACY_GAME_LIST_URL stored in a local JSON file. In order to use this in a test, I set it up as a PyTest Fixture, meaning it is set up before and torn down after each test, and is available in our test function as a parameter. This fixture returns the File object. Since we're using pyfakefs, we have to tell it to load the real JSON file, otherwise it'll try to load it from its fake filesystem. To do this, we mount it with fs.add_real_file. Note that fs is one of the fixtures available to us from pyfakefs for use with PyTest.
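
A sketch of roughly what that fixture looks like (the exact body in the PR may differ slightly):

```python
import pytest

AWACY_FIXTURE_PATH = 'tests/fixtures/util/awacy_game_list.json'

@pytest.fixture
def awacy_game_list(fs):
    # Mount the real JSON file into the fake filesystem; without this,
    # open() would look for it in pyfakefs' empty in-memory filesystem.
    fs.add_real_file(AWACY_FIXTURE_PATH)
    with open(AWACY_FIXTURE_PATH, encoding='utf-8') as game_list_file:
        # Yielding inside the 'with' block means the file is opened for
        # each test and closed again during teardown.
        yield game_list_file
```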

We then mock the response from our GET request to AWACY_GAME_LIST_URL. This response is generated from the contents of the awacy_game_list fixture described above.
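
For illustration, this is how a mocked GET response is registered with responses (the URL and body below are placeholders; the real test registers AWACY_GAME_LIST_URL with the fixture contents):

```python
import requests
import responses

@responses.activate
def test_mocked_get():
    # Register a canned 200 response; 'responses' intercepts any
    # requests.get() to this URL instead of hitting the network.
    responses.add(
        responses.GET,
        'https://example.com/games.json',
        body='[{"title": "Some Game"}]',
        status=200,
    )

    resp = requests.get('https://example.com/games.json')
    assert resp.text == '[{"title": "Some Game"}]'
```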

Before calling download_awacy_gamelist, we need to mock our is_online function. For the happy path, we want it to always return True, so we mock it to do just that, ensuring download_awacy_gamelist actually runs when we call it.
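
The patch itself is a one-liner with pytest-mock; note the module path below is my assumption about where is_online lives, not something taken from the PR:

```python
def test_happy_path(mocker):
    # Patch is_online where download_awacy_gamelist looks it up, keeping a
    # handle so we can assert on the call count later.
    mock_is_online = mocker.patch('pupgui2.util.is_online', return_value=True)
```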

Now we can call download_awacy_gamelist! One change I chose to make inside download_awacy_gamelist was to name the thread it creates to call the inner function that actually makes the request and writes the response content to the AWACY JSON file. I did this to make it easier to identify and wait on this thread before continuing with the test. We need to ensure this thread finishes before we check the file contents, otherwise the test runs so fast that our asserts fire before the file has even been created. While we could have watched the thread count and waited until it dropped to 1, we really only care about this specific thread finishing, and the simplest approach was to name it and identify it in a for loop in our test. This lets us say "wait for the thread named _download_awacy_gamelist to finish before continuing with our test".
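
The wait loop itself is short; a sketch, assuming the thread is named _download_awacy_gamelist:

```python
import threading

def wait_for_download_thread(timeout: float = 5.0) -> None:
    # Scan all live threads for our named worker and block until it
    # finishes (or the timeout elapses) before asserting anything.
    for thread in threading.enumerate():
        if thread.name == '_download_awacy_gamelist':
            thread.join(timeout)
```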

Once the thread has ended, we can read the file from our fake filesystem. Since we're using pyfakefs, which mocks our standard file IO operations, LOCAL_AWACY_GAME_LIST will exist in our fake filesystem (pyfakefs always uses the fake filesystem by default and can only access real files if we explicitly make it aware of real files/directories/paths, as we did for our awacy_game_list fixture). We read the content of this file into a variable because later we will want to make sure it matches the response our mocked GET request returned; we want to confirm the content written is our expected value (the mocked response body).

Finally, we can do our assertions (a sketch tying all the steps together follows this list):

  • The LOCAL_AWACY_GAME_LIST should exist.
  • The content of this file should match the body of our get_mock (the data written to the file should be the content of the response body).
  • is_online should have been called once, to ensure we don't make the request if we aren't online (since is_online will catch timeout and connection errors for us).
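
Putting the pieces together, the overall shape of the test is roughly as follows. The import paths (pupgui2.constants, pupgui2.util) are my assumptions for illustration, not copied from the PR:

```python
import os
import threading

import responses

# Assumed import locations for the real constants and function under test.
from pupgui2.constants import AWACY_GAME_LIST_URL, LOCAL_AWACY_GAME_LIST
from pupgui2.util import download_awacy_gamelist

@responses.activate
def test_download_awacy_gamelist(fs, mocker, awacy_game_list):
    expected_body = awacy_game_list.read()
    get_mock = responses.add(
        responses.GET, AWACY_GAME_LIST_URL, body=expected_body, status=200
    )
    mock_is_online = mocker.patch('pupgui2.util.is_online', return_value=True)

    download_awacy_gamelist()

    # Wait for the named worker thread to finish writing the file.
    for thread in threading.enumerate():
        if thread.name == '_download_awacy_gamelist':
            thread.join(5.0)

    with open(LOCAL_AWACY_GAME_LIST, encoding='utf-8') as f:
        written_content = f.read()

    assert os.path.isfile(LOCAL_AWACY_GAME_LIST)
    assert written_content == expected_body
    assert get_mock.call_count == 1
    mock_is_online.assert_called_once()
```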

Concerns

I think using a "real" JSON file that matches an expected response is a good idea. We may have to keep it in sync, but running our tests against "real" data in a local file should give us confidence that they are performing as expected when given the data we expect. We can do similar stuff with other responses, like GitHub API responses.

My concern, however, is the location, structure, and naming that I've gone with. From what I've seen, fixtures tends to be a generic directory name for these kinds of files, but I'm happy with any other directory name that might be preferred. Similarly, would we want this to be in a utils folder, a responses folder, or a utils/responses or maybe even responses/utils folder? Essentially, I think having this file and other files like it for our tests is a good idea, but I don't know what the best structure for storing it is. 😅


Since we do a lot of filesystem work in ProtonUp-Qt, I think pyfakefs is a very useful tool for us to use in our tests, and hopefully this PR outlines a way that we can use it to begin writing some neat tests!

All feedback is welcome! And if I didn't explain any of the libraries or how we use them properly, I'm happy to clarify. Also, I spent a lot of time on and off bashing my head against the wall trying to figure out a pattern for how to begin writing tests like this. I finally figured it out a couple of days ago and got to work expanding the tests we have for ProtonUp-Qt, hence the massive number of test PRs. It took me a while to get up to speed, so if I've done anything wrong, or if there's something I can explain better, please ask!

Thanks!

@sonic2kk force-pushed the test-download-awacy-gamelist branch from 51b1ee5 to 8ed914f on May 7, 2025 16:58
@sonic2kk marked this pull request as ready for review on May 7, 2025 17:02
@sonic2kk (Contributor, Author) commented May 7, 2025

Now that #529 is merged, I have rebased and added pytest-mock and pyfakefs to tests/requirements.txt. This should allow these tests to pass.

Since opening this PR I did a fresh install, so I could not initially run the tests on this branch locally. I added those two dependencies to tests/requirements.txt, installed them into my venv with pip install -r requirements.txt, and ran pytest tests/. It went from complaining about failed imports to passing.

Aside from a rebase and adding the dependencies to the now-available (and used in CI) tests/requirements.txt, there have been no changes since this was taken out of draft. It is now ready for review :-)

@DavidoTek DavidoTek merged commit 16c3594 into DavidoTek:main Jul 25, 2025
2 checks passed