test_util: Add test for download_awacy_gamelist #535
Merged: DavidoTek merged 4 commits into DavidoTek:main on Jul 25, 2025
Conversation
Force-pushed from 51b1ee5 to 8ed914f
Now that #529 is merged, I have rebased. Since opening this PR I did a fresh install, so I could not run the tests on this branch locally; I added those two dependencies to `tests/requirements.txt`. Aside from the rebase and adding the dependencies to the now-available (and used in CI) `tests/requirements.txt`, nothing else has changed.
Depends on #529, because we need to add `pyfakefs` and `pytest-mock` to `tests/requirements.txt` (or whatever solution we land on) so that CI can have access to these PyTest plugins. Since this test also makes use of `responses`, we also need access to that as it was implemented in that PR. CI will fail on this PR because we don't have access to those plugins.
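Concretely, the test requirements file would likely grow to something like the following (assumed contents; the exact file and pinning depend on the solution landed in #529):

```text
# tests/requirements.txt — assumed contents, pending the outcome of #529
pytest
responses
pyfakefs
pytest-mock
```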
Overview
This PR adds a test for the `download_awacy_gamelist` utility function.

To do this, we mock the response from a GET request to our `AWACY_GAME_LIST_URL` constant. Then we mock out `is_online` to always return `True`. Finally, we ensure the text written into the file by `download_awacy_gamelist()` matches the response we get back from our mocked GET request.

The purpose of this test is to ensure that we create a real file at the `LOCAL_AWACY_GAME_LIST` constant path, and that its contents match the response the way we would expect.

This PR is opened as a draft because it depends on a PR that is not yet merged (and the outcome of that PR may impact this one), and because other test cases should be added before I take this out of draft. However, I wanted to get this PR up as a proof of concept for how our test suite can evolve and begin to cover more complex test cases and functions.
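The behaviour being verified can be sketched with only the standard library. This is not the real implementation: the real code uses module-level functions plus pytest, `responses`, and `pyfakefs`, while here a small class and a hypothetical `fetch()` helper stand in so that `mock.patch.object` suffices and no third-party packages are needed.

```python
import json
import tempfile
from pathlib import Path
from unittest import mock

class AwacyClient:
    """Stand-in for the real module-level functions, for illustration only."""

    AWACY_GAME_LIST_URL = "https://example.invalid/games.json"  # placeholder

    def is_online(self) -> bool:
        raise RuntimeError("no network checks in tests")

    def fetch(self, url: str) -> str:
        raise RuntimeError("no network access in tests")

    def download_awacy_gamelist(self, dest: Path) -> None:
        # Simplified: skip the download entirely when offline.
        if not self.is_online():
            return
        dest.write_text(self.fetch(self.AWACY_GAME_LIST_URL))

expected = json.dumps([{"name": "Some Game", "status": "gold"}])
client = AwacyClient()

# Happy path: pretend we are online and stub the network fetch.
with tempfile.TemporaryDirectory() as tmp:
    target = Path(tmp) / "awacy_game_list.json"
    with mock.patch.object(client, "is_online", return_value=True) as online, \
         mock.patch.object(client, "fetch", return_value=expected):
        client.download_awacy_gamelist(target)
    written = target.read_text()

assert written == expected   # file contents match the mocked response
online.assert_called_once()  # the online check gated the download

# Offline path: the download is skipped and no file is created.
with tempfile.TemporaryDirectory() as tmp:
    target = Path(tmp) / "awacy_game_list.json"
    with mock.patch.object(client, "is_online", return_value=False):
        client.download_awacy_gamelist(target)
    assert not target.exists()
```

The same three checks drive the real test: the file exists, its contents match the mocked response, and `is_online` was consulted exactly once.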
Implementation
Two new test dependencies were introduced in order to test this function: `pyfakefs`, which mocks calls to the filesystem, and `pytest-mock`, which allows us to stub out functions ourselves.

`pyfakefs` is particularly cool, as it allows all filesystem operations (with some notable exceptions, such as `Pathlib`) to be done in an in-memory filesystem. This means no real filesystem operations are performed except where we explicitly tell `pyfakefs` to use the real filesystem.

In PyTest, we can use the
`fs` fixture to access the `pyfakefs` fake filesystem, and the `mocker` fixture to access mocking operations from `pytest-mock`. Since these are fixtures, by default they are per-test, meaning any fake files created or any functions mocked only apply within the scope of the function being run. So if we mock a function in our `test_download_awacy_gamelist`, it should not interfere with any other tests that depend on the "real" function. Likewise with `pyfakefs`, any fake files created only exist within the scope of the function that uses the `fs` fixture (if we ever need other functionality, `pyfakefs` provides fixtures with other scopes).

A new JSON file was created at
`tests/fixtures/util/awacy_game_list.json`. This is the first ten objects from `AWACY_GAME_LIST_URL`, stored in a local JSON file. In order to use this in a test, I set it up as a PyTest fixture, meaning that it is set up before and torn down after each call, and is available in our test function as a parameter. This fixture returns the file object. Since we're using `pyfakefs`, we have to tell it to load the real JSON file, otherwise it will try to load it from its fake filesystem. To do this, we mount it with `fs.add_real_file`. Note that `fs` is one of the fixtures `pyfakefs` provides for use with PyTest.

Next, we mock the response from our GET request to
`AWACY_GAME_LIST_URL`. This response is generated based on the contents of the `awacy_game_list` fixture described above.

Before calling
`download_awacy_gamelist`, we need to mock our `is_online` function. For the happy path, we want to make sure it always returns `True`, so we mock it to do just that, ensuring that `download_awacy_gamelist` actually runs when we call it.

Now we can call
`download_awacy_gamelist`! One change I chose to make inside `download_awacy_gamelist` was to name the thread we create to call the inner function that actually makes the request and writes the response content to the AWACY JSON file. I did this to make it easier to identify and wait on this thread finishing before continuing with the test. We need to ensure that this thread finishes before we check the file contents; otherwise our test will run so fast that our `assert`s run before the file has even been created. While we could have watched the thread count and waited until it dropped to 1, we really only care about this specific thread finishing, and the simplest approach was to name it and find it in a `for` loop in our test. This lets us say "wait for the thread named `_download_awacy_gamelist` to finish before continuing with our test".

Once the thread has ended, we can read the file from our fake filesystem. Since we're using
`pyfakefs`, which mocks our standard file IO operations, `LOCAL_AWACY_GAME_LIST` will exist in our fake filesystem (`pyfakefs` always uses the fake filesystem by default, and can only access real files if we explicitly tell it about real files/directories/paths, as we did for our `awacy_game_list` fixture). We read the content of this file into a variable, because later on we want to make sure it matches the response our mocked GET request returned; we want to be sure the content written matches our expected value (the mocked response).

Finally, we can do our assertions:
- `LOCAL_AWACY_GAME_LIST` should exist.
- The data written to the file should be the content of the mocked response body (from `get_mock`).
- `is_online` should have been called once, to ensure we don't make the request if we aren't online (since `is_online` will catch timeout and connection errors for us).

Concerns
I think using a "real" JSON file that matches an expected response is a good idea. We may have to keep it in sync, but running our tests against "real" data in a local file should give us confidence that they are performing as expected when given the data we expect. We can do similar stuff with other responses, like GitHub API responses.
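For reference, the fixture that mounts that real JSON file can be sketched as follows. The path and fixture name match the ones described above, but in the real suite the function would carry the `@pytest.fixture` decorator, omitted here so the sketch has no pytest dependency; `fs` is the pyfakefs fixture.

```python
from pathlib import Path

# Fixture file with the first ten objects from the AWACY game list.
FIXTURE_PATH = Path("tests/fixtures/util/awacy_game_list.json")

# In the real suite: @pytest.fixture
def awacy_game_list(fs):
    # pyfakefs starts with an empty fake filesystem, so the real JSON file
    # must be mounted explicitly before it can be opened.
    fs.add_real_file(FIXTURE_PATH)
    with open(FIXTURE_PATH) as f:
        yield f  # the test receives an open file object; closed on teardown
```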
My concern, however, is the location, structure, and naming that I've gone with. From what I've seen,
`fixtures` tends to be a generic directory name for these kinds of files, but I'm happy with any other directory name that might be preferred. Similarly, would we want this to be in a `utils` folder, a `responses` folder, a `utils/responses` folder, or maybe even a `responses/utils` folder? Essentially, I think having this file, and other files like it, for our tests is a good idea, but I don't know the best structure for storing them. 😅

Since we do a lot of filesystem work in ProtonUp-Qt, I think
`pyfakefs` is a very useful tool for us to use in our tests, and hopefully this PR outlines a way we can use it to begin writing some neat tests!

All feedback is welcome! And if I didn't explain any of the libraries or how we use them properly, I'm happy to clarify. Also, I spent a lot of time on and off bashing my head against the wall trying to figure out a pattern for writing tests like this. I finally figured it out a couple of days ago and got to work expanding the tests we have for ProtonUp-Qt, hence the massive number of PRs for tests. But it took me a while to get up to speed, so if I've done anything wrong, or if there's something I can explain better, please ask!
Thanks!