iotools tests fail when unable to connect to remote API #917

Closed
CameronTStark opened this issue Feb 27, 2020 · 3 comments · Fixed by #919

CameronTStark (Contributor) commented Feb 27, 2020

Describe the bug
Network-dependent iotools tests sometimes fail because the remote API server fails to handle the request. These failures undermine the consistency of, and confidence in, our CI test results.

To Reproduce

  • This is difficult to reproduce since API failures are sporadic.
  • An example of our CI failure can be found here.

Expected behavior
There are two important functionalities that are desirable:

  1. Ideally, a test that fails because of an API server failure would be retried, in the hope that communication can be established.

  2. In the event that communication cannot be established during the test, it should be conditionally skipped with a warning that the test wasn't properly run.

Additional context

This was originally identified by @wholmgren in #897 (comment).

Proposed solution
Implement @pytest.mark.flaky() for all remote-API-dependent tests so they are reattempted if the API is temporarily unreachable. This is already done for tests in test_psm3.py decorated with @pytest.mark.remote_data; a minimal sketch of the pattern is shown below.
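
For reference, here is what that marker usage could look like, assuming the pytest-rerunfailures plugin (which provides the flaky marker) is installed; the endpoint URL and retry counts are placeholders, not pvlib's actual values:

```python
import pytest
import requests


@pytest.mark.remote_data
@pytest.mark.flaky(reruns=5, reruns_delay=10)
def test_remote_api_fetch():
    # If the request raises or the assertion fails, pytest-rerunfailures
    # reruns the test up to ``reruns`` times, waiting ``reruns_delay``
    # seconds between attempts, before reporting a failure.
    response = requests.get('https://developer.nrel.gov/api/example-endpoint',
                            timeout=30)
    assert response.status_code == 200
```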

I'm going to look for pytest functionality that enables a conditional skip in the event the API server is completely unavailable for the duration of the test. If you have a suggestion on how to accomplish this, please let me know.
@pytest.mark.skipif() should work well for this application, but the trick will be the implementation for each network-dependent API; one possible shape is sketched below.
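
A minimal sketch of the conditional skip, using a hypothetical reachability helper (the helper name and probe URL are placeholders, not existing pvlib code):

```python
import pytest
import requests


def _api_is_reachable(url, timeout=5):
    """Return True if the remote API answers at all (hypothetical helper)."""
    try:
        requests.head(url, timeout=timeout)
        return True
    except requests.exceptions.RequestException:
        return False


# Evaluated once at collection time; the URL is a placeholder.
PSM3_REACHABLE = _api_is_reachable('https://developer.nrel.gov')


@pytest.mark.skipif(not PSM3_REACHABLE,
                    reason='remote API unreachable; test was not actually run')
def test_remote_api_dependent_behavior():
    ...
```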

CameronTStark (Contributor, Author) commented:

It might be ideal for this PR to be merged as-is after some final polish. The implementation of @pytest.mark.flaky() currently in this PR can be a quick win for 0.7.2 in terms of fixing sporadic server dropouts during tests.

The @pytest.mark.skipif() or mock solutions would take me significantly more time to figure out, and they technically solve a separate issue: a complete dropout of the external API server.

What do you think @wholmgren, @mikofski, @cwhanse?

mikofski (Member) commented Mar 1, 2020

It's fine, but pvlib was not the only one affected by NREL's PSM3 failure. I think we need to coordinate with NREL to get some public accounting for what happened.

wholmgren (Member) commented:

Ideally, a failure due to API server failure would be repeated with the hope that communication can be established... The implementation of @pytest.mark.flaky() currently in this PR can be a quick win for 0.7.2 in terms of fixing sporadic dropouts in server dropouts during tests.

I agree.

In the event that communication cannot be established during the test, it should be conditionally skipped with a warning that the test wasn't properly run.

I disagree. If GitHub shows all green then I don't look at the CI results, so to me it defeats the purpose of running the test at all.

I'm all for using mock to further minimize the reliance on the network code, but that will take more effort to do right. We should start by ensuring that all of the API-testing code is decoupled from the parser-testing code (as some recent PRs have addressed).
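
A rough sketch of that mock approach: the fetch_csv helper and the canned payload below are hypothetical stand-ins for a network-facing iotools reader, not pvlib code; only the unittest.mock pattern is standard library.

```python
import io
from unittest import mock

import pandas as pd
import requests


def fetch_csv(url):
    """Hypothetical network-facing helper: request a CSV and parse it."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return pd.read_csv(io.StringIO(response.text))


def test_fetch_csv_without_network():
    # Canned response object; Mock auto-creates raise_for_status() as a no-op.
    canned = mock.Mock(status_code=200,
                       text='timestamp,ghi\n2020-01-01 00:00,0\n')
    # Patch requests.get so the parsing logic is exercised against the canned
    # payload and no connection to the remote API is ever attempted.
    with mock.patch('requests.get', return_value=canned):
        data = fetch_csv('https://example.com/fake.csv')
    assert list(data.columns) == ['timestamp', 'ghi']
```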
