Conversation
I'll leave a question here @znichollscr. I built some tests for the post-processing and they seem to work fine. Now I was trying to build a test that runs the whole …
Correct
Yep. Option b, which isn't a bad one, so it's up to you which you prefer: we've tested all the individual pieces, so all we're really doing by plumbing them all together is checking that the plumbing works. We could instead write a test that runs the pipeline once and uses https://pytest-regressions.readthedocs.io/en/latest to save the output. We then assume that output is correct (because we've tested all the individual pieces), but we now have a test that a) makes sure all the pieces work together and b) ensures that, if we break anything in the pipeline in future, we'll get notified about it.
Should we merge this one?
You'll need to fix up the merge conflicts now that we've merged #52. Also, what are your thoughts on this comment? #49 (comment) But yes, overall happy to merge if you need this in quickly.
Description
Added post-processing in line with the CMIP7 ScenarioMIP workflow.
Checklist
Please confirm that this pull request has done the following:
- changelog/