Bill Deegan
2018-02-19 20:24:28 UTC
Ignoring failures in the CI environment would negate the value of the tests.
Ideally, everything we expect to pass does pass, the build is marked
successful, and we don't need to dig into the details of each build's tests.
Otherwise you have to go look at each build * N (where N = platforms *
python versions) to know whether the changes introduced any new bugs.
That said, since we see that single test fail so often, I'd be o.k. with
letting the build pass even if it fails, and skipping the attempt to run it
10 times. Just run it singly and then run the rest: ignore the return value
for the single test, but let the full test run (excluding jobstests.py)
define a passing or failing build.
And of course, even more ideally, we'd figure out why that test fails in
that environment and fix it, but it may not be worth the time at this point.
Actually, there are several that are passing in the build history after the
patch.
The issue for the ones that are failing is that JobsTest.py fails
intermittently. I tried a workaround that retried the test 10 times, but
sometimes it just fails continually.
I think it's because of the VM environment that Travis uses.
I was thinking that, instead of trying to work around it, we could ignore
its failures while in the CI environments.
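The retry workaround mentioned above amounts to something like this
(a hypothetical sketch; 'run_flaky' and the placeholder command are mine,
not the actual patch):

```shell
#!/bin/bash
# Retry a command up to 10 times, succeeding as soon as any attempt passes.
run_flaky() {
  for attempt in $(seq 1 10); do
    if "$@"; then
      echo "passed on attempt $attempt"
      return 0
    fi
  done
  echo "failed all 10 attempts"
  return 1
}

run_flaky true   # 'true' stands in for the flaky JobsTest.py invocation
```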
https://travis-ci.org/SCons/scons/jobs/343424806
(or any build since we put in the patch that bumps the pickle version up in
order to fix the py3.* builds)
The double bar (||) means: execute the next command on a non-zero exit from
the previous command. So on an exit of 1 or 2, it checks whether the code
is 2, which means there were only "no result" tests and is considered
passing. An exit code of 1 fails the build.
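That logic can be demonstrated with a small bash wrapper (the 'check'
function is illustrative, not from the actual .travis.yml):

```shell
#!/bin/bash
# Treat exit 2 ("no results") as success and exit 1 as a real failure,
# using the same '|| if [[ $? == 2 ]]' pattern as the .travis.yml line.
check() { "$@" || if [[ $? == 2 ]]; then true; else false; fi; }

check bash -c 'exit 0'; echo "exit 0 -> $?"   # passes through: 0
check bash -c 'exit 2'; echo "exit 2 -> $?"   # "no results": treated as 0
check bash -c 'exit 1'; echo "exit 1 -> $?"   # real failure: 1
```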
Do you have a specific travis build in question?
Daniel,
I think it's this line in the .travis.yml:
- python runtest.py -a --exclude-list exclude_jobtest || if [[ $? == 2 ]]; then true; else false; fi
What does that do?
$ echo "BILL"
BILL
$ echo $?
0
$ if [[ $? == 2 ]]; then true; else false; fi
$ echo $?
1
-Bill
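For what it's worth, the transcript above can be reproduced as a script.
It shows that $? always holds the exit status of the most recent command,
so the 'echo $?' itself resets it to 0 before the if runs:

```shell
#!/bin/bash
echo "BILL"   # exits 0
echo "$?"     # prints 0; this echo also exits with 0
if [[ $? == 2 ]]; then true; else false; fi   # $? is 0, not 2 -> runs false
echo "$?"     # prints 1, the exit status of 'false'
```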