I want a way to accumulate test result files across pipelines, and then have a last pipeline read them and analyse them, upload them to an analysis service, or do whatever else with them.
I have tried using drone-s3-cache with this workflow:
1. init-pipeline - creates a unique folder like `commit-abc1234567890` in the cache, using the commit id of the PR, then rebuild-flushes the cache back to the S3 storage.
2. multiple test pipelines, running at the same time on different agents - each gets the cache, runs its test(s), and puts its test result file (with a unique name) into `commit-abc1234567890`, so that there is, for example, `commit-abc1234567890/test-results-nnn.xml` in the cache, then rebuild-flushes the cache back to the S3 storage (the cache steps are sketched below).
3. after all test pipelines are finished, run a last pipeline - it gets the cache and sees that all the `commit-abc1234567890/test-results-*.xml` files are there (then does some processing of them).
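
For context, the cache handling in each test pipeline looks roughly like this. This is a sketch, not my exact config: the endpoint, secrets, image name and the `test-results-nnn.xml` numbering are placeholders, and the plugin settings follow the drone-s3-cache README as I understand it.

```yaml
kind: pipeline
type: docker
name: test-shard-nnn            # placeholder name, one per test pipeline

steps:
  # Pull the shared cache down from the S3 storage before the tests run.
  - name: restore-cache
    image: plugins/s3-cache
    settings:
      restore: true
      endpoint: http://minio.example.com        # placeholder
      access_key:
        from_secret: cache_access_key
      secret_key:
        from_secret: cache_secret_key

  # Run the tests and drop the result file into the per-commit folder.
  - name: test
    image: my-test-image                        # placeholder
    commands:
      - ./run-tests.sh
      - mkdir -p commit-${DRONE_COMMIT_SHA}
      - cp test-results.xml commit-${DRONE_COMMIT_SHA}/test-results-nnn.xml

  # "Rebuild-flush": upload the mounted folder back to the S3 storage.
  - name: rebuild-cache
    image: plugins/s3-cache
    settings:
      rebuild: true
      mount:
        - commit-${DRONE_COMMIT_SHA}
      endpoint: http://minio.example.com        # placeholder
      access_key:
        from_secret: cache_access_key
      secret_key:
        from_secret: cache_secret_key
```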
The problem is that (3) only sees a single `commit-abc1234567890/test-results-999.xml` - the test result of the test pipeline that finished last.
I was hoping that when a test pipeline does a rebuild-flush of the local cache up to the S3 storage, the local changes to the cache (in this case the addition of a single file) would be merged into whatever the cache on the S3 storage currently looks like.
But it seems that rebuild-flush simply overwrites the cache in the S3 storage with the local content, losing any changes made by other test pipelines that ran and finished in the meantime…
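
Concretely, in the last pipeline, listing the per-commit folder after the cache has been restored shows only that one file. A sketch of what I mean (the folder and file names follow the scheme above):

```sh
# In the last pipeline, after the restore step has pulled the cache
# back down from the S3 storage:
ls commit-abc1234567890/

# What I hoped to see (one result file per test pipeline):
#   test-results-001.xml  test-results-002.xml  ...  test-results-999.xml
#
# What I actually see (only the file from the pipeline that flushed last):
#   test-results-999.xml
```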
Is this “just the way it is” for the https://github.com/drone-plugins/drone-s3-cache plugin?
Or is there something I can do to get it to do a “flush-merge” of the local cache into the cache in the S3 storage?