Multiple Parallel Pipelines (not Steps) with file sharing

We have a fairly heavy build/testing step and a packaging step. We have autoscale on and noticed Drone will spin up a machine (AWS) for each pipeline when we have a diamond pattern with pipeline dependencies.

a starts with early-exit tests.
b and c both depend on a, so we don't spend tons of time if the early-exit tests fail.
d depends on b and c, and either sends a failure notification or pushes archives if testing was good.

   a
 /   \
b     c
 \   /
   d
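
For anyone mapping this onto config: the shape above corresponds to pipeline-level depends_on across multiple pipelines in one .drone.yml. A minimal sketch, assuming Drone 1.x docker pipelines (the names, images, and commands here are placeholders):

```yaml
---
kind: pipeline
type: docker
name: a

steps:
- name: early-exit-tests
  image: alpine                # placeholder image
  commands:
  - echo "fast early-exit tests here"

---
kind: pipeline
type: docker
name: b

depends_on: [a]                # only starts after a succeeds

steps:
- name: heavy-packaging
  image: alpine
  commands:
  - echo "heavy packaging here"

---
kind: pipeline
type: docker
name: c

depends_on: [a]

steps:
- name: heavy-testing
  image: alpine
  commands:
  - echo "heavy testing here"

---
kind: pipeline
type: docker
name: d

depends_on: [b, c]             # fan-in: waits for both b and c

steps:
- name: finish
  image: alpine
  commands:
  - echo "publish archives or notify here"
```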

If all pipelines were self-contained, this works great: b and c run on different machines, so the heavy processes we want in parallel behave as expected. The only thing left to solve is publishing artifacts from b only if both b and c are successful.

One option is pushing files from b into d somehow; then publishing automatically doesn't happen unless both b and c are successful.

Alternatively, is there a way to make a step in pipeline b depend on pipeline c completing successfully?

A little more detailed diamond:

        pre-check  ->  on-failure pipeline
         /     \
 packaging     testing    (both depends_on pre-check;
         \     /           run in parallel on two machines)
          \   /
    push_packaging (on success)
    notify         (on failure)
    (both depends_on packaging and testing)
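
The on-success/on-failure split at the fan-in can be expressed with a trigger on build status: a pipeline with depends_on only runs when its dependencies succeed by default, and opts into running on failure with a status trigger. A sketch of just the two bottom pipelines, assuming Drone 1.x (names and images are placeholders):

```yaml
---
kind: pipeline
type: docker
name: push_packaging

depends_on: [packaging, testing]

trigger:
  status:
  - success                    # default: run only if upstream succeeded

steps:
- name: publish
  image: alpine                # placeholder
  commands:
  - echo "push archives here"

---
kind: pipeline
type: docker
name: notify

depends_on: [packaging, testing]

trigger:
  status:
  - failure                    # run when an upstream pipeline failed

steps:
- name: alert
  image: alpine
  commands:
  - echo "send failure notification here"
```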

The only thing I can figure out is some external temp file store that we push to and pull from. Anyone have a nicer solution?

I’m using

DRONE_UNIQUE="${DRONE_BUILD_NUMBER}_${DRONE_REPO///_}"

to name an S3 bucket, with s3cmd sync in and out in the pipelines, then s3cmd del --recursive in both of my end pipelines. This is working for what I need as temp storage between our autoscaled AWS Drone instances.
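
In step form that looks roughly like the sketch below. This is an illustration, not our exact config: drone-tmp is a placeholder bucket (the DRONE_UNIQUE expression is inlined here as a key prefix under a fixed bucket), and the image is assumed to have s3cmd installed and already configured with AWS credentials (e.g. a baked-in .s3cfg):

```yaml
# In packaging (b): push build output to the shared temp location.
# Drone substitutes the ${DRONE_*} expressions before the shell runs,
# so "/" in the repo slug becomes "_" in the key prefix.
- name: upload-artifacts
  image: my/s3cmd              # placeholder: any image with a configured s3cmd
  commands:
  - s3cmd sync ./dist/ s3://drone-tmp/${DRONE_BUILD_NUMBER}_${DRONE_REPO///_}/

# In push_packaging (d): pull the files back down, then clean up.
- name: download-and-clean
  image: my/s3cmd
  commands:
  - s3cmd sync s3://drone-tmp/${DRONE_BUILD_NUMBER}_${DRONE_REPO///_}/ ./dist/
  - s3cmd del --recursive s3://drone-tmp/${DRONE_BUILD_NUMBER}_${DRONE_REPO///_}/
```

Running the del in both end pipelines (push_packaging and notify) is what keeps orphaned build files from piling up.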