Clone on "skipped" steps

At least on Drone 1.0 (the latest RC), the default clone step is always executed, even when the steps that follow it are not meant to be executed.

Let's say a Drone config defines one pipeline for push events and another for pull_request events.
Both pipelines will execute the default clone step even though only one of them runs at a time.
For big projects, a config might grow to 50 pipelines (that is a rough low estimate for one of our projects).

Drone knows which pipelines are involved before it starts running them; otherwise, how would the UI know what to show?

So, why not go as far as inspecting whether a pipeline is supposed to run for the current event?
If it's not supposed to run, don't even clone.
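
For comparison, a pipeline can already opt out of cloning explicitly via clone.disable; a minimal sketch, assuming the pipeline does not need the source at all (the name is illustrative):

kind: pipeline
name: example-no-clone

# skip the default clone step for this pipeline
clone:
  disable: true

steps: [ ... ]

But that is a static setting; the ask here is for Drone to skip the clone automatically whenever the pipeline itself will be skipped.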

And, as a feature request (suggestion), don't even show steps in the UI that are not supposed to run.

Can you imagine scrolling through 100+ pipelines, half of which are skipped while the other half run?

So, why not go as far as inspecting whether a pipeline is supposed to run for the current event?
If it's not supposed to run, don't even clone.

Hmm, I would probably need more details, because this is how Drone works. You should be using the trigger section of the pipeline to prevent the entire pipeline from being executed. The only exception is trigger.status, which is evaluated at runtime and results in an entry in the database and user interface.

Example configuration:

kind: pipeline
name: foo

steps: [ ... ]

trigger:
  event: [ push ]

---
kind: pipeline
name: bar

steps: [ ... ]

trigger:
  event: [ pull_request ]

Pipeline foo in the above example is executed for a push event, and Pipeline bar is ignored and no entry is created in the database (nor displayed in the user-interface) because we know up front that it can be skipped.
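
For contrast, a sketch of the runtime exception mentioned above: a pipeline with a status trigger (the name and depends_on target below are illustrative) is still recorded and shown, because the build status is only known while the build runs.

kind: pipeline
name: notify

steps: [ ... ]

# evaluated at runtime, so the pipeline still gets a database/UI entry
trigger:
  status: [ failure ]

depends_on: [ foo ]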

We're actually using when; here are two example pipelines:

---
kind: pipeline
name: lint-arch-2019-01-09

platform:
  os: linux
  arch: amd64

steps:
- name: lint-arch-2019-01-09
  image: hashicorp/packer
  commands:
  - apk --no-cache add make
  - make validate OS=arch OS_REV=2019-01-09
  when:
    event:
    - pull_request

---
kind: pipeline
name: arch-2019-01-09-2017.7

platform:
  os: linux
  arch: amd64

steps:
- name: throttle build
  image: alpine
  commands:
  - "sh -c 't=$(shuf -i 20-120 -n 1); echo Sleeping $t seconds; sleep $t'"
  when:
    branch:
    - ci

- name: 2017.7
  image: hashicorp/packer
  commands:
  - apk --no-cache add make curl grep gawk sed
  - make build OS=arch OS_REV=2019-01-09 SALT_BRANCH=2017.7
  environment:
    AWS_ACCESS_KEY_ID:
      from_secret: username
    AWS_DEFAULT_REGION: us-west-2
    AWS_SECRET_ACCESS_KEY:
      from_secret: password
  when:
    branch:
    - ci

I’ll try using triggers instead.
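
Roughly, that means moving the event condition from the step-level when into a pipeline-level trigger; a sketch of the first pipeline above rewritten that way (not our final config):

---
kind: pipeline
name: lint-arch-2019-01-09

platform:
  os: linux
  arch: amd64

steps:
- name: lint-arch-2019-01-09
  image: hashicorp/packer
  commands:
  - apk --no-cache add make
  - make validate OS=arch OS_REV=2019-01-09

# pipeline-level trigger: the whole pipeline (clone included) is skipped
# for any event other than pull_request
trigger:
  event: [ pull_request ]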

Yep, we were using it the wrong way.
This topic should be marked as solved.

Thank You!

Excellent, glad this was helpful. I would still ask that, if you have a build with 50+ pipelines, you provide a screenshot. We can show it to our UX designers so that future iterations of the UI can be optimized for such complex configurations.


And it would be awesome if we could group some pipelines (just for the UI's sake, not to be confused with serial vs. parallel execution).