Upload to S3 bucket

As we are migrating from drone 0.4 to 0.7 we are running into an issue with our S3 uploads.

During the S3 step the process exits with ‘file does not exist’.

Is this perhaps because the ‘plugins/s3’ image cannot access the workspace directory used in the build step?
Any tips on where to look would be helpful, as the current documentation is quite lacking on these points.

Is this perhaps because the ‘plugins/s3’ image cannot access the workspace directory used in the build step?

Plugins always start with the root of your repository as the working directory, so this would not be an issue. There are no known issues with the s3 plugin at this time.
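If you want to verify where a step starts, a throwaway step that just prints its working directory will show you (the step name below is only illustrative):

pipeline:
  where-am-i:
    image: alpine
    commands:
      - pwd      # prints the workspace path the step starts in
      - ls -la   # lists the cloned repository contents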

The latest documentation for this plugin can be found at http://plugins.drone.io/drone-plugins/drone-s3/

I recommend testing your configuration directly from the command line as shown here, and then digging into the plugin code to debug as needed.
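As a rough sketch of what that manual test can look like (this assumes the 0.5+ plugin convention of passing settings as PLUGIN_-prefixed environment variables; the bucket name and source path are placeholders):

docker run --rm \
  -e PLUGIN_SOURCE=foo.txt \
  -e PLUGIN_TARGET=/ \
  -e PLUGIN_BUCKET=my-bucket-name \
  -e AWS_ACCESS_KEY_ID=... \
  -e AWS_SECRET_ACCESS_KEY=... \
  -v $(pwd):$(pwd) \
  -w $(pwd) \
  plugins/s3

Running it from a directory that contains the file reproduces the working-directory behaviour of a real build step.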

And where would I find the root of the repo exactly?

This I cannot find in the drone documentation.

http://docs.drone.io/workspace/
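By default the workspace for a GitHub repository ends up at something like /drone/src/github.com/<owner>/<repo> (the exact path depends on your remote), and it can be overridden in the yaml, for example:

workspace:
  base: /go
  path: src/github.com/octocat/hello-world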

But this is within the container, right? Or on the host?

Because we don’t have a /drone on the host.

I’ve tried to copy the file we’re trying to upload to a new volume mounted on the host in the build step to verify that it is created.

However when I try to upload this file using the full-path (and mounting the same volume) in the s3 step I still get the ‘file does not exist’ error.

Manually running the ‘plugins/s3’ image with the source file and the same target and configuration, as you suggested @bradrydzewski, works correctly.

I cannot advise further without seeing your yaml configuration.

The likely root cause is that you are trying to upload a file outside the workspace, which is not possible. The S3 plugin only has access to the workspace and the files in it. You can learn more about the workspace at http://docs.drone.io/workspace/

For example, this will not work because the file is outside the workspace:

pipeline:
  create:
    image: alpine
    commands:
      - touch /tmp/foo.txt
  upload:
    image: plugins/s3
    bucket: my-bucket-name
    source: /tmp/foo.txt
    target: /

And this will work because the file is in the workspace:

pipeline:
  create:
    image: alpine
    commands:
-     - touch /tmp/foo.txt
+     - touch foo.txt
  upload:
    image: plugins/s3
    bucket: my-bucket-name
-   source: /tmp/foo.txt
+   source: foo.txt
    target: /

Ok, in our build step we create a directory in the workspace called ‘build/’ and subsequently place a tarball there.

Is it that the s3 plugin also cannot traverse into directories in the workspace? I.e. does the file that is uploaded have to be directly in the root of the workspace as in your example?

On our drone 0.4 install we use the same method without issue btw.

have to be directly in the root of the workspace as in your example?

Nope. Below is a yaml configuration that I just used with 0.8.2+build.1334 that successfully uploaded the file to my bucket. Everything worked as expected.

pipeline:
  test:
    image: alpine
    commands:
      - mkdir foo
      - touch foo/bar.txt
      
  upload:
    image: plugins/s3
    source: foo/bar.txt
    target: /
    bucket: my-test-bucket-1234
    secrets: [AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY]

I am unable to reproduce the issues described in this thread. To further assist I would need a sample yaml file that can be used to consistently reproduce the problem.

Ok, I found the issue:

We were using the $COMMIT and $BRANCH variables to create the folder and to point to the file that was created. These have changed since 0.4; when I use the new reference variables it works:
http://readme.drone.io/usage/environment-reference/
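For example, where we previously pointed at something like build/$COMMIT.tar.gz, the 0.8-style substitution now looks roughly like this (the bucket and file names below are illustrative, not our exact config):

pipeline:
  upload:
    image: plugins/s3
    bucket: my-bucket-name
    source: build/release-${DRONE_COMMIT_SHA}.tar.gz
    target: /${DRONE_BRANCH}
    secrets: [AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY]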

However, the ‘source’ is not treated as just a file to upload; the entire source path is recreated inside the bucket. This is very different behaviour from before: we used to point at the file with the source path and the file itself would be placed at the target path, whereas now the whole source path is created inside the target. To me this is very odd behaviour and not what we want at all. Is there a way to get the old behaviour back?

Maybe the strip_prefix parameter is what you are looking for?

pipeline:
  s3:
    image: plugins/s3
    bucket: my-bucket-name
    source: public/**/*
    target: /target/location
+   strip_prefix: public/

from http://plugins.drone.io/drone-plugins/drone-s3/
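With that in place, a file like public/css/app.css should end up at /target/location/css/app.css rather than /target/location/public/css/app.css.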

Thanks, I’ll try this tomorrow.

Still a bit odd that this is new behaviour. The variable names changing I can understand, but this is quite radical somehow …

Still a bit odd that this is new behaviour.

The plugin now supports globbing. Have you considered how that might impact the design, or the plugin’s ability to automatically trim the path prefix? The current approach does not try to be clever: it is simple, declarative, and documented. If you think it can be improved, send a pull request.

It is not documented that the old behaviour has changed. I would expect something non-obvious like this to be mentioned clearly in the documentation.

Maybe that’s just my 2 cents.

Closing this issue because the question was answered.