Running out of memory compiling on Cloud.drone.io

I am compiling an Alpine package for armv7 on cloud.drone.io and it's failing due to running out of memory:
cc1plus: out of memory allocating 153592 bytes after a total of 131518464 bytes

When I start the build it tells me the host system has:

  • Memory: 125.65 Gb

But when I add running free as a command in my build, I find I only have 128Mb of RAM for my container:

               total
Mem:       131753800

Is this actually the case? Is there a way to request more memory for a compile? 128Mb does seem very low…

I find I only have 128Mb of RAM for my container. Is this actually the case?

nope, we place no such memory restrictions on builds at cloud.drone.io. Is it possible your build is consuming all the RAM on the machine?

[image: output of free]
I don't think so; I have cross-compiled it just fine on a VM with 2Gb of RAM.

I have included an image of the free output, which definitely says it only has 128Mb of RAM total, so I am not sure what is going on. Is it possibly a defect? Unfortunately I can't monitor memory usage while compiling (I think?), so I am not sure what the next step would be.

Sorry, my bad: that free output is in K, not bytes. The error message, however, is definitely in bytes. I might see if I can find a way to check disk usage right after it fails, as that may be a cause.
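For reference, free reports in kibibytes by default, and converting the figure above back to gibibytes matches the advertised host memory almost exactly (a quick sanity check, not part of the original thread):

```python
# free prints sizes in KiB by default (use `free -h` for human-readable units).
total_kib = 131753800          # the "Mem: total" figure from the build step

total_gib = total_kib / 2**20  # KiB -> GiB

print(round(total_gib, 2))     # 125.65 -- matches the "Memory: 125.65 Gb"
                               # advertised for the host, so the container
                               # does see the full machine, not 128 MB
```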

memory is managed by Docker, not Drone, so if there is a defect related to memory management or allocation it would probably be a defect in Docker.

Is there any swap allocated to the container? If so, is there any way to increase it from the .drone.yml file?

there is no swap. we use bare metal servers and each server has 128GB of RAM. I cannot imagine any pipeline needing swap with that much RAM.

perhaps it would be more productive to tell us the name of your repository, the build number, the pipeline name, the step name, whether this happens once, every time, or only sometimes, and how we might reproduce it. please provide a direct link to a build that demonstrates the issue. it feels a bit like random guessing right now; with proper information, we can log in to the server while your build is running and observe the system to see exactly what is going on.

@ashwilliams1 In my experience even if there is a large amount of memory you usually still have a small swap.

My repo is PhoenixMage/aports/

I am currently trying to build PR https://github.com/PhoenixMage/aports/pull/6

I only have a single pipeline and step in my .drone.yml.

The problem happens every time at around the same place; the amount of memory in the error changes, but everything else seems to be fairly consistent. Restarting the build should retrigger it. The last build that had the issue is https://cloud.drone.io/PhoenixMage/aports/57

I have also tried building for armhf and hit the same issue in the same place. I am currently using gcc 9.2.0; however, I have also switched to gcc 8.2.0 with the same result at about the same place, except it says the build is out of virtual memory, which tends to imply it is trying to swap.

Happy to chat on gitter if that is easier to attempt to resolve the issue.

I ran your build and then logged in to the server to monitor it. Note that your build was the only build running on the machine:

Here are the results of docker ps:

Here are the results of docker stats after 60 seconds, with the build using 20GiB:

Here are the results of docker stats after 120 seconds, with the build using 40GiB:
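The snapshots above come from docker stats. A minimal sketch of this kind of periodic polling (the helper names here are mine, and it assumes Docker is installed on the host):

```python
import subprocess
import time

def stats_cmd():
    # `docker stats --no-stream` prints a single snapshot of per-container
    # resource usage and exits, instead of refreshing continuously.
    return ["docker", "stats", "--no-stream",
            "--format", "table {{.Name}}\t{{.MemUsage}}"]

def poll(interval_s=60, samples=2):
    # Hypothetical helper: take a memory snapshot every `interval_s` seconds.
    for _ in range(samples):
        subprocess.run(stats_cmd(), check=False)
        time.sleep(interval_s)

if __name__ == "__main__":
    poll()
```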

According to your build output, it looks like cc1plus allocated 132341760 bytes of memory, which is pretty much all available memory on the machine:

cc1plus: out of memory allocating 44512 bytes after a total of 132341760 bytes

It definitely looks like this build is using atypical amounts of memory, leading to all memory on the machine being exhausted. I checked builds that ran on the machine before and after this build, and did not observe any issues with those builds.

The next step, if you are dissatisfied with my assessment of the situation, is to set up your own Drone installation so that you can continue troubleshooting with direct access to the instance. To replicate our cloud environment, you can provision a Packet c2-large-arm instance. You can use code DRONE100 to get $100 in free Packet credits.

I don't dispute your findings, but I can't repeat them.

I ended up using that code (really appreciated, btw) and fired up a c2-large-arm instance running Ubuntu 16.04 as the host OS. I am running 2 simultaneous builds, one armv6 and one armv7, and this is the output from docker stats after 10 minutes:

Is there a build document for what you guys are doing? I’d be interested in looking at it.

Just an update on this. Turns out we were looking in the wrong place.

The out-of-memory issues were tied to a particular program that was being compiled: gcc was hitting the 4GB memory limit of a 32-bit process.

If I disable the building of that particular piece of code I can build successfully.
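For context, that 4GB ceiling follows directly from the pointer width: a 32-bit process can address at most 2^32 bytes no matter how much RAM the host has. A quick illustration (not from the thread):

```python
# A 32-bit process has 32-bit pointers, so its address space tops out at
# 2**32 bytes regardless of physical RAM. (In practice the usable share
# is smaller still, since the kernel reserves part of the address space.)
ADDRESS_BITS = 32
limit_bytes = 2 ** ADDRESS_BITS

print(limit_bytes)            # 4294967296
print(limit_bytes / 2 ** 30)  # 4.0 (GiB)
```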

Thanks @bradrydzewski and @ashwilliams1 for your help