On Fri, 28 Feb 2020 at 08:48, Dave Airlie <airlied@gmail.com> wrote:
On Fri, 28 Feb 2020 at 18:18, Daniel Stone <daniel@fooishbar.org> wrote:
The last I looked, Google GCP / Amazon AWS / Azure were all pretty comparable in terms of what you get and what you pay for them. Obviously providers like Packet and Digital Ocean who offer bare-metal services are cheaper, but then you need to find someone who is going to properly administer the various machines, install decent monitoring, make sure that more storage is provisioned when we need more storage (which is basically all the time), make sure that the hardware is maintained in decent shape (pretty sure one of the fd.o machines has had a drive in imminent-failure state for the last few months), etc.
Given the size of our service, that's a much better plan (IMO) than relying on someone who a) isn't an admin by trade, b) has a million other things to do, and c) hasn't wanted to do it for the past several years. But as long as those are the resources we have, we're paying the cloud tradeoff, where we pay more money in exchange for fewer problems.
Admin for gitlab and CI is a full-time role anyway. The system is definitely not self-sustaining without time being put in by you and anholt still. If we have $75k to burn on credits, and it was instead diverted to pay an admin to look after the real hw + gitlab/CI, would that not be a better use of the money? I didn't think we could afford $75k for an admin, but suddenly we can afford it for gitlab credits?
s/gitlab credits/GCP credits/
I took a quick look at HPE, which we previously used for bare metal, and it looks like we'd be spending $25-50k (depending on how much storage you want to provision, how much room you want to leave to provision more storage later, and how much you care about backups) to run a similar level of service, so that'd put a bit of a dent in your year-one budget.
The bare-metal hosting providers also end up more expensive than you might think, again especially if you want either redundancy or just backups.
Yes, we could federate everything back out so everyone runs their own builds and executes those. Tinderbox did something really similar to that IIRC; not sure if Buildbot does as well. Probably rules out pre-merge testing, mind.
Why? Does gitlab not support the model? Having builds done in parallel on runners closer to the test runners seems like it should be a thing. I guess artifact transfer would then cost less as a result.
It does support the model, but if every single build executor is also compiling Mesa from scratch locally, how long do you think that's going to take?
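For reference, the split you're describing would look roughly like this in a .gitlab-ci.yml. This is just a sketch: the runner tags and the test script are made-up placeholders, but the idea is that the build job runs on a machine colocated with the test hardware and the test job pulls its artifacts rather than rebuilding:

    build-mesa:
      stage: build
      tags: [site-a-builder]   # hypothetical tag for a build box at the same site as the DUTs
      script:
        - meson setup build/
        - ninja -C build/
      artifacts:
        paths: [build/]

    test-mesa:
      stage: test
      tags: [site-a-dut]       # hypothetical tag for the bare-metal test runner
      needs: [build-mesa]      # fetch the build artifacts instead of compiling again
      script:
        - ./run-tests.sh build/   # placeholder for whatever the lab actually runs

Mechanically that's fine; the problem is that if every lab runs its own copy of the build job rather than sharing one set of artifacts, the compile time (and the build hardware) gets multiplied by the number of labs.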
Again, if you want everything to be centrally designed/approved/monitored/controlled, that's a fine enough idea, and I'd be happy to support whoever it was who was doing that for all of fd.o.
I don't think we have any choice but to have someone centrally controlling it. You can't have a system in place that lets CI users burn large sums of money without authorisation, and that is what we have now.
OK, but I'm not sure who it is that's going to be approving every update to every .gitlab-ci.yml in the repository; or maybe we just have zero shared runners and anyone who wants to do builds can BYO.
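If we genuinely wanted per-change sign-off, the least-bad version I can think of is a CODEOWNERS entry covering the CI files, paired with protected-branch approval rules. A sketch, with an invented group name, and I believe actually enforcing the approval needs one of the paid tiers:

    # .gitlab/CODEOWNERS -- hypothetical
    /.gitlab-ci.yml   @freedesktop/ci-admins
    /.gitlab-ci/      @freedesktop/ci-admins

But that just moves the bottleneck onto whoever is in that group, which brings us straight back to needing someone whose job this actually is.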