Le samedi 29 février 2020 à 19:14 +0100, Timur Kristóf a écrit :
On Fri, 2020-02-28 at 10:43 +0000, Daniel Stone wrote:
On Fri, 28 Feb 2020 at 10:06, Erik Faye-Lund erik.faye-lund@collabora.com wrote:
On Fri, 2020-02-28 at 11:40 +0200, Lionel Landwerlin wrote:
Yeah, changes to Vulkan drivers or backend compilers should be fairly sandboxed.
We also have tools that only work for Intel stuff; those should never trigger anything on other people's HW.
Could something be worked out using the tags?
I think so! We have the pre-defined environment variable CI_MERGE_REQUEST_LABELS, and we can do variable conditions:
https://docs.gitlab.com/ee/ci/yaml/#onlyvariablesexceptvariables
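For example, something along these lines could gate a vendor-specific job on an MR label (untested sketch; the job name, label, and script are made up for illustration):

```yaml
# Only run this job when the MR carries an "intel" label.
# Uses the predefined CI_MERGE_REQUEST_LABELS variable with only:variables.
test-intel:
  stage: test
  script:
    - ./run-intel-tests.sh   # hypothetical test script
  only:
    variables:
      - $CI_MERGE_REQUEST_LABELS =~ /intel/
```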
That sounds like a pretty neat middle-ground to me. I just hope that new pipelines are triggered if new labels are added, because not everyone is allowed to set labels, and sometimes people forget...
There's also this which is somewhat more robust: https://gitlab.freedesktop.org/mesa/mesa/merge_requests/2569
My 2 cents:
- I think we should completely disable running the CI on MRs which are
marked WIP. Speaking from personal experience, I usually make a lot of changes to my MRs before they are merged, so it is a waste of CI resources.
In the meantime, you can help by getting into the habit of using:
git push -o ci.skip
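Something like the following could also skip pipelines automatically for WIP MRs, rather than relying on everyone remembering the push option (a sketch assuming GitLab 12.5+ workflow:rules support; the title regex is an assumption):

```yaml
# Skip the whole pipeline for merge requests whose title starts with "WIP:".
workflow:
  rules:
    - if: '$CI_MERGE_REQUEST_TITLE =~ /^WIP:/'
      when: never
    - when: always
```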
CI is in fact run for all branches that you push. When we (the GStreamer project) started our CI we wanted to limit this to MRs, but we haven't found a good way yet (and GitLab is not helping much). The main issue is that it's near impossible to use the GitLab web API from a runner (it requires a private key, in an all-or-nothing manner). But with the current situation we are revisiting this.
The truth is that probably every CI has a lot of room for optimization, but optimizing can be really time consuming. So until we have a reason to, we live with inefficiencies: oversized artifacts, unused artifacts, oversized Docker images, etc. Doing a new round of optimization is obviously a clear short-term goal for projects, including the GStreamer project. We have discussions going on and are trying to find solutions. Notably, we would like to get rid of the post-merge CI, since in a rebase flow like the one we use in GStreamer, skipping it is a really minor risk.
- Maybe we could take this one step further and only allow the CI to
be triggered manually instead of automatically on every push.
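For jobs that are expensive enough to warrant it, GitLab already supports this per job (a sketch; the job name and script are hypothetical):

```yaml
# This job only runs when someone clicks "play" in the pipeline view.
full-test-suite:
  stage: test
  script:
    - ./run-full-tests.sh   # hypothetical test script
  when: manual
  allow_failure: true   # don't block the pipeline while the job is unstarted
```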
- I completely agree with Pierre-Eric on MR 2569: let's not run the
full CI pipeline on every change, only the parts affected by the change. It not only costs money, but is also frustrating to submit a change and get failures from a completely unrelated driver.
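GitLab's path-based filtering can approximate this (a sketch; the job name, script, and path globs are assumptions, not Mesa's actual layout):

```yaml
# Only run the RADV job when RADV or shared compiler code changes.
test-radv:
  stage: test
  script:
    - ./run-radv-tests.sh   # hypothetical test script
  only:
    changes:
      - src/amd/**/*
      - src/compiler/**/*
```

Note that only:changes behaves differently on branch pushes than on MR pipelines, so in practice it needs to be combined with merge-request pipelines to be reliable.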
That's a much more difficult goal than it looks. Let each project manage its own CI graph and content, as each case is unique. Running more tests, or building more code, isn't the main issue, as the CPU time is mostly sponsored. The data transfers between GitLab's cloud and the runners (which are external), along with sending OS images to the LAVA labs, are likely the most expensive part.
As was already mentioned in the thread, what we are missing now, and what is being worked on, is per-group/project statistics that give us the hotspots so we can better target the optimization work.
Best regards, Timur
gstreamer-devel mailing list gstreamer-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel