I think contemporary development culture has a CI-fetishization problem and this should be talked about a lot more.
I don't know if this is an unpopular opinion or just not interesting to a lot of people (at least I don't see it pop up much in my feeds, for whatever reason), but the rising amount of commit-triggered, senseless cloud computation I've seen in recent years really worries me. I can appreciate the utility of a thoughtful automated process that thoroughly checks an entire release before it rolls out to big, critical infrastructure, but what I see around me is so often overblown pseudo-testing that provisions entire server farms to crunch numbers for half an hour every time any dev at an org commits as much as a fix for a typo on some random work-in-progress branch. In the face of our existential climate emergency I can only consider this practice completely mindless, and complete madness. We cannot keep doing this.
@freebliss In my experience, CI is most useful when it's cumbersome to test all supported configurations locally.
@freebliss this can describe the modern web itself, where megabytes of feature-poor UI code, typically in JS, plus useless filler like ads, get forced down the wire just to present a few kilobytes of content. That becomes absurd, too.
@freebliss Most CIs are missing what Jenkins has had forever: using a commit as a trigger, but building some time later with all the commits made in the meantime. It effectively bunches lots of small commits into one larger build.
Arguably, with feature branches in git and pushing being the trigger, that's less useful.
TL;DR I've had similar thoughts, but it's complicated.
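For what it's worth, GitHub Actions can at least approximate that Jenkins-style batching with a concurrency group that cancels superseded runs, so only the newest push to a branch actually finishes building. A minimal sketch (the workflow name, group name, and `make test` step are made up for illustration):

```yaml
# Cancel any in-progress run for the same ref when a newer push arrives,
# so stacked-up small commits collapse into one build of the latest state.
name: ci
on: push
concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: true
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test
```

This isn't quite a quiet period (the superseded run still starts and burns some compute before being cancelled), but it caps the waste at one full build per branch at a time.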
@jens Thumbs up and fully agree on «it's complicated». There's a spectrum of solutions we will have to look into and combine, some of which I believe will have to hurt us too, as in needing to sacrifice some amenities here and there if we are to keep things reasonable on a global scale. Whatever the answers are, "compute first, think later" (or even "never" as I see just as frequently) is not a luxury we should grant ourselves anymore.
@freebliss Yeah... I was shocked that an early-20s dev I met some five years ago would commit & push (to their branch) just so the CI would do syntax checking for them. They didn't even bother to do that locally.
That's definitely in the "gone too far" category.
But I think that same mentality is behind squash merges.
@jens Do you mean the part about squash merges in the sense that people don't want to bother to manually structure their commit history for PRs (interactive rebase etc.), or in some other way?
@freebliss I mean that I grew up learning how to make commits relatively cohesive. They don't have to complete a feature, but they should always compile and not break anything. That also makes them well sized for a half-decent commit message, because there should be some meaningful progress to report. You don't really want to squash those reports or commits; they're already meaningful.
Tons of tiny commits are effectively just noise. It makes sense to want to squash them.
I would assume...
@freebliss ... that just as having feature branches and fast CI in reach leads one to make tiny, dumb commits, the existence of those commits leads one to want squash merges.
@jens Ah yes, that makes sense, thanks! I've recently observed that part of this piecemeal committing approach is also encouraged by the review workflows that platforms implicitly dictate, e.g. using the review feature on GitHub, out of which a process emerges where each single review comment is addressed in a single commit, and of course each one-line fix triggers the CI pipeline anew, etc. ... oof ^^
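One small mitigation for the typo-fix case: GitHub Actions workflows can skip runs for changes that cannot affect the build at all, e.g. docs-only commits. A minimal sketch (the paths are illustrative, not a recommendation for any particular repo):

```yaml
# Skip the full pipeline for pushes that only touch documentation,
# so a one-line typo fix doesn't provision the whole test matrix.
name: tests
on:
  push:
    paths-ignore:
      - "docs/**"
      - "**.md"
```

It only helps for cleanly separable changes, but those are exactly the ones where a full pipeline run is pure waste.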
@lanodan @freebliss This has happened because making a distribution package is nontrivial compared to not making one. If you want people to use distribution packages, you need to make them as easy to make as NPM packages. The Cambrian explosion of GitHub will never go away. The genie is out of the bottle. What we do with this knowledge is what matters.
@freebliss in my experience at a company where we burn way too much time on CI compute, it's not that anyone is a particularly big fan of CI; it's that CI is useful in some cases, so we decided to just apply it defensively in every case, and then nobody is paid to fix all the deficiencies that came as a result
@freebliss so I wouldn't call it CI fetishization, it's just capitalist cost-cutting business as usual. Gating all our PRs behind a CI pass probably saves our ass maybe one in every five PRs, and the company doesn't want to spend money fixing the CI churn when you're just trying to correct a typo
@freebliss Is this really how people use CI? I’m a fan of CI, but my use case is that automated CI testing is only a last check before release.
If you have a test suite, you should be running it locally. These CI hooks should be triggered very infrequently.
@maddiefuzz I've unfortunately seen it being used indiscriminately and without measure to a significant degree by now. Thumbs up if you're putting more thought into it and I hope there's more that follow your practice!
@freebliss For cross-platform stuff it's a lifesaver for me. I can identify a bug on Linux without having to test there myself, which is how I found the baffling bug I'm currently debugging.
@cinebox Good to hear! Having CI in general is not the issue really. It's foremost about how we can employ it in less wasteful and more thoughtful ways. (which is also not to say that some people are not already doing this :))
@freebliss I'm also very concerned about the carbon cost of CI.
Since developers usually run at least some tests locally, it would be good to be able to automatically let CI know it can skip those tests.
Another issue with complex CIs is that they ultimately become flaky and then consume significant development resources in diagnosis and maintenance.
Another solution is smaller repos and definitely avoiding uber repos.
@freebliss I guess another help would be more intelligent test frameworks which skip unit tests which a given set of changes cannot have affected. (Doesn't help intermittent flake detection though.)
@underlap Also a good point, although the road of fighting complexity with complexity is a slippery one, of course :). How feasible it is to determine a reliable dependency graph for conditional testing probably depends a bit on the language and technology.
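To make the idea concrete, here's a minimal sketch of change-based test selection. The file-to-test mapping is hand-maintained and entirely illustrative (real tools such as coverage-based selectors derive it automatically); all paths and names below are made up:

```python
# Minimal change-based test selection: map changed source files to the
# test modules that exercise them, and fall back to the whole suite for
# files we know nothing about, to stay on the safe side.

DEPENDENCIES = {
    "app/parser.py": {"tests/test_parser.py", "tests/test_cli.py"},
    "app/renderer.py": {"tests/test_renderer.py"},
}

def select_tests(changed_files):
    """Return the set of test targets affected by the given changes."""
    selected = set()
    for path in changed_files:
        if path not in DEPENDENCIES:
            # Unknown file: we can't prove anything is unaffected,
            # so run everything.
            return {"tests/"}
        selected |= DEPENDENCIES[path]
    return selected

print(sorted(select_tests(["app/parser.py"])))
```

The conservative fallback is the important design choice: skipping tests is only safe when the dependency information is trustworthy, which is exactly the language- and tooling-dependent part.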
@underlap Flakiness and maintenance cost is an *excellent* point! Come to think of it, I've seen multiple environments in which e.g. the UI tests for a web service were just officially known to be broken (for months!), yet they would be run on every single change in every PR throughout that entire time, and would even be re-run multiple times, as the pipeline was configured to reattempt failing tests a number of times. So besides the environmental cost, the whole setup may even have had a detrimental effect on productivity, given that it completely ruins your confidence in code quality and sends you off chasing CI issues on a regular basis instead of working on your actual codebase. /o\
@freebliss Yes, re-running CIs is a common ploy for putting off fixing flaky tests. Or even coding retry loops. Both of these increase the carbon cost.
Maybe the first step would be to report an estimate of the carbon cost of each CI run?
I shudder to think what some CI pipelines I've worked on would cost. E.g. deploying to Kubernetes on every push.
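A per-run estimate doesn't need to be precise to be eye-opening. Here's a back-of-the-envelope sketch; the power draw per vCPU and the grid carbon intensity are illustrative assumptions (real values vary a lot by hardware and region), not measured figures:

```python
# Rough CO2e estimate for one CI run. Assumptions (illustrative only):
# ~12 W average draw per vCPU, grid intensity ~400 gCO2e per kWh.

def ci_run_co2_grams(runners, vcpus_per_runner, minutes,
                     watts_per_vcpu=12.0, grid_g_per_kwh=400.0):
    """Estimate grams of CO2e for a single CI run."""
    # Total energy in kWh: power draw times duration.
    kwh = runners * vcpus_per_runner * watts_per_vcpu * (minutes / 60) / 1000
    return kwh * grid_g_per_kwh

# A dozen 2-vCPU runners busy for 20 minutes:
print(round(ci_run_co2_grams(12, 2, 20), 1))
```

Even this crude model, multiplied by pushes per day across an org, makes the "re-run until green" habit look very different.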
@freebliss Step #0 is, IMHO, to ensure that there are policies in place that set a reasonable price for energy and emissions. The negative externalities need to become internalized, so that polluters pay, and thereby are incentivized to use less dirty energy. I’m thinking of mechanisms like EU-ETS.
And obviously countries like USA and China need to stop burning coal.
@giffengrabber No objections there that this needs to be fixed at the root as well eventually, we've been playing sustainability whack-a-mole for long enough now ... :(
@freebliss I see lots of good initiatives when it comes to clean power. But yeah, much much faster pace of development would be very welcome!
@freebliss I have been thinking similarly recently, FWIW. The thought occurred to me while I was casually pushing new commits to a PR on GitHub and noticed that each one was occupying a dozen GitHub Actions workers for >20 minutes running an army of different linters and running the test suite on 4 different platforms and whatever else.
@freebliss as a maintainer of a relatively popular open source project myself I have come to appreciate that most community contributions have already been tested in some sense by a robot before I arrived, but I also can't get away from being bothered by the waste of it.
For now I've worked towards being selective: prioritizing the checks that give the most value and deferring the others to a separate setting that runs less frequently...
@freebliss but it's frustrating that GitHub Actions in particular makes these things "feel" free, so collaborators are tempted to add _just one more check_ just in case it catches a once-per-year problem we've seen twice now. 😖 Arguing that it's okay for some rare things to be caught only later in the process (in the release step rather than the review step) has worked because my collaborators are reasonable and similarly motivated, but that won't hold for all projects.
@apparentlymart Hey, thanks, those are some great insights! The time one needs to dedicate (especially in an open source project) to looking at contributions is an interesting additional factor. It's also somewhat exemplary of our entire dilemma: we're all so short on (human) time for the projects and tasks we really care about, because most of us are struggling to make ends meet in the daily capitalist treadmill (although we'd be so many minds eager to work on the relevant stuff), while on the other hand machine labor and services are thrown around at dumping prices, with all the externalized damage wreaking havoc on the environment. Either way, good to hear you and your colleagues are putting thought and action into this in your projects, keep it up! \o/