Ten years ago this meme said “compiling”. Shows how much Docker has made things more “efficient”.
We’ve created this hell for ourselves
For me, Docker has been amazing. It’s probably my single most favorite tool in my tool belt. It has made my life so much easier over the years. It’s far from hell for me! 🐳
As a self-hoster, I love docker. It’s been an amazing deployment tool.
Tbh, all of web development has become this… “efficient”. I remember the days when I could create a website in PHP and have it done in a couple of hours (per page), and now the only way I could do that would be using AI and going full-on “vibe coding” mode.
What do you mean? You can just make some react/typescript template and fastapi server thing, or any of dozens of equivalents, extremely quickly. I’m by no means an expert on web stuff as I develop software for controlling machines, but we used the above for some internal services in my last job and I could get a clean and functional site running in a day with no prior experience. I get that for public facing stuff you’ll have some higher requirements but I couldn’t imagine those wouldn’t apply just because you’re coding in PHP…
It’s all the extra requirements, all the extra engineering that needs to be added that is IMO ruining web applications. Sure, they have huge benefits, but I hate when the application is simple but the backend is so over-engineered that it takes a week to completely build a fully fleshed-out application. You have to organize your components, add styled-components.js, make sure it’s compatible with mui.js, create test cases for each component, set up a DB and integrate it to hold all copy as well as any input from the customer, make sure that it’s accessible (this part I admit is important), make sure your test cases always pass, set up routing tables, add analytics, add a pixel campaign API, squash git conflicts, integrate some other weirdo APIs that marketing and leadership pulled from some obscure service no one has ever heard of, debug some weird edge-case error caused by a node dependency 3 levels down, present the finished website to leadership only to be destroyed, and now you have to redo 75% of the site with leadership changes… rinse and repeat.
It’s a good thing I fucking love my job 🙃
Ok yeah I totally get how that would be a burden… But I wouldn’t like to attempt doing all that stuff in PHP ;)
Yes. Sorry. I expected everyone to know this, but in hindsight this is of course a bad assumption.
Someone doesn’t know how to leverage Docker BuildKit
Is there more to it than using multistage builds when appropriate?
Oh yeah there is a lot you can implement to really get the most out of your architecture via docker and minimize your build times.
One is using BuildKit with BuildX and Docker Build Cache.
BuildX is the one I highly recommend getting familiar with as it’s essentially an extension of BuildKit.
I’m a solutions architect so I was literally building with these tools 15 minutes ago lol. Send any other questions my way if you have any!
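In case it helps, the usual entry points look roughly like this (a sketch only; the image name and registry ref are placeholders I made up):

```shell
# BuildKit is the default builder on recent Docker versions; on older
# installs you can opt in per build:
DOCKER_BUILDKIT=1 docker build -t myapp .

# BuildX exposes more of BuildKit, e.g. external cache backends and
# multi-platform builds:
docker buildx build \
  --cache-from type=registry,ref=registry.example.com/myapp:cache \
  --cache-to type=registry,ref=registry.example.com/myapp:cache \
  -t myapp .
```

The registry cache backend is what lets CI runners share build cache between machines instead of each starting cold.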
Ah thanks, I do have another question actually! So aside from speeding up builds by parallelizing different stages, so that
```dockerfile
FROM alpine AS two
RUN sleep 5 && touch /a

FROM alpine AS one
RUN sleep 5 && touch /b

FROM alpine AS three
COPY --from=two /a /a
COPY --from=one /b /b
```
takes 5 instead of 10 seconds, are there any other ways BuildKit speeds up builds?
Yeah! So the first thing that BuildKit provides that greatly improves build time is that it will detect and run the two stages (one, two) in parallel so the wall-clock time for your example is 5s (excluding any overhead). Without BuildKit, these would be built serially resulting in a wall-clock time of 10s (excluding any overhead).
Additionally, BuildKit uses a content-addressed cache rather than the step-by-step layer cache used by the classic builder. This cache is intelligently reused across different builds and can even survive re-ordered instructions. If you were to build and then rebuild your example, the sleep steps would be skipped entirely, since those steps are unchanged in the cache from the previous build. It will detect changes and rebuild accordingly.
Lastly, you can leverage cache mounts (a BuildKit Dockerfile-frontend feature) for things such as dependencies, so they are persisted on the builder and mounted at build time and you don’t have to fetch them constantly. This matters less if you’re already leveraging BuildKit fully, since the regular cache will also cover unchanged dependency layers and only pull when something changed. E.g.:
```dockerfile
RUN --mount=type=cache,target=/root/.cache \
    your-build-command
```
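For a slightly fuller (hypothetical) sketch, assuming a Python app, since pip caches downloads under /root/.cache/pip by default:

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
# The cache mount persists pip's download cache across builds on the
# same builder, so even when this layer is invalidated (e.g. the
# requirements file changed), unchanged wheels aren't re-downloaded:
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
COPY . .
```

Same idea works for npm, Go modules, cargo, apt, etc.; you just point the mount at whatever directory the package manager caches into.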
I’m waiting for the LLM to reply
I was going to watch a tutorial on how to be more efficient but YouTube is still buffering
Delayed because of your ad-blocker :p
I swear it’s gonna load any second now and I’ll be able to do something productive!
Still better than ads, though 😄
DevOps, not programmer.
Why not? Why doesn’t the programmer want to test a container?
True. Nothing beats running your unit tests in the actual container image that will be run in production.
Yeah, and it’s useful to just check everything so you don’t forget to add some essential system package for e.g. SSL, especially when working with Alpine.
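A minimal way to do that (the image tag and test runner here are placeholders, not a prescription):

```shell
# Build the exact image that would ship to production...
docker build -t myapp:ci .

# ...and run the test suite inside it, so missing system packages
# (ca-certificates, openssl, etc.) surface before deploy:
docker run --rm myapp:ci pytest -q
```

If your production image is too stripped-down to hold a test runner, a common compromise is a separate test stage in a multi-stage build that layers the tooling on top of the runtime image.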
Race condition that only happens on the much faster production hardware: Allow me to introduce myself
Unit tests can’t win ’em all. That’s where things like integration tests, staging environments, and load testing come in.
The final layer of protection is the deployment strategy, be it rolling, canary, or blue-green.
Or an issue that only appears when using ARM and not on my AMD64 dev machine
Unit tests? It doesn’t matter where you run them, and normally this is done by CI in a prebuilt container image, so you don’t have to wait for “docker building”. Acceptance tests must be run in an environment as close to production as possible, but that’s definitely not a programmer’s job.
Guess I must turn in my programmer-badge.
what if I’m doing my programming inside a devcontainer?
How often do you rebuild the image?