I wasn't sure how to load the images back into Docker at first. I tried `docker load`, but I got this error:
```
$ (cd ci-repack && tar cfv - .) | docker load
./
./oci-layout
./index.json
./blobs/
./blobs/sha256/
./blobs/sha256/2ad6ec1b7ff57802445459ed00e36c2d8e556c5b3cad7f32512c9146909b8ef8
./blobs/sha256/9f3908db1ae67d2622a0e2052a0364ed1a3927c4cebf7e3cc521ba8fe7ca66f1
open /var/lib/docker/tmp/docker-import-1084022012/blobs/json: no such file or directory
```
Then I noticed the `skopeo copy` in one of the GitHub Actions workflows. That got me further: I was able to push the image to a registry. But I get this error when pulling the repacked image:

```
failed to register layer: duplicates of file paths not supported
```
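For context, `docker load` expects a `docker save`-style archive rather than an OCI layout, which is likely why it goes looking for `blobs/json`. A minimal sketch of the `skopeo` route, assuming the repacked layout lives in `ci-repack/`; the target tag is made up:

```shell
# Sketch: load a repacked OCI layout into the local Docker daemon via skopeo,
# sidestepping `docker load`. Directory and tag names are hypothetical.
OCI_DIR=ci-repack        # directory containing oci-layout, index.json, blobs/
TAG=myimage:repacked     # tag to assign inside the Docker daemon

if command -v skopeo >/dev/null 2>&1 && [ -d "$OCI_DIR" ]; then
    # Copy straight from the OCI layout into the running Docker daemon.
    skopeo copy "oci:${OCI_DIR}" "docker-daemon:${TAG}"
else
    # skopeo or the layout is missing here; just show the command.
    echo "skopeo copy oci:${OCI_DIR} docker-daemon:${TAG}"
fi
```

`skopeo copy` can also push the layout to a registry directly with a `docker://` destination, which matches what the workflow in question appears to do.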
I created this tool whilst I was learning about Docker/OCI image internals.
The tool optimizes Docker images by efficiently repacking their contents into equal-sized layers. The speed improvements for large images are significant: from 2 minutes to 16 seconds in some cases.
I'm not sure how useful it is, but I find the subject quite interesting and it might be useful to others.
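The equal-sized-layers idea can be sketched as a greedy number-partitioning pass: take entries largest-first and drop each one into the currently smallest layer. A toy illustration with made-up file names and sizes (the real tool works on layer tar contents, not a flat list like this):

```shell
# Toy greedy partitioner: split files into N roughly equal-sized layers.
# All sizes and names below are invented for illustration.
N=4
out=$(printf '%s\n' \
    "900 big.bin" "500 lib.so" "400 app" "300 data" "200 cfg" "100 doc" |
  sort -rn |
  awk -v n="$N" '{
      # assign each file (largest first) to the smallest layer so far
      min = 1
      for (i = 2; i <= n; i++) if (size[i] < size[min]) min = i
      size[min] += $1
      files[min] = files[min] " " $2
  }
  END {
      for (i = 1; i <= n; i++)
          printf "layer %d: %d MB:%s\n", i, size[i], files[i]
  }')
echo "$out"
```

With these numbers the four layers come out at 900, 500, 500, and 500 MB, which is about as balanced as this input allows.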
Be sure to actually use `--pull` so builds fetch the latest base image rather than a stale local copy. I only realized this recently; it wasn't obvious that the same tag can "float" between vendor images.
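For reference, a sketch of the flag in use; the `myapp` image name is hypothetical, and the snippet is guarded so it only builds where Docker and a Dockerfile are actually present:

```shell
# `--pull` makes `docker build` re-check the registry for the base image tag
# instead of reusing whatever local copy that tag currently points at.
cmd='docker build --pull -t myapp:latest .'
if command -v docker >/dev/null 2>&1 && [ -f Dockerfile ]; then
    eval "$cmd"
else
    echo "would run: $cmd"
fi
```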
Love this idea! ty
Thanks for trying it out! I’ve tested this on a lot of public images, but it hasn’t been “battle tested” yet.
Are you able to share the image you’re using with me, or a reproduction case? Even the base images would help.
Wow, some of the savings on your GitHub page are huge. Well done!
Thank you! The most surprising one for me was the size reduction for the Google Cloud SDK image: 1.1GB down to 187MB.