1. Pushing images

Glossary:

Pull means downloading a container image directly from a remote registry.

Push means uploading a container image directly to a remote registry.

Load takes an image that is available as an archive and makes it available in the cluster.

Save saves an image into an archive.

Build takes a "build context" (directory) and creates a new image in the cluster from it.

Tag means assigning a name and tag.

1.1. Comparison table for different methods

The best method to push your image to minikube depends on the container runtime you built your cluster with (the default is docker).

Here is a comparison table to help you choose:

| Method | Supported runtimes | Performance | Load | Build |
| --- | --- | --- | --- | --- |
| docker-env command | only docker | good | yes | yes |
| cache command | all | ok | yes | no |
| podman-env command | only cri-o | good | yes | yes |
| registry addon | all | ok | yes | no |
| minikube ssh | all | best | yes* | yes* |
| ctr/buildctl command | only containerd | good | yes | yes |
| image load command | all | ok | yes | no |
| image build command | all | ok | no | yes |
  • Note 1: The default container runtime in minikube is docker.
  • Note 2: The none driver (bare metal) does not need to push images into the cluster; all images on the local host are directly usable by Kubernetes.
  • Note 3: When using ssh to run commands, the files needed for load or build must already exist on the node (not only on the client host).

1.2. 1. Pushing directly to the in-cluster Docker daemon (docker-env)

This is similar to podman-env but only for the Docker runtime. When using a container or VM driver (all drivers except none), you can reuse the Docker daemon inside the minikube cluster. This means you don't have to build on your host machine and push the image into a docker registry. You can just build inside the same docker daemon as minikube, which speeds up local experiments.

To point your terminal to use the docker daemon inside minikube, run this:

  • Linux and macOS

    eval $(minikube docker-env)
    
  • Windows

PowerShell

& minikube -p minikube docker-env --shell powershell | Invoke-Expression

cmd

@FOR /f "tokens=*" %i IN ('minikube -p minikube docker-env --shell cmd') DO @%i

Now any 'docker' command you run in this current terminal will run against the docker daemon inside the minikube cluster.

So if you run the following command, it will show you the containers running inside minikube's VM or container:

docker ps

Now you can 'build' against the docker daemon inside minikube, which is instantly accessible to the Kubernetes cluster.

docker build -t my_image .

To verify that your terminal is using minikube's docker-env, check the value of the MINIKUBE_ACTIVE_DOCKERD environment variable; it should reflect the cluster name.
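For example, on Linux or macOS:

echo $MINIKUBE_ACTIVE_DOCKERD   # prints the active cluster name, e.g. "minikube"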

[!TIP|style:flat] Tip 1: Remember to turn off imagePullPolicy:Always (use imagePullPolicy:IfNotPresent or imagePullPolicy:Never) in your yaml file. Otherwise Kubernetes won't use your locally built image and will pull it from the network.
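As a quick illustration (the pod and image names are placeholders, reusing the build example above), you can run a locally built image without pulling from a registry:

kubectl run my-pod --image=my_image --image-pull-policy=Never   # uses the image already present in minikube's docker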

[!TIP|style:flat] Tip 2: Evaluating the docker-env is only valid for the current terminal. By closing the terminal, you will go back to using your own system's docker daemon.
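If you want to point the current terminal back at your own system's docker daemon without closing it, docker-env can emit unset commands instead:

eval $(minikube docker-env --unset)   # reverts DOCKER_HOST and related variables in this shell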

[!TIP|style:flat] Tip 3: In container-based drivers such as Docker or Podman, you will need to re-do docker-env each time you restart your minikube cluster.

For more information, see the minikube docker-env command reference.


1.3. 2. Push images using 'cache' command.

From your host, you can push a Docker image directly to minikube. This image will be cached and automatically pulled into all future minikube clusters created on the machine.

minikube cache add alpine:latest

The add command will store the requested image to $MINIKUBE_HOME/cache/images, and load it into the minikube cluster's container runtime environment automatically.

[!TIP|style:flat] Tip 1: If your image changes after you cached it, you need to do 'cache reload'.

minikube refreshes the cached images on each start. However, to reload all the cached images on demand, run this command:

minikube cache reload

[!TIP|style:flat] Tip 2: If you have multiple clusters, the cache command will load the image for all of them.

To display images you have added to the cache:

minikube cache list

This listing will not include minikube's built-in system images.

To delete an image from the cache:

minikube cache delete <image name>

For more information, see the minikube cache command reference.


1.4. 3. Pushing directly to in-cluster CRI-O (podman-env)

  • Linux

This is similar to docker-env but only for CRI-O runtime. To push directly to CRI-O, configure podman client on your host using the podman-env command in your shell:

eval $(minikube podman-env)

You should now be able to use podman client on the command line on your host machine talking to the podman service inside the minikube VM:

podman-remote help

Now you can 'build' against the storage inside minikube, which is instantly accessible to the Kubernetes cluster.

podman-remote build -t my_image .

[!TIP|style:flat] Note: On Linux the remote client is called "podman-remote", while the local program is called "podman".

  • MacOS

This is similar to docker-env but only for CRI-O runtime. To push directly to CRI-O, configure Podman client on your host using the podman-env command in your shell:

eval $(minikube podman-env)

You should now be able to use Podman client on the command line on your host machine talking to the Podman service inside the minikube VM:

podman help

Now you can 'build' against the storage inside minikube, which is instantly accessible to the Kubernetes cluster.

podman build -t my_image .

[!TIP|style:flat] Note: On macOS the remote client is called "podman", since there is no local "podman" program available.

  • Windows

This is similar to docker-env but only for CRI-O runtime. To push directly to CRI-O, configure Podman client on your host using the podman-env command in your shell:

PowerShell

& minikube -p minikube podman-env --shell powershell | Invoke-Expression

cmd

@FOR /f "tokens=*" %i IN ('minikube -p minikube podman-env --shell cmd') DO @%i

You should now be able to use Podman client on the command line on your host machine talking to the Podman service inside the minikube VM:

Now you can 'build' against the storage inside minikube, which is instantly accessible to the Kubernetes cluster.

podman help
podman build -t my_image .

[!TIP|style:flat] Note: On Windows the remote client is called "podman", since there is no local "podman" program available.

Remember to turn off the imagePullPolicy:Always (use imagePullPolicy:IfNotPresent or imagePullPolicy:Never), as otherwise Kubernetes won't use images you built locally.


1.5. 4. Pushing to an in-cluster registry using the Registry addon

For illustration purposes, we will assume that the minikube VM has an IP from the 192.168.39.0/24 subnet. If you have not overridden these subnets as per the networking guide, the default subnet used by minikube for a given OS and driver combination is listed in the minikube networking documentation and is subject to change. Replace 192.168.39.0/24 with the appropriate values for your environment wherever applicable.

Ensure that docker is configured to treat 192.168.39.0/24 as an insecure registry; refer to the Docker documentation on insecure registries for instructions.

Ensure that 192.168.39.0/24 is enabled as an insecure registry in minikube; refer to the minikube insecure registry documentation for instructions.
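As a sketch (using the illustrative subnet from above; note that this flag typically only takes effect when the cluster is first created, so an existing cluster may need to be deleted and recreated):

minikube start --insecure-registry="192.168.39.0/24"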

Enable minikube registry addon:

minikube addons enable registry

Build docker image and tag it appropriately:

docker build --tag $(minikube ip):5000/test-img .

Push docker image to minikube registry:

docker push $(minikube ip):5000/test-img
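To check that the push succeeded, you can query the registry's catalog over the Docker Registry HTTP API (a simple verification sketch; test-img is the tag used above):

curl "http://$(minikube ip):5000/v2/_catalog"   # should list "test-img" among the repositories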

1.6. 5. Building images inside of minikube using SSH

Use minikube ssh to run commands inside the minikube node, and run the build command directly there. Any command you run there will run against the same daemon / storage that the Kubernetes cluster is using.

For Docker, use:

docker build

For more information on the docker build command, read the Docker documentation (docker.com).

For CRI-O, use:

sudo podman build

For more information on the podman build command, read the Podman documentation (podman.io).

For Containerd, use:

sudo ctr images import
sudo buildctl build

For more information on the ctr images command, read the containerd documentation (containerd.io).

For more information on the buildctl build command, read the Buildkit documentation (mobyproject.org).

To exit minikube ssh and come back to your terminal, type:

exit
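Putting it together for the Docker runtime, a hedged end-to-end sketch (it assumes the build context is a single Dockerfile and that /home/docker is the node user's home directory) could look like:

minikube cp Dockerfile /home/docker/Dockerfile   # copy the build context onto the node first (see note 3 above)
minikube ssh                                     # open a shell on the node
docker build -t my_image /home/docker            # build against the in-cluster Docker daemon
exit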

1.7. 6. Pushing directly to in-cluster containerd (buildkitd)

This is similar to docker-env and podman-env but only for Containerd runtime.

Currently it requires starting the daemon and setting up the tunnels manually.

1.7.1. ctr instructions

In order to access containerd, you need to log in as root. This requires adding the ssh key to /root/.ssh/authorized_keys.

docker@minikube:~$ sudo mkdir /root/.ssh
docker@minikube:~$ sudo chmod 700 /root/.ssh
docker@minikube:~$ sudo cp .ssh/authorized_keys /root/.ssh/authorized_keys
docker@minikube:~$ sudo chmod 600 /root/.ssh/authorized_keys

Note the flags that are needed for the ssh command.

minikube --alsologtostderr ssh --native-ssh=false

Tunnel the containerd socket from the machine to the host (use the ssh flags noted above, most notably the -p port and root@host).

ssh -nNT -L ./containerd.sock:/run/containerd/containerd.sock ... &

Now you can run commands against this unix socket, tunneled over ssh.

ctr --address ./containerd.sock help

Images in the "k8s.io" namespace are accessible to the Kubernetes cluster.
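For example, an image archive created on the host with 'docker save' or 'podman save' could be imported into that namespace over the tunneled socket (a sketch; my_image.tar is a placeholder):

ctr --address ./containerd.sock --namespace k8s.io images import my_image.tar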

1.7.2. buildctl instructions

Start the BuildKit daemon, using the containerd backend.

docker@minikube:~$ sudo -b buildkitd --oci-worker=false --containerd-worker=true --containerd-worker-namespace=k8s.io

Make the BuildKit socket accessible to the regular user.

docker@minikube:~$ sudo groupadd buildkit
docker@minikube:~$ sudo chgrp -R buildkit /run/buildkit
docker@minikube:~$ sudo usermod -aG buildkit $USER
docker@minikube:~$ exit

Note the flags that are needed for the ssh command.

minikube --alsologtostderr ssh --native-ssh=false

Tunnel the BuildKit socket from the machine to the host (use the ssh flags noted above, most notably the -p port and user@host).

ssh -nNT -L ./buildkitd.sock:/run/buildkit/buildkitd.sock ... &

After that, it should now be possible to use buildctl:

buildctl --addr unix://buildkitd.sock build \
    --frontend=dockerfile.v0 \
    --local context=. \
    --local dockerfile=. \
    --output type=image,name=k8s.gcr.io/username/imagename:latest

Now you can 'build' against the storage inside minikube, which is instantly accessible to the Kubernetes cluster.


1.8. 7. Loading directly to in-cluster container runtime

The minikube client will talk directly to the container runtime in the cluster, and run the load commands there - against the same storage.

minikube image load my_image
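The load command also accepts an image archive, so a hedged workflow for an image that only exists in your local Docker daemon is to save it first:

docker save my_image -o my_image.tar   # export the image from the local daemon
minikube image load my_image.tar       # load the archive into the in-cluster runtime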

For more information, see the minikube image load command reference.


1.9. 8. Building images to in-cluster container runtime

The minikube client will talk directly to the container runtime in the cluster, and run the build commands there - against the same storage.

minikube image build -t my_image .

For more information, see the minikube image build command reference.

