
ARST Check-in, Week 192 [192/521]

Algorithm

lc2185_Counting Words With a Given Prefix

Today's daily problem is a very easy one.

import "strings"
// Go's standard library already provides strings.HasPrefix; a hand-rolled version
// would be trivial too, since this is just a simple prefix comparison.
func prefixCount(words []string, pref string) int {
    cnt := 0
    for _, str := range words {
        if strings.HasPrefix(str, pref) {
            cnt++
        }
    }
    return cnt
}
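
As a quick sanity check, here is a hypothetical driver (not part of the LeetCode submission; it assumes it sits in the same package as the function above, with "fmt" added next to the "strings" import) run against the problem's sample input:

func main() {
	// "attention" and "attend" start with "at", so the expected answer is 2
	fmt.Println(prefixCount([]string{"pay", "attention", "practice", "attend"}, "at")) // 2
}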

Review

[TED Talk] Why students should have mental health days?

Mental health matters just as much as physical health. Learn to slow down, and give yourself a proper rest when you feel mentally worn out; pacing yourself like this is what actually gets you to your goals.

Tips

Gartner Releases Its Top 10 Strategic Technology Trends for 2023

Getting a sense of the coming year's trends is a good thing to do in the first week of 2023 (starting at the end of 2022 would have been even better, of course).

Share - Installing etcd-ha with Helm fails because bitnami does not support the ARM architecture - a walkthrough

After following the documented steps, both the pod and the PVC end up stuck in the Pending state.

[root@arm download]# wget https://get.helm.sh/helm-v3.10.3-linux-arm64.tar.gz
--2022-12-30 19:07:28--  https://get.helm.sh/helm-v3.10.3-linux-arm64.tar.gz
Resolving get.helm.sh (get.helm.sh)... 152.199.39.108, 2606:2800:247:1cb7:261b:1f9c:2074:3c
Connecting to get.helm.sh (get.helm.sh)|152.199.39.108|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13085851 (12M) [application/x-tar]
Saving to: ‘helm-v3.10.3-linux-arm64.tar.gz’

helm-v3.10.3-linux-arm64.tar.gz            100%[======================================================================================>]  12.48M  --.-KB/s    in 0.04s

2022-12-30 19:07:29 (332 MB/s) - ‘helm-v3.10.3-linux-arm64.tar.gz’ saved [13085851/13085851]

[root@arm download]# ll
total 125260
drwxr-xr-x  2 root root      4096 Dec 30 19:07 ./
drwx------ 12 root root      4096 Dec 29 18:54 ../
-rw-r--r--  1 root root 115165850 Dec  7 03:31 go1.19.4.linux-arm64.tar.gz
-rw-r--r--  1 root root  13085851 Dec 14 23:44 helm-v3.10.3-linux-arm64.tar.gz
[root@arm download]# tar -zxvf helm-v3.10.3-linux-arm64.tar.gz
linux-arm64/
linux-arm64/helm
linux-arm64/LICENSE
linux-arm64/README.md
[root@arm download]# mv linux-arm64/helm /usr/local/bin/helm
[root@arm download]# helm help
The Kubernetes package manager
...

[root@arm download]# helm repo add bitnami https://charts.bitnami.com/bitnami
[root@arm download]# helm pull bitnami/etcd
[root@arm download]# tar -xvf etcd-8.5.11.tgz
[root@arm download]# vi etcd/values.yaml # the newer chart already has this set to false, so no change needed
# (it later turned out I was looking at the wrong place... the setting that actually matters is the persistence block further down, around line 558)

## persistentVolumeClaimRetentionPolicy
473 ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention
474 ## @param persistentVolumeClaimRetentionPolicy.enabled Controls if and how PVCs are deleted during the lifecycle of a StatefulSet
475 ## @param persistentVolumeClaimRetentionPolicy.whenScaled Volume retention behavior when the replica count of the StatefulSet is reduced
476 ## @param persistentVolumeClaimRetentionPolicy.whenDeleted Volume retention behavior that applies when the StatefulSet is deleted
477 persistentVolumeClaimRetentionPolicy:
478   enabled: false

[root@arm download]# helm install my-release ./etcd
...
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: etcd
CHART VERSION: 8.5.11
APP VERSION: 3.5.6

** Please be patient while the chart is being deployed **

etcd can be accessed via port 2379 on the following DNS name from within your cluster:

    my-release-etcd.default.svc.cluster.local

To create a pod that you can use as a etcd client run the following command:

    kubectl run my-release-etcd-client --restart='Never' --image docker.io/bitnami/etcd:3.5.6-debian-11-r10 --env ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-etcd -o jsonpath="{.data.etcd-root-password}" | base64 -d) --env ETCDCTL_ENDPOINTS="my-release-etcd.default.svc.cluster.local:2379" --namespace default --command -- sleep infinity

Then, you can set/get a key using the commands below:

    kubectl exec --namespace default -it my-release-etcd-client -- bash
    etcdctl --user root:$ROOT_PASSWORD put /message Hello
    etcdctl --user root:$ROOT_PASSWORD get /message

To connect to your etcd server from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/my-release-etcd 2379:2379 &
    echo "etcd URL: http://127.0.0.1:2379"

 * As rbac is enabled you should add the flag `--user root:$ETCD_ROOT_PASSWORD` to the etcdctl commands. Use the command below to export the password:

    export ETCD_ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-etcd -o jsonpath="{.data.etcd-root-password}" | base64 -d)

[root@arm download]# kubectl run my-release-etcd-client --restart='Never' --image docker.io/bitnami/etcd:3.5.6-debian-11-r10 --env ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-etcd -o jsonpath="{.data.etcd-root-password}" | base64 -d) --env ETCDCTL_ENDPOINTS="my-release-etcd.default.svc.cluster.local:2379" --namespace default --command -- sleep infinity
pod/my-release-etcd-client created
[root@arm download]# kubectl exec --namespace default -it my-release-etcd-client -- bash
error: cannot exec into a container in a completed pod; current phase is Failed
[root@arm download]# k get po
NAME                     READY   STATUS    RESTARTS   AGE
envoy-fb5d77cc9-rjw9w    1/1     Running   0          24h
my-release-etcd-0        0/1     Pending   0          4m7s
my-release-etcd-client   0/1     Error     0          18s
[root@arm download]# k get po
NAME                     READY   STATUS    RESTARTS   AGE
envoy-fb5d77cc9-rjw9w    1/1     Running   0          24h
my-release-etcd-0        0/1     Pending   0          4m23s
my-release-etcd-client   0/1     Error     0          34s
[root@arm download]# k get po
NAME                     READY   STATUS    RESTARTS   AGE
envoy-fb5d77cc9-rjw9w    1/1     Running   0          24h
my-release-etcd-0        0/1     Pending   0          5m47s
my-release-etcd-client   0/1     Error     0          118s
## My first guess was a port conflict, so I dug through helm help and helm install -h for a while
# Later I realized these ports are inside the pod, so they cannot conflict with anything on the host, and moved on to investigating the PV/PVC
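
In hindsight, inspecting the already-failed client pod at this point would probably have surfaced the real cause much earlier. A hedged suggestion (not something I ran at the time):

kubectl logs my-release-etcd-client
# The client pod runs the same bitnami/etcd image, so on this ARM machine it would
# most likely print the same "exec format error" that the etcd pod shows later on.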

Then I went through the official PV/PVC documentation and found:

Static provisioning: a cluster administrator creates a number of PVs. They carry the details of the real storage and are available (visible) to cluster users. The PV objects exist in the Kubernetes API and are available for consumption.

Dynamic provisioning: when none of the static PVs created by the administrator matches a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume for that PVC. This provisioning is based on StorageClasses: the PVC must request a storage class, and the administrator must have created and configured that class, for dynamic provisioning to happen. A PVC that requests the class "" effectively disables dynamic provisioning for itself.

Combined with the PVC's details, where the StorageClass really is empty, the only option for now is to provision the PVs manually.

[root@arm ~]# k describe pvc data-my-release-etcd-0
Name:          data-my-release-etcd-0
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        app.kubernetes.io/instance=my-release
               app.kubernetes.io/name=etcd
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       my-release-etcd-0
Events:
  Type    Reason         Age                     From                         Message
  ----    ------         ----                    ----                         -------
  Normal  FailedBinding  2m32s (x50807 over 8d)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
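
The event message spells out the problem: no matching PV and no storage class set. A small extra check (not part of the original session) that confirms dynamic provisioning is off the table here is to list the cluster's StorageClasses:

kubectl get storageclass
# If nothing is listed, or no class is marked "(default)", PVCs that do not set an
# explicit storageClassName can only be satisfied by statically provisioned PVs.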

After creating a PV, one of the PVCs bound successfully (a PV can back only a single claim, so the other two claims stayed Pending for now), but things were still not right.

[root@arm pv]# cat po-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
[root@arm pv]# k apply -f po-volume.yaml
persistentvolume/task-pv-volume created
[root@arm pv]# k get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
task-pv-volume   8Gi        RWO            Retain           Bound    default/data-my-release-etcd-2                           3m26s
[root@arm pv]# k get pvc
NAME                     STATUS    VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-my-release-etcd-0   Pending                                                             9d
data-my-release-etcd-1   Pending                                                             8d
data-my-release-etcd-2   Bound     task-pv-volume   8Gi        RWO                           8d

[root@arm pv]# cat po-volume1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume-1
  labels:
    type: local
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

[root@arm pv]# cat po-volume2.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume-2
  labels:
    type: local
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
[root@arm pv]# k apply -f po-volume1.yaml
persistentvolume/task-pv-volume-1 created
[root@arm pv]# k apply -f po-volume2.yaml
persistentvolume/task-pv-volume-2 created
[root@arm pv]# k get pvc
NAME                     STATUS    VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-my-release-etcd-0   Pending                                                             9d
data-my-release-etcd-1   Pending                                                             9d
data-my-release-etcd-2   Bound     task-pv-volume   8Gi        RWO                           9d

[root@arm pv]# k get pvc
NAME                     STATUS   VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-my-release-etcd-0   Bound    task-pv-volume-1   8Gi        RWO                           9d
data-my-release-etcd-1   Bound    task-pv-volume-2   8Gi        RWO                           9d
data-my-release-etcd-2   Bound    task-pv-volume     8Gi        RWO                           9d
[root@arm pv]# k get po
NAME                     READY   STATUS             RESTARTS      AGE
envoy-fb5d77cc9-rjw9w    1/1     Running            0             10d
my-release-etcd-0        0/1     Pending            0             9d
my-release-etcd-1        0/1     Pending            0             9d
my-release-etcd-2        0/1     CrashLoopBackOff   7 (79s ago)   9d
my-release-etcd-client   0/1     Error              0             9d
[root@arm pv]# k get po
NAME                     READY   STATUS             RESTARTS      AGE
envoy-fb5d77cc9-rjw9w    1/1     Running            0             10d
my-release-etcd-0        0/1     Error              0             9d
my-release-etcd-1        0/1     Error              0             9d
my-release-etcd-2        0/1     CrashLoopBackOff   7 (81s ago)   9d
my-release-etcd-client   0/1     Error              0             9d
[root@arm download]# k describe pod my-release-etcd-0
Name:         my-release-etcd-0
Namespace:    default
Priority:     0
Node:         arm/10.0.0.29
Start Time:   Sun, 08 Jan 2023 19:44:00 +0800
Labels:       app.kubernetes.io/instance=my-release
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=etcd
              controller-revision-hash=my-release-etcd-5d49546c66
              helm.sh/chart=etcd-8.5.11
              statefulset.kubernetes.io/pod-name=my-release-etcd-0
Annotations:  checksum/token-secret: b9cdb65acc8d3eff297975d64902520093c035e029074d0dc7b172f405f46e00
              cni.projectcalico.org/containerID: 27fecd909420c116cc20862804e4c98aec2c37620b88a710dfead06d252eb863
              cni.projectcalico.org/podIP: 192.168.64.206/32
              cni.projectcalico.org/podIPs: 192.168.64.206/32
Status:       Running
IP:           192.168.64.206
IPs:
  IP:           192.168.64.206
Controlled By:  StatefulSet/my-release-etcd
Containers:
  etcd:
    Container ID:   docker://c65f7b13411d6117954fd50fe35f826189667c6d1bf53ce8ef571a82cec165ff
    Image:          docker.io/bitnami/etcd:3.5.6-debian-11-r10
    Image ID:       docker-pullable://bitnami/etcd@sha256:2d7b831769734bb97a5c1cfd2fe46e29f422b70b5ba9f9aedfd91300839ac3ee
    Ports:          2379/TCP, 2380/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 08 Jan 2023 19:45:40 +0800
      Finished:     Sun, 08 Jan 2023 19:45:40 +0800
    Ready:          False
    Restart Count:  4
    Liveness:       exec [/opt/bitnami/scripts/etcd/healthcheck.sh] delay=60s timeout=5s period=30s #success=1 #failure=5
    Readiness:      exec [/opt/bitnami/scripts/etcd/healthcheck.sh] delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:
      BITNAMI_DEBUG:                     false
      MY_POD_IP:                          (v1:status.podIP)
      MY_POD_NAME:                       my-release-etcd-0 (v1:metadata.name)
      MY_STS_NAME:                       my-release-etcd
      ETCDCTL_API:                       3
      ETCD_ON_K8S:                       yes
      ETCD_START_FROM_SNAPSHOT:          no
      ETCD_DISASTER_RECOVERY:            no
      ETCD_NAME:                         $(MY_POD_NAME)
      ETCD_DATA_DIR:                     /bitnami/etcd/data
      ETCD_LOG_LEVEL:                    info
      ALLOW_NONE_AUTHENTICATION:         no
      ETCD_ROOT_PASSWORD:                <set to the key 'etcd-root-password' in secret 'my-release-etcd'>  Optional: false
      ETCD_AUTH_TOKEN:                   jwt,priv-key=/opt/bitnami/etcd/certs/token/jwt-token.pem,sign-method=RS256,ttl=10m
      ETCD_ADVERTISE_CLIENT_URLS:        http://$(MY_POD_NAME).my-release-etcd-headless.default.svc.cluster.local:2379,http://my-release-etcd.default.svc.cluster.local:2379
      ETCD_LISTEN_CLIENT_URLS:           http://0.0.0.0:2379
      ETCD_INITIAL_ADVERTISE_PEER_URLS:  http://$(MY_POD_NAME).my-release-etcd-headless.default.svc.cluster.local:2380
      ETCD_LISTEN_PEER_URLS:             http://0.0.0.0:2380
      ETCD_CLUSTER_DOMAIN:               my-release-etcd-headless.default.svc.cluster.local
    Mounts:
      /bitnami/etcd from data (rw)
      /opt/bitnami/etcd/certs/token/ from etcd-jwt-token (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bz6gj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  etcd-jwt-token:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-release-etcd-jwt-token
    Optional:    false
  data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-bz6gj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  2m36s                 default-scheduler  Successfully assigned default/my-release-etcd-0 to arm
  Normal   Created    101s (x4 over 2m35s)  kubelet            Created container etcd
  Normal   Started    101s (x4 over 2m35s)  kubelet            Started container etcd
  Warning  BackOff    70s (x12 over 2m33s)  kubelet            Back-off restarting failed container
  Normal   Pulled     57s (x5 over 2m35s)   kubelet            Container image "docker.io/bitnami/etcd:3.5.6-debian-11-r10" already present on machine
[root@arm download]# k logs my-release-etcd-0
exec /opt/bitnami/scripts/etcd/entrypoint.sh: exec format error

Later, after reading some articles online, I realized I had forgotten to set persistence.enabled to false in the chart's values.yaml, so I gave the emptyDir mode another try.

Then:

cd /root/download
vim etcd/values.yaml

And set persistence to false:

558 persistence:
559   ## @param persistence.enabled If true, use a Persistent Volume Claim. If false, use emptyDir.
560   ##
561   enabled: false
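
For reference, the same override can also be passed on the command line instead of editing values.yaml (a sketch, using the same chart directory as above):

helm install my-release ./etcd --set persistence.enabled=false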

Then:

[root@arm download]# helm uninstall my-release
release "my-release" uninstalled
[root@arm download]# k get pod
NAME                     READY   STATUS    RESTARTS   AGE
envoy-fb5d77cc9-rjw9w    1/1     Running   0          10d
my-release-etcd-client   0/1     Error     0          9d
[root@arm download]# k get pvc
NAME                     STATUS   VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-my-release-etcd-0   Bound    task-pv-volume-1   8Gi        RWO                           9d
data-my-release-etcd-1   Bound    task-pv-volume-2   8Gi        RWO                           9d
data-my-release-etcd-2   Bound    task-pv-volume     8Gi        RWO                           9d
[root@arm download]# k delete pvc data-my-release-etcd-0 data-my-release-etcd-1 data-my-release-etcd-2
persistentvolumeclaim "data-my-release-etcd-0" deleted
persistentvolumeclaim "data-my-release-etcd-1" deleted
persistentvolumeclaim "data-my-release-etcd-2" deleted
[root@arm download]# k delete pv task-pv-volume task-pv-volume-1 task-pv-volume-2
persistentvolume "task-pv-volume" deleted
persistentvolume "task-pv-volume-1" deleted
persistentvolume "task-pv-volume-2" deleted
[root@arm download]# helm install my-release ./etcd
NAME: my-release
LAST DEPLOYED: Sun Jan  8 19:43:49 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: etcd
CHART VERSION: 8.5.11
APP VERSION: 3.5.6

** Please be patient while the chart is being deployed **

etcd can be accessed via port 2379 on the following DNS name from within your cluster:

    my-release-etcd.default.svc.cluster.local

To create a pod that you can use as a etcd client run the following command:

    kubectl run my-release-etcd-client --restart='Never' --image docker.io/bitnami/etcd:3.5.6-debian-11-r10 --env ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-etcd -o jsonpath="{.data.etcd-root-password}" | base64 -d) --env ETCDCTL_ENDPOINTS="my-release-etcd.default.svc.cluster.local:2379" --namespace default --command -- sleep infinity

Then, you can set/get a key using the commands below:

    kubectl exec --namespace default -it my-release-etcd-client -- bash
    etcdctl --user root:$ROOT_PASSWORD put /message Hello
    etcdctl --user root:$ROOT_PASSWORD get /message

To connect to your etcd server from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/my-release-etcd 2379:2379 &
    echo "etcd URL: http://127.0.0.1:2379"

 * As rbac is enabled you should add the flag `--user root:$ETCD_ROOT_PASSWORD` to the etcdctl commands. Use the command below to export the password:

    export ETCD_ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-etcd -o jsonpath="{.data.etcd-root-password}" | base64 -d)
[root@arm download]# k get pod
NAME                     READY   STATUS             RESTARTS      AGE
envoy-fb5d77cc9-rjw9w    1/1     Running            0             10d
my-release-etcd-0        0/1     CrashLoopBackOff   2 (18s ago)   44s
my-release-etcd-client   0/1     Error              0             9d
[root@arm download]# k describe pod my-release-etcd-0
Name:         my-release-etcd-0
Namespace:    default
Priority:     0
Node:         arm/10.0.0.29
Start Time:   Sun, 08 Jan 2023 19:44:00 +0800
Labels:       app.kubernetes.io/instance=my-release
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=etcd
              controller-revision-hash=my-release-etcd-5d49546c66
              helm.sh/chart=etcd-8.5.11
              statefulset.kubernetes.io/pod-name=my-release-etcd-0
Annotations:  checksum/token-secret: b9cdb65acc8d3eff297975d64902520093c035e029074d0dc7b172f405f46e00
              cni.projectcalico.org/containerID: 27fecd909420c116cc20862804e4c98aec2c37620b88a710dfead06d252eb863
              cni.projectcalico.org/podIP: 192.168.64.206/32
              cni.projectcalico.org/podIPs: 192.168.64.206/32
Status:       Running
IP:           192.168.64.206
IPs:
  IP:           192.168.64.206
Controlled By:  StatefulSet/my-release-etcd
Containers:
  etcd:
    Container ID:   docker://c65f7b13411d6117954fd50fe35f826189667c6d1bf53ce8ef571a82cec165ff
    Image:          docker.io/bitnami/etcd:3.5.6-debian-11-r10
    Image ID:       docker-pullable://bitnami/etcd@sha256:2d7b831769734bb97a5c1cfd2fe46e29f422b70b5ba9f9aedfd91300839ac3ee
    Ports:          2379/TCP, 2380/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 08 Jan 2023 19:45:40 +0800
      Finished:     Sun, 08 Jan 2023 19:45:40 +0800
    Ready:          False
    Restart Count:  4
    Liveness:       exec [/opt/bitnami/scripts/etcd/healthcheck.sh] delay=60s timeout=5s period=30s #success=1 #failure=5
    Readiness:      exec [/opt/bitnami/scripts/etcd/healthcheck.sh] delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:
      BITNAMI_DEBUG:                     false
      MY_POD_IP:                          (v1:status.podIP)
      MY_POD_NAME:                       my-release-etcd-0 (v1:metadata.name)
      MY_STS_NAME:                       my-release-etcd
      ETCDCTL_API:                       3
      ETCD_ON_K8S:                       yes
      ETCD_START_FROM_SNAPSHOT:          no
      ETCD_DISASTER_RECOVERY:            no
      ETCD_NAME:                         $(MY_POD_NAME)
      ETCD_DATA_DIR:                     /bitnami/etcd/data
      ETCD_LOG_LEVEL:                    info
      ALLOW_NONE_AUTHENTICATION:         no
      ETCD_ROOT_PASSWORD:                <set to the key 'etcd-root-password' in secret 'my-release-etcd'>  Optional: false
      ETCD_AUTH_TOKEN:                   jwt,priv-key=/opt/bitnami/etcd/certs/token/jwt-token.pem,sign-method=RS256,ttl=10m
      ETCD_ADVERTISE_CLIENT_URLS:        http://$(MY_POD_NAME).my-release-etcd-headless.default.svc.cluster.local:2379,http://my-release-etcd.default.svc.cluster.local:2379
      ETCD_LISTEN_CLIENT_URLS:           http://0.0.0.0:2379
      ETCD_INITIAL_ADVERTISE_PEER_URLS:  http://$(MY_POD_NAME).my-release-etcd-headless.default.svc.cluster.local:2380
      ETCD_LISTEN_PEER_URLS:             http://0.0.0.0:2380
      ETCD_CLUSTER_DOMAIN:               my-release-etcd-headless.default.svc.cluster.local
    Mounts:
      /bitnami/etcd from data (rw)
      /opt/bitnami/etcd/certs/token/ from etcd-jwt-token (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bz6gj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  etcd-jwt-token:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-release-etcd-jwt-token
    Optional:    false
  data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-bz6gj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  2m36s                 default-scheduler  Successfully assigned default/my-release-etcd-0 to arm
  Normal   Created    101s (x4 over 2m35s)  kubelet            Created container etcd
  Normal   Started    101s (x4 over 2m35s)  kubelet            Started container etcd
  Warning  BackOff    70s (x12 over 2m33s)  kubelet            Back-off restarting failed container
  Normal   Pulled     57s (x5 over 2m35s)   kubelet            Container image "docker.io/bitnami/etcd:3.5.6-debian-11-r10" already present on machine
[root@arm download]# k logs my-release-etcd-0
exec /opt/bitnami/scripts/etcd/entrypoint.sh: exec format error
[root@arm download]# 

The same error came back, so I searched online and found an issue: Running the container fails with ’exec /opt/bitnami/scripts/zookeeper/entrypoint.sh: exec format error’

I’m afraid we currently don’t have support for ARM architecture in our containers. It’s something that we have in our backlog, but there are no immediate plans to work on it. As soon as there are news, we will let you know.

So it turns out bitnami simply does not support ARM servers....... I'm gutted.
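
For next time, a quick pre-flight check (a sketch; it assumes the Docker CLI is available on the node, which the docker:// container IDs above suggest) is to compare the node's architecture with the one the image was built for:

uname -m
# aarch64 on this ARM box
docker image inspect docker.io/bitnami/etcd:3.5.6-debian-11-r10 --format '{{.Os}}/{{.Architecture}}'
# If this prints linux/amd64 while the node is aarch64, the container entrypoint
# will fail with exactly this kind of "exec format error".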