Kubernetes cluster setup with kubeadm (master / worker): firewall ports, kubeadm init, and common first-run issues

[root@localhost ~]# firewall-cmd --permanent --add-port=6443/tcp
success
[root@localhost ~]# firewall-cmd --permanent --add-port=2379-2380/tcp
success
[root@localhost ~]#  firewall-cmd --permanent --add-port=10250/tcp
success
[root@localhost ~]# firewall-cmd --permanent --add-port=10251/tcp
success
[root@localhost ~]# firewall-cmd --permanent --add-port=10252/tcp
success
[root@localhost ~]#  firewall-cmd --permanent --add-port=10255/tcp
success
[root@localhost ~]# firewall-cmd --reload
success
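The individual firewall-cmd calls above can be condensed into one loop. A sketch, assuming firewalld is running; the port list is the same set of control-plane ports opened above:

```shell
# Open the control-plane ports in one pass, then reload firewalld.
# 6443: API server, 2379-2380: etcd, 10250: kubelet,
# 10251/10252: scheduler/controller-manager, 10255: read-only kubelet.
for port in 6443 2379-2380 10250 10251 10252 10255; do
    firewall-cmd --permanent --add-port=${port}/tcp
done
firewall-cmd --reload
```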


[root@localhost ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@localhost ~]# kubeadm init
I1121 21:57:43.758717   10203 version.go:240] remote version is much newer: v1.16.3; falling back to: stable-1.14
[init] Using Kubernetes version: v1.14.9
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.105]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.0.105 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.0.105 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 32.016737 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: enfl8b.t1pp71kev1gnsarr
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.105:6443 --token enfl8b.t1pp71kev1gnsarr \
    --discovery-token-ca-cert-hash sha256:7ae778e5fd88ababe4a19ba03426638fa90ebc84ecbec7671530ca06be87719e 

Issue
[root@localhost ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Resolution

[root@localhost ~]# mkdir -p $HOME/.kube
[root@localhost ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@localhost ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@localhost ~]# kubectl get nodes
NAME                    STATUS     ROLES    AGE   VERSION
localhost.localdomain   NotReady   master   94s   v1.14.0


[root@localhost ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Removing info for node "localhost.localdomain" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
W1121 22:09:48.930252   13390 reset.go:158] [reset] failed to remove etcd member: error syncing endpoints with etc: etcdclient: no available endpoints
.Please manually remove this etcd member using etcdctl
[reset] Stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.


[root@localhost ~]# kubeadm init
I1121 22:10:05.763463   13745 version.go:240] remote version is much newer: v1.16.3; falling back to: stable-1.14
[init] Using Kubernetes version: v1.14.9
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING Hostname]: hostname "localhost.localdomain" could not be reached
[WARNING Hostname]: hostname "localhost.localdomain": lookup localhost.localdomain on 192.168.0.1:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.0.105 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.0.105 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.105]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.506319 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: mh66uo.ruqv7wdu9093lqfl
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.105:6443 --token mh66uo.ruqv7wdu9093lqfl \
    --discovery-token-ca-cert-hash sha256:b5cd5c1902806fe49b1cff559b810e38c55d9be8720cacc170c7721b96b6e19e 
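Bootstrap tokens like the one printed above expire (24 hours by default), so the join command cannot be reused indefinitely. If it is lost or expired, a fresh one can be generated on the master:

```shell
# Create a new bootstrap token and print the full 'kubeadm join'
# command, including the CA cert hash, ready to run on a worker.
kubeadm token create --print-join-command

# List existing tokens and their TTLs
kubeadm token list
```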

Issue

[root@localhost ~]# kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

Resolution: the `$HOME/.kube/config` copied after the first `kubeadm init` still carries the old cluster CA, which `kubeadm reset` wiped and the second init regenerated, so kubectl rejects the new API server certificate. Point kubectl at the freshly generated admin.conf instead:

[root@localhost ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@localhost ~]# kubectl get nodes
NAME                    STATUS     ROLES    AGE     VERSION
localhost.localdomain   NotReady   master   3m51s   v1.14.0
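Note that the export above only lasts for the current shell session. A sketch of two more durable options (assumes bash as the login shell):

```shell
# Option 1: persist KUBECONFIG across logins
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile

# Option 2: overwrite the stale per-user copy left over from the
# first init, so the default ~/.kube/config works again
cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```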

[root@localhost ~]# cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.0.105
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.14.9
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.0.105
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
status: {}


[root@localhost ~]# kubectl describe nodes
Name:               localhost.localdomain
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=localhost.localdomain
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 21 Nov 2019 22:10:41 +0530
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 21 Nov 2019 22:30:16 +0530   Thu, 21 Nov 2019 22:10:32 +0530   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 21 Nov 2019 22:30:16 +0530   Thu, 21 Nov 2019 22:10:32 +0530   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 21 Nov 2019 22:30:16 +0530   Thu, 21 Nov 2019 22:10:32 +0530   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 21 Nov 2019 22:30:16 +0530   Thu, 21 Nov 2019 22:10:32 +0530   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  192.168.0.106
  Hostname:    localhost.localdomain
Capacity:
 cpu:                2
 ephemeral-storage:  47285700Ki
 hugepages-2Mi:      0
 memory:             3863540Ki
 pods:               110
Allocatable:
 cpu:                2
 ephemeral-storage:  43578501048
 hugepages-2Mi:      0
 memory:             3761140Ki
 pods:               110
System Info:
 Machine ID:                 87c5aa6adf574ef287647f04bef0b340
 System UUID:                F0344D56-F9A3-825E-E287-83DA96AE884D
 Boot ID:                    0943e6af-e37f-4ac3-bedc-dcdc909c062f
 Kernel Version:             3.10.0-862.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.13.1
 Kubelet Version:            v1.14.0
 Kube-Proxy Version:         v1.14.0
Non-terminated Pods:         (5 in total)
  Namespace                  Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                                             ------------  ----------  ---------------  -------------  ---
  kube-system                etcd-localhost.localdomain                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
  kube-system                kube-apiserver-localhost.localdomain             250m (12%)    0 (0%)      0 (0%)           0 (0%)         0s
  kube-system                kube-controller-manager-localhost.localdomain    200m (10%)    0 (0%)      0 (0%)           0 (0%)         1s
  kube-system                kube-proxy-fwdcr                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
  kube-system                kube-scheduler-localhost.localdomain             100m (5%)     0 (0%)      0 (0%)           0 (0%)         1s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                550m (27%)  0 (0%)
  memory             0 (0%)      0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                   From                               Message
  ----     ------                   ----                  ----                               -------
  Normal   NodeHasSufficientMemory  19m (x8 over 19m)     kubelet, localhost.localdomain     Node localhost.localdomain status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    19m (x8 over 19m)     kubelet, localhost.localdomain     Node localhost.localdomain status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     19m (x7 over 19m)     kubelet, localhost.localdomain     Node localhost.localdomain status is now: NodeHasSufficientPID
  Normal   Starting                 19m                   kube-proxy, localhost.localdomain  Starting kube-proxy.
  Normal   Starting                 19m                   kubelet, localhost.localdomain     Starting kubelet.
  Normal   NodeHasSufficientMemory  19m                   kubelet, localhost.localdomain     Node localhost.localdomain status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    19m                   kubelet, localhost.localdomain     Node localhost.localdomain status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     19m                   kubelet, localhost.localdomain     Node localhost.localdomain status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  19m                   kubelet, localhost.localdomain     Updated Node Allocatable limit across pods
  Normal   Starting                 19m                   kube-proxy, localhost.localdomain  Starting kube-proxy.
  Warning  Rebooted                 4m57s (x78 over 19m)  kubelet, localhost.localdomain     Node localhost.localdomain has been rebooted, boot id: a4e79676-62a9-463e-9f56-73736ace9c40
  Warning  Rebooted                 4m3s (x84 over 19m)   kubelet, localhost.localdomain     Node localhost.localdomain has been rebooted, boot id: 0943e6af-e37f-4ac3-bedc-dcdc909c062f


[root@localhost ~]# kubectl get nodes
NAME                    STATUS     ROLES    AGE   VERSION
localhost.localdomain   NotReady   master   25m   v1.14.0


[root@localhost ~]# export kubever=$(kubectl version | base64 | tr -d '\n')
[root@localhost ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
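Applying the manifest only creates the DaemonSet; the node stays NotReady until the weave-net pod is running and has written its CNI config. A quick way to watch this happen (the `name=weave-net` label comes from the Weave manifest):

```shell
# Watch the weave-net DaemonSet pods come up on each node
kubectl -n kube-system get pods -l name=weave-net -o wide

# The node flips to Ready once the CNI config lands in /etc/cni/net.d
kubectl get nodes -w
```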

[root@localhost ~]# kubectl get nodes
NAME                    STATUS     ROLES    AGE   VERSION
localhost.localdomain   NotReady   master   29m   v1.14.0


The node still reported NotReady even after the CNI add-on was applied; restarting the kubelet forced it to pick up the new CNI configuration:

[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart kubelet


[root@localhost ~]# kubectl get nodes
NAME                    STATUS   ROLES    AGE   VERSION
localhost.localdomain   Ready    master   47m   v1.14.0
[root@localhost ~]# kubectl describe nodes
Name:               localhost.localdomain
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=localhost.localdomain
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 21 Nov 2019 22:10:41 +0530
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Thu, 21 Nov 2019 22:43:10 +0530   Thu, 21 Nov 2019 22:43:10 +0530   WeaveIsUp                    Weave pod has set this
  MemoryPressure       False   Thu, 21 Nov 2019 22:57:35 +0530   Thu, 21 Nov 2019 22:10:32 +0530   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Thu, 21 Nov 2019 22:57:35 +0530   Thu, 21 Nov 2019 22:10:32 +0530   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Thu, 21 Nov 2019 22:57:35 +0530   Thu, 21 Nov 2019 22:10:32 +0530   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Thu, 21 Nov 2019 22:57:35 +0530   Thu, 21 Nov 2019 22:42:33 +0530   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.0.106
  Hostname:    localhost.localdomain
Capacity:
 cpu:                2
 ephemeral-storage:  47285700Ki
 hugepages-2Mi:      0
 memory:             3863540Ki
 pods:               110
Allocatable:
 cpu:                2
 ephemeral-storage:  43578501048
 hugepages-2Mi:      0
 memory:             3761140Ki
 pods:               110
System Info:
 Machine ID:                 87c5aa6adf574ef287647f04bef0b340
 System UUID:                F0344D56-F9A3-825E-E287-83DA96AE884D
 Boot ID:                    0943e6af-e37f-4ac3-bedc-dcdc909c062f
 Kernel Version:             3.10.0-862.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.13.1
 Kubelet Version:            v1.14.0
 Kube-Proxy Version:         v1.14.0
Non-terminated Pods:         (4 in total)
  Namespace                  Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                        ------------  ----------  ---------------  -------------  ---
  kube-system                coredns-584795fc57-8dhdt    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     47m
  kube-system                coredns-584795fc57-vfww5    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     47m
  kube-system                kube-proxy-fwdcr            0 (0%)        0 (0%)      0 (0%)           0 (0%)         47m
  kube-system                weave-net-864ct             20m (1%)      0 (0%)      0 (0%)           0 (0%)         18m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                220m (11%)  0 (0%)
  memory             140Mi (3%)  340Mi (9%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                  From                               Message
  ----     ------                   ----                 ----                               -------
  Normal   NodeHasSufficientMemory  48m (x8 over 48m)    kubelet, localhost.localdomain     Node localhost.localdomain status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    48m (x8 over 48m)    kubelet, localhost.localdomain     Node localhost.localdomain status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     48m (x7 over 48m)    kubelet, localhost.localdomain     Node localhost.localdomain status is now: NodeHasSufficientPID
  Normal   Starting                 47m                  kube-proxy, localhost.localdomain  Starting kube-proxy.
  Normal   Starting                 47m                  kubelet, localhost.localdomain     Starting kubelet.
  Normal   NodeHasSufficientMemory  47m                  kubelet, localhost.localdomain     Node localhost.localdomain status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    47m                  kubelet, localhost.localdomain     Node localhost.localdomain status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     47m                  kubelet, localhost.localdomain     Node localhost.localdomain status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  47m                  kubelet, localhost.localdomain     Updated Node Allocatable limit across pods
  Normal   Starting                 47m                  kube-proxy, localhost.localdomain  Starting kube-proxy.
  Warning  Rebooted                 17m (x152 over 47m)  kubelet, localhost.localdomain     Node localhost.localdomain has been rebooted, boot id: a4e79676-62a9-463e-9f56-73736ace9c40
  Warning  Rebooted                 16m (x156 over 47m)  kubelet, localhost.localdomain     Node localhost.localdomain has been rebooted, boot id: 0943e6af-e37f-4ac3-bedc-dcdc909c062f
[root@localhost ~]# 

[root@localhost ~]# kubectl  get pods  --all-namespaces
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-584795fc57-8dhdt   1/1     Running   0          35m
kube-system   coredns-584795fc57-vfww5   1/1     Running   0          35m
kube-system   kube-proxy-fwdcr           1/1     Running   0          35m
kube-system   weave-net-864ct            2/2     Running   0          6m30s
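At this point the control plane is healthy, but the master still carries the NoSchedule taint shown in `kubectl describe nodes`, so ordinary pods will not land on it. For a single-node test cluster the taint can be removed (per the standard kubeadm docs; skip this if worker nodes will be joined):

```shell
# Allow regular workloads to schedule on the master node.
# The trailing '-' removes the taint.
kubectl taint nodes --all node-role.kubernetes.io/master-
```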

