While creating the Azure Kubernetes Service (AKS) cluster, Azure, by default, creates a certain number of nodes with default node names and configurations. The Administration Console pod does not get deployed in the AKS cluster when the node name exceeds 32 characters.
Workaround: Specify a short node pool name by using the --nodepool-name option while running the az aks create command.
For example, az aks create --resource-group <resource-group-name> --name <AKS-cluster-name> --node-count 3 --nodepool-name n --generate-ssh-keys --attach-acr <ACR-name>
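After the cluster is created, you can optionally verify that the generated node names stay within the limit. The following commands are a verification sketch, not part of the documented workaround; the resulting node names depend on your node pool name and cluster configuration:
# Download the cluster credentials, then list the node names
az aks get-credentials --resource-group <resource-group-name> --name <AKS-cluster-name>
kubectl get nodes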
After you install the Access Manager Helm chart, the chart is deployed and the message STATUS: deployed is displayed. However, the Access Manager resources are not deployed immediately. Run the following commands to check the status of the resources:
To check the status of the Access Manager pods, run the following command:
kubectl get --namespace <name-of-the-namespace> pods
To check the status of all the Access Manager resources, run the following command:
kubectl get --namespace <name-of-the-namespace> statefulset,pods,pv,pvc,svc,ingress
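Optionally, instead of polling the pod status, you can wait until all pods in the namespace report Ready. This is a convenience step that is not part of the documented procedure; adjust the timeout value to your environment:
kubectl wait --namespace <name-of-the-namespace> --for=condition=Ready pods --all --timeout=600s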
For information about viewing the logs, see Section 25.6.3, Debugging Pods.
Run the following command to view the names of the Access Manager pods:
kubectl get pods -n <name-of-the-namespace>
To view the configuration logs of the Access Manager pods, run the following commands:
Administration Console: kubectl logs -f pod/<name-of-the-administration-console-pod> -c am-ac --namespace <name-of-the-namespace>
eDirectory: kubectl logs -f pod/<name-of-the-administration-console-pod> -c am-edir --namespace <name-of-the-namespace>
Identity Server: kubectl logs -f pod/<name-of-the-identity-server-pod> --namespace <name-of-the-namespace>
Access Gateway: kubectl logs -f pod/<name-of-the-access-gateway-pod> --namespace <name-of-the-namespace>
Analytics Server: kubectl logs -f pod/<name-of-the-analytics-dashboard-pod> --namespace <name-of-the-namespace>
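To keep a copy of the logs for later analysis, you can redirect the output to a file. This is an optional step; the --tail value and the file name are arbitrary examples:
kubectl logs pod/<name-of-the-administration-console-pod> -c am-ac --namespace <name-of-the-namespace> --tail=500 > am-ac.log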
To get inside a pod, run the following commands:
Administration Console: kubectl exec -it pod/<name-of-the-administration-console-pod> -c am-ac --namespace <name-of-the-namespace> -- bash
eDirectory: kubectl exec -it pod/<name-of-the-administration-console-pod> -c am-edir --namespace <name-of-the-namespace> -- bash
Identity Server: kubectl exec -it pod/<name-of-the-identity-server-pod> --namespace <name-of-the-namespace> -- bash
Access Gateway: kubectl exec -it pod/<name-of-the-access-gateway-pod> --namespace <name-of-the-namespace> -- bash
Analytics Server: kubectl exec -it pod/<name-of-the-analytics-dashboard-pod> --namespace <name-of-the-namespace> -- bash
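You can also run a single command inside a container without opening an interactive shell. For example, the following optional check prints the /etc/hosts file of the Administration Console container:
kubectl exec pod/<name-of-the-administration-console-pod> -c am-ac --namespace <name-of-the-namespace> -- cat /etc/hosts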
To retrieve more information about each pod, run the following commands:
Administration Console: kubectl describe pod/<name-of-the-administration-console-pod> --namespace <name-of-the-namespace>
Identity Server: kubectl describe pod/<name-of-the-identity-server-pod> --namespace <name-of-the-namespace>
Access Gateway: kubectl describe pod/<name-of-the-access-gateway-pod> --namespace <name-of-the-namespace>
Analytics Server: kubectl describe pod/<name-of-the-analytics-dashboard-pod> --namespace <name-of-the-namespace>
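The Events section at the end of the describe output usually indicates why a pod does not start. As an additional, optional check, you can list the recent events of the entire namespace sorted by time:
kubectl get events --namespace <name-of-the-namespace> --sort-by=.metadata.creationTimestamp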
Cannot use a Release Name that is currently in use.
Workaround: Uninstall the release.
View the available releases:
helm list -n <name-of-the-namespace>
Uninstall the release:
helm uninstall --namespace <name-of-the-namespace> <release-name>
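If the release you want to remove does not appear in the default listing (for example, because the installation failed), you can optionally list all releases regardless of their state:
helm list --all --namespace <name-of-the-namespace>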
Running the kubectl describe pod command can display the following error messages:
probe errored: rpc error: code = DeadlineExceeded desc = context deadline exceeded
pod has unbound immediate PersistentVolumeClaims
Workaround: Ignore these messages.
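Although these messages can be ignored, you can optionally confirm that the PersistentVolumeClaims eventually reach the Bound status:
kubectl get pvc --namespace <name-of-the-namespace>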
After deployment, Access Gateway nodes display warnings that they cannot connect to the DNS server.
Workaround: Check and rectify the Container Network Interface (CNI) plugin configuration, or deploy Access Manager again with another CNI plugin applied to the Kubernetes cluster.
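To check whether DNS resolution works inside the cluster before or after changing the CNI plugin, you can run a temporary test pod. This is a generic Kubernetes check rather than an Access Manager-specific step; the busybox image and the lookup target are only examples:
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 --namespace <name-of-the-namespace> -- nslookup kubernetes.default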
This issue can occur if Swap is enabled on the host machine.
Workaround: Disable swap in one of the following ways (see the example after these steps):
Use the swapoff -a command.
Or,
Open the /etc/fstab file, and comment out the swap entry.
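For example, on a typical Linux host, the following commands disable swap immediately and keep it disabled after a reboot. The sed pattern assumes a standard fstab swap entry; review /etc/fstab before editing it:
# Turn off swap now, then comment out the swap entry in /etc/fstab
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab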
This issue can occur if you revert the master node, because the worker nodes still assume a connection with the old master node.
Workaround:
Run the following command to retrieve a token from the master node:
kubeadm token create --print-join-command
Remove the following files from the worker nodes:
ca.crt: sudo rm /etc/kubernetes/pki/ca.crt
kubelet.conf: sudo rm /etc/kubernetes/kubelet.conf
Delete the worker nodes:
Run the following commands:
kubectl drain <name-of-the-node> --ignore-daemonsets
kubectl delete node <name-of-the-node>
Repeat the previous step for all the worker nodes.
Connect the worker nodes to the master node by running the join command (retrieved in the first step) on each worker node. For example:
kubeadm join <master-node-IP>:6443 --token n5hxyu.v0wzsc0zk9rosohw --discovery-token-ca-cert-hash sha256:4e891f83f3aaa75832d8a955e25ed50111d6bc3b26146180e2c4d48f9fa5556d
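After the worker nodes rejoin, you can optionally verify from the master node that all nodes report the Ready status:
kubectl get nodes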
This issue occurs when the host entries are not available.
Workaround: Add the host entries to the worker nodes.
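For example, you can append an entry to the /etc/hosts file on each worker node. The IP address and host name below are placeholders; use the values of your environment:
echo "<IP-address> <host-name>" | sudo tee -a /etc/hosts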