FAQ

1. In which regions is M-FKE supported?

Currently, FPT Cloud supports four regions:

  • HAN (Hanoi)

  • SGN (Saigon/Ho Chi Minh City)

  • HAN2 (Hoa Lac)

  • JPN01 (Japan)

M-FKE is supported in all four of the above regions.

2. Can a single M-FKE cluster be deployed across multiple regions?

M-FKE does not support clusters that span multiple regions. Instead, you can create one cluster per region and deploy the same application to each to implement business continuity and disaster recovery (BC&DR).

3. Does M-FKE support multiple VM configurations within a single cluster?

M-FKE supports multiple VM configurations within a single cluster using worker groups, where each worker group can have a different configuration. Worker nodes within the same worker group share the same configuration (CPU, RAM, disk).
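
For illustration, you can pin a workload to a specific worker group with a nodeSelector. This is only a minimal sketch: the label key/value `worker-group: high-memory` is hypothetical, so check the labels your worker group actually applies to its nodes (for example with `kubectl get nodes --show-labels`) before using it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx:1.25
  # Schedule this pod only onto nodes of the intended worker group.
  # "worker-group: high-memory" is a hypothetical label; replace it with
  # the label that your M-FKE worker group actually sets on its nodes.
  nodeSelector:
    worker-group: high-memory
```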

4. How many worker nodes does M-FKE support in a single cluster?

M-FKE has a default upper limit of 100 worker nodes per worker group and 100 worker groups per cluster. If you need to increase the worker node limit, please get in touch with FPT Cloud.

5. Is M-FKE compatible with existing Kubernetes applications?

Since M-FKE uses native Kubernetes, it is fully compatible with Kubernetes platforms on other clouds such as AWS, Azure, GCP, and DigitalOcean, as well as Kubernetes clusters installed on your own infrastructure. This makes it easy to migrate applications between FPT Cloud, your own data centers, and other clouds.

6. How do I expose applications outside the cluster?

There are several ways to expose applications outside the cluster for customer access. One of the simplest is to use a Service of type LoadBalancer, following the guide below: https://fptcloud.com/documents/managed-fpt-kubernetes-engine/?doc=service-type-load-balancer
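
A minimal sketch of such a Service is shown here; the names, labels, and ports are placeholders, and the guide above describes any M-FKE-specific details.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb              # placeholder name
spec:
  type: LoadBalancer        # asks the cloud to provision an external load balancer
  selector:
    app: web                # must match the labels of the pods to expose
  ports:
    - port: 80              # port exposed on the load balancer
      targetPort: 8080      # container port of the application
```

Once the Service is created, the external IP assigned to it appears in `kubectl get svc web-lb`.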

7. How can I monitor cluster performance and configure alerts?

FPT Cloud provides the FMON product, which allows you to monitor the performance of your Kubernetes cluster and configure alerts. Additionally, FMON offers logging and tracing capabilities that integrate easily with FKE.

8. What is a worker group base? Can a worker group base be deleted?

M-FKE clusters always contain a worker group base that runs system components in the kube-system namespace, such as CoreDNS, the CNI controller, and the metrics server. The worker group base cannot be deleted from the cluster.

9. How can I change the current worker group's flavor or disk configuration?

M-FKE does not support directly modifying the flavor or disk size of an existing worker group. To change the flavor or disk configuration, create a new worker group with the desired settings, migrate your applications from the old worker group to the new one, and then delete the old worker group.
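
A minimal sketch of the migration step, assuming the new worker group's nodes carry a hypothetical `worker-group: new-pool` label: repoint the workload's nodeSelector at the new group and let the rolling update move the pods before deleting the old group.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      # Point the workload at the new worker group. "worker-group: new-pool"
      # is a hypothetical label; use the label your new worker group's nodes carry.
      nodeSelector:
        worker-group: new-pool
      containers:
        - name: app
          image: nginx:1.25
```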

10. Why doesn't the cluster scale out new nodes when the CPU and memory of a worker group's nodes are overloaded?

Cluster Autoscaler (CA) scales in/out based on the resource requests (CPU and memory) of the pods deployed on nodes, not on the nodes' actual resource usage. CA scales out, adding new nodes, only when pods are pending because no existing node has enough free allocatable resources to satisfy their requests; the pending pods are then scheduled onto the newly added nodes. High actual usage on its own does not trigger a scale-out.
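
Because CA reacts to requests rather than usage, make sure your pods declare resource requests; a minimal sketch with placeholder image and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: "500m"       # the Cluster Autoscaler sizes the cluster against these requests
          memory: "512Mi"
        limits:
          cpu: "1"
          memory: "1Gi"
```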

11. Why doesn't the cluster scale in (remove) worker nodes when the CPU and memory usage of nodes in a worker group is very low?

The Cluster Autoscaler (CA) scales in/out based on the resource requests (CPU and memory) of the pods deployed on the nodes, not on the nodes' actual resource usage. CA scales in (removes) a node only when its utilization, measured as requested resources divided by allocatable resources, stays below 50% for 30 minutes. Nodes whose pods' requests keep utilization above that threshold are not removed even if actual usage is very low.
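
If a particular workload should never cause its node to be removed, the upstream Kubernetes Cluster Autoscaler honours a per-pod annotation that blocks scale-in of that node. A minimal sketch, assuming M-FKE runs the standard Cluster Autoscaler:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-app
  annotations:
    # Tells the Cluster Autoscaler not to remove the node this pod runs on.
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  containers:
    - name: app
      image: nginx:1.25
```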

12. Is the cluster upgrade process fully automated and guaranteed to succeed 100% of the time? Is there a possibility of service downtime?

M-FKE upgrades the cluster using a worker node rollout mechanism: new k8s worker nodes are created and join the cluster, and pods running on the older k8s worker nodes are then migrated to the new ones. In most cases the upgrade completes automatically and successfully. However, M-FKE may not be able to automatically evict pods from the older worker nodes in certain cases, for example when evicting a pod would violate its PodDisruptionBudget (PDB). During the upgrade, service downtime may occur from the moment pods on the older worker nodes are deleted until the new pods are running on the newer worker nodes. If pods use Persistent Volumes, the wait for old pods to be evicted and new pods to become fully running may be longer. To ensure system stability, users should therefore actively monitor the upgrade process.
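
To keep a minimum number of replicas available while nodes are rolled, you can define a PodDisruptionBudget for the workload; a minimal sketch with placeholder names and counts:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: demo-app-pdb
spec:
  minAvailable: 2            # keep at least 2 replicas running during node drains
  selector:
    matchLabels:
      app: demo-app          # must match the labels of the workload's pods
```

Note that an overly strict budget (for example, minAvailable equal to the replica count) can block eviction entirely and stall the upgrade, which is exactly the situation described above.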

13. Is it possible to set taints on the worker group base?

The worker group base supports only label assignment; it does not support taint assignment. If a taint were applied to the worker group base, system pods, which lack the matching toleration, could not be scheduled onto those nodes, and this can cause issues with cluster operation. M-FKE recommends that administrators deploy their applications to other worker groups to avoid impacting system operation.
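
For reference, standard Kubernetes taint and toleration semantics are sketched below; the node name, taint key, and values are placeholders, and this does not imply any M-FKE portal support beyond what is stated above.

```yaml
# Example taint applied directly to a node:
#   kubectl taint nodes <node-name> dedicated=gpu:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: gpu-app
spec:
  containers:
    - name: app
      image: nginx:1.25
  # Only pods carrying a matching toleration can be scheduled onto the tainted node.
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "gpu"
      effect: "NoSchedule"
```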
