Kubernetes Release Calendar

| Kubernetes Version | Upstream Release | FKE Preview | FKE GA | FKE Standard Support End |
| --- | --- | --- | --- | --- |
| 1.21 | April 2021 | June 2022 | July 2022 | September 2024 |
| 1.22 | August 2021 | February 2023 | March 2023 | November 2024 |
| 1.23 | December 2021 | June 2023 | July 2023 | February 2025 |
| 1.24 | May 2022 | August 2023 | September 2023 | May 2025 |
| 1.25 | August 2022 | September 2023 | October 2023 | August 2025 |
| 1.26 | December 2022 | December 2023 | January 2024 | November 2025 |
| 1.27 | April 2023 | December 2023 | February 2024 | February 2026 |
| 1.28 | August 2023 | February 2024 | March 2024 | May 2026 |
| 1.29 | January 2024 | April 2024 | May 2024 | August 2026 |
| 1.30 | April 2024 | April 2025 | May 2025 | November 2026 |
| 1.31 | August 2024 | April 2025 | May 2025 | February 2027 |
| 1.32 | | | | |

Important Notes on Using M-FKE

- Use of Namespaces: Create namespaces to separate and manage applications and environments more easily. Avoid deploying applications into namespaces pre-created by the system.
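
A minimal sketch of a dedicated namespace; the name `my-app-prod` and its label are illustrative:

```yaml
# Sketch: a dedicated namespace for one application environment.
# The name "my-app-prod" and the environment label are examples only.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app-prod
  labels:
    environment: production
```

Apply it with `kubectl apply -f namespace.yaml`, then deploy workloads with `--namespace my-app-prod` or set the namespace in each manifest's metadata.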

- Worker Group Usage: When creating a Kubernetes cluster, the system requires at least one worker group (base) to host system components (connectors, metrics server, etc.). For production environments requiring high availability, we recommend configuring at least three workers in the base group and using separate worker groups for applications.
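
One way to keep application workloads off the base group is a node selector on the Deployment, sketched below. The node label `worker-group: application`, the Deployment name, and the image are assumptions; use whatever label your FKE worker groups actually carry on their nodes.

```yaml
# Sketch: schedule application pods onto a dedicated (non-base) worker group.
# The node label "worker-group: application", the Deployment name, and the
# image are assumptions - adjust them to your cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      nodeSelector:
        worker-group: application
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0
```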

- Use Readiness & Liveness Probes: Ensure application availability.

Readiness Probes ensure requests are forwarded to a pod only when it is ready to accept them. Since pods typically take time to start up, configuring Readiness Probes prevents the service from forwarding requests to the pod during startup (when the application is not yet ready).

Liveness Probes verify that the application inside the pod is still healthy. If a Liveness Probe fails, the container is restarted.
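
A sketch of both probes on a single container, assuming the application serves HTTP health endpoints `/ready` and `/healthz` on port 8080 (the paths and port are assumptions):

```yaml
# Sketch: readiness and liveness probes for an HTTP application.
# The paths /ready and /healthz and port 8080 are assumptions - point them
# at the health endpoints your application actually exposes.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0
      ports:
        - containerPort: 8080
      readinessProbe:        # traffic is routed to the pod only after this succeeds
        httpGet:
          path: /ready
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      livenessProbe:         # the container is restarted if this keeps failing
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 20
```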

- Setting Resource Requests and Limits: Requests ensure containers are scheduled with sufficient resources to run, and limits cap how much they may consume. Without limits, a pod can consume more resources than intended, potentially destabilizing or crashing the node.
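
A sketch of requests and limits on a container; the CPU and memory values are placeholders to be sized from your application's observed usage:

```yaml
# Sketch: resource requests and limits. The values are placeholders -
# size them from the application's real resource usage.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0
      resources:
        requests:        # reserved amount used for scheduling decisions
          cpu: "250m"
          memory: "256Mi"
        limits:          # hard ceiling enforced at runtime
          cpu: "500m"
          memory: "512Mi"
```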

- Use autoscaling: FKE's autoscaling, built on the Kubernetes Horizontal Pod Autoscaler (HPA), lets your application respond quickly to increased traffic. When traffic is low, the system automatically scales the number of Pods/nodes back down.
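
A sketch of an HPA that scales a Deployment named `my-app` between 2 and 10 replicas; the target name, replica bounds, and 70% CPU target are assumptions:

```yaml
# Sketch: CPU-based HorizontalPodAutoscaler for a Deployment called "my-app".
# The target name, replica bounds, and 70% utilization target are assumptions.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that CPU-based scaling only works when the target pods declare CPU requests (see the previous item).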

- Use multiple pods (two or more): To ensure high availability, we recommend using two or more pods per service. Use anti-affinity to ensure replica pods are deployed on different nodes.
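
A sketch of a two-replica Deployment whose pods are placed on different nodes with pod anti-affinity; the `app: my-app` label and the image are examples:

```yaml
# Sketch: spread replicas across nodes with required pod anti-affinity.
# The label app=my-app and the image are examples only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: kubernetes.io/hostname
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0
```

Use `preferredDuringSchedulingIgnoredDuringExecution` instead if you want a soft preference rather than a hard scheduling requirement.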

- Use persistent volumes: M-FKE supports block storage.

Block storage is the system's default choice; it supports the ReadWriteOnce (RWO) access mode and delivers excellent performance according to the selected storage policy.
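
A sketch of a PVC requesting block storage with RWO access; the storage class name `block-storage` and the 20Gi size are assumptions, so check the classes your cluster exposes with `kubectl get storageclass`:

```yaml
# Sketch: a PersistentVolumeClaim for block storage with ReadWriteOnce access.
# The storageClassName "block-storage" and the 20Gi size are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: block-storage
  resources:
    requests:
      storage: 20Gi
```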

- Backup: You are responsible for backing up data stored on PVCs (if any). After copying the backup to a VM, you can use the FCloud Backup & Recovery solution to back up that VM.

- Monitoring and Logging: Integrate monitoring and logging into your Kubernetes cluster using FMON. Set up alerts for your system.
