Modify worker groups
Requirements:
CPU, GPU, RAM, Storage, Instance quotas: Must be sufficient for the Worker Group configuration changes.
The number of GPUs must satisfy Min node + 1 so that Worker Nodes can roll out the new configuration. If Autoscale is enabled, the number of GPUs must cover the desired Max node count (see the example after this list).
Network subnet: Network used for Kubernetes Nodes; the subnet must have a Static IP Pool.
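For example, a Worker Group with a minimum of 3 GPU nodes needs quota for 4 GPUs of the selected type so that a replacement node can be created during the rolling update; with Autoscale enabled and Max node set to 10, the quota must cover 10 GPUs.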
Step by Step
Step 1: Access the FPT Cloud portal at console.fptcloud.com, select Kubernetes, click the cluster you want to change, select Node Pools, and click the "Edit Workers" icon.
Step 2: In addition to the configuration information for the standard Worker Group, you need to select the configuration for the GPU:
Select instance type: GPU
Select GPU type (A30, A100, H100, H200, etc.)
Select the GPU sharing configuration (None/Single/Mixed)
Select the GPU type configuration (CPU/RAM/GPU RAM)
Note:
Changing the GPU sharing method requires all GPU-related workloads to be redeployed, so before making the change, scale GPU applications down to 0 replicas to avoid errors (see the sketch after these notes).
If GPU sharing was previously set to None or None with Operator, you cannot change GPU sharing to Single or Mixed.
If you previously selected Single for GPU sharing, you can only change it to other Single modes.
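As a concrete way to perform the scale-down, the sketch below uses the official Kubernetes Python client to find Deployments that request nvidia.com/gpu resources and scale them to 0. This is a minimal sketch, not an FPT Cloud tool: it assumes a kubeconfig pointing at the target cluster, nvidia.com/gpu as the GPU resource name, and the gpu_deployments and scale_to helpers are illustrative names.

```python
# Scale every Deployment that requests GPUs down to 0 replicas
# before changing the GPU sharing method on the Worker Group.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
apps = client.AppsV1Api()

def gpu_deployments(namespace="default"):
    """Yield Deployments whose containers set a nvidia.com/gpu limit."""
    for dep in apps.list_namespaced_deployment(namespace).items:
        for c in dep.spec.template.spec.containers:
            limits = (c.resources.limits or {}) if c.resources else {}
            if "nvidia.com/gpu" in limits:
                yield dep
                break

def scale_to(dep, replicas):
    """Patch the Deployment's scale subresource to the given replica count."""
    apps.patch_namespaced_deployment_scale(
        name=dep.metadata.name,
        namespace=dep.metadata.namespace,
        body={"spec": {"replicas": replicas}},
    )

for dep in gpu_deployments():
    print(f"Scaling {dep.metadata.name} to 0")
    scale_to(dep, 0)
```

After the Worker Group change completes, run the same loop with the original replica counts to bring the GPU workloads back up.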
Step 3: Review the configuration changes and click Save.
Step 4: Monitor the status of the Worker Group update in the Kubernetes cluster. Once the status shows Succeeded (Running), proceed to deploy and use your applications.
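To double-check from inside the cluster that the updated Worker Nodes are Ready and expose the expected GPU capacity, here is a minimal sketch using the same Python client. It assumes nvidia.com/gpu is the resource name exposed by the NVIDIA device plugin; adjust it if your cluster reports GPUs under a different name.

```python
# List every node with its Ready condition and allocatable GPU count.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for node in core.list_node().items:
    ready = any(
        c.type == "Ready" and c.status == "True"
        for c in (node.status.conditions or [])
    )
    gpus = (node.status.allocatable or {}).get("nvidia.com/gpu", "0")
    print(f"{node.metadata.name}: Ready={ready}, allocatable GPUs={gpus}")
```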
