# How to Use OpenClaw with FPT AI Inference

### What is FPT AI Inference?

FPT AI Inference gives you access to leading open-source models — Llama 4, Qwen3, DeepSeek, and more — through a single OpenAI-compatible API, backed by data centers in Vietnam and Japan.

By pairing OpenClaw with FPT AI Inference, you get a fully local agent powered by frontier open-source models — your machine, your keys, your data.

#### Get started in 5 minutes

**Prerequisites**

* An OpenClaw installation ([docs.openclaw.ai/install](https://docs.openclaw.ai/install))
* An FPT AI API key — grab one at [marketplace.fptcloud.com](https://marketplace.fptcloud.com) (new users get $100 free credit — including $70 for AI Inference, enough for millions of tokens to power your first OpenClaw workflows)

**Step 1: Configure OpenClaw**

Since FPT AI Inference is fully OpenAI-compatible, you can connect it to OpenClaw using a custom base URL. Open your OpenClaw config file.
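For orientation, a custom provider entry might look like the sketch below. The key names (`models`, `providers`, `baseUrl`, `apiKey`) are illustrative assumptions, not the exact schema; check the config reference for your OpenClaw version:

```
{
  models: {
    providers: {
      fpt: {
        // OpenAI-compatible endpoint from this guide
        baseUrl: "https://mkp-api.fptcloud.com",
        // Resolved from the environment - never hardcode the key
        apiKey: "${FPT_API_KEY}",
      },
    },
  },
}
```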

**Step 2: Choose Model/auth provider**

![Model/auth provider selection](https://2158065032-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F2FDB5fvsiFxYI972UDTv%2Fuploads%2FWpYAAsSL93GQ8AFqkpSy%2Funknown.png?alt=media&token=1bd312c3-0f23-4049-a407-67533ffbed22)

**Step 3: Enter the FPT Marketplace API endpoint and key**

![FPT Marketplace API endpoint and key entry](https://2158065032-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F2FDB5fvsiFxYI972UDTv%2Fuploads%2Fb8Ni1oGwroBAr1WVDl7J%2Funknown.png?alt=media&token=5925226c-1ca5-4e94-98ed-916c38c7abbe)

**Step 4: Set a default model**

![Default model selection](https://2158065032-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F2FDB5fvsiFxYI972UDTv%2Fuploads%2FGJwbEL7Mq5DLHD9MlK4h%2Funknown.png?alt=media&token=2778066b-448b-4cd7-824e-e87a8c066ec5)
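If you prefer editing the config file directly, the default model can also be set there. A sketch only: the `agents.defaults.model` key is an assumption, and the `fpt/` prefix follows the model IDs listed in the Testing section at the end of this page:

```
{
  agents: {
    defaults: {
      // Provider-prefixed model ID (see the Testing section for tested IDs)
      model: "fpt/meta-llama/Llama-4-Scout-17B-16E-Instruct",
    },
  },
}
```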

***Environment note***

If the Gateway runs as a daemon (launchd / systemd), make sure `FPT_API_KEY` is available to that process. The easiest way is to keep it in `~/.openclaw/.env` — OpenClaw loads this file automatically on startup.
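For example, from a shell (the `fpt_xxxxxxxxxxxx` value is the same placeholder used below):

```shell
# Persist the key where a daemon-managed Gateway can read it
mkdir -p ~/.openclaw
printf 'FPT_API_KEY=%s\n' 'fpt_xxxxxxxxxxxx' >> ~/.openclaw/.env
chmod 600 ~/.openclaw/.env   # keep the key readable only by your user
```

After editing the file, restart the daemon so the Gateway picks up the new environment.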

**Security**

OpenClaw runs on your machine and calls external APIs — here's what you need to know to stay secure.

**1. API key storage**

Never hardcode your API key in config files or source code. Always use environment variables or the `.env` file that OpenClaw manages:

```
# Good — stored in ~/.openclaw/.env (not committed to git)
FPT_API_KEY=fpt_xxxxxxxxxxxx

# Bad — hardcoded in config
"apiKey": "fpt_xxxxxxxxxxxx"   // ❌ don't do this
```

If you suspect your key has been exposed, rotate it immediately at [marketplace.fptcloud.com](https://marketplace.fptcloud.com).

**2. Prompt injection awareness**

OpenClaw is an autonomous agent — it reads web pages, files, and messages, and executes actions. This makes it a target for prompt injection attacks, where malicious content in the environment tries to hijack the agent's behavior.

Best practices:

* Use strong, instruction-following models (Llama 4, Qwen3) that are more resistant to injection
* Review OpenClaw's security best practices before deploying in production
* Be cautious when allowing the agent to browse untrusted websites or process untrusted files
* Remember that prompt injection is an industry-wide unsolved problem — no model is fully immune

**3. Network security**

All requests from OpenClaw to FPT AI Inference are transmitted over HTTPS/TLS — no plaintext API calls. Your prompts and responses are encrypted in transit.

```
Base URL:  https://mkp-api.fptcloud.com
Protocol:  HTTPS (TLS 1.2+)
```
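As a quick sanity check, you can hit the endpoint yourself. A minimal sketch: the `/v1/models` path is an assumption based on the OpenAI-compatible convention, so adjust it if your account docs say otherwise:

```shell
# Print the request target, then try it over TLS 1.2+
MODELS_URL="https://mkp-api.fptcloud.com/v1/models"
echo "GET ${MODELS_URL}"
# Requires a valid key in FPT_API_KEY and network access:
curl --tlsv1.2 -sf "${MODELS_URL}" \
  -H "Authorization: Bearer ${FPT_API_KEY}" || echo "request failed - check your key and network"
```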

**4. Rate limiting & runaway protection**

FPT AI Inference has built-in rate limiting and abuse protection, so a misconfigured or looping agent won't silently drain your credit. Consider also setting a budget cap:

```
{
  agents: {
    defaults: {
      // Hard cap per session
      maxTokensPerSession: 100000,
    },
  },
}
```

Monitor your usage in real time at [marketplace.fptcloud.com/my-account?tab=my-usage](https://marketplace.fptcloud.com/my-account?tab=my-usage).

#### Why FPT AI Inference + OpenClaw?

* OpenAI-compatible — zero code changes, works out of the box
* $100 free credit — new users get a $100 Starter Plan to explore all services

**Supported models**

| Model                                         | Context | Best for                              |
| --------------------------------------------- | ------- | ------------------------------------- |
| GLM-4.7                                       | 128K    | General purpose — good starting model |
| meta-llama/Llama-4-Scout-17B-16E-Instruct     | 128K    | Fast, reliable agent tasks            |
| meta-llama/Llama-4-Maverick-17B-128E-Instruct | 1M      | Long context, complex tasks           |
| Qwen/Qwen3-32B                                | 128K    | Tool calling, structured output       |
| deepseek-ai/DeepSeek-V3                       | 64K     | Coding, analysis                      |
| deepseek-ai/DeepSeek-R1                       | 64K     | Step-by-step reasoning                |

Full model list: [marketplace.fptcloud.com](https://marketplace.fptcloud.com/)

**Use cases**

OpenClaw powered by FPT AI works great for:

* Automating daily workflows via Telegram or WhatsApp
* Coding assistant — write, test, and deploy code from chat
* Research & summarization — browse and synthesize information
* Business process automation — integrate with Slack, Teams, Discord

Check out the OpenClaw Showcase at [openclaw.ai/showcase](https://openclaw.ai/showcase) for real-world examples.

**The bottom line**

You don't have to choose between performance and security. FPT AI Inference gives you frontier open-source models with built-in rate limiting and HTTPS encryption. OpenClaw turns them into a full-featured agent that lives on your machine.

Start free: [marketplace.fptcloud.com/?free-trial](https://marketplace.fptcloud.com/?free-trial=)

### GitHub PR Description

**Add FPT AI Inference as supported provider**

| **Provider**      | **FPT AI Inference**           |
| ----------------- | ------------------------------ |
| Base URL          | <https://mkp-api.fptcloud.com> |
| Auth flag         | `--auth-choice fpt-api-key`    |
| API compatibility | OpenAI-compatible              |

**What this PR adds**

* Native onboarding support via `openclaw onboard --auth-choice fpt-api-key`
* FPT AI Inference listed as a supported provider in docs and config
* Quickstart guide: How to use OpenClaw with FPT AI Inference
* Security section covering API key storage, prompt injection, HTTPS/TLS, and rate limiting

**Why FPT AI Inference**

FPT AI Inference is an OpenAI-compatible inference API with data centers in Vietnam and Japan, giving OpenClaw users access to frontier open-source models:

* Models — Llama 4, Qwen3-32B, DeepSeek V3/R1, and more
* $100 free Starter Plan — including $70 for AI Inference, enough for millions of tokens to power your first OpenClaw workflows

**Security notes included in doc**

This PR includes a dedicated Security section aligned with OpenClaw's existing security-first positioning:

* API key storage — best practices for `.env` usage, warning against hardcoding
* Prompt injection — recommendation to use strong models, plus a link to OpenClaw's security docs
* Network security — confirms HTTPS/TLS for all FPT AI Inference endpoints
* Rate limiting — built-in abuse protection, plus a budget cap config example

OpenClaw has invested significantly in security (34 security commits in the last release). This doc reflects that priority and helps FPT AI users adopt the same standard.

**Testing**

* `openclaw onboard --auth-choice fpt-api-key` ✅
* Model: `fpt/meta-llama/Llama-4-Scout-17B-16E-Instruct` ✅
* Model: `fpt/Qwen/Qwen3-32B` ✅
* Model: `fpt/deepseek-ai/DeepSeek-V3` ✅
* Gateway: `openclaw gateway run` ✅
* Channels tested: Web UI, CLI ✅

**Links**

FPT AI Inference: [marketplace.fptcloud.com](https://marketplace.fptcloud.com)

Starter Plan ($100 free): [marketplace.fptcloud.com/?free-trial](https://marketplace.fptcloud.com/?free-trial=)

Security best practices: [docs.openclaw.ai/security](https://docs.openclaw.ai/security)
