Prompt Guide

(Best Practices)

Introduction

Prompt engineering is the practice of designing structured inputs that guide large language models (LLMs) to generate accurate, relevant, and context-aware outputs. Rather than relying solely on model size or training data, well-crafted prompts significantly influence performance across tasks such as summarization, content generation, analysis, and decision support.

Inspired by industry best practices from sources such as IBM Think and applied research in generative AI, this document introduces a Prompting Cookbook tailored for FPT AI Marketplace, providing reusable prompting techniques applicable across diverse models and use cases.

Throughout this document, selected examples reference specific models available on FPT AI Marketplace (such as DeepSeek-V3.2-Special) strictly for illustration. The described prompting techniques remain fully applicable across all supported models.


Understanding Prompt Structure

A prompt typically consists of four logical components:

  • Role / Context: Defines who the model should act as or the situation it operates in

  • Task Instruction: Clearly states what the model is expected to do

  • Constraints: Specifies tone, format, length, or rules

  • Output Expectation: Describes the desired structure of the response

Well-structured prompts improve consistency, interpretability, and portability across models.
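The four components above can also be assembled programmatically so they stay explicit and reusable. The sketch below is plain Python; the helper name `build_prompt` is illustrative, not a Marketplace API:

```python
def build_prompt(role: str, task: str, constraints: str, output_expectation: str) -> str:
    """Assemble the four logical prompt components into one string."""
    return "\n".join([
        f"Role/Context: {role}",
        f"Task: {task}",
        f"Constraints: {constraints}",
        f"Expected output: {output_expectation}",
    ])

prompt = build_prompt(
    role="You are a business analyst advising enterprise clients.",
    task="Explain the AI model onboarding process on FPT AI Marketplace.",
    constraints="Professional tone, under 200 words.",
    output_expectation="A short bulleted list of steps.",
)
```

Keeping the components as separate arguments makes it easy to swap one part (for example, the constraints) without rewriting the whole prompt.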


1. Zero-Shot Prompting

Use case: Baseline tasks, fast answers, exploratory queries

Zero-shot prompting asks the model to perform a task without providing examples, relying entirely on its pretrained knowledge.

Template

[Role / Context]. [Task instruction]. [Constraints on tone, format, or length].

Example

🔍 Illustrative Example (Model: DeepSeek-V3.2-Special)

You are using DeepSeek-V3.2-Special on FPT AI Marketplace.

Explain the role of a Business Analyst in an AI product development lifecycle. Provide a concise, structured explanation.

This example demonstrates zero-shot prompting behavior on a reasoning-capable model. The same prompt structure can be reused across other Marketplace models.
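As a sketch, the zero-shot prompt above can be sent as a single user message with no worked examples attached. The payload shape below assumes an OpenAI-style chat-completion request, which is a common convention; the exact field names on FPT AI Marketplace may differ:

```python
def zero_shot_request(model: str, task: str) -> dict:
    # Zero-shot: one user message, no demonstrations or examples included.
    return {
        "model": model,
        "messages": [{"role": "user", "content": task}],
    }

request = zero_shot_request(
    "DeepSeek-V3.2-Special",
    "Explain the role of a Business Analyst in an AI product development "
    "lifecycle. Provide a concise, structured explanation.",
)
```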


2. Chain-of-Thought Prompting

Use case: Reasoning-heavy explanations, structured analysis, consulting-style responses

Chain-of-Thought (CoT) prompting encourages the model to reason step by step.

Template

[Role / Context]. Work through the task step by step:
Step 1: [first sub-task]
Step 2: [second sub-task]
Step 3: [third sub-task]
Question: [question]

Example

🔍 Illustrative Example (Model: DeepSeek-V3.2-Special)

You are using DeepSeek-V3.2-Special on FPT AI Marketplace.

Step 1: Identify challenges in adopting AI platforms
Step 2: Analyze organizational and technical risks
Step 3: Summarize best practices

Question: What are key considerations when enterprises adopt AI marketplaces?

This example illustrates step-by-step reasoning. CoT prompting is model-independent and can be applied to other reasoning-capable models.
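A minimal sketch of the step scaffold above, assuming plain string templating (the helper name is illustrative):

```python
def chain_of_thought_prompt(question: str, steps: list[str]) -> str:
    # Spell out the reasoning steps so the model works through them in order.
    scaffold = "\n".join(
        f"Step {i}: {step}" for i, step in enumerate(steps, start=1)
    )
    return f"{scaffold}\n\nQuestion: {question}"

prompt = chain_of_thought_prompt(
    "What are key considerations when enterprises adopt AI marketplaces?",
    [
        "Identify challenges in adopting AI platforms",
        "Analyze organizational and technical risks",
        "Summarize best practices",
    ],
)
```

Because the steps are a plain list, the same scaffold can be reused for any reasoning task by swapping the step descriptions and the final question.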


3. Prompt Chaining

Use case: Multi-step workflows, document pipelines, agent-like behaviors

Prompt chaining decomposes a complex task into smaller, sequential prompts.

Example flow

  1. Generate outline

  2. Expand sections

  3. Refine tone and format

Example

🔍 Illustrative Example (Model: DeepSeek-V3.2-Special)

You are using DeepSeek-V3.2-Special on FPT AI Marketplace.

Prompt 1: Create an outline for onboarding AI models onto a marketplace.
Prompt 2: Expand each step with validation and compliance details.
Prompt 3: Rewrite the content for a business audience.

This example demonstrates how prompt chaining enables workflow-style generation. The same chaining logic applies across all Marketplace models.
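The chaining logic can be sketched as a loop in which each model output is injected into the next prompt. The `complete` callable below is a stand-in for a real model call (for example, an HTTP request to a Marketplace model); `fake_complete` exists only to make the sketch runnable:

```python
def run_chain(prompt_templates, complete):
    """Run prompts sequentially; each model output feeds the next prompt."""
    previous = ""
    for template in prompt_templates:
        previous = complete(template.format(previous=previous))
    return previous

# Stand-in for a real model call.
def fake_complete(prompt: str) -> str:
    return f"[model output for: {prompt.splitlines()[0]}]"

chain = [
    "Create an outline for onboarding AI models onto a marketplace.{previous}",
    "Expand each step with validation and compliance details:\n{previous}",
    "Rewrite the content for a business audience:\n{previous}",
]
result = run_chain(chain, fake_complete)
```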


4. RAG-Aware Prompting

Use case: Enterprise knowledge grounding, factual accuracy, policy-driven responses

RAG-aware prompting instructs the model to ground its responses in retrieved or provided knowledge.

Template

Using only the provided context: [retrieved content]. [Task instruction]. If the context does not contain the required information, state that explicitly.

Example

🔍 Illustrative Example (Model: DeepSeek-V3.2-Special)

You are using DeepSeek-V3.2-Special on FPT AI Marketplace.

Using the provided onboarding documentation, explain compliance requirements for publishing AI models. If details are unavailable, explicitly state limitations.

This example highlights enterprise-safe prompting patterns. RAG-aware prompting is recommended for all factual and compliance-sensitive use cases.
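A sketch of the grounding pattern above: retrieved documents are injected into the prompt, and the model is explicitly told to admit gaps rather than guess. The helper name and sample documents are hypothetical:

```python
def rag_prompt(context_docs: list[str], question: str) -> str:
    # Ground the answer in supplied documents and require honesty about gaps.
    context = "\n\n".join(context_docs)
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the required information, state that explicitly.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = rag_prompt(
    ["Models must pass a security review before publication.",
     "Publishers must provide a model card describing intended use."],
    "What are the compliance requirements for publishing AI models?",
)
```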


Best Practices for FPT AI Marketplace

  • Keep prompts clear, modular, and reusable

  • Avoid coupling prompts too tightly to one model

  • Use examples only when output consistency is required

  • Prefer structured outputs for automation and integration

  • Iterate prompts based on observed outputs
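Regarding structured outputs: when a prompt requests machine-readable output, the reply should be validated before automation consumes it. The sketch below assumes a JSON reply format; the field names and the sample reply are illustrative:

```python
import json

# Instruction appended to a prompt to request machine-readable output.
FORMAT_INSTRUCTION = (
    'Respond with JSON only, in the form '
    '{"summary": "...", "risks": ["..."]}.'
)

def parse_structured_reply(raw: str) -> dict:
    data = json.loads(raw)  # raises ValueError on malformed output
    missing = {"summary", "risks"} - data.keys()
    if missing:
        raise ValueError(f"model reply missing fields: {missing}")
    return data

# A hypothetical well-formed model reply:
reply = '{"summary": "Adoption is feasible.", "risks": ["vendor lock-in"]}'
parsed = parse_structured_reply(reply)
```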


Appendix — Model-Specific Examples (Illustrative)

The following examples demonstrate how selected prompting techniques may be applied to specific models available on FPT AI Marketplace. These examples are non-exhaustive and provided for reference only.

Example: DeepSeek-V3.2-Special (Text Reasoning)

Use case: Enterprise explanation with structured reasoning

Explain the AI model onboarding process on FPT AI Marketplace. Structure the answer into:

  • Prerequisites

  • Validation steps

  • Deployment readiness

This example highlights structured reasoning behavior and does not imply model exclusivity or technique dependency.

5. Applied Use Case: Education Summarization & Evaluation

Use case: Automated grading support, step-by-step logic analysis, structured feedback

This recipe combines Direct Instructions, Role Prompting, and Task-Specific Output to assist educators. The goal is to condense a student's lengthy math solution into a concise summary with an accuracy assessment.

The Configuration (Recipe)

This use case utilizes specific parameters to balance mathematical precision with natural language feedback.

  • Model: gpt-oss-20b (or similar reasoning-capable models)

  • Temperature: 0.6 (Balances accuracy with natural phrasing)

  • Max Tokens: 600 (Ensures the full summary and evaluation are generated without cutoff)

  • Reasoning Effort: medium (Optimizes latency to ~5-10 seconds while maintaining logical depth)
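The recipe above can be captured as a request configuration. The field names below follow common OpenAI-style chat-completion conventions and are an assumption; check the Marketplace API reference for the exact names:

```python
# Hypothetical request configuration mirroring the recipe above.
grading_config = {
    "model": "gpt-oss-20b",
    "temperature": 0.6,            # balances accuracy with natural phrasing
    "max_tokens": 600,             # full summary + evaluation without cutoff
    "reasoning_effort": "medium",  # latency vs. logical depth trade-off
}
```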

The Prompt Structure

System Prompt:


User Prompt:


🔍 Illustrative Output (Actual Response)

The model processes the input (~912 tokens) and generates a structured evaluation (~536 tokens):

  • Method: Elimination method

  • Steps:

    1. Multiplied equations to align coefficients of y (6 and -6).

    2. Added equations to eliminate y, solving for x = 38/13.

    3. Substituted x back into the original equation to find y = 31/13.

    4. Verified results by substituting x and y into both original equations.

  • Evaluation: Correct

  • Strengths:

    • Used an appropriate method and correctly performed multiplication/addition.

    • Detailed calculation avoiding errors.

    • Full verification of roots.

  • Improvements:

    • Avoid converting to decimals prematurely; keep fractions for higher precision.

    • Multiplication steps could be optimized to avoid repetition.

This example illustrates how Task-Specific Instructions can transform raw text into standardized data suitable for educational platforms.

