ai-model-cloudbase


Install skill "ai-model-cloudbase" with this command: npx skills add tencentcloudbase/cloudbase-mcp/tencentcloudbase-cloudbase-mcp-ai-model-cloudbase

When to use this skill

Use this skill when calling AI models through CloudBase, on any supported platform.

Supported platforms:

| Platform | SDK/API | Section |
| --- | --- | --- |
| Web (Browser) | @cloudbase/js-sdk | Part 1 |
| Node.js (Server/Cloud Functions) | @cloudbase/node-sdk ≥3.16.0 | Part 1 (same API, different init) |
| Any platform (HTTP) | HTTP API / OpenAI SDK | Part 2 |
| WeChat Mini Program | wx.cloud.extend.AI | Part 3 ⚠️ Different API |

How to use this skill (for a coding agent)

  • Identify the target platform - Ask the user which platform they're developing for

  • Confirm CloudBase environment - Get env (environment ID) and credentials

  • Pick the appropriate section - Part 1 for JS/Node SDK, Part 3 for WeChat Mini Program

  • Follow CloudBase API shapes exactly - Do not invent new APIs

Part 1: CloudBase JS SDK & Node SDK

JS SDK and Node SDK share the same AI API. Only initialization differs.

Installation

For Web (Browser)

npm install @cloudbase/js-sdk

For Node.js (Server/Cloud Functions)

npm install @cloudbase/node-sdk

⚠️ The Node SDK AI feature requires version 3.16.0 or above. Check your version with `npm list @cloudbase/node-sdk`.

Initialization - Web (JS SDK)

```js
import cloudbase from "@cloudbase/js-sdk";

const app = cloudbase.init({
  env: "<YOUR_ENV_ID>",
  accessKey: "<YOUR_PUBLISHABLE_KEY>" // Get from CloudBase console
});

const auth = app.auth();
await auth.signInAnonymously();

const ai = app.ai();
```

Initialization - Node.js (Node SDK)

```js
const tcb = require('@cloudbase/node-sdk');

const app = tcb.init({ env: '<YOUR_ENV_ID>' });

exports.main = async (event, context) => {
  const ai = app.ai();
  // Use AI features - same API as JS SDK
};
```

generateText() - Non-streaming

```js
const model = ai.createModel("hunyuan-exp");

const result = await model.generateText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "你好,请你介绍一下李白" }],
});

console.log(result.text);         // Generated text string
console.log(result.usage);        // { prompt_tokens, completion_tokens, total_tokens }
console.log(result.messages);     // Full message history
console.log(result.rawResponses); // Raw model responses
```

streamText() - Streaming

```js
const model = ai.createModel("hunyuan-exp");

const res = await model.streamText({
  model: "hunyuan-turbos-latest",
  messages: [{ role: "user", content: "你好,请你介绍一下李白" }],
});

// Option 1: Iterate the text stream (recommended)
for await (const text of res.textStream) {
  console.log(text); // Incremental text chunks
}

// Option 2: Iterate the data stream for full response data
for await (const data of res.dataStream) {
  console.log(data); // Full response chunk with metadata
}

// Option 3: Await the final results
const messages = await res.messages; // Full message history
const usage = await res.usage;       // Token usage
```
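When you want streaming on the wire but a single string in application logic, the text stream can be drained into one value. `collectTextStream` below is a hypothetical helper, not part of the CloudBase SDK; it works with any async iterable of strings, such as `res.textStream` above (the demo uses a stand-in generator):

```javascript
// Drain a textStream-style async iterable into one full string.
// NOTE: collectTextStream is a sketch, not a CloudBase SDK API.
async function collectTextStream(textStream) {
  let full = "";
  for await (const chunk of textStream) {
    full += chunk; // append each incremental text chunk
  }
  return full;
}

// Demo with a stand-in for res.textStream:
async function* fakeStream() {
  yield "Li Bai ";
  yield "was a ";
  yield "Tang poet.";
}

collectTextStream(fakeStream()).then((text) => console.log(text)); // "Li Bai was a Tang poet."
```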

generateImage() - Image Generation

⚠️ Image generation is currently only available in Node SDK, not in JS SDK (Web) or WeChat Mini Program.

```js
// Node SDK only
const imageModel = ai.createImageModel("hunyuan-image");

const res = await imageModel.generateImage({
  model: "hunyuan-image",
  prompt: "一只可爱的猫咪在草地上玩耍",
  size: "1024x1024",
  version: "v1.9",
});

console.log(res.data[0].url);            // Image URL (valid 24 hours)
console.log(res.data[0].revised_prompt); // Revised prompt if revise=true
```

Part 2: CloudBase HTTP API

API Endpoint

https://<ENV_ID>.api.tcloudbasegateway.com/v1/ai/<PROVIDER>/v1/chat/completions

cURL - Non-streaming

```sh
curl -X POST 'https://<ENV_ID>.api.tcloudbasegateway.com/v1/ai/deepseek/v1/chat/completions' \
  -H 'Authorization: Bearer <YOUR_API_KEY>' \
  -H 'Content-Type: application/json' \
  -d '{"model": "deepseek-r1", "messages": [{"role": "user", "content": "你好"}], "stream": false}'
```

cURL - Streaming

```sh
curl -X POST 'https://<ENV_ID>.api.tcloudbasegateway.com/v1/ai/deepseek/v1/chat/completions' \
  -H 'Authorization: Bearer <YOUR_API_KEY>' \
  -H 'Content-Type: application/json' \
  -H 'Accept: text/event-stream' \
  -d '{"model": "deepseek-r1", "messages": [{"role": "user", "content": "你好"}], "stream": true}'
```
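The streaming endpoint returns a Server-Sent Events body: one `data:` line per chunk, ending with a `[DONE]` sentinel. If you consume it without the OpenAI SDK, the lines can be parsed by hand; `parseSSELines` is a minimal sketch of that (not a CloudBase API), assuming each chunk follows the OpenAI-compatible `choices[].delta` shape:

```javascript
// Extract the JSON payloads from an SSE body, stopping at "[DONE]".
// NOTE: parseSSELines is illustrative, not part of any CloudBase SDK.
function parseSSELines(body) {
  const payloads = [];
  for (const line of body.split("\n")) {
    if (!line.startsWith("data:")) continue; // skip blank/comment lines
    const data = line.slice(5).trim();       // drop the "data:" prefix
    if (data === "[DONE]") break;            // end-of-stream sentinel
    payloads.push(JSON.parse(data));
  }
  return payloads;
}

// Example with two chunks followed by the sentinel:
const body =
  'data: {"choices":[{"delta":{"content":"你"}}]}\n' +
  'data: {"choices":[{"delta":{"content":"好"}}]}\n' +
  "data: [DONE]\n";

const text = parseSSELines(body)
  .map((chunk) => chunk.choices[0].delta.content)
  .join("");
console.log(text); // "你好"
```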

OpenAI SDK Compatible

```js
const OpenAI = require("openai");

const client = new OpenAI({
  apiKey: "<YOUR_API_KEY>",
  baseURL: "https://<ENV_ID>.api.tcloudbasegateway.com/v1/ai/deepseek/v1",
});

const completion = await client.chat.completions.create({
  model: "deepseek-r1",
  messages: [{ role: "user", content: "你好" }],
  stream: true,
});

for await (const chunk of completion) {
  console.log(chunk);
}
```

Part 3: WeChat Mini Program

⚠️ WeChat Mini Program API is DIFFERENT from JS/Node SDK. Pay attention to the parameter structure.

Requires base library 3.7.1+. No extra SDK needed.

Initialization

```js
// app.js
App({
  onLaunch: function () {
    wx.cloud.init({ env: "<YOUR_ENV_ID>" });
  }
})
```

generateText() - Non-streaming

⚠️ Different from JS/Node SDK: the return value is the raw model response.

```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

const res = await model.generateText({
  model: "hunyuan-lite",
  messages: [{ role: "user", content: "你好" }],
});

// ⚠️ Return value is the RAW model response, NOT wrapped like JS/Node SDK
console.log(res.choices[0].message.content); // Access via choices array
console.log(res.usage);                      // Token usage
```

streamText() - Streaming

⚠️ Different from JS/Node SDK: Must wrap parameters in data object, supports callbacks.

```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

// ⚠️ Parameters MUST be wrapped in a data object
const res = await model.streamText({
  data: { // ⚠️ Required wrapper
    model: "hunyuan-lite",
    messages: [{ role: "user", content: "hi" }]
  },
  onText: (text) => {
    // Optional: incremental text callback
    console.log("New text:", text);
  },
  onEvent: ({ data }) => {
    // Optional: raw event callback
    console.log("Event:", data);
  },
  onFinish: (fullText) => {
    // Optional: completion callback
    console.log("Done:", fullText);
  }
});

// Async iteration is also available
for await (const str of res.textStream) {
  console.log(str);
}

// Check for completion with eventStream
for await (const event of res.eventStream) {
  console.log(event);
  if (event.data === "[DONE]") { // ⚠️ Check for [DONE] to stop
    break;
  }
}
```

API Comparison: JS/Node SDK vs WeChat Mini Program

| Feature | JS/Node SDK | WeChat Mini Program |
| --- | --- | --- |
| Namespace | app.ai() | wx.cloud.extend.AI |
| generateText params | Direct object | Direct object |
| generateText return | { text, usage, messages } | Raw: { choices, usage } |
| streamText params | Direct object | ⚠️ Wrapped in data: {...} |
| streamText return | { textStream, dataStream } | { textStream, eventStream } |
| Callbacks | Not supported | onText, onEvent, onFinish |
| Image generation | Node SDK only | Not available |
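If the same app logic must run on both platforms, the Mini Program's raw generateText response can be adapted to the `{ text, usage }` shape the JS/Node SDK returns. `normalizeWxResult` is a hypothetical helper (not part of either SDK), assuming the OpenAI-compatible `choices` layout shown in the type definitions below:

```javascript
// Adapt a raw WeChat Mini Program generateText response to the
// { text, usage } shape of the JS/Node SDK result.
// NOTE: normalizeWxResult is a sketch, not part of either SDK.
function normalizeWxResult(res) {
  return {
    text: res.choices?.[0]?.message?.content ?? "",
    usage: res.usage,
  };
}

// Example with a raw-response-shaped object:
const raw = {
  choices: [
    { index: 0, message: { role: "assistant", content: "hi" }, finish_reason: "stop" },
  ],
  usage: { prompt_tokens: 1, completion_tokens: 1, total_tokens: 2 },
};
console.log(normalizeWxResult(raw).text); // "hi"
```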

Type Definitions

JS/Node SDK - BaseChatModelInput

```ts
interface BaseChatModelInput {
  model: string;                     // Required: model name
  messages: Array<ChatModelMessage>; // Required: message array
  temperature?: number;              // Optional: sampling temperature
  topP?: number;                     // Optional: nucleus sampling
}

type ChatModelMessage =
  | { role: "user"; content: string }
  | { role: "system"; content: string }
  | { role: "assistant"; content: string };
```

JS/Node SDK - generateText() Return

```ts
interface GenerateTextResult {
  text: string;                      // Generated text
  messages: Array<ChatModelMessage>; // Full message history
  usage: Usage;                      // Token usage
  rawResponses: Array<unknown>;      // Raw model responses
  error?: unknown;                   // Error if any
}

interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}
```

JS/Node SDK - streamText() Return

```ts
interface StreamTextResult {
  textStream: AsyncIterable<string>;     // Incremental text stream
  dataStream: AsyncIterable<DataChunk>;  // Full data stream
  messages: Promise<ChatModelMessage[]>; // Final message history
  usage: Promise<Usage>;                 // Final token usage
  error?: unknown;                       // Error if any
}

interface DataChunk {
  choices: Array<{
    finish_reason: string;
    delta: ChatModelMessage;
  }>;
  usage: Usage;
  rawResponse: unknown;
}
```

WeChat Mini Program - streamText() Input

```ts
interface WxStreamTextInput {
  data: { // ⚠️ Required wrapper object
    model: string;
    messages: Array<{
      role: "user" | "system" | "assistant";
      content: string;
    }>;
  };
  onText?: (text: string) => void;            // Incremental text callback
  onEvent?: (prop: { data: string }) => void; // Raw event callback
  onFinish?: (text: string) => void;          // Completion callback
}
```

WeChat Mini Program - streamText() Return

```ts
interface WxStreamTextResult {
  textStream: AsyncIterable<string>; // Incremental text stream
  eventStream: AsyncIterable<{       // Raw event stream
    event?: unknown;
    id?: unknown;
    data: string;                    // "[DONE]" when complete
  }>;
}
```

WeChat Mini Program - generateText() Return

```ts
// Raw model response (OpenAI-compatible format)
interface WxGenerateTextResponse {
  id: string;
  object: "chat.completion";
  created: number;
  model: string;
  choices: Array<{
    index: number;
    message: {
      role: "assistant";
      content: string;
    };
    finish_reason: string;
  }>;
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}
```

HunyuanGenerateImageInput (JS/Node SDK only)

```ts
interface HunyuanGenerateImageInput {
  model: "hunyuan-image" | string; // Required
  prompt: string;                  // Required: image description
  version?: "v1.8.1" | "v1.9";     // Default: "v1.8.1"
  size?: string;                   // Default: "1024x1024"
  negative_prompt?: string;        // v1.9 only
  style?: string;                  // v1.9 only
  revise?: boolean;                // Default: true
  n?: number;                      // Default: 1
  footnote?: string;               // Watermark, max 16 chars
  seed?: number;                   // Range: [1, 4294967295]
}

interface HunyuanGenerateImageOutput {
  id: string;
  created: number;
  data: Array<{
    url: string; // Image URL (valid 24 hours)
    revised_prompt?: string;
  }>;
}
```

Best Practices

  • Use streaming for long responses - Better user experience

  • Handle errors gracefully - Wrap AI calls in try/catch

  • Keep API Keys secure - Never expose in client-side code

  • Initialize early - Initialize SDK/cloud in app entry point

  • Check for [DONE] - In WeChat Mini Program streaming, check event.data === "[DONE]" to stop
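The error-handling advice above can be sketched as a small wrapper. `withRetry` is hypothetical (not a CloudBase API): it retries a failed async call a few times before rethrowing, and in practice `fn` would be something like `() => model.generateText({...})`:

```javascript
// Retry an async call up to `attempts` times, rethrowing the last error.
// NOTE: withRetry is an illustrative sketch, not part of any CloudBase SDK.
async function withRetry(fn, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err; // e.g. a transient network or quota error
    }
  }
  throw lastError;
}

// Demo: fails once, then succeeds on the retry.
let calls = 0;
withRetry(async () => {
  calls++;
  if (calls < 2) throw new Error("transient");
  return "ok";
}).then((result) => console.log(result)); // "ok"
```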
