Hugging Face
Use the Hugging Face Inference API
Configuration
System Prompt
Enter system prompt to guide the model behavior...
User Prompt*
Enter your message here...
Provider*
Select...
Model*
e.g., deepseek/deepseek-v3-0324, llama3.1-8b, meta-llama/Llama-3.2-3B-Instruct-Turbo
The model must be available for the selected provider.
Temperature
0
Max Tokens
e.g., 1000
API Token*
••••••••
Tools
huggingface_chat
Generate completions using the Hugging Face Inference API
Input
| Parameter | Type | Required | Description |
|---|---|---|---|
| systemPrompt | string | No | System prompt to guide the model behavior |
| content | string | Yes | The user message content to send to the model |
| provider | string | Yes | The provider to use for the API request (e.g., novita, cerebras, etc.) |
| model | string | Yes | Model to use for chat completions (e.g., deepseek/deepseek-v3-0324) |
| maxTokens | number | No | Maximum number of tokens to generate |
| temperature | number | No | Sampling temperature (0-2). Higher values make output more random |
| apiKey | string | Yes | Hugging Face API token |
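As a rough illustration of how these input parameters map onto a chat-completion request, the sketch below assembles an OpenAI-style payload (the schema the Hugging Face Inference API's chat endpoint follows). The function name `build_chat_payload` is a hypothetical helper, not part of the tool itself.

```python
def build_chat_payload(content, model, system_prompt=None,
                       max_tokens=None, temperature=None):
    """Hypothetical helper: assemble an OpenAI-style chat payload
    from the tool's input parameters."""
    messages = []
    if system_prompt:
        # systemPrompt is optional; include it only when provided
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": content})

    payload = {"model": model, "messages": messages}
    # maxTokens and temperature are optional and omitted when unset
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    if temperature is not None:
        payload["temperature"] = temperature
    return payload
```

The `provider` and `apiKey` parameters are not part of the payload body: the provider selects the routing target and the token is sent as a bearer credential in the request headers.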
Output
| Parameter | Type | Description |
|---|---|---|
| success | boolean | Operation success status |
| content | string | Generated text content |
| model | string | Model used for generation |
| prompt_tokens | number | Number of tokens in the prompt |
| completion_tokens | number | Number of tokens in the completion |
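To make the output schema concrete, here is a sketch of extracting these fields from a chat-completion response, assuming the OpenAI-compatible response shape (`choices[0].message.content` plus a `usage` block). The helper `to_tool_output` is hypothetical and shown only to clarify the mapping.

```python
def to_tool_output(response):
    """Hypothetical helper: map a chat-completion response dict
    to the tool's output fields."""
    try:
        usage = response.get("usage", {})
        return {
            "success": True,
            "content": response["choices"][0]["message"]["content"],
            "model": response.get("model", ""),
            "prompt_tokens": usage.get("prompt_tokens", 0),
            "completion_tokens": usage.get("completion_tokens", 0),
        }
    except (KeyError, IndexError, TypeError):
        # Malformed response: report failure with empty output fields
        return {"success": False, "content": "", "model": "",
                "prompt_tokens": 0, "completion_tokens": 0}
```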
Usage Instructions
Integrate Hugging Face into the workflow to generate completions using the Hugging Face Inference API.
Notes
- Category: tools
- Type: huggingface