Hugging Face

Use the Hugging Face Inference API

Configuration

  • System Prompt — optional prompt that guides the model's behavior
  • User Prompt (required) — the message to send to the model
  • Provider (required) — the inference provider to route the request through
  • Model (required) — e.g., deepseek/deepseek-v3-0324, llama3.1-8b, meta-llama/Llama-3.2-3B-Instruct-Turbo; the model must be available for the selected provider
  • Temperature — sampling temperature (shown default: 0)
  • Max Tokens — maximum number of tokens to generate (e.g., 1000)
  • API Token (required) — your Hugging Face API token

Tools

huggingface_chat

Generate completions using Hugging Face Inference API

Input

Parameter | Type | Required | Description
systemPrompt | string | No | System prompt to guide the model behavior
content | string | Yes | The user message content to send to the model
provider | string | Yes | The provider to use for the API request (e.g., novita, cerebras)
model | string | Yes | Model to use for chat completions (e.g., deepseek/deepseek-v3-0324)
maxTokens | number | No | Maximum number of tokens to generate
temperature | number | No | Sampling temperature (0-2); higher values make output more random
apiKey | string | Yes | Hugging Face API token
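The input parameters above map directly onto an OpenAI-compatible chat-completions request. The sketch below shows one plausible mapping; the router URL and the `model:provider` suffix convention for selecting a provider are assumptions about the underlying API, not guarantees from this tool's documentation.

```python
# Sketch: translating huggingface_chat inputs into a chat-completions
# request payload. ROUTER_URL and the model:provider suffix are assumed.
import json

ROUTER_URL = "https://router.huggingface.co/v1/chat/completions"  # assumed endpoint

def build_chat_request(content, provider, model, api_key,
                       system_prompt=None, max_tokens=None, temperature=None):
    """Build the payload and headers for a chat-completions call."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": content})

    payload = {
        # Assumed routing convention: suffix the model id with the provider,
        # e.g. "deepseek/deepseek-v3-0324:novita".
        "model": f"{model}:{provider}",
        "messages": messages,
    }
    # Optional sampling controls are only sent when set.
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    if temperature is not None:
        payload["temperature"] = temperature

    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return payload, headers

payload, headers = build_chat_request(
    content="Hello!",
    provider="novita",
    model="deepseek/deepseek-v3-0324",
    api_key="hf_xxx",  # placeholder token
    system_prompt="You are concise.",
    max_tokens=1000,
    temperature=0.7,
)
print(json.dumps(payload, indent=2))
```

Omitting `systemPrompt`, `maxTokens`, or `temperature` simply leaves those fields out of the payload, matching their "No" in the Required column.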

Output

Parameter | Type | Description
success | boolean | Operation success status
content | string | Generated text content
model | string | Model used for generation
prompt_tokens | number | Number of tokens in the prompt
completion_tokens | number | Number of tokens in the completion
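The output fields correspond to pieces of the OpenAI-style response body. This sketch assumes that response shape (`choices[0].message.content` and a `usage` block); the `raw_response` values are illustrative, not real API output.

```python
# Sketch: shaping an (assumed) OpenAI-style chat-completions response
# into the tool's output fields. Values below are illustrative only.
raw_response = {
    "model": "deepseek/deepseek-v3-0324",
    "choices": [{"message": {"role": "assistant", "content": "Hi there!"}}],
    "usage": {"prompt_tokens": 12, "completion_tokens": 4},
}

output = {
    "success": True,  # the request completed without error
    "content": raw_response["choices"][0]["message"]["content"],
    "model": raw_response["model"],
    "prompt_tokens": raw_response["usage"]["prompt_tokens"],
    "completion_tokens": raw_response["usage"]["completion_tokens"],
}
print(output)
```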

Usage Instructions

Add the Hugging Face block to your workflow to generate chat completions through the Hugging Face Inference API. Select a provider, specify a model available on that provider, and supply your Hugging Face API token; the optional system prompt, temperature, and max-tokens settings control how the completion is generated.

Notes

  • Category: tools
  • Type: huggingface