# Quickstart

Get up and running with AI Foundation Services in minutes. This guide walks you through installing the SDK, setting up authentication, and making your first API call.
## Step 1: Install the OpenAI Package

AI Foundation Services uses an OpenAI-compatible API, so you can use the official OpenAI SDKs.

**Python**

```bash
pip install openai
```

**Node.js**

```bash
npm install openai
```
## Step 2: Get an API Key
- Go to the API Key Portal and create a free trial key.
- Or purchase via the T-Cloud Marketplace for production use.
## Step 3: Set Environment Variables

**macOS / Linux**

```bash
export OPENAI_API_KEY="your_api_key_here"
export OPENAI_BASE_URL="https://llm-server.llmhub.t-systems.net/v2"
```

**Windows (PowerShell)**

```powershell
$env:OPENAI_API_KEY = "your_api_key_here"
$env:OPENAI_BASE_URL = "https://llm-server.llmhub.t-systems.net/v2"
```

**Windows (CMD)**

```cmd
setx OPENAI_API_KEY "your_api_key_here"
setx OPENAI_BASE_URL "https://llm-server.llmhub.t-systems.net/v2"
```

Note: `setx` persists the variables, but they only take effect in newly opened terminal windows, not the current one.
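If the environment variables are missing or empty, the SDK calls below will fail with an authentication or connection error. A quick sanity check can catch this early; the `require_env` helper below is illustrative, not part of the SDK:

```python
import os


def require_env(*names):
    """Return the values of the given environment variables, failing fast if any is missing."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return [os.environ[n] for n in names]


# Normally these are set in your shell (see above); set here only for demonstration.
os.environ.setdefault("OPENAI_API_KEY", "your_api_key_here")
os.environ.setdefault("OPENAI_BASE_URL", "https://llm-server.llmhub.t-systems.net/v2")

api_key, base_url = require_env("OPENAI_API_KEY", "OPENAI_BASE_URL")
print(base_url)
```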
## Step 4: Make Your First API Call

**curl**

```bash
curl -X POST "$OPENAI_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Llama-3.3-70B-Instruct",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is quantum computing in simple terms?"}
    ],
    "temperature": 0.5,
    "max_tokens": 150
  }'
```

**Python**

```python
from openai import OpenAI

client = OpenAI()  # Reads OPENAI_API_KEY and OPENAI_BASE_URL from the environment

response = client.chat.completions.create(
    model="Llama-3.3-70B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is quantum computing in simple terms?"},
    ],
    temperature=0.5,
    max_tokens=150,
)
print(response.choices[0].message.content)
```

**Node.js**

```javascript
import OpenAI from "openai";

const client = new OpenAI(); // Reads OPENAI_API_KEY and OPENAI_BASE_URL from env

const response = await client.chat.completions.create({
  model: "Llama-3.3-70B-Instruct",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is quantum computing in simple terms?" },
  ],
  temperature: 0.5,
  max_tokens: 150,
});
console.log(response.choices[0].message.content);
```
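All three calls return the same response shape: the reply text lives under `choices[0].message.content`, and token counts under `usage`. The payload below is a hand-written sample illustrating that shape, not a live response:

```python
# Hand-written sample of the /chat/completions response shape (not a live response).
sample_response = {
    "id": "chatcmpl-123",
    "model": "Llama-3.3-70B-Instruct",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Quantum computing uses qubits, which..."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 27, "completion_tokens": 42, "total_tokens": 69},
}

# The assistant's reply is nested under choices -> message -> content:
reply = sample_response["choices"][0]["message"]["content"]
print(reply)
```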
## More Examples

**Create Embeddings**

```python
from openai import OpenAI

client = OpenAI()
texts = ["The quick brown fox jumps over the lazy dog", "Data science is fun!"]
result = client.embeddings.create(input=texts, model="jina-embeddings-v2-base-de")
print(f"Embedding dimension: {len(result.data[0].embedding)}")
print(f"Token usage: {result.usage}")
```
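Embedding vectors are typically compared with cosine similarity (e.g. for semantic search or RAG retrieval). Here is a minimal, dependency-free sketch; the toy vectors stand in for the real `result.data[i].embedding` values:

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity of two equal-length vectors: dot product over the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Toy vectors standing in for real embedding output:
fox = [0.12, 0.85, -0.33]
dog = [0.10, 0.80, -0.30]
code = [-0.70, 0.05, 0.64]

print(round(cosine_similarity(fox, dog), 3))   # near 1.0: semantically similar
print(round(cosine_similarity(fox, code), 3))  # much lower: dissimilar
```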
**Vision / Multimodal**

```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="Qwen3-VL-30B-A3B-Instruct-FP8",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://images.unsplash.com/photo-1546069901-ba9599a7e63c?w=400"
                    },
                },
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```
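The example above passes a public image URL. OpenAI-compatible chat APIs commonly also accept base64-encoded `data:` URLs in the same `image_url` field for local files, though you should verify your deployment supports this. The `to_data_url` helper below is an illustrative sketch, not part of the SDK:

```python
import base64


def to_data_url(image_bytes, mime="image/png"):
    """Encode raw image bytes as a data: URL usable in the image_url field."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"


# In practice: image_bytes = open("photo.png", "rb").read()
# Placeholder bytes (a PNG file signature) keep the sketch self-contained:
data_url = to_data_url(b"\x89PNG\r\n\x1a\n")
print(data_url[:30])
```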
## Next Steps
- Authentication — API key management and best practices
- Available Models — Browse all supported models
- Chat Completions Guide — Detailed guide with streaming, parameters, and more
- LangChain Integration — Use AIFS with LangChain for RAG
- LlamaIndex Integration — Use AIFS with LlamaIndex for RAG