Quickstart Guide for Developers
This guide will walk you through setting up the OpenAI package to interact with our LLM models and embeddings. In just a few steps, you’ll be ready to use our language, vision, and embedding models through a simple and unified API.
Step 1: Install the OpenAI Package
To get started, install the OpenAI package using pip. This package allows you to access all our models, including language generation, chat, embeddings, and vision capabilities.
pip install openai
Step 2: Create and Export an API Key
To securely connect to our API, you’ll need to generate an API key from the dashboard. Once you have the key:
- Store it safely in a location like your .zshrc file (macOS/Linux) or another text file on your computer.
- Export it as an environment variable in your terminal for easy access in your scripts.
Create an API key in the dashboard and follow the steps below for your operating system.
macOS / Linux:

export OPENAI_API_KEY="your_api_key_here"
export BASE_URL="https://llm-server.llmhub.t-systems.net/v2"

Windows:

setx OPENAI_API_KEY "your_api_key_here"
setx BASE_URL "https://llm-server.llmhub.t-systems.net/v2"
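On Windows, setx only takes effect in new terminal sessions, so open a fresh terminal before continuing. To confirm the variables are visible to your scripts, you can run a quick check like this (a minimal sketch using the variable names from the exports above):

import os

# Print whether each variable is visible to the current Python process
for name in ("OPENAI_API_KEY", "BASE_URL"):
    print(f"{name} is {'set' if os.getenv(name) else 'NOT set'}")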
You can see the list of available models by calling the models endpoint described in the API Reference, or by viewing the model list in the documentation.
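For example, once the client is configured as shown in the examples below, the standard models endpoint can be queried directly. This is a minimal sketch; the exact model IDs returned depend on the server:

import openai
import os

client = openai.OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url=os.getenv("BASE_URL")
)

# Print the ID of every model the server exposes
for model in client.models.list():
    print(model.id)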
This guide walks you through three common tasks:
- Generate text
- Create vector embeddings
- Multimodal Models (Vision and Image Analysis)
Generate Text
Create a human-like response to a prompt.
import openai
import os
# Set up the client with your API key and base URL
client = openai.OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url=os.getenv("BASE_URL")
)

chat_response = client.chat.completions.create(
    model="Llama-3.1-70B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Can you explain quantum computing in simple terms?"}
    ],
    temperature=0.5,
    max_tokens=150
)
# Print the response
print(chat_response.choices[0].message.content)
Generate Embeddings
Generate vector embeddings for text data.
import openai
import os
# Set up the client with your API key and base URL
client = openai.OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url=os.getenv("BASE_URL")
)

texts = ["The quick brown fox jumps over the lazy dog", "Data science is fun!"]

embeddings = client.embeddings.create(
    input=texts,
    model="jina-embeddings-v2-base-de"
)
# Print the embedding details
print(f"Embedding dimension: {len(embeddings.data[0].embedding)}")
print(f"Token usage: {embeddings.usage}")
Multimodal Models (Vision and Image Analysis)
Use a vision model to analyze images.
import openai
import os
# Set up the client with your API key and base URL
client = openai.OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url=os.getenv("BASE_URL")
)

chat_response = client.chat.completions.create(
    model="llava-v1.6-34b",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/path-to-image.jpg"}}
            ]
        }
    ],
    max_tokens=150
)
# Print the image analysis
print(chat_response.choices[0].message.content)
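If your image is a local file rather than a public URL, OpenAI-compatible vision endpoints generally also accept base64-encoded data URLs; whether this server supports them for a given model is an assumption, so check the API Reference. A minimal sketch of building such a content part:

import base64

# Encode a local image and pass it as a data URL in the image_url field
with open("path-to-image.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

image_part = {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}}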
Next Steps
You’re all set up! Now, explore different models and tune parameters like temperature and max_tokens to refine responses. For specialized use cases like RAG, see our LangChain and Llama-Index Integration sections in the documentation.
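As a preview of those integrations, here is a minimal LangChain sketch pointing at the same OpenAI-compatible endpoint. The langchain-openai package and the model name used here are assumptions; refer to the LangChain Integration section for the supported setup.

import os
from langchain_openai import ChatOpenAI

# Reuse the same API key and base URL as the raw OpenAI client (assumed setup)
llm = ChatOpenAI(
    model="Llama-3.1-70B-Instruct",
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url=os.getenv("BASE_URL"),
    temperature=0.5,
)

print(llm.invoke("Summarize what an embedding is in one sentence.").content)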
Note: For any model-specific requirements or best practices, consult the API Reference section of this documentation.