

Quickstart Guide for Developers

This guide walks you through setting up the OpenAI package to interact with our LLMs and embedding models. In just a few steps, you’ll be ready to use our language, vision, and embedding models through a simple, unified API.


Step 1: Install the OpenAI Package

To get started, install the OpenAI package using pip. This package allows you to access all our models, including language generation, chat, embeddings, and vision capabilities.

pip install openai
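
To confirm the installation, you can print the installed package version (a quick sanity check, not a required step):

python -c "import openai; print(openai.__version__)"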

Step 2: Create and Export an API Key

To securely connect to our API, you’ll need to generate an API key from the dashboard. Once you have the key:

  1. Store it safely, for example in your shell configuration file such as .zshrc (macOS/Linux), rather than hard-coding it in your scripts.
  2. Export it as an environment variable in your terminal for easy access in your scripts.

Create an API key in the dashboard and follow the steps below for your operating system.

Export the environment variables on *nix systems (macOS/Linux):
export OPENAI_API_KEY="your_api_key_here"
export BASE_URL="https://llm-server.llmhub.t-systems.net/v2"
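
On Windows, the equivalent in PowerShell is shown below; note that this sets the variables for the current session only, and the rest of this guide assumes a *nix shell:

$env:OPENAI_API_KEY = "your_api_key_here"
$env:BASE_URL = "https://llm-server.llmhub.t-systems.net/v2"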

tip

You can query the list of available models using the command in the API Reference, or browse the list of available models in the documentation.
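
For example, here is a minimal Python sketch that lists the models, assuming the server exposes the standard OpenAI-compatible models endpoint:

import os
import openai

# Reads the credentials exported in Step 2
client = openai.OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url=os.getenv("BASE_URL"),
)

# Print the ID of every model the server offers
for model in client.models.list():
    print(model.id)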


Generate Text

Create a human-like response to a prompt.

import openai
import os

# Set up the client with your API key and base URL
client = openai.OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url=os.getenv("BASE_URL"),
)

chat_response = client.chat.completions.create(
    model="Llama-3.1-70B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Can you explain quantum computing in simple terms?"},
    ],
    temperature=0.5,
    max_tokens=150,
)

# Print the response
print(chat_response.choices[0].message.content)
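
The same unified API also covers embeddings. Below is a minimal sketch; the model name is a placeholder, so check the API Reference for the embedding models actually offered:

import os
import openai

client = openai.OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url=os.getenv("BASE_URL"),
)

embedding_response = client.embeddings.create(
    model="your-embedding-model",  # placeholder; see the API Reference
    input="Can you explain quantum computing in simple terms?",
)

# Each input yields one embedding vector (a list of floats)
print(len(embedding_response.data[0].embedding))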

Next Steps

You’re all set up! Now explore different models and tune parameters like temperature and max_tokens to refine responses. For specialized use cases such as retrieval-augmented generation (RAG), see the LangChain and Llama-Index Integration sections in the documentation.

Note: For any model-specific requirements or best practices, consult the API Reference section of this documentation.
