FAQ
What are the AI Foundation Services?
The AI Foundation Services by T-Systems offer companies a secure, GDPR-compliant platform for developing and scaling AI solutions with large language models (LLMs). They enable the use of over 40 language models, including open-source models such as Llama, Mistral, Qwen, and Gemma, as well as ready-made services for retrieval-augmented generation (RAG) and fine-tuning. Operated on the T-Cloud in Germany, they meet the highest requirements for data protection and data sovereignty.
Which LLMs are available?
The selection is continuously updated and includes both open- and closed-source models, such as Meta Llama, Mistral, DeepSeek, Qwen, OpenAI GPT, and Claude Sonnet. T-Systems offers unimodal and multimodal models that can process text, images, and audio. See Available Models for the current list.
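Assuming the service exposes an OpenAI-compatible API (an assumption for this sketch, not confirmed on this page), the current model list can also be fetched programmatically. The base URL and key below are placeholders:

```python
import json
import urllib.request

def model_ids(models_response: dict) -> list[str]:
    """Extract and sort model IDs from an OpenAI-style /models response."""
    return sorted(item["id"] for item in models_response.get("data", []))

def list_models(base_url: str, api_key: str) -> list[str]:
    """Fetch the model list from an OpenAI-compatible endpoint (sketch)."""
    req = urllib.request.Request(
        f"{base_url}/models",  # placeholder: substitute your actual endpoint
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return model_ids(json.load(resp))
```

Calling `list_models(base_url, api_key)` with your endpoint and key would return the sorted model IDs currently offered.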
On which cloud platform is the hosting provided?
Open-source models are hosted on the GDPR-compliant T-Cloud. Closed-source models are hosted in a GDPR-compliant manner on MS Azure, AWS, or Google Cloud.
Can I test the AI Foundation Services for free?
Yes! You can get a free trial key at the API Key Portal.
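With a trial key in hand, a first request can be sent. The sketch below assumes an OpenAI-compatible chat completions endpoint; the base URL and model name are placeholders, not official values:

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str,
                       prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-style chat completion request (sketch)."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

def ask(base_url: str, api_key: str, model: str, prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    req = build_chat_request(base_url, api_key, model, prompt)
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

For example, `ask("https://your-aifs-endpoint/v1", "YOUR_TRIAL_KEY", "some-model-id", "Hello!")` would return the model's reply as a string.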
What is the pricing structure?
Pricing has separate components for LLM serving, RAG, and fine-tuning. The LLM component offers several plans with usage-based billing of consumed tokens. For RAG (SmartChat), a monthly flat rate is agreed based on the number of users and the storage volume. See the Plans & Pricing page for details.
What are the rate limits?
Rate limits (tokens per minute, TPM, and requests per minute, RPM) depend on the selected plan and model. For example, GPT-OSS 120B (T-Cloud, Germany) provides 300 RPM and 300,000 input TPM on the Essential plan, scaling up to 1,000 RPM and 2,000,000 input TPM on the Agentic plan. Premium models such as Claude Opus are available on the Professional and Agentic plans. See Rate Limits for the full breakdown.
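When a limit is exceeded, APIs of this kind typically respond with HTTP 429. A common client-side pattern (a generic sketch, not an official AIFS client) is to retry with exponential backoff and jitter:

```python
import random
import time
import urllib.error

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff with jitter: ~1s, ~2s, ~4s, ... capped at `cap`."""
    return min(cap, base * (2 ** attempt)) * (0.5 + random.random() / 2)

def call_with_retry(call, max_attempts: int = 5, base: float = 1.0):
    """Retry `call()` on HTTP 429, sleeping between attempts."""
    for attempt in range(max_attempts):
        try:
            return call()
        except urllib.error.HTTPError as err:
            if err.code != 429 or attempt == max_attempts - 1:
                raise
            time.sleep(backoff_delay(attempt, base=base))
```

Wrapping each API call in `call_with_retry` smooths over brief rate-limit spikes; sustained 429 responses are a sign that a higher plan is needed.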
Can additional LLMs be provided?
Yes, additional models can be hosted on dedicated GPU resources on request. In this case, billing is based on the provisioned GPU resources rather than on token usage. Contact the AIFS team for details.
How do I order LLM-Serving?
The Essential, Professional, and Agentic plans can be ordered directly via the T-Cloud Marketplace. For Enterprise (custom) plans, contact us for an offer.
How do I order SmartChat?
Contact us for an offer. In the future, ordering via the T-Cloud Marketplace will also be available.
Can SmartChat be integrated into existing applications?
Yes, SmartChat can be integrated into existing processes and applications via the SmartChat API, allowing databases and other systems to be connected securely.