Continue for JetBrains
Continue for AI Engineer is a custom version of the Continue plugin for JetBrains IDEs. It brings chat, inline edits, autocomplete, and coding agents directly into IntelliJ-based workflows so you can work with AI models without leaving your editor.
Key Features
- Chat interface for questions, code explanations, and development assistance
- Inline edits for targeted code changes inside the editor
- Autocomplete for low-latency code suggestions while you type
- Coding agents for multi-step tasks such as reading files, running commands, and applying changes
Installation
- Open Settings in your JetBrains IDE and select Plugins.
- On the Plugins page, click the settings icon and choose Manage Plugin Repositories.
- In the Custom Plugin Repositories dialog, add the following repository URL:
  `https://plugins.dev.aie.ai.t-systems.net/v1/registry/434192/packages/latest/info?package_name=continue-jetbrains&file_pattern=jetbrainupdatexml`
- Click OK to save the repository list and apply the changes.
- Open the marketplace, search for Continue for AI Engineer, and install it.
Config File Setup
After the first installation, open the AI Engineer Continue plugin and choose Local config. This opens an empty config.yaml file that you can update with your model, base URL, and API key details.
- Choose a model ID from Available Models:

  ```yaml
  model: gpt-oss-120b
  ```

- Set the base URL:

  ```yaml
  apiBase: https://llm-server.llmhub.t-systems.net/v2/coding
  ```

- Generate an API key in LLM Serving API Keys:

  ```yaml
  apiKey: gen-************************************************
  ```

- Assign one or more roles to the model, such as `chat`, `edit`, `apply`, `autocomplete`, or `embed`:

  ```yaml
  roles:
    - chat
  ```

Example config.yaml
```yaml
name: Local Config
version: 1.0.0
schema: v1
models:
  - name: gpt-4.1
    model: gpt-4.1
    provider: openai
    apiBase: https://llm-server.llmhub.t-systems.net/v2/coding
    apiKey: gen-************************************************
    roles:
      - chat
    requestOptions:
      proxy: YourProxyHost:Port
  - name: Qwen3-30B-A3B-FP8
    model: Qwen3-30B-A3B-FP8
    provider: openai
    apiBase: https://llm-server.llmhub.t-systems.net/v2/coding
    apiKey: gen-************************************************
    roles:
      - chat
      - edit
      - apply
    requestOptions:
      proxy: YourProxyHost:Port
```

Once the config is saved, you can start a chat and use Continue in your JetBrains IDE.
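The models above serve the chat, edit, and apply roles; the autocomplete role is configured the same way. Below is a minimal sketch of such an entry — the model name shown is an illustrative placeholder, not a confirmed model ID, so substitute one from Available Models:

```yaml
models:
  - name: my-autocomplete-model      # placeholder; pick a real model ID from Available Models
    model: my-autocomplete-model
    provider: openai
    apiBase: https://llm-server.llmhub.t-systems.net/v2/coding
    apiKey: gen-************************************************
    roles:
      - autocomplete                 # serves low-latency suggestions while you type
```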
Updating
Automatic updates are included for the JetBrains plugin whenever a new version is published.
- The plugin can fetch updates automatically when they are available.
- You can also check manually in Settings -> Appearance & Behavior -> System Settings -> Updates.