GPT Assistant

GPT Assistant Plugin for JetBrains Rider: An Overview

The GPT Assistant Plugin integrates OpenAI’s GPT models into JetBrains Rider, offering AI-assisted coding suggestions and code completions. This tool leverages your current coding context to produce relevant and intelligent suggestions, transforming your Rider experience.

Key Information:

  • API Key Needed: You must have an OpenAI or Azure OpenAI API key to activate and use this plugin.
  • Data Transmission: Your code serves as context and is transmitted to OpenAI/Azure via the API. Although neither OpenAI nor Azure currently uses API-transmitted data for model training or improvement, it is important to stay up to date on their data usage policies.
  • Code Review Essential: The plugin aims to assist, but it is crucial to inspect AI-generated code for accuracy and relevance to your project. Even the best language models occasionally produce low-quality output.
  • Quality Limitations: The accuracy of responses is linked to the language model, not the plugin itself. The plugin does not generate the answers itself but organizes context, indexes your solution and provides a user interface.

Configuration Guide for GPT Assistant Plugin

Setting up and configuring the GPT Assistant for JetBrains Rider is straightforward. Below is a structured outline you can populate with more detailed steps:

  1. Installation:
  2. Get your API-Key:
    • For OpenAI: Create an Account if necessary and visit https://platform.openai.com/account/api-keys to get your key
    • For Azure: Visit https://portal.azure.com, set up an Azure OpenAI resource (see below) and go to “Keys and Endpoint” to get your API key
    • For Azure (Login): No API-Key is needed
  3. Set-up Azure OpenAI
    • Skip this if you are using OpenAI as an API Provider
    • Set up an Azure OpenAI Resource
    • You will need both a language model with minimum model version 0613 and the embedding model text-embedding-ada-002 in the same resource – keep that in mind when choosing a region. For example, France Central offers both, if you are located in the EU.
    • Deploy the embedding model text-embedding-ada-002 and note down the deployment name
    • Deploy the GPT model of your choosing and note down the deployment name. We recommend gpt-35-turbo-16k.
    • If using Azure (Login): Grant the Cognitive Services OpenAI User permission to your users
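Under the hood, the resource name and deployment names you note down in this step identify your models in the request URL. The following sketch shows how they combine; the URL pattern and api-version reflect Azure OpenAI's REST convention at the time of writing, and the resource/deployment names are hypothetical placeholders – verify both against the current Azure documentation:

```python
# Sketch: how the Azure resource name and deployment names combine into
# Azure OpenAI endpoint URLs. The URL pattern and api-version are assumptions
# based on Azure's REST convention; check the current Azure OpenAI docs.

def azure_endpoint(resource: str, deployment: str, operation: str,
                   api_version: str = "2023-05-15") -> str:
    """Build the request URL for a given deployment and operation."""
    return (f"https://{resource}.openai.azure.com/openai/deployments/"
            f"{deployment}/{operation}?api-version={api_version}")

# Hypothetical names noted down during deployment:
chat_url = azure_endpoint("my-resource", "gpt-35-turbo-16k", "chat/completions")
embed_url = azure_endpoint("my-resource", "text-embedding-ada-002", "embeddings")
```

This is also why both models must live in the same resource: the plugin only asks you for one resource name, and both deployment names are resolved relative to it.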
  4. Open the GPT Assistant Settings (Tools->GPT Assistant) and configure the plugin
    • Set your OpenAI Provider to either OpenAI or Azure
      • If you are using OpenAI, your settings screen looks like this
      • Just enter your API key, choose a model and context length and you’re done
      • If you are using Azure, your settings screen looks like this
      • Enter your API-Key from step 2.
      • Enter your resource name. You can, for example, find it here:
      • Enter the deployment names for text-embedding-ada-002 and your chosen model. You can find them here:
  5. Using Integrated GPT Chat:
    • By default, the chat can be found as a tool window on the right-hand side of Rider.
    • It is recommended to check the “Index Solution” checkbox; it improves answer quality drastically. Please note that this incurs a small cost, as your code files are indexed by the API service. However, even for a very large solution, this cost is usually well under $5.
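To get a feel for why indexing stays cheap, here is a rough back-of-the-envelope estimate. The per-token price is an assumption based on text-embedding-ada-002 pricing at the time of writing, and the chars-per-token ratio is a common rule of thumb – check current pricing before relying on the numbers:

```python
# Rough cost estimate for indexing a solution's source files with an
# embedding model. Both constants below are assumptions, not guarantees:
PRICE_PER_1K_TOKENS = 0.0001  # USD; assumed historical ada-002 price
CHARS_PER_TOKEN = 4           # rule of thumb for English text and code

def indexing_cost_usd(total_source_chars: int) -> float:
    """Estimate the one-time embedding cost for a solution."""
    tokens = total_source_chars / CHARS_PER_TOKEN
    return tokens / 1000 * PRICE_PER_1K_TOKENS

# Hypothetical large solution: 5,000 files averaging 10 KB of source each.
cost = indexing_cost_usd(5_000 * 10_000)
print(f"~${cost:.2f}")  # ~$1.25 for roughly 12.5M tokens
```

Even this deliberately large hypothetical solution stays well inside the few-dollar range mentioned above.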
  6. On the subject of context length:
    • In the plugin settings, you can set a context length.
    • Since every model has a maximum number of tokens allowed, the length of context supplied from your solution is limited.
    • The prompt, chat history and the model’s answer all count towards the model’s maximum token count.
    • Therefore, it is recommended to set the context length to about half of the maximum token count for that model.
    • For example, this setup works well: gpt-35-turbo-16k has a maximum token count of about 16,000, and a context length of 8,000 yields great results.
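The rule of thumb above can be sketched as a small helper. The token limit in the table is the approximate figure used in this guide; check the exact limit for your actual model and deployment:

```python
# Sketch of the "about half the maximum token count" rule of thumb for the
# context length setting. Token limits here are illustrative approximations.
MODEL_MAX_TOKENS = {
    "gpt-35-turbo-16k": 16_000,  # approximate, as used in the example above
}

def recommended_context_length(model: str) -> int:
    """Leave roughly half the window for prompt, history and the answer."""
    return MODEL_MAX_TOKENS[model] // 2

print(recommended_context_length("gpt-35-turbo-16k"))  # 8000
```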
  7. Troubleshooting:
    • A common error is setting the context length too high, which leads to unexpected errors. Starting with plugin version 2023.2.4, the plugin detects this misconfiguration and advises how to fix it. Please refer to the previous section on context length for more details.
    • If you run into any other problem regarding the plugin (not the API), OR if you have any cool feature idea that could be added, don’t hesitate to contact us!