As I wrote in the RAD Studio 12.2 announcement blog post, “leveraging AI LLMs for coding is becoming increasingly popular and the quality of these tools is getting better. For this reason, RAD Studio 12.2 introduces an open architecture for AI plugins, offering ready-to-use support for three online solutions (OpenAI, Gemini by Google, and Claude by Anthropic) and one offline solution (Ollama).”
In this blog post, I’ll provide some more details and information, considering that this is RAD Studio’s first entry into the AI space and we are providing initial support, which we plan to expand in the future. At this time, Embarcadero is not investing directly in its own AI backend tools; it is focused on making some of the best industry tools directly available to our customers, with direct integration in the IDE.
The RAD Studio AI support is called “Smart CodeInsight” and its implementation is based on an open architecture, powered by new interfaces in the IDE ToolsAPI. This API allows a developer to integrate additional AI engines and also to create a custom UI for one of them. I’ll get back to this towards the end of the blog post. Besides the API, we also provide ready-to-use features based on it. In fact, we offer IDE integration with:
- An AI chat window
- Editor menu commands to invoke AI operations
In terms of specific plugins, as mentioned we offer ready-to-use support for the following APIs:
- OpenAI
- Gemini by Google
- Claude by Anthropic
- Ollama
Notice that we are providing integration with these services, which require you to agree to their terms, obtain an API key from one of the vendors (generally made available with a pay-per-use contract), and enter this key in the RAD Studio configuration. Be aware that creating the proper account on these vendors’ sites can be confusing: they tend to steer you towards a monthly subscription for using their site, which does not give you access to the APIs. Make sure you sign up for the specific API offers (I’m underlining this because we saw multiple people sign up for the wrong offer by mistake):
- For OpenAI, the API product is at openai.com/api/ and the pricing is at openai.com/api/pricing/
- For Claude, the product information is at www.anthropic.com/api and the pricing is at www.anthropic.com/pricing#anthropic-api
- For Gemini, you can refer to the page at ai.google.dev/gemini-api/ and the pricing at ai.google.dev/pricing (Google offers a free-of-charge plan, but notice that, unlike with the paid plan, the prompts and the code you submit will be used for training, according to their web site at the time I’m writing this)
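By the way, a quick way to confirm that a key belongs to the API offer (and not to a chat subscription) is to call the REST endpoint directly, outside the IDE. Here is a minimal Delphi console sketch for OpenAI; the model name and the OPENAI_API_KEY environment variable are illustrative choices of mine, not part of the RAD Studio configuration:

```delphi
program CheckOpenAIKey;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.Classes, System.Net.HttpClient, System.Net.URLClient;

var
  Client: THTTPClient;
  Body: TStringStream;
  Response: IHTTPResponse;
begin
  Client := THTTPClient.Create;
  try
    // Minimal chat completion request; any model available to your account works
    Body := TStringStream.Create(
      '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"Say hello"}]}',
      TEncoding.UTF8);
    try
      Response := Client.Post('https://api.openai.com/v1/chat/completions', Body, nil,
        [TNameValuePair.Create('Authorization',
           'Bearer ' + GetEnvironmentVariable('OPENAI_API_KEY')),
         TNameValuePair.Create('Content-Type', 'application/json')]);
      Writeln(Response.StatusCode, ': ', Response.ContentAsString(TEncoding.UTF8));
    finally
      Body.Free;
    end;
  finally
    Client.Free;
  end;
end.
```

A 200 response confirms the key works for the API; a 401 (or a quota error) tells you the account is not set up for API access.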
The only exception in terms of accounts and payments is Ollama, which can be installed locally (or on a server of your choice) and used offline without having to pay a service fee. More information on Ollama and Codellama later in the blog post.
Notice also that we don’t offer support for some other popular solutions we have considered (and our customers have requested), because they don’t offer an open REST API but instead require the AI vendor to build a custom IDE integration. We expect other AI vendors to provide REST APIs in the future.
Developers in Full Control
We are giving our customers extended configuration and full privacy control in multiple ways:
- In case you don’t care about or don’t trust LLMs, you can turn off the entire AI feature with a single global setting.
- Each of the four engines can be enabled or disabled (they are not enabled by default).
- You can pick which engine is used by default by the different UI elements (chat and editor menu).
- We are storing the API keys in an encrypted format.
- We include the option to use a local, offline engine.
Notice that Embarcadero is not sending requests to an engine of our choice, is not brokering requests over our servers, and is not providing a specific service you have to pay for. It’s an open architecture in which you have full control.
The Smart CodeInsight Configuration
The RAD Studio AI integration can be configured in the Tools > Options dialog under Editor > Smart CodeInsight. As I wrote earlier, there are:
- A general checkbox to enable the entire feature.
- A checkbox to enable each of the engines.
- Two combo boxes for setting the default engine for each UI element.
As you can see in the dialog box below, there is a tab for each engine with a different set of configuration parameters:
The suggested configuration options for each engine, including the API BaseURL and other parameters, are listed on the RAD Studio DocWiki page.
The Smart CodeInsight UI elements
In the 12.2 release of RAD Studio, the IDE surfaces the AI tools in two different ways: a general-purpose chat window, in which you type a custom prompt, and some editor menus, which allow you to invoke specific operations on the currently selected source code.
The Chat Window
The AI chat window is an IDE dockable form, which works like any LLM chat window. You can type a request, pick an engine (unless you want to use the default one), and wait for the information. Here is a quick example asking Claude about opening a text file:
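For readers who can’t see the screenshot, the engine returns a short explanation plus a snippet, along these lines (this is an illustration of the typical shape of an answer, using standard RTL calls; it is not the actual engine output, and the file path is made up):

```delphi
uses
  System.IOUtils;

var
  Line: string;
begin
  // Read the whole text file into an array of strings and print each line
  for Line in TFile.ReadAllLines('C:\data\notes.txt') do
    Writeln(Line);
end.
```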
This chat window has a question memo that can act like a simple command line. These are the available special commands:
Switch the active AI engine to a different engine you have enabled (you can also switch the engine in the chat page using the selection box at the bottom):
- chatgpt> + Enter
- gemini> + Enter
- claude> + Enter
- ollama> + Enter
Clear the answer memo, stop generating the answer, or start generating the answer, respectively, with:
- clear> + Enter: clear the answer memo
- stop> + Enter: stop generating the answer (the same as clicking the stop button)
- Ctrl + Enter: start generating the answer (the same as clicking the start button)
The Editor Menu
The editor menu offers some preset operations on the code selected in the editor, with the result returned by the LLM engine added as a comment in the editor after the selection. This is faster than copying code into a chat request and then pasting the result back into the editor. The idea is to send a portion of your application’s source code to be analyzed.
These are the available commands:
- AI Chat: Open the chat view
- Find Bugs: Try to find potential bugs in the selected code
- Explain Code: Explain the selected code
- Add Comment: Add comments to the selected code
- Complete the code: Complete the selected code
- Optimize code: Optimize the selected code
- Add unit test: Add a unit test for the selected code
- Convert to Assembly: Convert the selected code to Assembly code
- Convert to Delphi: Convert the selected code (from C++ or Assembly) to Delphi code
- Convert to C++ Builder: Convert the selected code to C++Builder code
Here you can see an example of explained code:
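For readers without the image: the IDE appends the answer as a comment after the selection, so the result in the editor looks roughly like this (the function and the comment are a made-up illustration, not actual engine output):

```delphi
function Add(A, B: Integer): Integer;
begin
  Result := A + B;
end;

{ AI Explain Code:
  The Add function takes two Integer parameters, A and B,
  adds them together, and returns their sum as an Integer. }
```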
Expanding Smart CodeInsight with the ToolsAPI
We have full support for writing your own AI plugins in two ways, and we have added demos to our official demos repository for both scenarios. I don’t have room here for the details, but I want to provide the links to the demos on GitHub (they also ship with the product) for the two types of plugins:
- Providing support for an alternate AI vendor implementation: github.com/Embarcadero/RADStudio12Demos/tree/main/Object%20Pascal/ToolsAPI/AIEngine%20Demos/CohereAI_Plugin
- Creating custom IDE features that use one of the available plugins: github.com/Embarcadero/RADStudio12Demos/tree/main/Object%20Pascal/ToolsAPI/AIEngine%20Demos/AI_Consumer_CodeSample
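To give a flavor of the first scenario, here is a minimal hypothetical sketch of an engine plugin. All the names below (IAIEngineSketch and its methods) are placeholders of my own: the actual ToolsAPI interfaces, registration calls, and capability flags are in the CohereAI_Plugin demo sources linked above.

```delphi
unit MyAIEnginePlugin;

// Hypothetical sketch only: see the CohereAI_Plugin demo for the real
// ToolsAPI interface names and the actual registration mechanism.

interface

type
  // Placeholder modeled on the general shape of an AI engine plugin:
  // a name for the IDE configuration page and a prompt entry point.
  IAIEngineSketch = interface
    ['{6E1B0C2A-0000-4000-8000-000000000001}']
    function EngineName: string;
    function AskQuestion(const APrompt: string): string;
  end;

  TMyAIEngine = class(TInterfacedObject, IAIEngineSketch)
  public
    function EngineName: string;
    function AskQuestion(const APrompt: string): string;
  end;

implementation

function TMyAIEngine.EngineName: string;
begin
  Result := 'MyVendor'; // shown in the engine selection UI
end;

function TMyAIEngine.AskQuestion(const APrompt: string): string;
begin
  // Forward APrompt to your vendor's REST API and return the reply text;
  // the IDE takes care of the chat window and editor menu plumbing.
  Result := '';
end;

end.
```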
Additional Note on Ollama
While for the three online solutions you need to create an account and generally pay for the use, we have also included support for Ollama, which requires neither. For more information, see ollama.com/. As you can see in its GitHub repository, this engine has an MIT license. Ollama is the engine, but you also need an actual model. What we recommend is Codellama, a model for developers by Meta, made available with its own license agreement and terms of service.
This engine is generally installed via Docker, and it also works fine on the Windows Subsystem for Linux (WSL). I’ve personally installed it on my Windows machine. Some of the information is available in the Ollama configuration DocWiki page. While you can use the official Ollama Docker repo and install Codellama on it, we are also providing a Docker image with a ready-to-use configuration: it takes a single Docker command to download and run it. At that point, just configure RAD Studio with the URL pointing to localhost (or the machine where you installed the engine) and you are good to go. Performance of a local engine will vary depending on the hardware, and it might be slower than the online solutions, but you’ll get some good results.
The usage of the Embarcadero ready-to-use Docker image is covered in the DocWiki, and the image is hosted on Docker Hub at hub.docker.com/r/radstudio/codellama (notice that the complete image, including the pre-installed model, is 4.26 GB).
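For reference, a typical setup with the official image takes two commands; these are the standard Ollama Docker steps, and my assumption is that the Embarcadero image replaces both with one (the exact command line is on the DocWiki):
- docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
- docker exec -it ollama ollama run codellama
With the preconfigured radstudio/codellama image, a single docker run -d -p 11434:11434 radstudio/codellama should be enough, since the model is already included. Either way, Ollama listens on port 11434 by default, so the BaseURL in RAD Studio would be http://localhost:11434.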
Code Smarter with Smart CodeInsight
An LLM integration in the IDE can help you code faster, write skeleton code, check code for correctness, reach more information than a general web search, and help you understand code you are browsing. But as we know, LLMs make mistakes and shouldn’t be blindly trusted: spend the effort to check the results before incorporating them in your code. Also be careful in terms of IP ownership, both of the suggested code and of your own code, when you share source with an engine. The paid solutions we offer interfaces to don’t get trained on your code, but the exact details can depend on the offer: make sure you read the fine print.
We can already see more ways these tools can become useful. Stay tuned for more and let us know if you write an extension using the ToolsAPI that you want to share with the community.