
Embarcadero introduced its first integrated AI solution in the RAD Studio IDE in version 12.2. The technology, called Smart CodeInsight, offers the ability to interact with different LLM engines, some online and one offline, and also to install drivers for other AI engines via a dedicated Open Tools API interface. The available engines are:
- OpenAI
- Ollama (offline)
- Gemini
- Claude
There are two primary ways to invoke the LLMs from the RAD Studio IDE: using a general-purpose chat window, and using specific commands in the editor, which can operate on the selected source code and pass it to the AI as part of the prompt. I thought it would be a nice idea to ask the AI engines how Smart CodeInsight works, but I got some oddly interesting responses, mostly describing the Delphi LSP engine instead. You are better off reading this blog post from last September, Using AI LLMs in the RAD Studio IDE with Smart CodeInsight.
A New Chat Window
As already mentioned in the RAD Studio 12.3 announcement blog post, there are several enhancements to Smart CodeInsight in 12.3. The most notable one is the display of Markdown rather than plain text in the chat, offering much more readable answers, as you can see in the image below:
The other change to the Smart CodeInsight Chat window is the presence of two new buttons (at the bottom of the pane) to copy the currently selected text in the editor or the current unit. Rather than doing a manual copy and paste, you can do a “direct paste” using the buttons. Even better, you can use special macros in the request. Type your request, add a $ and a list pops up with the available symbols, including $selection for the currently selected text in the editor and $Unit for the entire code of the unit active in the editor:
The answer is again displayed in the chat window, based on the current editor selection. Here is the result of that prompt after selecting the form variable declaration, and using Claude as the AI engine:
New Editor Command: Find Unit/Header
The other area of the IDE with an improved AI experience is the editor. The available commands have been extended in two ways. First, there is a new Find Unit/Header option (for Delphi and C++, respectively), which helps you locate the unit declaring a specific type or symbol, that is, the unit you need to add to the uses clause to be able to compile the code. Suppose you are typing this code, which has an undefined symbol:
By selecting “TIniFile” and issuing the Find Unit command, you’ll get the following output in the editor:
All you need to do is take the name of the unit added in the comment (System.IniFiles) and paste it into a uses clause of the current unit. Now, this isn’t based on your actual project code and works only for units commonly referenced in the code the AI was trained on, but I find it quite reliable for core Delphi RTL types.
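To illustrate the result of such a fix, here is a minimal sketch of my own (not taken from the IDE output) showing the code once the suggested unit has been added to the uses clause; the SaveUserName routine and its parameters are invented for the example:

```pascal
uses
  System.IniFiles; // the unit suggested by Find Unit, declaring TIniFile

// Hypothetical routine: persist a value in an INI file.
procedure SaveUserName(const FileName, UserName: string);
var
  Ini: TIniFile;
begin
  // TIniFile now compiles, since its declaring unit is referenced above
  Ini := TIniFile.Create(FileName);
  try
    Ini.WriteString('Settings', 'UserName', UserName);
  finally
    Ini.Free;
  end;
end;
```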
Editor and Chat
Another change to the editor is the ability to send the result of an AI request to the Markdown-enabled chat window, rather than adding it as a comment in the editor as in the example above. This can be configured with an option in the editor’s Smart CodeInsight menu, as you can see here:
By enabling this feature, you can now select some code in the editor and issue a command (Find Bugs, in this example):
Available Models in Configuration
The last new feature added to Smart CodeInsight in 12.3 pertains to the feature’s configuration, in the Tools > Options dialog box. Once you have configured an AI service (with the URL and your credentials), rather than typing the name of a model, you can select one of the models available for the specific service from a drop-down list that gets populated on request. This is relevant because models are continuously added over time and we cannot provide a fixed list: it would be obsolete the day we ship a new version. (Notice, by the way, that while newer models can offer better results, they might also be more expensive to invoke.) This is how the feature looks, in the case of OpenAI:
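Behind the scenes, this kind of list can be retrieved from a provider’s models endpoint. As a rough illustration only (not Embarcadero’s actual implementation), here is a minimal Delphi sketch that queries OpenAI’s GET /v1/models endpoint using the RTL HTTP client; the ListOpenAIModels helper name is invented for the example, and the response is assumed to have the documented {"data": [{"id": ...}, ...]} shape:

```pascal
uses
  System.SysUtils, System.Net.HttpClient, System.JSON;

// Hypothetical helper: fetch the model identifiers exposed by the service.
function ListOpenAIModels(const ApiKey: string): TArray<string>;
var
  Client: THTTPClient;
  Response: IHTTPResponse;
  Root: TJSONValue;
  Data: TJSONArray;
  I: Integer;
begin
  Client := THTTPClient.Create;
  try
    Client.CustomHeaders['Authorization'] := 'Bearer ' + ApiKey;
    Response := Client.Get('https://api.openai.com/v1/models');
    Root := TJSONObject.ParseJSONValue(Response.ContentAsString);
    try
      // The response body has the shape {"data": [{"id": "..."}, ...]}
      Data := Root.GetValue<TJSONArray>('data');
      SetLength(Result, Data.Count);
      for I := 0 to Data.Count - 1 do
        Result[I] := Data.Items[I].GetValue<string>('id');
    finally
      Root.Free;
    end;
  finally
    Client.Free;
  end;
end;
```

A drop-down populated on request, as in the IDE, would simply call such a routine when the list is opened, instead of shipping a hard-coded set of model names.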
In Summary
In summary, the Smart CodeInsight architecture pursues multiple goals: everything is optional and disabled by default; we make multiple providers available and let you choose which one or ones to enable; and we include a locally installed, offline solution for maximum privacy. In this release we have improved the integration between the editor and the chat window, and we’ve enabled Markdown display in the chat. We’ll continue tracking this space and enhancing this feature, making sure we respect our customers’ IP and privacy to the maximum extent, and that we offer our customers the ability to choose the AI engine they prefer, rather than forcing a specific vendor.
Hello,
It’s not working as expected. In most cases it’s useless. Try putting this small example prompt into the chat window to reproduce the problem:
Optimize code, show only optimized source code :
```pascal
function TForm1.TryToOptimize: Integer;
begin
  Result := 1;
  Result := Result + 1;
  Result := Result + 1;
  Result := Result + 1;
end;
```
and you get back:
```pascal
function tform1.trytooptimize: integer;
begin
  result := 4;
end;
```
and this is what Delphi actually sends and gets back:
```
optimize code, show only optimized source code :
function tform1.trytooptimize: integer;
begin
  result := 1;
  result := result + 1;
  result := result + 1;
  result := result + 1;
end;
```
Answer>
```pascal
function tform1.trytooptimize: integer;
begin
  result := 4;
end;
```
The error is that, for some unknown reason, you convert the entire prompt to lowercase. Why?
Regards
Marcin
It will certainly be fixed in an official update, but as a quick workaround you can adjust your prompt by asking for the result in standard Pascal formatting, or by asking the AI to keep identifiers in camel case, with the first letter capitalized, and so on. For example, you could add “keep the original casing of all identifiers” to the prompt.
1. The “AI Chat” window should be easy to find with IDE Insight. Yet, typing “AI chat” in IDE Insight does not return anything.
2. The “AI Chat” window should be easy to invoke, e.g. with an IDE menu entry such as View > Tool Windows > AI Chat. Yet, one has to guess that it can be invoked through a right-click menu option in the IDE editor.