RAD Studio Smart CodeInsight & LM Studio: A Local AI Alternative

Introduction

Are you interested in using AI-powered tools like Smart CodeInsight but prefer to keep your data local? Maybe you’re concerned about privacy, want to reduce API costs, or simply want to experiment with different models? This guide will show you how to set up LM Studio, a user-friendly desktop application for running Large Language Models (LLMs) locally, and connect it to RAD Studio’s Smart CodeInsight.

Unlike alternatives such as Ollama or Docker-based solutions, LM Studio offers a visual, intuitive interface that makes local LLM setup accessible even if you’re not comfortable with command-line tools or container technologies.

What is LM Studio?

LM Studio is a desktop application that allows you to download and run various LLMs locally on your machine. It offers:

  • A visual browser for discovering and downloading open models.
  • A built-in chat interface for trying models interactively.
  • A local API server compatible with OpenAI’s API.
  • Support for Windows, macOS, and Linux.

Best of all, you can use LM Studio as a drop-in replacement for OpenAI’s API in applications like RAD Studio’s Smart CodeInsight, keeping your code generation and AI assistance entirely local.
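
To see what “drop-in” means in practice, here is a minimal Delphi console sketch (an illustration, not Smart CodeInsight’s actual code) that posts a chat request to LM Studio’s OpenAI-style endpoint. The address http://localhost:1234 and the model name “local-model” are placeholders; substitute whatever your LM Studio instance reports:

    program LMStudioChat;

    {$APPTYPE CONSOLE}

    uses
      System.SysUtils, System.Classes, System.Net.URLClient, System.Net.HttpClient;

    var
      Client: THTTPClient;
      Body: TStringStream;
      Resp: IHTTPResponse;
    begin
      Client := THTTPClient.Create;
      // The request body uses the same JSON schema as OpenAI's chat completions API.
      Body := TStringStream.Create(
        '{"model": "local-model", "messages": ' +
        '[{"role": "user", "content": "Say hello from Delphi"}]}',
        TEncoding.UTF8);
      try
        // Only the base URL differs from a call to OpenAI; no real API key is required.
        Resp := Client.Post('http://localhost:1234/v1/chat/completions', Body, nil,
          [TNetHeader.Create('Content-Type', 'application/json')]);
        Writeln(Resp.ContentAsString);
      finally
        Body.Free;
        Client.Free;
      end;
    end.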

Prerequisites

Before we begin, make sure you have:

  • RAD Studio 12.2 or later, the release that introduced Smart CodeInsight.
  • A machine with enough free disk space and RAM for a local model (most are several gigabytes; a supported GPU helps but is not required).
  • An internet connection for the initial model downloads.

Step 1: Install LM Studio

Simply download and install LM Studio from their official website. The installation process is straightforward for Windows, macOS, and Linux. If you encounter any platform-specific issues, refer to their documentation.

Step 2: Download a Model

After the first launch, LM Studio will prompt you to download a model. For best results with Smart CodeInsight, consider code-specialized models such as CodeLlama, WizardCoder, or DeepSeek Coder. You can download multiple models to experiment with different capabilities and performance characteristics. Once downloaded, models appear in your “My Models” tab, where you can select and load them as needed.

For details on model selection and management options, refer to the LM Studio Model Management documentation.

Step 3: Start the Local API Server

To connect RAD Studio to LM Studio, you’ll need to start LM Studio’s local API server:

  1. In LM Studio, click on the “Developer” tab in the sidebar.
  2. Make sure your selected model is loaded.
  3. Click the “Start Server” toggle.
  4. Note the server address shown (typically http://localhost:1234 or similar).
  5. The server is now running and ready to accept connections.
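
With the server running, you can sanity-check it from outside LM Studio before touching RAD Studio. A minimal Delphi console sketch, assuming the default http://localhost:1234 address noted above:

    program LMStudioPing;

    {$APPTYPE CONSOLE}

    uses
      System.SysUtils, System.Net.HttpClient;

    var
      Client: THTTPClient;
      Resp: IHTTPResponse;
    begin
      Client := THTTPClient.Create;
      try
        // /v1/models lists the models the server exposes; presumably the same
        // call RAD Studio makes when you open the "Models" dropdown in Step 4.
        Resp := Client.Get('http://localhost:1234/v1/models');
        Writeln('Status: ', Resp.StatusCode);
        Writeln(Resp.ContentAsString);
      finally
        Client.Free;
      end;
    end.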

For additional server configuration options, check the LM Studio API Server documentation.

Step 4: Configure RAD Studio Smart CodeInsight

Now that your local LLM server is running, you need to configure RAD Studio to use it:

  1. Open RAD Studio.
  2. Navigate to Tools > Options.
  3. Look for “Smart CodeInsight” under the “IDE” section.
  4. In the Plugins settings:
    • Choose “ChatGPT” and enable it.
    • Set the API URL to the address of your local LM Studio server (e.g., http://localhost:1234/v1).
    • For the API Key field, enter any text (RAD Studio requires something in this field, but the actual value doesn’t matter).
  5. Click on the “Models” dropdown menu. If it returns a list of the available models, the connection was successful.

Step 5: Test the Connection

To verify that everything is working:

  1. Open a Delphi or C++ project in RAD Studio.
  2. Try using a Smart CodeInsight feature, such as code completion or explanation.
  3. You should see the request being processed by LM Studio (you can monitor this in the LM Studio interface).
  4. Smart CodeInsight should respond with suggestions powered by your local model.

Troubleshooting

If you encounter issues with the connection:

  • Confirm that the LM Studio server is still running and that a model is loaded.
  • Double-check that the API URL in RAD Studio includes the /v1 suffix.
  • Make sure no other application is using the configured port.
  • Watch LM Studio’s server log for incoming requests and error messages.

Advanced Tips

Using LM Studio Across a Network

If you want to run LM Studio on a powerful desktop and connect to it from a laptop running RAD Studio, or if you run RAD Studio in a VM, you can allow remote connections (from machines other than localhost) in LM Studio:

  1. In LM Studio’s “Developer” settings (next to the “activate” toggle), enable the “Serve on local network” option.
  2. Note the IP address of the computer running LM Studio (e.g., 192.168.1.100).
  3. In RAD Studio, use this IP address in the API URL: http://192.168.1.100:1234/v1.
  4. Ensure your firewall allows connections to the LM Studio port.
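
To confirm that the machine running RAD Studio can actually reach the server, you can query the same models endpoint over the network. A small sketch, assuming the example address 192.168.1.100 from above and a recent RAD Studio where THTTPClient exposes ConnectionTimeout; any network error is simply treated as unreachable:

    program LMStudioRemoteCheck;

    {$APPTYPE CONSOLE}

    uses
      System.SysUtils, System.Net.HttpClient;

    // Returns True when an HTTP 200 comes back from the models endpoint.
    function ServerReachable(const BaseURL: string): Boolean;
    var
      Client: THTTPClient;
    begin
      Client := THTTPClient.Create;
      try
        Client.ConnectionTimeout := 3000; // milliseconds; fail fast if the host is down
        try
          Result := Client.Get(BaseURL + '/v1/models').StatusCode = 200;
        except
          // Connection refused, timeout, or no route all count as unreachable.
          on Exception do
            Result := False;
        end;
      finally
        Client.Free;
      end;
    end;

    begin
      if ServerReachable('http://192.168.1.100:1234') then
        Writeln('LM Studio is reachable.')
      else
        Writeln('No response; check the server, the IP address, and the firewall.');
    end.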

Optimizing Model Performance

For better performance:

  • Prefer quantized model builds (e.g., 4-bit variants) to reduce memory use.
  • Enable GPU offloading in LM Studio’s model settings if you have a supported GPU.
  • Start with smaller models and move up only if response quality is insufficient.
  • Close other memory-hungry applications while a model is loaded.

AI under your control

A local setup like this provides all the benefits of AI-assisted development while keeping your data local and under your control. As models continue to improve and hardware becomes more capable, local LLM solutions will become an increasingly practical option for development teams of all sizes.

For questions about RAD Studio’s Smart CodeInsight feature and other AI-powered development tools, visit the Embarcadero Documentation.

Disclaimer

At the time of writing, LM Studio was licensed for personal use only. For more information, please read their terms of use.

UPDATE: As of 2025-07-08, LM Studio has made its software free for work use, with an Enterprise plan offering advanced features. See their announcement: https://lmstudio.ai/blog/free-for-work

