
RAD Studio Smart CodeInsight & LM Studio: A Local AI Alternative


Introduction

Are you interested in using AI-powered tools like Smart CodeInsight but prefer to keep your data local? Maybe you’re concerned about privacy, want to reduce API costs, or simply want to experiment with different LLM models? This guide will show you how to set up LM Studio, a user-friendly desktop application for running Large Language Models (LLMs) locally, and connect it to RAD Studio’s Smart CodeInsight.

Unlike alternatives such as Ollama or Docker-based solutions, LM Studio offers a visual, intuitive interface that makes local LLM setup accessible even if you’re not comfortable with command-line tools or container technologies.

What is LM Studio?

LM Studio is a desktop application that allows you to download and run various LLM models locally on your machine. It offers:

  • A clean, user-friendly interface
  • Automatic GPU acceleration (when available)
  • Compatibility with Windows, macOS, and Linux
  • The ability to download models directly from Hugging Face
  • An OpenAI-compatible API server mode

Best of all, you can use LM Studio as a drop-in replacement for OpenAI’s API in applications like RAD Studio’s Smart CodeInsight, keeping your code generation and AI assistance entirely local.
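Because LM Studio's server speaks the OpenAI chat-completions wire format, any plain HTTP client can talk to it. The sketch below (Python, standard library only) shows the shape of such a request; the model id "deepseek-coder" and port 1234 are placeholders, so substitute the values your own LM Studio instance shows.

```python
import json
import urllib.request

# Base URL of the local LM Studio server; adjust if your
# "Developer" tab shows a different address or port.
API_BASE = "http://localhost:1234/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model: str, prompt: str) -> str:
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(build_chat_request(model, prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # LM Studio does not check the key, but some OpenAI client
            # libraries insist on the header; any value works.
            "Authorization": "Bearer lm-studio",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# With the server running (Step 3 below), you could try:
#   print(chat("deepseek-coder", "Write a Delphi function that reverses a string."))
```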

Prerequisites

Before we begin, make sure you have:

  • RAD Studio installed with Smart CodeInsight capability
  • A reasonably powerful computer (ideally with a dedicated GPU for better performance)
  • Enough storage space for LLM models (models can range from 1GB to 10GB+, depending on your choice)

Step 1: Install LM Studio

Simply download and install LM Studio from their official website. The installation process is straightforward for Windows, macOS, and Linux. If you encounter any platform-specific issues, refer to their documentation.


Step 2: Download a Model

On first launch, LM Studio will prompt you to download a model. For best results with Smart CodeInsight, consider code-specialized models such as CodeLlama, WizardCoder, or DeepSeek Coder. You can download multiple models to compare their capabilities and performance characteristics. Once downloaded, models appear in your “My Models” tab, where you can select and load them as needed.

For details on model selection and management options, refer to the LM Studio Model Management documentation.

Step 3: Start the Local API Server


To connect RAD Studio to LM Studio, you’ll need to start LM Studio’s local API server:

  1. In LM Studio, click on the “Developer” tab in the sidebar.
  2. Make sure your selected model is loaded.
  3. Click the “Start Server” toggle.
  4. Note the server address shown (typically http://localhost:1234 or similar).
  5. The server is now running and ready to accept connections.

For additional server configuration options, check the LM Studio API Server documentation.
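Before configuring RAD Studio, you can confirm the server is answering. Under the OpenAI-compatible API, `GET /v1/models` returns the available models. Here is a minimal check in Python (standard library only; the address assumes the default port 1234 shown above):

```python
import json
import urllib.request

API_BASE = "http://localhost:1234/v1"  # match the address in the "Developer" tab

def model_ids(models_json: str) -> list[str]:
    """Extract model ids from an OpenAI-style GET /v1/models response."""
    return [m["id"] for m in json.loads(models_json)["data"]]

def list_models() -> list[str]:
    """Query the running LM Studio server for its available models."""
    with urllib.request.urlopen(f"{API_BASE}/models") as resp:
        return model_ids(resp.read().decode("utf-8"))

# With the server running, this prints the same ids RAD Studio will
# later show in its "Models" dropdown:
#   print(list_models())
```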

Step 4: Configure RAD Studio Smart CodeInsight

Now that your local LLM server is running, you need to configure RAD Studio to use it:

  1. Open RAD Studio.
  2. Navigate to Tools > Options.
  3. Look for “Smart CodeInsight” under the “IDE” section.
  4. In the Plugins settings:
    • Choose “ChatGPT” and click “Enabled”.
    • Set the API URL to the address of your local LM Studio server (e.g., http://localhost:1234/v1).
    • For the API Key field, enter any text (RAD Studio requires something in this field, but the actual value doesn’t matter).
  5. Click the “Models” dropdown menu. If it lists the available models, the connection was successful.

Step 5: Test the Connection

To verify that everything is working:

  1. Open a Delphi or C++ project in RAD Studio.
  2. Try using a Smart CodeInsight feature, such as code completion or explanation.
  3. You should see the request being processed by LM Studio (you can monitor this in the LM Studio interface).
  4. Smart CodeInsight should respond with suggestions powered by your local model.

Troubleshooting

If you encounter issues with the connection:

  • LM Studio Server Not Responding: Make sure the API server is running in LM Studio and that the model is loaded.
  • Connection Refused: Check that the URL in RAD Studio matches the address shown in LM Studio’s “Developer” tab.
  • Poor Response Quality: You might need to try a different model. Code-specialized models generally perform better for development tasks.
  • Slow Performance: Local LLMs are generally slower than cloud-based alternatives, especially without GPU acceleration. Consider using a smaller model or upgrading your hardware.

Advanced Tips

Using LM Studio Across a Network

If you want to run LM Studio on a powerful desktop and connect to it from a laptop running RAD Studio, or if you run RAD Studio in a VM, you can allow LM Studio to accept remote connections (from addresses other than localhost):

  1. In LM Studio’s “Developer” settings (next to the server toggle), enable the “Serve on Local Network” option.
  2. Note the IP address of the computer running LM Studio (e.g., 192.168.1.100).
  3. In RAD Studio, use this IP address in the API URL: http://192.168.1.100:1234/v1.
  4. Ensure your firewall allows connections to the LM Studio port.
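The steps above can be sanity-checked before touching RAD Studio. This rough Python sketch builds the API URL to enter in the Smart CodeInsight settings and tests whether the LM Studio port is reachable (the host and port are the example values from the steps above):

```python
import socket

def api_base(host: str, port: int = 1234) -> str:
    """Build the API URL to enter in RAD Studio's Smart CodeInsight settings."""
    return f"http://{host}:{port}/v1"

def is_reachable(host: str, port: int = 1234, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the LM Studio port succeeds.

    A False result usually means the server isn't running, "Serve on
    Local Network" is off, or a firewall is blocking the port.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example, using the address from the steps above:
#   print(api_base("192.168.1.100"))      # -> http://192.168.1.100:1234/v1
#   print(is_reachable("192.168.1.100"))
```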

Optimizing Model Performance

For better performance:

  • Use models optimized for your specific hardware. LM Studio will suggest which models will work best based on your hardware. 
  • Consider smaller models if you don’t need the full capabilities of larger ones.
  • Adjust the inference parameters in LM Studio for a balance between speed and quality.
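If you drive the server from your own scripts, the same speed/quality trade-off is available per request via standard OpenAI-style sampling fields, which LM Studio's server also accepts. Note that Smart CodeInsight composes its own requests, so for the IDE you would adjust these defaults in LM Studio's interface instead; the sketch below only illustrates the request-level equivalents (the model id is a placeholder):

```python
def with_generation_params(payload: dict,
                           temperature: float = 0.2,
                           max_tokens: int = 256,
                           top_p: float = 0.95) -> dict:
    """Return a copy of a chat payload with OpenAI-style sampling parameters.

    Lower temperature gives more deterministic code suggestions; a smaller
    max_tokens cap reduces latency on slower hardware.
    """
    tuned = dict(payload)
    tuned.update({
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
    })
    return tuned

base = {"model": "deepseek-coder",  # placeholder; use an id from "My Models"
        "messages": [{"role": "user", "content": "Explain this code."}]}
tuned = with_generation_params(base, temperature=0.1, max_tokens=128)
```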

AI under your control

Alternatives like this provide all the benefits of AI-assisted development while keeping your data local and under your control. As models continue to improve and hardware becomes more capable, local LLM solutions like this will become an increasingly practical option for development teams of all sizes.

For questions about RAD Studio’s Smart CodeInsight feature and other AI-powered development tools, visit the Embarcadero Documentation.

Disclaimer

LM Studio is available only for personal use. For more information, please read their terms of use.

UPDATE: As of 2025-07-08, LM Studio has made its software free for work use, with an Enterprise plan offering advanced features. See their announcement: https://lmstudio.ai/blog/free-for-work

 


About author

Pre-sales consultant engineer at Embarcadero Inc.
