
Rethinking C++: Architecture, Concepts, and Responsibility

Why C++Builder 13 is more than just a version bump – and why we must begin to truly understand C++.

An article about C++Builder – and yet about much more

With the release of C++Builder 13 and its support for the current C++20 and C++23 standards a few weeks ago, and after my subsequent extensive tests, a concept for a new library emerged, together with the idea for a book. This text is a first summary. In recent years I have repeatedly met C++ developers who became uneasy when looking at modern C++ code, and I often heard that this was no longer C++. Of course modern C++ looks different: it relies much more on metaprogramming, and although it evolved step by step, it has nonetheless made a quantum leap.

Before diving into the topic, I want to state something general. At first glance, this article seems to refer to C++Builder, a specific development environment. In fact, however, it concerns C++ as a whole: every compiler, every platform, every C++ library.

Even if I discuss specific C++Builder libraries here, such as VCL, FMX or FireDAC, the same applies without restriction to any other library, for example Qt. Whether database, UI, or business framework: everywhere it is about the same principle—understanding modern C++ as a language of thought, not of imitation.

The requirements for the respective frameworks are thus defined via concepts, a new kind of “contract” that our compilers verify. Concepts enable direct, simple implementations of interfaces that are checked already at compile time, without any runtime overhead.

Simplified—without the helper concepts—such a concept for any form of table views (in VCL, for example, TListView, in FMX a TStringGrid, in Qt a QTableWidget) looks as follows and must then be implemented for the respective components / widgets. In modern C++, concepts play an important role; some even view them as a new paradigm: concept‑based development.
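A minimal sketch of what such a concept could look like (my illustration, not the library’s actual interface; the member names clear, addRow, and rowCount are assumptions):

#include <concepts>
#include <cstddef>
#include <span>
#include <string>

// Sketch only: any type qualifies as a "table view" if it can be cleared,
// can append a row of cell texts, and can report its row count.
template <typename view_ty>
concept TableViewLike = requires(view_ty& view, std::span<const std::string> row) {
   view.clear();                                              // remove all rows
   view.addRow(row);                                          // append one row
   { view.rowCount() } -> std::convertible_to<std::size_t>;
};

An adapter for TListView, TStringGrid, or QTableWidget then only has to satisfy these requirements; the concept itself stays framework-neutral.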

After this detour, back to the topic. Analyzing my work with the new C++Builder prompted me to probe the boundaries, to test what works, and to deliberately find out what does (not yet) work. After the migration of our main product “PE Portal” to C++Builder 13 had already made surprisingly good progress with the beta, I wanted—after the official release in September—to test the capabilities for using C++20 and C++23. Following many rather poor experiences with versions 11 and 12 in recent years, I wanted to push this development system to its limits with modern C++ and initially used demanding examples from my Twitch streams.

The software package “PE Portal” consists of 10 applications (some of them COM servers) and 30 dynamic libraries, comprising 2,312 source files (sources, headers, resources) and roughly one million lines of code. The successful migration ran from late August to mid-October 2025 (C++Builder 10.2.3 / DevExpress 23.2.7 to C++Builder 13 / DevExpress 25.1.5).

To my own surprise, I reached new horizons and deepened my knowledge of C++ further.

The more I tried, the further I pushed the language and implemented the wildest ideas, the clearer it became to me that modern C++ can no longer simply be learned (or taught) but must be re-understood from the ground up. A few weeks ago I wrote in my blog about how important the transition from C++17 to C++20 / C++23 is. As a mathematician I am used to abstract thinking, and the idea of metaprogramming was familiar to me early on. I not only used it actively but also offered training on metaprogramming. I already had the notion that programs do not merely control procedures but describe models that can take shape and change at compile time. For example, in a stream in December 2022 I explained the “tie the knot” technique using metaprogramming and compile-time folding (with Visual Studio 2022), diving deep into metaprogramming.

What has become possible today with concepts and variadic templates is something I could not have anticipated.

Today, in the light of the new standards C++20 and C++23, this way of thinking becomes reality. The evolutionary path of the standards since C++11 continues purposefully. C++ has reached a new quality, and besides the compile-time paradigm—about which much has been written—there is now concept-based programming.


The renaissance of thinking in modern C++

C++ stands at a historic turning point. It will succeed only if we embrace C++ and are willing to learn again. The language, once the tool for system‑ and machine‑level programming, has become an architectural medium: an instrument with which structures, relationships, and lifecycles can not only be formulated but expressed as design logic.

With the language standards since C++11—starting with move semantics through variadic templates, better use of SFINAE, functional extensions and lambda functions, up to std::thread and std::async—C++ became a language that, alongside increased efficiency, also offers control over parallelism and concurrency and opens up new depth. These were the first steps on the evolutionary path; C++20 has completed this development with cooperative multitasking via generators and coroutines: pipes and ranges complement each other to form a model in which data flows are no longer spelled out step by step but declared. The type traits introduced in C++11 find their completion in concepts and enable an entirely new programming style: concept-based programming.

Today it is no longer about executing functions at runtime, but about designing systems that describe themselves, validate themselves, and already optimize at compile time. The compiler is no longer a mere translator but a partner in architectural construction.


From user to architect

Many developers believe that it will suffice to remain users in the future: to use the language and libraries, to integrate frameworks and continue established patterns. In doing so, they often remain stuck in the paradigms of the 1990s. But that is not enough if we want to shape the future of C++. Architects are needed to rethink C++. And trainers, to learn anew themselves and to develop new concepts.

Library vendors must have the courage to create a new generation of libraries—libraries that consistently use concepts, typelists, ranges, and compile‑time mechanisms. Compiler vendors, in turn, are responsible for continuing this development and fully unlocking the new language means.

But all of us—the C++ developers—must go back to school. We must learn C++ anew, not because we have forgotten it, but because through evolution it has become a different language. Only those who understand the modern language constructs can use the new tools properly and unfold the potential of this generation of libraries.


The value of compile time

Many—especially historically minded—developers complain that modern C++ compilers take longer to compile. But this criticism is short‑sighted. You cannot compare C++ compile times with compilation in other languages, because the compiler is doing something entirely different. And that is not just due to good optimization of the executable machine code. While other modern languages often rely on a functional paradigm—which C++ as a multi‑paradigm language naturally also supports—metaprogramming takes center stage here.

In modern C++ we design blueprints for the compiler: precise, generic, conceptual models. The compiler checks these structures, combines them, specializes them, and generates highly optimized code. In doing so, it computes methods and constants already at compile time whenever they are time‑invariant. What seemingly increases compile time actually reduces development time and runtime. And it increases reliability.
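A tiny, self-contained illustration of this time shift (my example, not taken from the article’s code): the table below is computed entirely by the compiler and ends up as constant data in the binary.

#include <array>
#include <cstddef>

consteval std::size_t table_size(std::size_t rows, std::size_t cols) {
   return rows * cols;                          // guaranteed compile-time evaluation
}

constexpr auto squares = [] {
   std::array<int, table_size(4, 8)> values {};
   for (std::size_t i = 0; i < values.size(); ++i)
      values[i] = static_cast<int>(i * i);      // runs during compilation
   return values;
}();

static_assert(squares[5] == 25);                // checked before the program ever runs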

C++ compilation is not merely a syntax transformation but a part of execution. The compiler has not become slower; it has become more intelligent. The gained time shift—from runtime into the translation phase—is the expression of a language that invests in performance and safety.

I have said for many years: anyone who uses C++ like another language—be it Delphi or Java/C#—should rather use that language. Not least because they compile faster and the real advantages of C++ lie beyond the language constructs of those languages.


C++Builder: C++ in a tool – or with a tool?

In the past weeks I tested the new C++Builder 13 live in my streams; anyone could follow along. I did not want to use C++ in a tool, but to live C++ with a tool. That is a decisive difference.

The Delphi libraries VCL and FMX have provided an impressive basis for GUIs for years—intuitive, stable, powerful. FireDAC provides one of the most elegant database connectivity layers, already usable at design time. But that is no longer enough: the next generation of C++ should not just be used; it must be rethought.

I deliberately sought boundaries—where modern C++ in interaction with C++Builder still appears incomplete. This boundary search did not become a test report but a voyage of discovery. The more I pushed the language in practice, the more clearly I recognized the direction in which C++ libraries and applications must develop.

As a mathematician I see a logical consequence in this: metaprogramming is nothing other than applied theory—thinking in models that describe and evaluate themselves. Modern C++ finally makes exactly that possible.


Safety through RAII: resources are responsibility

A central principle of this new way of thinking is RAII (Resource Acquisition Is Initialization). RAII has long been a focal point for safe programming. It stands for the insight that safety means more than memory management, namely strict control over every resource.

In my streams I showed how RAII can thereby also be applied to database operations: a connection exists as long as it is in scope. A database query delivers data as long as its object lives. Scope is not a syntactic boundary but a semantic promise, a guarantee of integrity.
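A framework-agnostic sketch of the idea (the db::api namespace and its functions are placeholders to make the sketch compile, not a real connectivity API):

#include <stdexcept>
#include <string>

namespace db::api {   // placeholder backend, stands in for whatever connectivity you use
   struct handle { bool open = false; };
   inline handle connect(std::string const&) { return { true }; }
   inline void   disconnect(handle& h)       { h.open = false; }
}

class ScopedConnection {
public:
   explicit ScopedConnection(std::string const& dsn) : handle_(db::api::connect(dsn)) {
      if (!handle_.open) throw std::runtime_error("connect failed");
   }
   ~ScopedConnection() { db::api::disconnect(handle_); }   // released deterministically
   ScopedConnection(ScopedConnection const&) = delete;
   ScopedConnection& operator=(ScopedConnection const&) = delete;
private:
   db::api::handle handle_;
};

// usage: the connection lives exactly as long as its scope
// {
//    ScopedConnection con { "my_database" };
//    // ... queries ...
// }   // disconnect happens here, on every path, including exceptions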

This deterministic control replaces chance with order. Resources are not left to chance; responsibility is taken for them. Of course, the language does not take responsibility away from you, but it offers the tools. C++ links object lifetime and system stability with a precision that few other languages offer.


Concepts, ranges, and compile‑time evaluation

The idea of concepts in C++ serves the precise formulation of type requirements and thus leads to clearer, better verifiable generic programming. Whereas templates in classic C++ often fail only at instantiation with long error messages, concepts allow a declarative definition of what a type must be able to do or provide. They align seamlessly with the STL’s fundamental idea whose strength lies in separating algorithm and data structure: an algorithm describes the procedure, an iterator abstracts data access, and both are loosely coupled through type compatibility. From this combination came the great reusability of STL algorithms, which work independently of the concrete container. With C++20, std::ranges decisively extends this principle: it integrates the notions of iterator, container, and algorithm into a unified model that combines type safety and readability. Instead of manually handling iterators, ranges operate directly on sequences and, through filters, transformations, and views, enable a declarative, pipeline‑like description of data processing—precise, type‑safe, and conceptually clear, yet flexible and reusable.

In classic C++ (and the early days of the STL) it was considered good style to model every data store and iteration target as a container: lists, vectors, or maps formed the foundation for almost every algorithmic operation. This mindset made the separation of data and processing consistent, but often led to unnecessary copies or complicated adapters whenever only partial areas or filtered views were needed. With the introduction of std::ranges, this paradigm has fundamentally shifted. In modern C++, ranges assume the role of a universal transport medium between data source and algorithm: they can represent real containers, iterator pairs, or dynamically generated sequences without owning data. Thus, the view of data is decoupled from ownership; operations are often lazily evaluated and can be combined into efficient processing pipelines. Through these properties, ranges have become a central building block of modern C++; they enable a functional, declarative style that is both efficient and expressive and thus form the logical successor to the classical container abstraction.

The combination of ranges and concepts opens a new quality of data processing in modern C++: type‑safe, generic, and highly efficient. While concepts precisely define what capabilities a type must provide, ranges form the flexible infrastructure for processing data sources of any kind—whether containers, iterator ranges, generators, or composed pipelines—in a uniform way. This combination allows algorithms to be formulated such that they are instantiated only for suitable types, whereby errors are detected at compile time. Together with variadic templates, entire sequences of heterogeneous types can be processed type‑safely, for example via parameter‑pack expansions or fold expressions that operate on each element of a range‑like structure. The result is a generic data‑processing model that is both memory‑efficient and optimization‑friendly: unnecessary copies are eliminated, evaluations are lazy, and the compiler can analyze and transform the entire pipeline into optimized machine code.

Type safety, expressiveness, and runtime efficiency are no longer in conflict but reinforce one another.
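A small, self-contained illustration of this interplay (my example, not the article’s): the algorithm only instantiates for integral element types, and the pipeline is evaluated lazily, without intermediate containers.

#include <concepts>
#include <ranges>
#include <vector>

template <std::ranges::input_range range_ty>
   requires std::integral<std::ranges::range_value_t<range_ty>>
auto sum_of_even_squares(range_ty&& values) {
   auto pipeline = values
                 | std::views::filter([](auto v) { return v % 2 == 0; })
                 | std::views::transform([](auto v) { return v * v; });
   std::ranges::range_value_t<range_ty> total {};
   for (auto v : pipeline) total += v;           // evaluated lazily, element by element
   return total;
}

int main() {
   std::vector<int> data { 1, 2, 3, 4, 5, 6 };
   return sum_of_even_squares(data) == 56 ? 0 : 1;   // 4 + 16 + 36
}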

Delphi types as concept and std::ranges

In the following section I combine modern C++ with Delphi. I want to have std::ranges types for TMemo components for both VCL and FMX, and I also want to access TListBox components or similar. The goal is a type-safe, compile-time-governed and runtime-guarded bridge between the two languages that accepts only valid owners and otherwise delivers precise diagnostics. To that end, we first establish the foundations, include the C++ concepts headers, and draw on the Delphi RTL classes.

We use neither the various specialized component libraries nor their concrete properties directly.

Encapsulation in its own namespace keeps the surface clear and prevents collisions; this example is a building block that you can integrate into existing projects. A lightweight shim class serves as an anchor, providing only a TStrings source that stores the data as a fundamental property in the components mentioned above. This wrapper later allows non‑TComponent types to be embedded into the same concept logic without overstretching semantics: the shim merely mirrors the property—no more, no less.
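A sketch of this anchor (the namespace name is arbitrary; LinesOwnerShim is the name the text uses below, the rest is my reconstruction):

#include <System.Classes.hpp>   // Delphi RTL: TComponent, TStrings
#include <concepts>
#include <type_traits>

namespace delphi_ranges {       // any project-local namespace will do

// The shim only mirrors a TStrings source; nothing more, nothing less.
struct LinesOwnerShim {
   System::Classes::TStrings* Lines = nullptr;
};

} // namespace delphi_ranges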

The following concept is a compile‑time verification to define and recognize types that stem from the Delphi component world. It recognizes components derived from System::Classes::TComponent. The benefit is precise candidate selection: only true components or the explicitly defined shim are permitted as owners.
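A sketch of that check, continuing the snippet above (the concept name DelphiComponent is my choice); it accepts pointers to anything derived from TComponent:

namespace delphi_ranges {

template <typename ty>
concept DelphiComponent =
   std::derived_from<std::remove_cvref_t<std::remove_pointer_t<ty>>,
                     System::Classes::TComponent>;

} // namespace delphi_ranges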

For the actual storage of data in the components we care about there is either a Lines property (e.g., TMemo) or Items (e.g., TListBox), each exposing access to a TStrings*. From C++’s perspective, it suffices if these properties are convertible to that type. Even though C++ itself has no properties, they behave like variables that may be readable and writable.

We can express these requirements as two small, orthogonal concepts. Diagnostics remain granular: if Lines is missing, the compiler still checks Items and only then reports a clear negative.
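A sketch of the two property concepts (the names HasLines and HasItems are mine; this assumes, as described above, that C++Builder lets a property read appear as ordinary member access inside a requires-expression):

namespace delphi_ranges {

template <typename ty>
concept HasLines = DelphiComponent<ty> && requires(ty comp) {
   { comp->Lines } -> std::convertible_to<System::Classes::TStrings*>;
};

template <typename ty>
concept HasItems = DelphiComponent<ty> && requires(ty comp) {
   { comp->Items } -> std::convertible_to<System::Classes::TStrings*>;
};

} // namespace delphi_ranges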

We combine both concepts into a common one. Allowed are either the previously defined LinesOwnerShim or a Delphi component that provides at least one of the two properties. The logic remains intentionally strict: neither “almost fits” nor implicit conversions outside of TStrings* are accepted.
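Combined (HasLinesOrItems is the name the text refers to below; the exact formulation is my sketch):

namespace delphi_ranges {

template <typename ty>
concept HasLinesOrItems =
      std::same_as<std::remove_cvref_t<ty>, LinesOwnerShim>
   || HasLines<ty>
   || HasItems<ty>;

} // namespace delphi_ranges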

Software changes—especially components from third‑party libraries. For this important case, the logical consequences in metaprogramming must always be guarded. This also happens at compile time and we need a check value that always yields false. The helper constant always_false_v allows us to reliably trigger a compile‑time error in a dependent context without instantiating unnecessary templates.
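The helper itself is the well-known idiom:

namespace delphi_ranges {

// dependent false: only fires when the template is actually instantiated
// with a type that slipped past the concepts (e.g. after a library update)
template <typename...>
inline constexpr bool always_false_v = false;

} // namespace delphi_ranges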

I do not want to show the entire class here, with its iterator, a type that converts the lines in a TStrings* to std::string, the std::range, and the back inserter. Therefore, only the class declaration and a static method that performs the association at compile time.

The template parameter is constrained by the concept HasLinesOrItems above; the static method ValidateAndGetSequence returns a checked TStrings* with the data. The approach is two‑stage: at compile time it selects the appropriate property, with a subsequent runtime null check. This way, both miswired owners and runtime‑missing objects are cleanly handled.
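A sketch of that declaration and the two-stage check (apart from HasLinesOrItems, LinesOwnerShim, always_false_v, and ValidateAndGetSequence, which the text names, everything here, including the class name, is my reconstruction):

#include <stdexcept>

namespace delphi_ranges {

template <HasLinesOrItems owner_ty>
class TStringsSequence {
public:
   // ... iterator, conversion of lines to std::string, range interface, back inserter ...

   static System::Classes::TStrings* ValidateAndGetSequence(owner_ty owner) {
      System::Classes::TStrings* strings = nullptr;
      if constexpr (std::same_as<std::remove_cvref_t<owner_ty>, LinesOwnerShim>) {
         strings = owner.Lines;                  // the shim mirrors the property directly
      }
      else if constexpr (HasLines<owner_ty>) {
         strings = owner->Lines;                 // e.g. TMemo
      }
      else if constexpr (HasItems<owner_ty>) {
         strings = owner->Items;                 // e.g. TListBox
      }
      else {
         static_assert(always_false_v<owner_ty>, "owner provides neither Lines nor Items");
      }
      if (strings == nullptr)                    // runtime guard for missing objects
         throw std::runtime_error("ValidateAndGetSequence: owner has no TStrings instance");
      return strings;
   }
};

} // namespace delphi_ranges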

With that, the bridge between C++ and Delphi is complete: the concepts precisely define the admissible types; the method deterministically selects the right access path at compile time and performs a runtime null‑guard. In practice this yields a robust pattern for components like TMemo, TListBox, or custom adapters that expose TStrings via Lines or Items.


Type safety, variants, and many‑valued logic

With std::variant, C++ gained a tool that allows dynamic states without giving up type safety. Variants are not uncontrolled, dynamic containers, but defined sets of possible types that can be queried explicitly.

But the thinking does not end there: std::optional has opened a new depth that corresponds to a many‑valued logic. A value may exist—or not exist. This possibility, that a variable has no value, was long unimaginable to many developers. With std::expected, this concept was extended: a result may be valid or carry an error state, and both cases become part of the type system.

In this monadic view, error handling becomes an aspect of semantics. Optional or expected values are no longer special cases but logical states—mathematically precise and compiler‑checked.
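A short illustration of this monadic view (my example; requires the C++23 <expected> header): the error is part of the return type and cannot be ignored silently.

#include <charconv>
#include <expected>
#include <string>
#include <string_view>
#include <system_error>

std::expected<int, std::string> parse_int(std::string_view text) {
   int value {};
   auto [ptr, ec] = std::from_chars(text.data(), text.data() + text.size(), value);
   if (ec != std::errc{} || ptr != text.data() + text.size())
      return std::unexpected(std::string("not an integer: ").append(text));
   return value;                 // the valid case and the error case share one type
}

// usage sketch:
// if (auto parsed = parse_int(input); parsed) use(*parsed);
// else log(parsed.error());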

On this basis, concepts open a new level of safety. Just one example. We can equip variables with explicit units via enum class types and concept‑based parameters: meters, seconds, degrees, pounds, or Newtons. This yields a system that allows automatic yet controlled conversions, for example between metric and imperial units, and validates at compile time whether an operation is semantically meaningful.

The same can be done with degrees and radians—perhaps easier to grasp:
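A reduced sketch of such a unit-aware type (my reconstruction of the idea, not the article’s class; the names Angle, AngleKind, value, and toRadians are illustrative). The unit is part of the type, and the accessors discussed in the following paragraphs return a reference when no conversion is needed and a freshly converted value otherwise. The article mentions SFINAE; this sketch writes the same dispatch with C++20 constraints, the modern equivalent.

#include <cmath>
#include <concepts>
#include <numbers>

enum class AngleKind { degree, radian };

template <std::floating_point value_ty, AngleKind kind>
class Angle {
public:
   constexpr explicit Angle(value_ty value) : value_(value) {}

   // Read the stored value, optionally as another floating-point type.
   // Same type: return a const reference. Different type: return by value.
   template <std::floating_point target_ty = value_ty>
      requires std::same_as<target_ty, value_ty>
   constexpr value_ty const& value() const { return value_; }

   template <std::floating_point target_ty>
      requires (!std::same_as<target_ty, value_ty>)
   constexpr target_ty value() const { return static_cast<target_ty>(value_); }

   // Radians for the math functions: a reference when nothing needs to be
   // converted, otherwise a converted value - decided entirely at compile time.
   template <std::floating_point target_ty = value_ty>
      requires (kind == AngleKind::radian && std::same_as<target_ty, value_ty>)
   constexpr value_ty const& toRadians() const { return value_; }

   template <std::floating_point target_ty = value_ty>
      requires (kind != AngleKind::radian || !std::same_as<target_ty, value_ty>)
   constexpr target_ty toRadians() const {
      if constexpr (kind == AngleKind::radian)
         return static_cast<target_ty>(value_);
      else
         return static_cast<target_ty>(value_ * std::numbers::pi_v<target_ty> / target_ty { 180 });
   }

private:
   value_ty value_;
};

using Degrees = Angle<double, AngleKind::degree>;
using Radians = Angle<float,  AngleKind::radian>;

// usage sketch:
// Degrees angle { 90.0 };
// double  sine  = std::sin(angle.toRadians());   // no unit can be forgotten here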

I do not want to show the entire class for explanation, only one example. The angle should be output correctly, optionally with a different type. The value can be returned by reference if the kind of angle and the data type used match; otherwise a conversion must take place and the value is returned by value. SFINAE enables exactly that, and it shows that in modern C++ we leave little to chance. And everyone should recognize that none of this happens at runtime.

If we now want to use trigonometric functions (std::sin, std::cos, …), we need the angle in radians; perhaps you have once forgotten this and caused a bug. Therefore, similarly to the example above, we can offer a toRadians method and then provide custom, safe functions for this class. The method may return a reference when the angle kind and data type are the same; otherwise it must return a value. Here, C++ has long offered SFINAE.

And of course we can define a std::formatter for this new type to also output the unit and support the usual numeric formatting rules. Also string literals, to work directly with units in source code.
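Building on the Angle sketch above, a std::formatter specialization and two literals could look like this (again an illustration, not the article’s code; the suffixes _deg and _rad are my choices):

#include <format>

namespace std {
   // reuse the floating-point formatting rules ("{:.2f}" etc.) and append the unit
   template <std::floating_point value_ty, AngleKind kind>
   struct formatter<Angle<value_ty, kind>, char> : formatter<value_ty, char> {
      template <typename format_context_ty>
      auto format(Angle<value_ty, kind> const& angle, format_context_ty& ctx) const {
         auto out = formatter<value_ty, char>::format(angle.value(), ctx);
         return std::format_to(out, "{}", kind == AngleKind::degree ? " deg" : " rad");
      }
   };
}

constexpr Angle<double, AngleKind::degree> operator""_deg(long double value) {
   return Angle<double, AngleKind::degree>(static_cast<double>(value));
}
constexpr Angle<double, AngleKind::radian> operator""_rad(long double value) {
   return Angle<double, AngleKind::radian>(static_cast<double>(value));
}

// usage sketch:
// std::println("{:.2f}", 90.0_deg);   // prints "90.00 deg"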

This capability is revolutionary; it breaks a decades‑old dogma: programs were never written to compute with “apples, pears, and slices of cake” but with abstract numbers. That mindset led to countless problems and disasters—even the failed Mars Climate Orbiter mission, where mixing units caused the loss of a spacecraft.

With modern C++ we can eliminate this class of errors. Through concept‑based thinking, type systems arise that separate physical units, measures, and values and connect them arithmetically in a controlled way. We can define “safe numbers” that no longer allow uncontrolled conversions. The language itself becomes the guardian of logic and consistency—a progress earlier generations of developers could only dream of.


Typelists, templates, and compile‑time safety

Another example. In my experiments on database values and parameters I used variadic templates and fold expressions to define typelists that already at compile time verify which bindings are allowed. When a SQL query binds parameters, the compiler decides correctness and order—not runtime. Trait structures take care of conversions among different types.

This principle extends the RAII idea of safety to the type level. It creates a system in which errors cannot even arise because they are excluded at compile time. The following example shows a simple typelist, with a std::tuple to hold a full typelist and a std::variant as a type to store a single value from the list. The invoke method serves to perform multiple operations in one function with the same typelist.
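A sketch of such a typelist (my reconstruction of the described building blocks; the names are illustrative):

#include <string>
#include <tuple>
#include <variant>

template <typename... types>
struct typelist {
   using tuple_type   = std::tuple<types...>;     // one value per listed type (a full row)
   using variant_type = std::variant<types...>;   // a single value of exactly one listed type

   // Call a templated callable once per type in the list, so several
   // operations can be driven by the same typelist in one function.
   template <typename func_ty>
   static constexpr void invoke(func_ty&& func) {
      (func.template operator()<types>(), ...);   // fold expression over the pack
   }
};

// usage sketch:
// using db_types = typelist<int, double, std::string>;
// db_types::invoke([]<typename ty>() { /* register, check, print ... for ty */ });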

Now, with the help of a fold expression, we can check whether a particular type ty is contained in a typelist.
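The membership check is then a one-line fold, continuing the typelist sketch above:

#include <type_traits>

// true if ty occurs anywhere in the pack
template <typename ty, typename... types>
inline constexpr bool is_one_of_v = (std::is_same_v<ty, types> || ...);

// lifted onto the typelist
template <typename ty, typename list_ty>
inline constexpr bool contains_v = false;

template <typename ty, typename... types>
inline constexpr bool contains_v<ty, typelist<types...>> = is_one_of_v<ty, types...>;

static_assert( contains_v<int,  typelist<int, double>>);
static_assert(!contains_v<char, typelist<int, double>>);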

Now I can use these concepts to define a list of valid types for returning a database query. I can check this again at compile time and define a concept based on it for use in the program. Not only types from the list itself are allowed, but also std::optional types with those.
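A sketch of the resulting concept, again continuing the snippets above (the list of value types is purely illustrative):

#include <optional>
#include <string>
#include <type_traits>

using db_value_types = typelist<int, double, bool, std::string>;   // illustrative list

template <typename ty>
struct unwrap_optional { using type = ty; };

template <typename ty>
struct unwrap_optional<std::optional<ty>> { using type = ty; };

// valid: a listed type itself, or std::optional of a listed type
template <typename ty>
concept DbValueType =
   contains_v<typename unwrap_optional<std::remove_cvref_t<ty>>::type, db_value_types>;

static_assert( DbValueType<std::optional<double>>);
static_assert(!DbValueType<float>);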

We can now use these concepts for a database query; a single value or the entire row can be retrieved. In the first step we access a specific attribute in the data row. Since the types can also be given as std::optional, this must be reflected in the return type.

We can now also return a complete data row using std::generator and a coroutine; however, we need a bit of preparation to linearize the variadic parameter pack into a sequential series so that each type entry is considered. To avoid always returning the full result set and instead enable projections, a std::vector<std::string> with the names of the attributes is passed.

Since C++Builder 13 does not yet provide std::generator, a custom implementation is required as a workaround.
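A minimal sketch of such a workaround, reduced to what a range-based for loop needs (no exception propagation, no allocator support; not the article’s implementation):

#include <coroutine>
#include <exception>
#include <iterator>
#include <utility>

template <typename value_ty>
class generator {
public:
   struct promise_type {
      value_ty current {};
      generator get_return_object() {
         return generator { std::coroutine_handle<promise_type>::from_promise(*this) };
      }
      std::suspend_always initial_suspend() noexcept { return {}; }
      std::suspend_always final_suspend() noexcept { return {}; }
      std::suspend_always yield_value(value_ty value) { current = std::move(value); return {}; }
      void return_void() {}
      void unhandled_exception() { std::terminate(); }
   };

   explicit generator(std::coroutine_handle<promise_type> handle) : handle_(handle) {}
   generator(generator&& other) noexcept : handle_(std::exchange(other.handle_, {})) {}
   generator(generator const&) = delete;
   ~generator() { if (handle_) handle_.destroy(); }

   // just enough iterator surface for range-based for
   struct iterator {
      std::coroutine_handle<promise_type> handle;
      bool operator!=(std::default_sentinel_t) const { return handle && !handle.done(); }
      void operator++() { handle.resume(); }
      value_ty const& operator*() const { return handle.promise().current; }
   };
   iterator begin() { if (handle_) handle_.resume(); return { handle_ }; }
   std::default_sentinel_t end() const { return {}; }

private:
   std::coroutine_handle<promise_type> handle_;
};

// usage sketch:
// generator<int> iota(int count) { for (int i = 0; i < count; ++i) co_yield i; }
// for (int i : iota(3)) { /* 0, 1, 2 */ }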

Many developers will no longer see C++ in the previous examples and will despair while reading. They will argue that all of this is far too complicated and unnecessary. What is often overlooked is that this code is written only once for this library and then reused type‑safely and robustly. These are blueprints that the modern C++ compiler assembles and checks in concrete projects and then translates into efficient native code without overhead. Later there will be another example that shows how to populate a table with values from the database.


Thinking in ranges, tuples, and relations

The new C++ no longer views data simply as objects but as relationships. A file, a database query, a grid display in the UI: everything is a range.

In my streams I showed how a dataset from FireDAC can be defined as a range with a variadic typelist and modeled as a std::tuple, managed with a generator and coroutines, and then visualized type‑safely in a grid via a C++ standard algorithm using the same typelist.

Each cell is a tuple element, each column a projection. With structured bindings and return value optimization, functions today can return multiple values without creating copies. The significance of rvalues and std::move lies precisely here: data is not copied, but transferred.
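A small illustration of that (my example): the aggregate is constructed directly in the caller’s storage and unpacked without copies.

#include <cstddef>
#include <string>
#include <vector>

struct query_stats {
   std::size_t rows;
   std::string last_key;
};

query_stats analyze(std::vector<std::string> const& keys) {
   // constructed in place in the caller (guaranteed copy elision); the struct itself is never copied
   return { keys.size(), keys.empty() ? std::string {} : keys.back() };
}

// usage sketch:
// auto [rows, last] = analyze(keys);   // structured bindings onto the returned aggregate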

This efficiency is not an optimization—it is a form of expression. C++ knows movement as a semantic concept and makes it visible. Where other languages copy, C++ transforms states. Thus efficiency becomes a philosophy.

This perspective connects computer science and mathematics. A range is a set, a tuple an element, a transformation a mapping. C++ becomes a tool for mathematical thinking in relationships, not instructions. And in doing so, C++ reaches a new level of safety, and the need for strict type safety as defined by Bjarne Stroustrup gains new quality.


Safe output: formatted precision

This new generation of type safety and compile‑time validation does not end with data or computations; it continues consistently through output.

With std::format, C++ has gained a modern, powerful, and safe formatting system that ends the classic, error‑prone printf mechanisms. std::format is not merely convenient but fully type‑safe: the compiler checks that placeholders and data types match.

Beyond that, formatting can be extended with std::formatter: custom types can have specialized formatters that define how an object is rendered—precise, context‑dependent, and checked at compile time. Thus, expression and structure fuse into a unified system.

C++23 continues this idea: with std::print and std::println, formatting becomes safe output. These functions are compile‑time optimized, use the std::format system internally, and offer a semantically clear, expressive interface.
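A short illustration (requires the C++23 <print> header): the format string is checked against the argument types at compile time; a mismatch is a compile error, not a runtime crash.

#include <format>
#include <print>

int main() {
   std::println("{} rows loaded in {:.1f} ms", 42, 3.5);   // type-checked at compile time
   auto line = std::format("{:>12}", "right-aligned");     // the usual width/alignment rules
   std::println("{}", line);
   // std::println("{:d}", "text");   // would not compile: {:d} does not match a string
}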

Output is therefore no longer an uncontrolled side effect but part of type logic. Formatting becomes, like every other operation in modern C++, a verifiable part of the program. Safety, precision, and readability unite in one of the oldest yet often underestimated dimensions of software design: expression.


Responsibility and perspective

C++ is not a language for imitators but for developers who understand what they are building. Yet this new phase demands courage—from the developers designing libraries and from the makers of the tools implementing the language.

Embarcadero has taken an important step with C++Builder 13 to make modern C++ productive. But this step must not be the destination; it is an important waypoint in a continuous process for the next versions and years. C++26 is just around the corner, and anyone who stands still today will be overtaken tomorrow.

My appeal goes to those responsible: do not stand still—press forward. Compilers, frameworks, and IDEs must be developed so they natively support thinking in concepts, typelists, ranges, and RAII. C++Builder has the potential to be part of this movement—if we allow it.


Closing thought: understanding C++

C++ is often described as complex, hard to learn, and unsafe. That reputation is undeserved. The language itself is not unsafe. On the contrary: it is precise, honest, and consistent. What is unsafe is how it is used if it is misunderstood or if one remains in old patterns.

C++ deliberately preserves the old to avoid breaking its evolution, but it expands in layers of precision, control, and expressiveness. The lack of safety lies not in the language but in the refusal to learn anew.

If we stop seeing libraries as implementations and start seeing them as models of relationships and states, if we understand the compiler as a partner and RAII as a principle of responsibility, if we think of ranges as mathematical structures, use concepts as semantic contracts, treat move semantics as deliberate movement of meaning, and finally use std::format and std::print to turn the language itself into an instrument of precise communication—then a new epoch of C++ thinking begins.

C++ has never been a language for convenience; it has always been a language of responsibility. And precisely therein lies its future.




About the Author

Volker Hillmann comes from northern Germany and is a mathematician as well as a software architect with an interdisciplinary background bridging formal science and applied computer science. His studies in mathematics combine classical rigor with logical and cybernetic thinking, complemented by many years of engagement with chaos mathematics, systems theory, and artificial intelligence. The computer science focus of his work lies in databases, data security, and software architecture – with a consistently modern emphasis on C++.

He has been programming in Turbo C since 1988, in Turbo C++ since 1991, and knows the evolution of the language as well as its tools firsthand.
He has given numerous lectures on C++ and software architecture and has been self-employed since 2001. Since the mid-2000s, he has also been an Embarcadero MVP, actively promoting the further development and practical application of C++Builder.

His passion is modern C++:
In his free livestreams, he focuses exclusively on contemporary C++, regardless of the compiler – whether MSVC, GCC, or, thanks to the new version, once again C++Builder. It is never about a specific tool, but always about the language itself:
C++ as an expression of architecture, precision, and thought.

He understands C++ not merely as a tool but as a language of thought. C++ is a platform for structured, efficient, and secure design.
In his current streams and articles, he tests and analyzes C++Builder 13 to demonstrate where modern C++ stands in practice, what possibilities already exist, and which boundaries still need to be crossed.

His topics range from RAII and move semantics to coroutines, ranges, and concepts, all the way to compile-time metaprogramming
and type safety. As a mathematician, he thinks in systems and relations, in ranges, tuples, and mappings, and consistently applies this perspective to software architecture.
He represents an understanding of C++ that combines responsibility, precision, and evolution, showing that this language is neither outdated nor unsafe, but rather the most precise and honest form of software design.

