C++ - Reddit
224 subscribers
48 photos
8 videos
24.7K links
Stay up-to-date with everything C++!
Content directly fetched from the subreddit just for you.

Join our group for discussions : @programminginc

Powered by : @r_channels
Online talk on building a C++ based custom language and lexer internals

Developers from PVS-Studio are continuing their series of talks about creating a custom programming language. They will explain what the lexer is, what it consists of, and how to work with it.

The talk series as a whole is for devs who want to start understanding how compilers work under the hood. Throughout the series, their C++ architect demonstrates the practical application of each programming language component.

If you're interested, I leave the link here.

https://redd.it/1so08nx
@r_cpp
build2 0.18.1 released, adds package manager Fetch Cache, JSON Compilation Database, and official binary packages
https://build2.org/release/0.18.0.xhtml

https://redd.it/1so1igj
@r_cpp
cppreference is back up! but overloaded

I just clicked a link that wasn’t cached and noticed a very long loading time. Eventually the page loaded, and the font was different. After Herb’s post I was excited, and the homepage notice now declares the site operational again! However, I’m still seeing a significant number of 5xx errors.

https://redd.it/1so7kx8
@r_cpp
Built an AI overlay that disappears on screen share — one Win32 API call, C++, and a week of evenings

Built this after getting frustrated during technical interviews — an AI assistant that's literally invisible on screen share

I kept wondering why there wasn't a clean way to have a personal reference window open during video calls without it being visible on screen.

Then I found out about SetWindowDisplayAffinity — a Windows API that lets you exclude a window from all capture. OBS, Zoom, Teams, Google Meet — none of them pick it up. The window exists on your screen, nowhere else.

Spent a week building an overlay on top of it. Floating AI assistant. Only you can see it. That's the whole thing.

Shipped it at www.unviewable.online.

For anyone curious about the tech — it's C++ with CMake, the magic is literally one Win32 API call. Windows has had this since Windows 10 2004 and barely anyone talks about it. Wild.

https://redd.it/1sodzjg
@r_cpp
Opinions on Introducing C++: The Easy Way to Start Learning Modern C++ by Frances Buontempo

What do you all think about the book Introducing C++: The Easy Way to Start Learning Modern C++ by Frances Buontempo? I'm considering buying it because I want to learn C++ and I already have some experience coding in other languages. It seems like a sort of successor to Accelerated C++.

https://redd.it/1sp3epy
@r_cpp
mini project

I built a small to-do List project in C++

I'd appreciate if you can take a quick look and give feedback on:

* File structure
* Code design
* Any bad practices
* The name must be one word (like ex_ex_ex) and I can't solve that



GitHub link: to-do_list_cpp/to_do_list.cpp at main · TheGreat-A7A/to-do_list_cpp

https://redd.it/1spmp6g
@r_cpp
A virtual pointer pattern for dynamic resolution in C++ — years in production

I've been working on Olex2, a crystallography software package, for over 20 years. At some point I needed pointers whose target wasn't a fixed address but a runtime decision — "whatever is currently the active object of this type."

The result was olx_vptr — a virtual pointer where resolution is delegated to a user-defined get_ptr():

https://github.com/pcxod/olex2/blob/master/sdl/olxvptr.h

The calling code uses natural pointer syntax and knows nothing about how resolution happens. A concrete use looks like this:

  struct VPtr : public olx_virtual_ptr<TXFile> {
      virtual IOlxObject *get_ptr() const;
  };

  olx_vptr<TXFile> thip(new VPtr());

  lib->Register(
      new TFunction<TXFile>(thip, &TXFile::LibGetFormula, "GetFormula", .....
(https://github.com/pcxod/olex2/blob/master/xlib/xfiles.cpp#L1427)

Single virtual dispatch, fully type-safe, open to any resolution strategy.

I'm surprised this pattern never made it into the standard or common literature. Has anyone seen something similar? Would there be interest in a more formal writeup?

https://redd.it/1spvxx6
@r_cpp
I built a calm, readable system monitor for Linux (Qt6, C++20, open source)

Most Linux system monitors end up in one of two places: they feel ancient and cramped, or they show everything at once and become visual noise. I wanted something in between, genuinely useful but still pleasant to look at for more than ten seconds. So I built archvital.

It has a compact Overview page for the numbers that matter most, plus dedicated pages for CPU, GPU, Memory, Disk, Network, Tasks, and a Diagnostics page for common health checks. There is a Settings page for theme colors and section visibility, and preferences are saved through QSettings so the app remembers your layout between sessions.

The whole thing is built on Qt6 and C++20 with custom sidebar, card, sparkline, and bar components. No extra widget library beyond Qt itself.

The project is already daily driveable but still actively evolving. Screenshots are in the repo. If you try it and something looks wrong or reads oddly, that is exactly the kind of feedback I am looking for.

github.com/T9Tuco/archvital

https://redd.it/1sq6siu
@r_cpp
SIMD IPv6 lookup vs Patricia trie: surprising real-world results

I’ve been working on a C++ implementation of a SIMD-accelerated IPv6 longest-prefix-match (LPM) structure based on the PlanB paper (linearized B+-tree + AVX-512).



On synthetic workloads, the results were as expected:

~20× faster than a naive Patricia trie.



But when I switched to a real BGP table (RIPE RIS rrc00, ~254K IPv6 prefixes), I got a surprising result:



A simple Patricia trie can actually match or even outperform the SIMD-based tree.



Numbers (single core, Ice Lake laptop):

- SIMD tree: ~65–137 MLPS

- Patricia: ~95 MLPS median



The reason seems to be cache locality and early exits:

- Patricia often resolves lookups after just a few pointer hops in hot regions of the address space

- The SIMD tree always pays a fixed traversal cost (depth ~7)



So even though the SIMD approach is more “theoretically efficient”, real-world prefix distribution and access patterns change the outcome quite a bit.



I’m curious if others have observed similar effects in routing / packet processing systems, or when comparing structures like PopTrie / CP-Trie.



Repo (MIT, includes benchmarks + real BGP reproduction):

https://github.com/esutcu/planb-lpm

https://redd.it/1sqe08e
@r_cpp
I am new to C++, is it just me or is the checklist kinda crazy? How often do you encounter these or plan on making use of them like the newer C++26 features like contracts? Looking for more experienced dev opinions...

I know Python and have been binging C++ from scratch for a couple of days now. C++ feels like a language where you gradually discover a huge checklist of "STATE YOUR INTENT" items, probably from years of layering on features to shore up weaknesses in memory and type safety. I get it, C++ was designed for things to be explicit, and like the English language you don't need to know every word (or here, every feature), but every line of code makes me feel like I'm asking a gajillion questions.

So far I have jotted down some stuff that took me awhile to digest...

* const
* constexpr
* [[nodiscard]]
* std::expected
* delete("reason")
* smart pointers
* zero-initialization curly brackets
* structured bindings
* assert
* static assert
* contracts

So I guess I'm not very familiar with the ecosystem or ever worked with other C++ code. Like is this good or bad? I would think it's very safe but do and should people code actually C++ like this? I don't have a frame of reference to relate to and I don't know if the C++ community is going to jump on things like C++26's contracts or reject it. What's the current attitude? If you were a teammate, would you love or hate to see something like this?

  [[nodiscard]] constexpr std::expected<int, Error> GetHP(const std::unique_ptr<Player>& player)
  {
      assert(player);
      if (!player) return std::unexpected(Error::NullPlayer);
      const auto [hp, alive] = std::pair{player->hp, player->hp > 0};
      static_assert(sizeof(int) == 4);
      return alive ? hp : 0;
  }

https://redd.it/1sqhobx
@r_cpp
Avoiding per-cell std::string allocation in a vectorized filter

Writing a small columnar query engine and hit a string-copy trap in the filter operator. The fix turned out to be
measurable so I thought I'd share.

Naive version: output chunk built cell-by-cell.

  for (idx_t i = 0; i < input.size(); i++) {
      if (!matches[i]) continue;
      for (idx_t c = 0; c < num_cols; c++)
          result.SetValue(c, out, input.GetValue(c, i));
      out++;
  }


GetValue/SetValue go through a tagged Value type, and for VARCHAR they each allocate a fresh std::string. 1M
rows with a few VARCHAR columns means millions of allocations on a single filter pass.

Vectorized version: build a uint32_t sel[] of matching row indices, then per column copy with the typed pointer.

  auto *s = src.GetData<int64_t>();
  auto *d = dst.GetData<int64_t>();
  for (idx_t i = 0; i < n; i++) d[i] = s[sel[i]];


Trivial for numeric types. For VARCHAR it's trickier: string_t is a 16-byte type, inline for short strings, a
pointer to a heap-allocated payload for longer ones. Copying the 16 bytes is cheap. The problem is that the
long-string pointer aims at the source vector's string heap. Let src go out of scope and dst's strings point at freed
memory.

The string heap (VectorStringBuffer) is already owned via shared_ptr<VectorBuffer>. Fix is a setter that makes dst
adopt src's heap:

  auto *s = src.GetData<string_t>();
  auto *d = dst.GetData<string_t>();
  for (idx_t i = 0; i < n; i++) d[i] = s[sel[i]];
  dst.SetAuxiliaryPtr(src.GetAuxiliaryPtr()); // dst keeps src's heap alive


No string copies. Refcount bumps once per vector, not once per cell.

A WHERE ... GROUP BY region query on 1M rows went from 894 ms to ~150 ms. Roughly 100 ms of that was this change
alone; the rest was unrelated parallelism on another pass.

Question for the sub: is there a standard name for this pattern? "Copy handles that reference an upstream buffer,
retain the buffer as long as any handle lives." Arrow solves the same problem internally. shared_ptr<Buffer>
adoption feels manual. Curious what the idiomatic C++ answer is.

Repo if anyone wants the full context: https://github.com/SouravRoy-ETL/slothdb

https://redd.it/1sqj7ze
@r_cpp