C++ - Reddit
New C++ Conference Videos Released This Month - April 2026 (Updated To Include Videos Released 2026-04-13 - 2026-04-19)

**CppCon**

2026-04-13 - 2026-04-19

* Persistence Squared: Persisting Persistent Data Structures - Juan Pedro Bolivar Puente - [https://youtu.be/HmmRVdYMP-g](https://youtu.be/HmmRVdYMP-g)
* CTRACK: C++ Performance Tracking and Bottleneck Discovery - Grischa Hauser - [https://youtu.be/en4OQvZePqg](https://youtu.be/en4OQvZePqg)
* From C+ to C++: Modernizing a GameBoy Emulator - Tom Tesch - [https://youtu.be/ScmhRNSrRP4](https://youtu.be/ScmhRNSrRP4)
* Leverage AI Agents to Refactor and Modernize C++ Code - Jubin Chheda - [https://youtu.be/vAySFnu-Z18](https://youtu.be/vAySFnu-Z18)
* Lightning Talk: Algebraic Path Problems Done Quick: Or how to find the best\* path from one talk to another - Stefan Ivanov - [https://youtu.be/Fcun7lDfTRQ](https://youtu.be/Fcun7lDfTRQ)

2026-04-06 - 2026-04-12

* Rust/C++ Interop Challenges - Victor Ciura - [https://youtu.be/8xqhSy539Pc](https://youtu.be/8xqhSy539Pc)
* groov: Asynchronous Handling of Special Function Registers - Michael Caisse - [https://youtu.be/TjSL-XCyUJY](https://youtu.be/TjSL-XCyUJY)
* Clean code! Horrible Performance? - Sandor Dargo - [https://youtu.be/nLts4S8xSd4](https://youtu.be/nLts4S8xSd4)
* Beyond the Big Green Button: Demystifying the Embedded Build Process - Morten Winkler Jørgensen - [https://youtu.be/UekVdzMCAa0](https://youtu.be/UekVdzMCAa0)
* C++: Some Assembly Required - Matt Godbolt - [https://youtu.be/zoYT7R94S3c](https://youtu.be/zoYT7R94S3c)

2026-03-30 - 2026-04-05

* How to Build Type Traits in C++ Without Compiler Intrinsics Using Static Reflection - Andrei Zissu - [https://youtu.be/EcqiwhxKZ4g](https://youtu.be/EcqiwhxKZ4g)
* Beyond Sequential Consistency: Unlocking Hidden Performance Gains - Christopher Fretz - CppCon 2025 - [https://youtu.be/6AnHbZbLr2o](https://youtu.be/6AnHbZbLr2o)
* Dynamic Asynchronous Tasking with Dependencies - Tsung-Wei (TW) Huang - CppCon 2025 - [https://youtu.be/6Jd9Zyl9SDc](https://youtu.be/6Jd9Zyl9SDc)
* Work Contracts in Action: Advancing High-performance, Low-latency Concurrency in C++ - Michael Maniscalco - CppCon 2025 - [https://youtu.be/5ghAa7B5bF0](https://youtu.be/5ghAa7B5bF0)
* Constexpr STL Containers: Why C++20 Still Falls Short - Sergey Dobychin - CppCon 2025 - [https://youtu.be/Py4GJaCHwkA](https://youtu.be/Py4GJaCHwkA)

**C++Online**

2026-04-13 - 2026-04-19

* Suspend and Resume: How C++20 Coroutines Actually Work - Lieven de Cock - [https://youtu.be/SOSn6Ich60A](https://youtu.be/SOSn6Ich60A)
* Building High-Performance Distributed Systems in Modern C++ - Real-World Patterns with Boost.Asio & Beast - Samaresh Kumar Singh - [https://youtu.be/V9pKPug3xbo](https://youtu.be/V9pKPug3xbo)

2026-04-06 - 2026-04-12

* Mastering C++ Clocks: A Deep Dive into std::chrono - Sandor DARGO - [https://youtu.be/ytI6pzT1Opk](https://youtu.be/ytI6pzT1Opk)

2026-03-30 - 2026-04-05

* Is AI Destroying Software Development? - David Sankel - C++Online 2026 - [https://youtu.be/Ek32ZH3AI3k](https://youtu.be/Ek32ZH3AI3k)
* From Hello World to Real World - A Hands-On C++ Journey from Beginner to Advanced - Workshop Preview - Amir Kirsh - [https://youtu.be/2zhW-tL2UXs](https://youtu.be/2zhW-tL2UXs)
* Workshop Preview: C++ Software Design - Klaus Iglberger - [https://youtu.be/VVQN-fkwqlA](https://youtu.be/VVQN-fkwqlA)
* Workshop Preview: Essential GDB and Linux System Tools - Mike Shah - [https://youtu.be/ocaceZWKm\_k](https://youtu.be/ocaceZWKm_k)
* Workshop Preview: Concurrency Tools in the C++ Standard Library - A Hands-On Workshop - Mateusz Pusz - [https://youtube.com/live/Kx9Ir1HBbwY](https://youtube.com/live/Kx9Ir1HBbwY)
* Workshop Preview: Mastering std::execution (Senders/Receivers) - A Hands-On Workshop - Mateusz Pusz - [https://youtube.com/live/bsyqh\_bjyE4](https://youtube.com/live/bsyqh_bjyE4)
* Workshop Preview: How C++ Actually Works - Hands-On With Compilation, Memory, and Runtime - Assaf Tzur-El - [https://youtube.com/live/L0SSRRnbJnU](https://youtube.com/live/L0SSRRnbJnU)
* Workshop Preview: Jumpstart to C++ in Audio - Learn Audio Programming & Create Your Own Music Plugin/App with the JUCE C++ Framework - Jan Wilczek - [https://youtube.com/live/M3wJN0x8cJw](https://youtube.com/live/M3wJN0x8cJw)
* Workshop Preview: AI++ 101 - Build an AI Coding Assistant in C++ & AI++ 201 - Build a Matching Engine with Claude Code - Jody Hagins - [https://youtube.com/live/Vx7UA9wT7Qc](https://youtube.com/live/Vx7UA9wT7Qc)
* Workshop Preview: Stop Thinking Like a Junior - The Soft Skills That Make You Senior - Sandor DARGO - [https://youtube.com/live/nvlU5ETuVSY](https://youtube.com/live/nvlU5ETuVSY)
* Workshop Preview: Splice & Dice - A Field Guide to C++26 Static Reflection - Koen Samyn - [https://youtube.com/live/9bSsekhoYho](https://youtube.com/live/9bSsekhoYho)

**ADC**

2026-04-13 - 2026-04-19

* Building Better Software through Cross-Functional Collaboration - Matt Morton - [https://youtu.be/l5RxH7pZVpw](https://youtu.be/l5RxH7pZVpw)
* Accelerate UI Development - Seamless Designer-Developer Collaboration with Web Tools - Ryan Wardell - [https://youtu.be/HXwjKm5Vu08](https://youtu.be/HXwjKm5Vu08)

2026-04-06 - 2026-04-12

* Hacking Handhelds for Creative Audio - Building Music Applications for the New Nintendo 3DS - Leonardo Foletto - [https://youtu.be/x-9lDvfAKd0](https://youtu.be/x-9lDvfAKd0)
* Helicopter View of Audio ML - Martin Swanholm - [https://youtu.be/TxQ4htrS2Po](https://youtu.be/TxQ4htrS2Po)
* PhilTorch: Accelerating Automatic Differentiation of Digital Filters In PyTorch - How to evaluate differentiable filters 1000 times faster in PyTorch. - Chin-Yun Yu - [https://youtu.be/Br5QhU\_08Po](https://youtu.be/Br5QhU_08Po)

2026-03-30 - 2026-04-05

* Creating from Legacy Code - A Case Study of Porting Legacy Code from Exponential Audio - Harriet Drury - [https://youtu.be/rjafXQwCz4w](https://youtu.be/rjafXQwCz4w)
* Designing an Audio Live Coding Environment - Corné Driesprong - [https://youtu.be/Jw8x2uMgFnc](https://youtu.be/Jw8x2uMgFnc)
* How To Successfully Develop Software Products - Olivier Petit & Alistair Barker - [https://youtu.be/vymlQFopbp0](https://youtu.be/vymlQFopbp0)

**Meeting C++**

2026-04-06 - 2026-04-12

* The Misra C++:2023 Guidelines - Richard Kaiser - [https://www.youtube.com/watch?v=TRz-WXgADuI](https://www.youtube.com/watch?v=TRz-WXgADuI)
* Applied modern C++: efficient expression evaluator with type erasure - Olivia Quinet - [https://www.youtube.com/watch?v=66WtE\_7wE1c](https://www.youtube.com/watch?v=66WtE_7wE1c)

2026-03-30 - 2026-04-05

* Building C++: It Doesn't Have to be Painful! - Nicole Mazzuca - Meeting C++ 2025 - [https://www.youtube.com/watch?v=ExSlx0vBMXo](https://www.youtube.com/watch?v=ExSlx0vBMXo)
* int != safe && int != ℤ - Peter Sommerlad - Meeting C++ 2025 - [https://www.youtube.com/watch?v=YyNE6Y2mv1o&pp=0gcJCdkKAYcqIYzv](https://www.youtube.com/watch?v=YyNE6Y2mv1o&pp=0gcJCdkKAYcqIYzv)

**using std::cpp**

2026-03-30 - 2026-04-05

* Learning C++ as a newcomer - Berill Farkas - [https://www.youtube.com/watch?v=nsMl54Dvm24](https://www.youtube.com/watch?v=nsMl54Dvm24)
* C++29 Library Preview: A Practitioner's Guide - Jeff Garland - [https://www.youtube.com/watch?v=NqpLxkatkt4](https://www.youtube.com/watch?v=NqpLxkatkt4)
* High frequency trading optimizations at Pinely - Mikhail Matrosov - [https://www.youtube.com/watch?v=qDhVrxqb40c](https://www.youtube.com/watch?v=qDhVrxqb40c)
* Don’t be negative! - Fran Buontempo - [https://www.youtube.com/watch?v=jqLEFPDXZ-o](https://www.youtube.com/watch?v=jqLEFPDXZ-o)
* Cross-Platform C++ AI Development with Conan, CMake, and CUDA - Luis Caro - [https://www.youtube.com/watch?v=jnKeUE2C8\_I](https://www.youtube.com/watch?v=jnKeUE2C8_I)
* Building a C++23 tool-chain for embedded systems - José Gómez López - [https://www.youtube.com/watch?v=AlNnd0QARS8](https://www.youtube.com/watch?v=AlNnd0QARS8)
* Space Invaders: The Spaceship Operator is upon us - Lieven de Cock - [https://www.youtube.com/watch?v=9niOq1kr61Y](https://www.youtube.com/watch?v=9niOq1kr61Y)
* Same C++, but quicker to the finish line - Daniela Engert - [https://www.youtube.com/watch?v=9ijIocn\_xzo](https://www.youtube.com/watch?v=9ijIocn_xzo)
* Having Fun With C++ Coroutines - Michael Hava - [https://www.youtube.com/watch?v=F9ffx7HvyrM](https://www.youtube.com/watch?v=F9ffx7HvyrM)
* The road to 'import boost': a library developer's journey into C++20 modules - Rubén Pérez Hidalgo - [https://www.youtube.com/watch?v=hD9JHkt7e2Y](https://www.youtube.com/watch?v=hD9JHkt7e2Y)
* C++20 and beyond: improving embedded systems performance - Alfredo Muela - [https://www.youtube.com/watch?v=SxrC-9g6G\_o](https://www.youtube.com/watch?v=SxrC-9g6G_o)
* Supercharge Your C++ Project: 10 Tips to Elevate from Repo to Professional Product - Mateusz Pusz - [https://www.youtube.com/watch?v=DWXlyOd\_z88](https://www.youtube.com/watch?v=DWXlyOd_z88)
* Compiler as a Service: C++ Goes Live - Aaron Jomy, Vipul Cariappa - [https://www.youtube.com/watch?v=jMO5Usa26cg](https://www.youtube.com/watch?v=jMO5Usa26cg)
* The CUDA C++ Developer's Toolbox - Bernhard Manfred Gruber - [https://www.youtube.com/watch?v=MNwGvqX4KH0](https://www.youtube.com/watch?v=MNwGvqX4KH0)
* C++ Committee Q&A at using std::cpp 2026 - [https://www.youtube.com/watch?v=iD5Bj7UyAQI](https://www.youtube.com/watch?v=iD5Bj7UyAQI)
* The Mathematical Mind of a C++ Programmer - Joaquín M López - [https://www.youtube.com/watch?v=9g4K-oNw1SE](https://www.youtube.com/watch?v=9g4K-oNw1SE)
* C++ Profiles: What, Why, and How - Gabriel Dos Reis - [https://www.youtube.com/watch?v=Z6Nkb1sCogI](https://www.youtube.com/watch?v=Z6Nkb1sCogI)
* Nanoseconds, Nine Nines and Structured Concurrency - Juan Alday - [https://www.youtube.com/watch?v=zyhWzoE3Y2c](https://www.youtube.com/watch?v=zyhWzoE3Y2c)
* Fantastic continuations and how to find them - Gonzalo Juarez - [https://www.youtube.com/watch?v=\_0xRMXA83z0](https://www.youtube.com/watch?v=_0xRMXA83z0)
* You 'throw'; I'll 'try' to 'catch' it - Javier López Gómez - [https://www.youtube.com/watch?v=VwloPRtTGkU](https://www.youtube.com/watch?v=VwloPRtTGkU)
* Squaring the Circle: value-oriented design in an object-oriented system - Juanpe Bolívar - [https://www.youtube.com/watch?v=DWthcNoRVew](https://www.youtube.com/watch?v=DWthcNoRVew)
* Concept-based Generic Programming - Bjarne Stroustrup - [https://www.youtube.com/watch?v=V0\_Q0H-PQYs](https://www.youtube.com/watch?v=V0_Q0H-PQYs)

https://redd.it/1sqsqb5
@r_cpp
DMA is Dead: Zero-Copy Audio via Capability-Based Shared Memory in a C++26 Microkernel

Authors: Rajeshkumar Venugopal, Third Buyer Advisory, Claude 4.6

Description: A C++26 microkernel inspired by QNX Neutrino demonstrates that DMA is unnecessary for real-time audio transfer. Four user-space processes share a single 3840-byte stereo PCM buffer through capability-based memory grants — zero memory copies, zero DMA, zero kernel-mode drivers. The producer writes interleaved 48kHz/16-bit stereo samples, grants read-only capabilities to an audio driver, a VU meter (sub-region: left channel only), and a waveform visualizer (user-space read-back). IPC transfers only a 4-byte capability ID. The driver reads PCM data directly from the producer's buffer via std::span. Revoke cascades: munmap kills all grants. IPC round-trip latency: 1.31 microseconds (Apple M3, -O2), faster than QNX Neutrino on 600MHz ARM (\~2us) and FreeRTOS context switch on Cortex-M4 (\~7us). 14 invariants formally verified by Z3 (SMT solver): 9 IPC state machine proofs + 5 capability grant proofs. No counterexample exists for any invariant. 67 Catch2 tests, 252 assertions, all passing. BSD 2-Clause licensed. No Java, no Alloy, no DMA.

Keywords: microkernel, QNX, Neutrino, C++26, zero-copy, shared memory, capability-based security, DMA-free, real-time audio, IPC, message passing, send/receive/reply, priority inversion, formal verification, Z3, SMT, F#, alloy-fsx, Catch2, resource manager, PPS, publish-subscribe, stereo PCM, RTOS, embedded systems, BSD license

License: BSD-2-Clause

Repository: https://github.com/chanakyan/qnx-micro

Related: https://github.com/chanakyan/alloy-fsx https://github.com/chanakyan/mars_pathfinder

https://redd.it/1sr5ex1
@r_cpp
Writing gRPC Clients and Servers with C++20 Coroutines (Part 1)

# The Callback Hell of C++ Async Programming

If you have ever written network programs in C++, you are probably very familiar with this scenario —

You need to send request B after request A completes, then send request C after B completes. So you end up writing code like this:

```cpp
client.send(requestA, [](Response a) {
    client.send(requestB(a), [](Response b) {
        client.send(requestC(b), [](Response c) {
            // Finally made it here...
        });
    });
});
```

This is the infamous "callback hell". The deeper the nesting, the harder the code is to read, and error handling becomes a nightmare — each layer of callback needs to handle errors independently. One missed callback call in a branch, and the entire request silently disappears without a trace.

The callback pattern also introduces another thorny problem: lifetime management. When a callback executes, every captured object must still be alive, yet their lifetimes are often hard to predict. You end up littering the code with `shared_ptr` and `shared_from_this()` just to keep things alive — ugly and unnecessarily expensive.

The most maddening part is cancellation. Imagine a user closes a page and you want to cancel an in-progress multi-step operation, but every step in the callback chain may have already started. How do you propagate the cancel signal? How do you ensure all resources are properly cleaned up? This is nearly an unsolvable problem.

If you have worked with async programming in `Go` or `Python`, you have probably envied their concise coroutine syntax — expressing asynchronous logic with synchronous-looking code. The good news is that C++20 introduced coroutines, and C++ programmers finally have a proper tool for the job.

# The Async World of gRPC

Before diving in, let's take a look at what gRPC's async model looks like and what new challenges it brings.

The C++ implementation of gRPC provides two async APIs: the classic `CompletionQueue`-based model and the Reactor-pattern callback model. Clients typically use the Reactor pattern, inheriting the various `ClientXxxReactor` classes and implementing callbacks; servers typically use the `CompletionQueue` model, registering operations with the queue and polling for results.

For a client-side server-streaming RPC, you need to inherit `grpc::ClientReadReactor<T>` and implement several callbacks:

```cpp
class MyReader : public grpc::ClientReadReactor<Response> {
public:
    void OnReadDone(bool ok) override {
        if (ok) {
            // Process the received data, then call StartRead again
            StartRead(&response_);
        }
    }

    void OnDone(const grpc::Status &status) override {
        // RPC finished
    }

private:
    Response response_;
};
```

The server side is even more complex. With the `CompletionQueue`-based model, you need to manually maintain a state machine:

```cpp
enum class State { WAIT, READ, WRITE, FINISH };

class Handler {
    void Proceed() {
        switch (state_) {
        case State::WAIT:
            // Register the next request, transition to READ state
            break;
        case State::READ:
            // Read data, decide whether to continue reading or writing
            break;
        // ...
        }
    }
};
```

This state machine must be maintained by hand. A single wrong state transition can cause data loss or a crash. Every time you need to implement this logic for a new RPC method, you have to go through the same ordeal.

Worse still, both approaches make it very hard to support cancellation cleanly. In the Reactor model, cancellation means calling `context->TryCancel()` and waiting for the `OnDone` callback; in the `CompletionQueue` model, state transitions after cancellation require extra care.

The root of the problem is: **gRPC's async API is designed around callbacks, while we want to organize code around the logical flow.**

# Promise and Future: Bridging Callbacks and Coroutines

This is exactly where [asyncio](https://github.com/Hackerl/asyncio) shines.

`asyncio` is an async framework built on C++20 coroutines and the `libuv` event loop. Its core design idea is simple: use `Promise` and `Future` as a bridge to connect the callback world and the coroutine world.

A `Promise` is an object that can be "resolved" or "rejected". `Future` is its other face — you can `co_await` a `Future`, and the coroutine will automatically resume when the `Promise` is resolved.

Take the `sleep` function as an example to see how `asyncio` does it:

```cpp
asyncio::task::Task<void> asyncio::sleep(std::chrono::milliseconds ms) {
    Promise<void> promise;

    uv_timer_start(timer, [](uv_timer_t *handle) {
        // Timer fires, resolve the promise, coroutine resumes
        static_cast<Promise<void> *>(handle->data)->resolve();
    }, ms.count(), 0);

    co_return co_await promise.getFuture();
}
```

Apply the same treatment to gRPC callbacks: `resolve` or `reject` a `Promise` inside the callback, and `co_await` the corresponding `Future` on the coroutine side. This lets you write what would otherwise require nested callbacks as straight-line code.

# Cancellation Support

Coroutine cancellation is also implemented through the `Promise` mechanism. `asyncio` provides a `Cancellable` wrapper that takes a `Future` and a cancellation function:

```cpp
co_return co_await asyncio::task::Cancellable{
    promise.getFuture(),
    [&]() -> std::expected<void, std::error_code> {
        // Perform the actual cancellation here
        context->TryCancel();
        return {};
    }
};
```

When external code calls `task.cancel()`, `asyncio` walks the task chain to find the `Cancellable` currently being awaited and executes its cancellation function. For gRPC, the cancellation function simply calls `context->TryCancel()`. gRPC then handles the cleanup, triggers the `OnDone` callback, the `Promise` is eventually rejected, and the coroutine ends with a cancellation error.

With this mechanism, we can cleanly add cancellation support to any gRPC async operation without introducing any special state variables into business code.

# Defining the Sample Service

Before writing code, let's define a sample service covering all four RPC types:

```protobuf
syntax = "proto3";

package sample;

service SampleService {
  // Unary RPC: simplest request-response pattern
  rpc Echo(EchoRequest) returns (EchoResponse);

  // Server streaming: server continuously pushes data to the client
  rpc GetNumbers(GetNumbersRequest) returns (stream Number);

  // Client streaming: client continuously sends data, server returns one result
  rpc Sum(stream Number) returns (SumResponse);

  // Bidirectional streaming: both sides can send and receive continuously
  rpc Chat(stream ChatMessage) returns (stream ChatMessage);
}

message EchoRequest { string message = 1; }
message EchoResponse { string message = 1; int64 timestamp = 2; }
message GetNumbersRequest { int32 value = 1; int32 count = 2; }
message Number { int32 value = 1; }
message SumResponse { int32 total = 1; int32 count = 2; }
message ChatMessage { string user = 1; string content = 2; int64 timestamp = 3; }
```

Each RPC pattern has its use: `Echo` is the canonical RPC, `GetNumbers` lets the server stream a batch of data, `Sum` lets the client stream data and get an aggregated result, and `Chat` is the most complex — a bidirectional stream where either side can send at any time.

Let's implement them one by one, starting with the simplest: client Unary RPC.

# Client Unary RPC

Unary RPC is the most straightforward: send one request, receive one response. The corresponding gRPC Reactor method signature is:

```cpp
void Stub::async::Echo(
    grpc::ClientContext *,
    const EchoRequest *,
    EchoResponse *,
    std::function<void(grpc::Status)>
);
```

gRPC calls the callback we pass in when the RPC completes. Bridging it with `Promise` is natural:

```cpp
asyncio::task::Task<sample::EchoResponse> echo(
    const sample::EchoRequest &request,
    std::shared_ptr<grpc::ClientContext> context
) {
    sample::EchoResponse response;
    asyncio::Promise<void, std::string> promise;

    stub->async()->Echo(
        context.get(), &request, &response,
        [&](const grpc::Status &status) {
            if (!status.ok()) {
                promise.reject(status.error_message());
                return;
            }

            promise.resolve();
        }
    );

    if (const auto result = co_await asyncio::task::Cancellable{
        promise.getFuture(),
        [&]() -> std::expected<void, std::error_code> {
            context->TryCancel();
            return {};
        }
    }; !result)
        throw co_await asyncio::error::StacktraceError<std::runtime_error>::make(result.error());

    co_return response;
}
```

The core logic is just a few lines: construct a `Promise`, decide `resolve` or `reject` based on `Status` inside the callback, then `co_await` the `Promise`'s `Future`.

Cancellation support is woven in naturally — wrap the `Future` with `Cancellable`, call `context->TryCancel()` on cancellation. From the caller's perspective, this function is indistinguishable from a normal synchronous function, yet it never blocks the event loop.

# Client Streaming RPCs

Streaming RPCs are more complex than Unary, because data is transmitted one piece at a time and each read or write is an independent async operation.

# Server Streaming (Reader)

For a server-streaming RPC, the client needs to inherit `grpc::ClientReadReactor<T>`. Each call to `StartRead` causes gRPC to invoke `OnReadDone` when data is ready.

```cpp
template<typename T>
class Reader final : public grpc::ClientReadReactor<T> {
public:
    explicit Reader(std::shared_ptr<grpc::ClientContext> context) : mContext{std::move(context)} {
    }

    void OnDone(const grpc::Status &status) override {
        if (!status.ok()) {
            mDonePromise.reject(status.error_message());
            return;
        }

        mDonePromise.resolve();
    }

    void OnReadDone(const bool ok) override {
        // Resolve the per-read Promise when each read completes
        std::exchange(mReadPromise, std::nullopt)->resolve(ok);
    }

    asyncio::task::Task<std::optional<T>> read() {
        T element;

        asyncio::Promise<bool> promise;
        auto future = promise.getFuture();

        mReadPromise.emplace(std::move(promise));
        grpc::ClientReadReactor<T>::StartRead(&element);

        // ok == false means the stream has ended
        if (!co_await asyncio::task::Cancellable{
            std::move(future),
            [this]() -> std::expected<void, std::error_code> {
                mContext->TryCancel();
                return {};
            }
        })
            co_return std::nullopt;

        co_return element;
    }

    asyncio::task::Task<void> done() {
        if (const auto result = co_await mDonePromise.getFuture(); !result)
            throw co_await asyncio::error::StacktraceError<std::runtime_error>::make(result.error());
    }

private:
    std::shared_ptr<grpc::ClientContext> mContext;
    asyncio::Promise<void, std::string> mDonePromise;
    std::optional<asyncio::Promise<bool>> mReadPromise;
};
```

A few details worth noting:

* `mReadPromise` is wrapped in `std::optional` because each call to `read()` constructs a new `Promise`.
* `std::exchange` is used in `OnReadDone` to take ownership of the `Promise`, signaling that a single read has completed.
* `done()` waits for the entire stream to finish; it holds a separate `mDonePromise` distinct from the per-read promise.

Using `Reader` to consume a server-streaming RPC:

```cpp
asyncio::task::Task<void> getNumbers(
    const sample::GetNumbersRequest &request,
    std::shared_ptr<grpc::ClientContext> context
) {
    Reader<sample::Number> reader{context};
    stub->async()->GetNumbers(context.get(), &request, &reader);

    reader.AddHold();
    reader.StartCall();

    while (true) {
        auto number = co_await reader.read();

        if (!number)
            break; // stream ended

        fmt::print("Received: {}\n", number->value());
    }

    reader.RemoveHold();
    co_await reader.done(); // wait for the stream to fully finish and check for errors
}
```

> `AddHold` and `RemoveHold` are gRPC Reactor lifecycle control mechanisms that prevent the Reactor from being destroyed while we hold it.

# Client Streaming (Writer)

Client-streaming RPC is similar to server-streaming but in the opposite direction. Inherit `grpc::ClientWriteReactor<T>`; after each `StartWrite` completes, `OnWriteDone` is called back:

```cpp
template<typename T>
class Writer final : public grpc::ClientWriteReactor<T> {
public:
    void OnWriteDone(const bool ok) override {
        std::exchange(mWritePromise, std::nullopt)->resolve(ok);
    }

    void OnWritesDoneDone(const bool ok) override {
        std::exchange(mWriteDonePromise, std::nullopt)->resolve(ok);
    }

    asyncio::task::Task<bool> write(const T element) {
        asyncio::Promise<bool> promise;
        auto future = promise.getFuture();

        mWritePromise.emplace(std::move(promise));
        grpc::ClientWriteReactor<T>::StartWrite(&element);

        co_return co_await asyncio::task::Cancellable{
            std::move(future),
            [this]() -> std::expected<void, std::error_code> {
                mContext->TryCancel();
                return {};
            }
        };
    }

    asyncio::task::Task<bool> writeDone() {
        asyncio::Promise<bool> promise;
        auto future = promise.getFuture();

        mWriteDonePromise.emplace(std::move(promise));
        grpc::ClientWriteReactor<T>::StartWritesDone();

        co_return co_await asyncio::task::Cancellable{
            std::move(future),
            [this]() -> std::expected<void, std::error_code> {
                mContext->TryCancel();
                return {};
            }
        };
    }

    // OnDone and done() are similar to Reader, omitted for brevity

private:
    std::shared_ptr<grpc::ClientContext> mContext;
    asyncio::Promise<void, std::string> mDonePromise;
    std::optional<asyncio::Promise<bool>> mWritePromise;
    std::optional<asyncio::Promise<bool>> mWriteDonePromise;
};
```

`writeDone()` corresponds to gRPC's `StartWritesDone`, which signals to the server that the client has finished writing — equivalent to sending EOF on the stream.

# Client Bidirectional Streaming

Bidirectional streaming is the most complex of the four patterns: the client simultaneously has both read and write capabilities. Fortunately, all that is needed is to merge the `Reader` and `Writer` logic into a single `Stream` class:

```cpp
template<typename RequestElement, typename ResponseElement>
class Stream final : public grpc::ClientBidiReactor<RequestElement, ResponseElement> {
public:
    // OnReadDone, OnWriteDone, OnWritesDoneDone, OnDone are the same as before

    asyncio::task::Task<std::optional<ResponseElement>> read() {
        ResponseElement element;

        asyncio::Promise<bool> promise;
        auto future = promise.getFuture();

        mReadPromise.emplace(std::move(promise));
        grpc::ClientBidiReactor<RequestElement, ResponseElement>::StartRead(&element);

        if (!co_await asyncio::task::Cancellable{
            std::move(future),
            [this]() -> std::expected<void, std::error_code> {
                mContext->TryCancel();
                return {};
            }
        })
            co_return std::nullopt;

        co_return element;
    }

    asyncio::task::Task<bool> write(const RequestElement element) { /* same as Writer */ }
    asyncio::task::Task<bool> writeDone() { /* same as Writer */ }
    asyncio::task::Task<void> done() { /* same as Reader */ }

private:
    std::shared_ptr<grpc::ClientContext> mContext;
    asyncio::Promise<void, std::string> mDonePromise;
    std::optional<asyncio::Promise<bool>> mReadPromise;
    std::optional<asyncio::Promise<bool>> mWritePromise;
    std::optional<asyncio::Promise<bool>> mWriteDonePromise;
};
```

When using a bidirectional stream, reading and writing are two independent tasks that run concurrently:

```cpp
asyncio::task::Task<void> chat(std::shared_ptr<grpc::ClientContext> context) {
    Stream<sample::ChatMessage, sample::ChatMessage> stream{context};
    stub->async()->Chat(context.get(), &stream);

    stream.AddHold();
    stream.StartCall();

    co_await all(
        // Read task
        asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
            while (const auto msg = co_await stream.read()) {
                fmt::print("Received: {}\n", msg->content());
            }
        }),
        // Write task
        asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
            sample::ChatMessage msg;
            msg.set_content("Hello!");
            co_await stream.write(msg);
            co_await stream.writeDone();
        })
    );

    stream.RemoveHold();
    co_await stream.done();
}
```

`all()` waits for both the read and write subtasks simultaneously. If either fails, it cancels the other and returns the error. This is the task-tree mechanism of asyncio in action — structured concurrency.

# Wrapping GenericClient

The three streaming wrappers above all follow the same pattern, so it is time to unify them with templates. `GenericClient` provides one `call` overload for each of the four RPC types:

template<typename T>
class GenericClient {
using Stub = T::Stub;
using AsyncStub = class Stub::async;

public:
explicit GenericClient(std::unique_ptr<Stub> stub) : mStub{std::move(stub)} {
}

protected:
// 1. Unary RPC
template<typename Request, typename Response>
asyncio::task::Task<Response>
call(
void (AsyncStub::*method)(grpc::ClientContext *, const Request *, Response *,
std::function<void(grpc::Status)>),
std::shared_ptr<grpc::ClientContext> context,
Request request
) {
Response response;
asyncio::Promise<void, std::string> promise;

std::invoke(
method,
mStub->async(),
context.get(),
&request,
&response,
[&](const grpc::Status &status) {
if (!status.ok()) {
promise.reject(status.error_message());
return;
}

promise.resolve();
}
);

if (const auto result = co_await asyncio::task::Cancellable{
promise.getFuture(),
[&]() -> std::expected<void, std::error_code> {
context->TryCancel();
return {};
}
}; !result)
throw co_await asyncio::error::StacktraceError<std::runtime_error>::make(result.error());

co_return response;
}

// 2. Server streaming: push data into a channel via Sender
template<typename Request, typename Element>
asyncio::task::Task<void>
call(
void (AsyncStub::*method)(grpc::ClientContext *, const Request *,
grpc::ClientReadReactor<Element> *),
std::shared_ptr<grpc::ClientContext> context,
Request request,
asyncio::Sender<Element> sender
) {
Reader<Element> reader{context};
std::invoke(method, mStub->async(), context.get(), &request, &reader);

reader.AddHold();
reader.StartCall();

const auto result = co_await asyncio::error::capture(
asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
while (true) {
auto element = co_await reader.read();

if (!element)
break;

co_await asyncio::error::guard(sender.send(*std::move(element)));
}
})
);

reader.RemoveHold();
co_await reader.done();

if (!result)
std::rethrow_exception(result.error());
}

// 3. Client streaming: read data from Receiver and write into the stream
template<typename Response, typename Element>
asyncio::task::Task<Response>
call(
void (AsyncStub::*method)(grpc::ClientContext *, Response *,
grpc::ClientWriteReactor<Element> *),
std::shared_ptr<grpc::ClientContext> context,
asyncio::Receiver<Element> receiver
) {
Response response;
Writer<Element> writer{context};
std::invoke(method, mStub->async(), context.get(), &response, &writer);

writer.AddHold();
writer.StartCall();

const auto result = co_await asyncio::error::capture(
asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
while (true) {
auto element = co_await receiver.receive();

if (!element) {
if (!co_await writer.writeDone())
fmt::print(stderr, "Write done failed\n");

if (element.error() != asyncio::ReceiveError::Disconnected)
throw co_await asyncio::error::StacktraceError<std::system_error>::make(element.error());

break;
}

co_await writer.write(*std::move(element));
}
})
);

writer.RemoveHold();
co_await writer.done();

if (!result)
std::rethrow_exception(result.error());

co_return response;
}

    // 4. Bidirectional streaming: hold both a Receiver (input) and a Sender (output)
    template<typename RequestElement, typename ResponseElement>
    asyncio::task::Task<void>
    call(
        void (AsyncStub::*method)(grpc::ClientContext *,
                                  grpc::ClientBidiReactor<RequestElement, ResponseElement> *),
        std::shared_ptr<grpc::ClientContext> context,
        asyncio::Receiver<RequestElement> receiver,
        asyncio::Sender<ResponseElement> sender
    ) {
        Stream<RequestElement, ResponseElement> stream{context};
        std::invoke(method, mStub->async(), context.get(), &stream);

        stream.AddHold();
        stream.StartCall();

        const auto result = co_await asyncio::error::capture(
            all(
                asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
                    while (true) {
                        auto element = co_await stream.read();

                        if (!element)
                            break;

                        if (const auto res = co_await sender.send(*std::move(element)); !res) {
                            context->TryCancel();
                            throw co_await asyncio::error::StacktraceError<std::system_error>::make(res.error());
                        }
                    }
                }),
                asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
                    while (true) {
                        auto element = co_await receiver.receive();

                        if (!element) {
                            if (!co_await stream.writeDone())
                                fmt::print(stderr, "Write done failed\n");

                            if (element.error() != asyncio::ReceiveError::Disconnected)
                                throw co_await asyncio::error::StacktraceError<std::system_error>::make(element.error());

                            break;
                        }

                        co_await stream.write(*std::move(element));
                    }
                })
            )
        );

        stream.RemoveHold();
        co_await stream.done();

        if (!result)
            std::rethrow_exception(result.error());
    }

private:
    std::unique_ptr<Stub> mStub;
};

The four overloads are distinguished automatically by their parameter types: the compiler selects the correct one from the type of the method pointer passed in. This is classic compile-time dispatch via overload resolution: each call pattern maps to a distinct function signature, with zero runtime overhead.
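To see the mechanism in isolation, here is a minimal sketch of overload selection on member-function-pointer parameter types. `FakeStub` and its methods are made up for illustration; in the real client the overloads dispatch on gRPC's generated reactor types instead:

```cpp
#include <functional>
#include <string>

// Hypothetical stub with two entry points whose signatures differ only in
// the pointee type, mirroring the shape of gRPC's generated async stubs.
struct FakeStub {
    void unary(int *out) { *out = 1; }
    void streaming(double *out) { *out = 2.0; }
};

// Selected when the method pointer takes an int*.
std::string call(void (FakeStub::*method)(int *), FakeStub &stub) {
    int value = 0;
    std::invoke(method, stub, &value);
    return "unary";
}

// Selected when the method pointer takes a double*.
std::string call(void (FakeStub::*method)(double *), FakeStub &stub) {
    double value = 0;
    std::invoke(method, stub, &value);
    return "streaming";
}
```

Passing `&FakeStub::unary` or `&FakeStub::streaming` picks the matching overload at compile time, with no runtime branching.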

For streaming RPCs, `GenericClient` uses `asyncio::channel` as the data conduit: `Sender` writes data into the channel, `Receiver` reads from it. The channel's close signal (`Receiver` receiving a `Disconnected` error) naturally maps to stream EOF.
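The close-signal-as-EOF idea can be sketched with an ordinary blocking channel. This is an illustrative stand-in, not asyncio's implementation: `receive()` yields values until the channel is closed and drained, then reports end-of-stream:

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>

// Minimal blocking channel sketch: close() wakes all waiters, and receive()
// returns std::nullopt once the channel is closed and drained -- the same
// shape as mapping a channel close to stream EOF.
template<typename T>
class Channel {
public:
    void send(T value) {
        std::lock_guard lock{mMutex};
        mQueue.push(std::move(value));
        mCond.notify_one();
    }

    void close() {
        std::lock_guard lock{mMutex};
        mClosed = true;
        mCond.notify_all();
    }

    std::optional<T> receive() {
        std::unique_lock lock{mMutex};
        mCond.wait(lock, [&] { return !mQueue.empty() || mClosed; });

        if (mQueue.empty())
            return std::nullopt; // closed and drained: EOF

        T value = std::move(mQueue.front());
        mQueue.pop();
        return value;
    }

private:
    bool mClosed{false};
    std::mutex mMutex;
    std::queue<T> mQueue;
    std::condition_variable mCond;
};
```

asyncio's channel does the equivalent with coroutine suspension instead of blocking, surfacing the close as a `Disconnected` error rather than an empty optional.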

# Implementing the Concrete Client

With `GenericClient` in place, implementing a concrete service client is straightforward:

class Client final : public GenericClient<sample::SampleService> {
public:
    using GenericClient::GenericClient;

    static Client make(const std::string &address) {
        return Client{sample::SampleService::NewStub(grpc::CreateChannel(address, grpc::InsecureChannelCredentials()))};
    }

    asyncio::task::Task<sample::EchoResponse>
    echo(
        sample::EchoRequest request,
        std::unique_ptr<grpc::ClientContext> context = std::make_unique<grpc::ClientContext>()
    ) {
        co_return co_await call(&sample::SampleService::Stub::async::Echo, std::move(context), std::move(request));
    }

    asyncio::task::Task<void>
    getNumbers(
        sample::GetNumbersRequest request,
        asyncio::Sender<sample::Number> sender,
        std::unique_ptr<grpc::ClientContext> context = std::make_unique<grpc::ClientContext>()
    ) {
        co_await call(
            &sample::SampleService::Stub::async::GetNumbers,
            std::move(context),
            std::move(request),
            std::move(sender)
        );
    }

    asyncio::task::Task<sample::SumResponse> sum(
        asyncio::Receiver<sample::Number> receiver,
        std::unique_ptr<grpc::ClientContext> context = std::make_unique<grpc::ClientContext>()
    ) {
        co_return co_await call(&sample::SampleService::Stub::async::Sum, std::move(context), std::move(receiver));
    }

    asyncio::task::Task<void>
    chat(
        asyncio::Receiver<sample::ChatMessage> receiver,
        asyncio::Sender<sample::ChatMessage> sender,
        std::unique_ptr<grpc::ClientContext> context = std::make_unique<grpc::ClientContext>()
    ) {
        co_return co_await call(
            &sample::SampleService::Stub::async::Chat,
            std::move(context),
            std::move(receiver),
            std::move(sender)
        );
    }
};

Here is how to call all four RPC types concurrently, showcasing asyncio's concurrent programming model:

asyncio::task::Task<void> asyncMain(const int argc, char *argv[]) {
    auto client = Client::make("localhost:50051");

    co_await all(
        // Unary RPC
        asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
            sample::EchoRequest req;
            req.set_message("Hello gRPC!");
            const auto resp = co_await client.echo(req);
            fmt::print("Echo: {}\n", resp.message());
        }),

        // Server streaming + client streaming, connected via a channel
        asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
            sample::GetNumbersRequest req;
            req.set_value(1);
            req.set_count(5);

            auto [sender, receiver] = asyncio::channel<sample::Number>();

            const auto result = co_await all(
                client.getNumbers(req, std::move(sender)),
                client.sum(std::move(receiver))
            );

            const auto &resp = std::get<sample::SumResponse>(result);
            fmt::print("Sum: {}, count: {}\n", resp.total(), resp.count());
        }),

        // Bidirectional streaming
        asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
            auto [inSender, inReceiver] = asyncio::channel<sample::ChatMessage>();
            auto [outSender, outReceiver] = asyncio::channel<sample::ChatMessage>();

            co_await all(
                client.chat(std::move(outReceiver), std::move(inSender)),
                asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
                    sample::ChatMessage msg;
                    msg.set_content("Hello server!");
                    co_await asyncio::error::guard(outSender.send(std::move(msg)));
                    outSender.close();
                }),
                asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
                    const auto msg = co_await asyncio::error::guard(inReceiver.receive());
                    fmt::print("Chat reply: {}\n", msg.content());
                })
            );
        })
    );
}

The channel-based pipeline connecting `getNumbers` and `sum` is especially worth noting: numbers produced by the server-streaming RPC flow directly through the channel into the client-streaming RPC. The whole pipeline looks like synchronous code, but is fully asynchronous underneath.
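As a thread-based analogy (plain threads and a shared queue standing in for coroutines and `asyncio::channel`; none of these names come from asyncio), the shape of that producer-to-consumer pipeline can be sketched like this, with `std::nullopt` playing the role of stream EOF:

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>

// A one-slot-per-item queue shared between producer and consumer.
struct Pipeline {
    std::mutex mutex;
    std::condition_variable cond;
    std::queue<std::optional<int>> queue;

    void push(std::optional<int> item) {
        std::lock_guard lock{mutex};
        queue.push(std::move(item));
        cond.notify_one();
    }

    std::optional<int> pop() {
        std::unique_lock lock{mutex};
        cond.wait(lock, [&] { return !queue.empty(); });
        auto item = std::move(queue.front());
        queue.pop();
        return item;
    }
};

// Producer streams 1..count (the getNumbers side), then signals EOF;
// the consumer sums until it sees the sentinel (the sum side).
int sumPipeline(const int count) {
    Pipeline pipeline;

    std::thread producer{[&] {
        for (int i = 1; i <= count; ++i)
            pipeline.push(i);

        pipeline.push(std::nullopt); // stream EOF
    }};

    int total = 0;

    while (auto item = pipeline.pop())
        total += *item;

    producer.join();
    return total;
}
```

In the real client the same structure runs on a single event loop, with suspension points instead of blocking waits.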

> Full source code: [GitHub](https://github.com/Hackerl/asyncio/tree/master/sample/grpc)
> Due to length limits, the server-side implementation is covered in Part 2.

https://redd.it/1ssjrsx
@r_cpp
Boost 1.91 Released: New Decimal Library, SIMD UUID, Redis Sentinel, C++26 Reflection in PFR
https://boost.org/releases/1.91.0/

https://redd.it/1ssrnzf
@r_cpp