scala-mlx — LLM inference on Apple Silicon from Scala Native (98.8% of Python mlx-lm speed)
I built a project that runs LLM inference directly on Apple GPU from Scala Native, using MLX via C/C++ FFI.
GitHub: https://github.com/ghstrider/scala-mlx
Requires macOS + Apple Silicon (M1/M2/M3/M4). Would love feedback from the Scala community
https://preview.redd.it/6a2ty73e4bng1.png?width=890&format=png&auto=webp&s=5933486d169781a63fe4b1e9e64f3777defd8766
Tested on a Mac Mini (M2 Pro, 16 GB).
https://redd.it/1rlwom8
@r_scala
Built a chat app with Scala 3 (ZIO + Laminar) for fun and for private-circle use
Hey everyone,
Over the past few months I’ve been building a chat app called "Happy Farm Messenger" - mainly for private circles like family and close friends.
The main motivation? I wanted data ownership and full control over the tools I use. No black boxes - just something I understand end to end. And to prove Scala can do everything.
It’s built entirely in Scala 3, with ZIO on the backend and Laminar on the frontend. I had an absolute blast working with the ZIO effect system and Laminar’s reactive model. Huge appreciation to the ZIO and Laminar teams - the Scala ecosystem is seriously powerful. I also baked in an in-house actor implementation just for fun. I could have used an existing actor library, but that would have required some additional plumbing, so I went ahead and built my own, with a broadcasting mechanism as well. (Unfortunately, zio-actors is no longer actively maintained.)
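For readers curious what a hand-rolled actor with broadcasting can boil down to, here is a minimal, illustrative sketch in plain Scala (a thread plus a blocking queue). This is NOT the app's implementation; the names `MiniActor` and `Broadcaster` are hypothetical, and a real ZIO version would use fibers and ZIO's `Queue` instead of raw threads.

```scala
import java.util.concurrent.LinkedBlockingQueue

// Hypothetical minimal actor: one mailbox, one worker thread.
final class MiniActor[M](handler: M => Unit):
  private val mailbox = new LinkedBlockingQueue[M]()
  private val worker = new Thread(() => while true do handler(mailbox.take()))
  worker.setDaemon(true)
  worker.start()
  // Fire-and-forget send, akka-style `!`
  def !(msg: M): Unit = mailbox.put(msg)

// Hypothetical broadcaster: delivers each message to every subscriber's mailbox.
final class Broadcaster[M]:
  private var subscribers = Vector.empty[MiniActor[M]]
  def subscribe(a: MiniActor[M]): Unit = synchronized { subscribers :+= a }
  def broadcast(msg: M): Unit = synchronized { subscribers.foreach(_ ! msg) }
```

A production version would also need supervision, backpressure, and shutdown; an effect system gives you those pieces much more safely, which is part of why rolling your own is such a good learning exercise.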
I also designed those pixel-based clickable buttons for the UI because… why not?
Right now I’m deploying it on Railway for demo/testing, it’s been really fun seeing real messages go through something I built from scratch.
If you're into Scala, ZIO, Laminar, or just curious about it, feel free to check out the repo on GitHub
Add Friend
Chat
https://redd.it/1rkui0d
@r_scala
The State of Scala 2025 is out. (Data from 400+ teams). Huge thanks to everyone here who contributed.
Hey r/scala,
A while back, we asked for your input on a global survey we were running alongside the Scala Days organizers to get a realistic picture of the ecosystem in 2025. Between this sub, conference attendees, and the broader community, we gathered insights from over 400 Scala developers and tech leads worldwide.
First off: thank you to everyone here who participated.
We know nobody here wants to hand over their email for a PDF, so we dropped a direct link to the report in the first comment. The official landing page is linked at the bottom if you want to drop it in your team's Slack or send it to your Tech Lead.
Instead of a marketing pitch, here’s the actual TL;DR of what you guys told us:
Scala 3 is actually happening: 92% of respondents are using it in some capacity, and 48% are fully in prod. Also, the whole "migration nightmare" seems a bit overblown: 64% rated the move from Scala 2 as moderate or somewhat easy.
The FP takeover is real: The pure functional stack is dominating. Cats is sitting at 56% usage, Http4s at 45%, and ZIO at 31%. Meanwhile, classic frameworks like Akka (26%) and Play (23%) are losing ground to the newer libraries.
The compiler isn't the main enemy anymore: While 35% still complain about build times, the actual #1 blockers are entirely organizational. Tied at 43%, the biggest headaches are "Convincing stakeholders" and "Difficulty finding/hiring Scala developers".
The Sentiment Paradox: 44% of respondents believe Scala's industry usage is declining, yet 90% say it's still the primary language in their company's codebase, mostly because of the strict type safety (79%) and FP features (89%).
Full report is here: https://scalac.io/state-of-scala-2025/ (Again, direct PDF link is in the comments).
We're really curious if the data matches your day-to-day right now:
1. Are you guys actually struggling to hire Scala devs, or are you just cross-training Java seniors because it's cheaper?
2. For the 8% still stuck entirely on Scala 2.x - what’s the actual hold-up? Is it Spark dependencies, or just zero budget for tech debt?
3. How are you currently "selling" Scala to your CTOs when they push back and ask for Go, Kotlin, or Python?
Let's discuss.
https://redd.it/1rkn2wb
@r_scala
dotty-cps-async 1.3.1
- The main change beyond the dependency updates is a workaround for https://github.com/scala/scala3/issues/17445: we finally figured out how to handle default field value copying in our own tree copier.
- Also added a few OptionAsyncShift methods: collect, fold, exists, forall, find — you can now use await inside these Option operations.
- URL, as usual: https://github.com/rssh/dotty-cps-async
https://redd.it/1rju045
@r_scala
Why does fs2.Pure get lost here, and how can I prevent it?
I have a stream extension like so:

```scala
extension [F[_]](stream: Stream[F, Char])
  def foo[A]: Stream[F, A] = Stream.empty
```

However, if I use it with a pure stream the `F[_]` type gets lost along the way, so this doesn't compile:

```scala
Stream.emits("abc").foo(_.map(identity)).toList
```

...but if I explicitly set the type again it will compile:

```scala
(Stream.emits("abc").foo(_.map(identity)): Stream[Pure, Char]).toList
```

I understand that `fs2.Pure` is a "special type" and gets handled differently, but I'm a bit lost as to why the extension can't maintain the type. What would I need to change so it works (if it can work at all)? And would anyone be able to expand on why this happens behind the scenes?
https://redd.it/1rjttde
@r_scala
“Hardening Scoverage support in Scala 3”, Scala Center work-in-progress report by Anatolii Kmetiuk
“Scoverage is the standard coverage tool for Scala, built directly into the Scala 3 compiler as a dedicated phase.”
https://www.scala-lang.org/blog/2026/03/11/scoverage.html
https://redd.it/1rr46jj
@r_scala
A small, dependency-free Scala 3 library for graph processing — feedback welcome!
I wrote [**encalmo/graphs**](https://github.com/encalmo/graphs) a few years ago — a lightweight, idiomatic Scala 3 library for building and querying directed and undirected graphs. No heavy framework dependencies, just clean graph primitives and battle-tested algorithms. We just shipped v0.11.0, and I figured it's a good time to introduce it more widely.
**Why I built it**
Graph problems pop up constantly — dependency resolution, scheduling, network analysis, and competitive programming puzzles. I wanted something small I could drop into a project without pulling in a full-blown framework. So I built it.
**What's included out of the box:**
* 🔍 **DFS & BFS** — with pre/post-visit hooks
* 🔃 **Topological Sort** — for DAGs
* 🔁 **Cycle Detection** — find all nodes involved in cycles
* 🔗 **Strongly Connected Components** — via Kosaraju's algorithm
* 📏 **Shortest Paths** — Dijkstra for weighted graphs
* ✂️ **Min-Cut** — Karger's randomized algorithm
* ↩️ **Graph Reversal**
**API feels natural:**

```scala
import org.encalmo.data.Graph

val g = Graph[Int](
  1 -> Seq(2, 3),
  2 -> Seq(3),
  3 -> Seq(4),
  4 -> Seq()
)

val (distance, path) = weightedGraph.findShortestPath(1, 5)
val sccs = Graph.findStronglyConnectedComponents(g)
val order = Graph.sortTopologically(g)
```
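To make the semantics concrete, here is what a topological sort computes, sketched in plain, self-contained Scala via Kahn's algorithm. This is an illustration of the algorithm only, not the encalmo/graphs implementation (`TopoDemo` and `topoSort` are hypothetical names):

```scala
import scala.collection.mutable

// Illustrative Kahn's algorithm: NOT the encalmo/graphs implementation.
object TopoDemo:
  def topoSort(adj: Map[Int, Seq[Int]]): List[Int] =
    val nodes = (adj.keys ++ adj.values.flatten).toSet
    // Count incoming edges per node
    val indegree = mutable.Map.from(nodes.map(_ -> 0))
    for (_, outs) <- adj; n <- outs do indegree(n) += 1
    // Start from nodes with no prerequisites
    val queue = mutable.Queue.from(nodes.filter(indegree(_) == 0).toList.sorted)
    val order = mutable.ListBuffer.empty[Int]
    while queue.nonEmpty do
      val n = queue.dequeue()
      order += n
      for m <- adj.getOrElse(n, Seq.empty) do
        indegree(m) -= 1
        if indegree(m) == 0 then queue.enqueue(m)
    order.toList // shorter than nodes.size if the graph has a cycle
```

On the example graph above this yields the dependency order `List(1, 2, 3, 4)`; a cycle leaves some nodes out of the result, which is one way a library can detect that the input is not a DAG.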
Graphs are immutable by default, but you can get a mutable copy, mutate freely, then freeze back:

```scala
val m = g.mutableCopy
m.addEdge(3, 4)
m.removeNode(1)
val frozen = m.freeze
```
You can also load graphs from edge-list or adjacency-list files, which is handy for algorithm practice (e.g., Stanford MOOC datasets).
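As a sketch of what an edge-list loader boils down to, here is a hypothetical parser in plain Scala ("src dst" per line into an adjacency map). This is NOT the library's actual loader API, just an illustration:

```scala
// Hypothetical edge-list parser: "src dst" per line -> adjacency map.
// Plain-Scala sketch, NOT the encalmo/graphs loader.
object EdgeListDemo:
  def parseEdgeList(lines: Seq[String]): Map[Int, Seq[Int]] =
    lines
      .map(_.trim)
      .filter(_.nonEmpty)            // skip blank lines
      .map(_.split("\\s+"))
      .collect { case Array(a, b) => a.toInt -> b.toInt }
      .groupMap(_._1)(_._2)          // group edges by source node
```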
**Getting started (Scala CLI):**

```scala
//> using dep org.encalmo::graphs:0.11.0
```

**Or SBT:**

```scala
libraryDependencies += "org.encalmo" %% "graphs" % "0.11.0"
```
Requires Scala 3.7+ and JVM 21+.
[https://github.com/encalmo/graphs](https://github.com/encalmo/graphs)
https://redd.it/1rqdbcj
@r_scala
type-tree-visitor — a library to make writing Scala 3 macros a bit less painful
Hello, I want to share a library that emerged from the real pain of macro-writing: **type-tree-visitor**.

**The problem**

Writing Scala 3 macros that derive code from type structures is genuinely hard. Whether you're using inline/mirrors, quotes/splices, or rolling something custom, you end up re-implementing the same traversal logic over and over — handling case classes, sealed traits, enums, tuples, named tuples, Java records, opaque types, collections... It's a lot of boilerplate before you even get to the interesting part of your macro.

**What this library gives you**

type-tree-visitor provides two core building blocks:

* `TypeTreeIterator` — a fully implemented iterator that walks a type tree recursively, with support for a very wide set of types out of the box (Scala case classes, sealed traits, collections, arrays, enums, tuples, named tuples, selectables, opaque types, primitives, Java enums, records, maps, iterables, and more)
* `TypeTreeVisitor` — an open trait you implement to do the actual code-generation work at each node

**What's it built on?**

The pattern is the classic Visitor pattern, which means the iterator logic and your derivation logic are cleanly separated. You focus on what to do at each node type — the library handles how to get there.

It also ships with a `TypeTreeTermlessIterator` / `TypeTreeTermlessVisitor` pair for cases where you don't need access to the actual runtime value (faster + simpler).

The library has first-class support for typeclass derivation:

* autonomous — derives MyTypeclass on your case class, fully macro-driven
* semi-autonomous — per-type given instances calling your macro internally
* summoning existing instances within the macro call — if a typeclass instance already exists for a type in the tree, it is used instead of descending further (with circular-derivation protection via `summonTypeclassInstance = false` on the top-level call)

Instead of raw `Quotes`, I use `StatementsCache` from [macro-utils](https://github.com/encalmo/macro-utils), which handles nested scopes, symbol caching, and optionally chunks repeated code into methods — stuff that bites you hard once your macro gets non-trivial.

This library was extracted from [xmlwriter](https://github.com/encalmo/xmlwriter), a fast zero-overhead XML serialization macro. That's the "battle-tested in production" origin story.

Three smaller demos are bundled directly in the repo:

* `StructuralRuntimeHashcode` — computes a hashcode from the shape of a value's runtime type structure (not the values themselves), useful for checking whether two instances have the same "form"
* `ValuePathsList` — enumerates all value paths available in an instance of a type
* `InternalStructureHashcode` — computes a static hashcode of the nested type structure, ideal for comparing whether two types are actually the same

Happy to hear your feedback.
https://redd.it/1rpd5by
@r_scala
tree-sitter-scala 0.24.1 and 0.25.0 released
https://eed3si9n.com/tree-sitter-scala-0.25.0
https://redd.it/1rrj8ym
@r_scala
Generating Direct-Style Scala 3 Applications
https://virtuslab.com/blog/scala/generating-direct-style-scala-3-applications
https://redd.it/1ry4sio
@r_scala
I made a tiny CLI tool written in Scala 3 + Scala Native + MUSL
https://github.com/windymelt/comport
https://redd.it/1rxs7vh
@r_scala
The Scala Workshop 2026 (ECOOP) - Call for Talks (1–2 pages, deadline Mar 23)
The Scala Workshop 2026 will take place on Monday 29 June 2026 in Brussels, Belgium, co-located with ECOOP.
We’re looking for short talk proposals (1–2 pages). Topics include the Scala programming language and its foundations, as well as practical applications, libraries, and tooling.
Submission deadline: Mon 23 Mar 2026 (in 5 days)
More info: https://2026.workshop.scala-lang.org/
https://redd.it/1rxhq9i
@r_scala
ldbc v0.6.0 is out 🎉
# ldbc v0.6.0 is released with OpenTelemetry telemetry expansion and MySQL 9.x support for the Pure Scala MySQL connector!
**TL;DR**: Pure Scala MySQL connector that runs on JVM, Scala.js, and Scala Native now includes comprehensive OpenTelemetry observability, MySQL 9.x support, VECTOR type for AI/ML workloads, and a sample Grafana dashboard for production monitoring.
We're excited to announce the release of ldbc v0.6.0, bringing major enhancements to our Pure Scala MySQL connector that works across JVM, Scala.js, and Scala Native platforms.
The highlight of this release is the **significant expansion of OpenTelemetry telemetry** compliant with Semantic Conventions v1.39.0, along with **MySQL 9.x support** and **production-ready observability tooling**.
[https://github.com/takapi327/ldbc/releases/tag/v0.6.0](https://github.com/takapi327/ldbc/releases/tag/v0.6.0)
# Major New Features
# 📊 Comprehensive OpenTelemetry Telemetry
Fine-grained control over telemetry behavior with the new `TelemetryConfig`:
```scala
import ldbc.connector.telemetry.TelemetryConfig

// Default configuration (spec-compliant)
val config = TelemetryConfig.default

// Custom configuration
val customConfig = TelemetryConfig.default
  .withoutQueryTextExtraction // Disable automatic db.query.summary generation
  .withoutSanitization        // Disable query sanitization (caution: may expose sensitive data)
  .withoutInClauseCollapsing  // Disable IN clause collapsing
```
Connection pool metrics (wait time, use time, timeout count) via OpenTelemetry's `Meter`:
```scala
import org.typelevel.otel4s.metrics.Meter
import ldbc.connector.*

MySQLDataSource.pooling[IO](
  config = mysqlConfig,
  meter = Some(summon[Meter[IO]])
).use { pool =>
  // Pool wait time, use time, and timeouts are automatically recorded
  pool.getConnection.use { conn => ... }
}
```
# 🐬 MySQL 9.x Support
MySQL 9.x is officially supported starting from 0.6.0. Internal behavior automatically adapts based on the connected MySQL version — no configuration changes needed.
# 🧮 Schema: VECTOR Type
A `DataType` representing MySQL's `VECTOR` type has been added, enabling AI/ML embedding workloads:
```scala
import ldbc.schema.*

class EmbeddingTable extends Table[Embedding]("embeddings"):
  def id: Column[Long] = column[Long]("id")
  def embedding: Column[Array[Float]] = column[Array[Float]]("embedding", VECTOR(1536))

  override def * = (id *: embedding).to[Embedding]
```
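To ground the AI/ML angle: an embedding stored in a `VECTOR(1536)` column is just an `Array[Float]` on the Scala side. Here is a minimal, library-free sketch of client-side cosine similarity over such arrays (illustrative only; `VectorDemo` is a hypothetical name, and this is not ldbc API):

```scala
// Plain-Scala cosine similarity over embeddings, e.g. values read back
// from a VECTOR column. Illustrative sketch only; no ldbc involved.
object VectorDemo:
  def cosine(a: Array[Float], b: Array[Float]): Double =
    require(a.length == b.length, "embeddings must have the same dimension")
    var dot = 0.0; var na = 0.0; var nb = 0.0
    var i = 0
    while i < a.length do
      dot += a(i) * b(i)   // dot product
      na += a(i) * a(i)    // squared norm of a
      nb += b(i) * b(i)    // squared norm of b
      i += 1
    dot / (math.sqrt(na) * math.sqrt(nb))
```

In practice you may prefer to push distance computation to the database when the server supports it, and only fall back to client-side math like this for small candidate sets.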
# 🔍 Improved Error Tracing
Error spans are now automatically set in the Protocol layer via `span.setStatus(StatusCode.Error, message)`. Error spans are correctly displayed in distributed tracing tools (Jaeger, Zipkin, etc.) with no changes to user code required.
# 📈 Sample Grafana Dashboard
A ready-to-use Grafana dashboard is provided for connection pool observability:
* DB Operation Duration (p50 / p95 / p99)
* Connection Pool Status (Active / Idle / Pending / Wait Time p99 / Pool Usage %)
* Connection Pool Breakdown (time series graph)
* Latency Distribution (average Wait / Create / Use)
* Connection Timeouts (15-minute window)
* Connection Wait Time Heatmap
# ⚠️ Breaking Changes
**TelemetryAttribute key names** have been updated to align with Semantic Conventions v1.39.0:
|Old (0.5.x)|New (0.6.x)|
|:-|:-|
|`DB_SYSTEM`|`DB_SYSTEM_NAME`|
|`DB_OPERATION`|`DB_OPERATION_NAME`|
|`DB_QUERY`|`DB_QUERY_TEXT`|
|`STATUS_CODE`|`DB_RESPONSE_STATUS_CODE`|
|`VERSION`|`DB_MYSQL_VERSION`|
|`THREAD_ID`|`DB_MYSQL_THREAD_ID`|
|`AUTH_PLUGIN`|`DB_MYSQL_AUTH_PLUGIN`|
# Why ldbc?
* ✅ **100% Pure Scala** - No JDBC dependency required
* ✅ **True cross-platform** - Single codebase for JVM, JS, and Native
* ✅ **Fiber-native design** - Built from the ground up for Cats Effect
* ✅ **ZIO Integration** - Complete ZIO ecosystem support
* ✅ **Production-ready observability** - OpenTelemetry Semantic Conventions v1.39.0 compliant
* ✅ **Enterprise-ready** - AWS Aurora IAM authentication support
* ✅ **AI/ML ready** - MySQL VECTOR type support
* ✅ **Security-focused** - SSRF protection and enhanced SQL escaping
* ✅ **Migration-friendly** - Easy upgrade path from 0.5.x
# Links
* Github: [https://github.com/takapi327/ldbc](https://github.com/takapi327/ldbc)
* Documentation: [https://takapi327.github.io/ldbc/](https://takapi327.github.io/ldbc/)
* Scaladex: [https://index.scala-lang.org/takapi327/ldbc](https://index.scala-lang.org/takapi327/ldbc)
* Migration Guide: [https://takapi327.github.io/ldbc/latest/en/migration-notes.html](https://takapi327.github.io/ldbc/latest/en/migration-notes.html)
https://redd.it/1rw597w
@r_scala
People Saying Tooling Is Largely Figured Out: What Might I Be Doing Wrong?
Hi All,
I've browsed a bunch of posts and comments saying that tooling is largely figured out. I am wondering what I might be doing wrong, because my experience in this respect is, to be frank, quite horrible.
We are in the process of migrating a number of projects into a Mill monorepo; before that we used sbt. The build tools themselves work well, and I particularly love Mill: it offers a lot of standout features like selective execution, amazing extensibility, and nice introspection.
What I struggle with severely is the language server side. This was the case in SBT repos and continues to be an issue in the Mill monorepo, amplified by the size of the repo. The issues are:
1. Large compile overhead, and duplicate compilation of what the build tool compiles anyway. I get that this may be necessary because of the compiler vs. presentation compiler split, but it is still very noticeable. Furthermore, while the build tools can selectively compile only what they require, Metals seems to always compile all modules before the IDE becomes usable.
2. Metals v2 offers a significant speed-up but also seems unusable, because it does not recognize any third-party dependencies for hover info and go-to-definition.
3. Missing (?) docs around standalone Metals installation. Coding agents are a thing these days, and the most prominent ones work outside of the IDE, so the language server should offer an easy install and document it. While Metals can be installed standalone via e.g. Coursier, I have not seen this mentioned anywhere in the docs; they only offer install instructions for specific IDEs. Am I missing something here?
Please don't get me wrong: I like Scala, I want to keep using Scala, and I am grateful for all the work people put into the community, often without pay and with little recognition. However, unless I am doing something obviously wrong, languages like Go, Python and JS/TS seem much further ahead here. At a time when agents can churn out massive amounts of code in little time and industry focus is so much on velocity, I worry about what this means for a language whose IDE I hesitate to even open.
https://redd.it/1rv6o86
@r_scala
How funny, they are reinventing Scala
https://www.youtube.com/watch?v=99s7ozvJGLk
https://redd.it/1rv4m28
@r_scala
YouTube
Towards Better Checked Exceptions - Inside Java Newscast #107
Java's checked exceptions are both an integral part of the language and one of its most contested features. Whether their introduction was a mistake and whether they should all be turned unchecked are frequently discussed topics but since the former is not…
This week in #Scala (Mar 16, 2026)
https://open.substack.com/pub/thisweekinscala/p/this-week-in-scala-mar-16-2026?r=8f3fq&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
https://redd.it/1ruic20
@r_scala