1.83K subscribers
3.3K photos
132 videos
15 files
3.58K links
Блог со звёздочкой.

Lots of reposts, a little bit of programming.

A small, fun community: @decltype_chat_ptr_t
Author: @insert_reference_here
#ml

We live in amazing times
💩1
Forwarded from Generative Anton
Oh, this timeline. People are jailbreaking GPT-4 with ideas from the movie Inception 🥲🥲🥲🥲
😁16🌚5💩1
#prog #rust #rustasync #article

The registers of Rust

withoutboats looks at the existing effects in Rust (asynchrony, iteration, fallibility) through the lens of registers (not CPU registers, but in the sense the word is used in sociolinguistics) and argues fairly convincingly that the effort going into async Rust is headed in a not-quite-right direction. Even if you disagree with him, the way of looking at programming-language features that he proposes is quite useful.

And the follow-ups:

Patterns & Abstractions

Const as an auto trait
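
To make the "registers" idea concrete, here is a rough sketch of my own (not code from the article): the same hypothetical fetch operation written in Rust's base, fallible, asynchronous, and iterative registers.

```rust
// One hypothetical operation, written in several of Rust's "registers".

// Base register: plain synchronous, infallible code.
fn fetch(id: u32) -> String {
    format!("record {id}")
}

// Fallible register: failure becomes part of the signature.
fn try_fetch(id: u32) -> Result<String, std::io::Error> {
    Ok(format!("record {id}"))
}

// Async register: the same operation as a future, driven by an executor.
async fn fetch_async(id: u32) -> String {
    format!("record {id}")
}

// Iterative register: a sequence of results instead of a single one.
fn fetch_all(ids: Vec<u32>) -> impl Iterator<Item = String> {
    ids.into_iter().map(|id| format!("record {id}"))
}

fn main() {
    println!("{}", fetch(1));
    println!("{:?}", try_fetch(2));
    for s in fetch_all(vec![3, 4]) {
        println!("{s}");
    }
    // fetch_async would additionally need an executor (e.g. tokio) to run.
}
```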
👍3💩1😐1
Two grown men are pounding away and getting at my riser
🌚16👍4👎3🤔1🤮1💩1
#prog #rust #cpp #article

Switching From C++ to Rust

A short note on impressions of Rust from someone who had previously written C++ professionally.

> After 4 years of C++, I was still getting occasional memory-related server crashes from the code that was already reviewed and merged. It is very hard to say what percentage of those made it to production environments because people just restart the server when segfault happens, so I won’t say anything.

> There are very few things in my life I hate more than building C++ code.

> In C++, you can comfortably measure the size of error messages in kilobytes. Infinite scroll in the terminal emulator is an absolute must because oh boy does the compiler like printing text. After a few years, you develop a certain intuition to decide if it makes sense to read the error or try to stare at your code for a while, depending on the error size. Usually the bigger the error, the more effective random staring at the code is

> Result and Option are useful concepts, but what makes them fantastic is the fact that everybody uses them.
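
A tiny illustration of that last point (my own, not from the note): because the standard library itself returns Option and Result everywhere, the caller has to spell out the failure path instead of discovering it in production.

```rust
use std::collections::HashMap;
use std::fs;

fn main() {
    // Option: std's own APIs hand you the "maybe absent" case explicitly.
    let scores = HashMap::from([("alice", 3u32)]);
    match scores.get("bob") {
        Some(score) => println!("bob: {score}"),
        None => println!("no score for bob"),
    }

    // Result: I/O errors are ordinary values that must be handled
    // (or explicitly propagated/ignored). "config.toml" is just a made-up path.
    match fs::read_to_string("config.toml") {
        Ok(text) => println!("read {} bytes", text.len()),
        Err(e) => eprintln!("could not read config: {e}"),
    }
}
```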
👍12💩1
Блог*
Two grown men are pounding away and getting at my riser
That's about the renovation in my apartment, you perverts!
👎18😁2🤔1💩1
As someone who regularly puts on podcasts with comedians and instead gets motivational advice from bro-intellectuals, I'm laughing very hard at this video

https://twitter.com/carolinebano/status/1635407479600451585
👍3😐2😁1💩1
#prog #rust #article

Using unwrap() in Rust is Okay by Andrew Gallant aka burntsushi

(but read the whole article to fully understand his position)
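
The core of his argument, as I read it (the example below is mine, not his): unwrap() is fine when a None/Err would mean a bug in your own code, i.e. a violated invariant, rather than an expected runtime failure.

```rust
fn main() {
    let parts: Vec<&str> = "key=value".splitn(2, '=').collect();

    // Invariant: splitn(2, ..) always yields at least one piece, so this
    // cannot be None. If it ever were, that's a bug in this function, not an
    // expected runtime condition -- unwrap() documents exactly that.
    let key = parts.first().unwrap();
    println!("key = {key}");

    // A missing '=' IS an expected input shape, so it is handled as a value
    // rather than unwrapped.
    match parts.get(1) {
        Some(value) => println!("value = {value}"),
        None => println!("no value part"),
    }
}
```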
💩1
— What are your weaknesses?
— I interpret the semantics of questions correctly but completely ignore their intent.
— Can you give an example?
— I can.
👍34🤡5🔥1🤬1💩1
Forwarded from javawatch
ChatGPT has reached human level! I asked it to write an example of using modules in C++23 via cmake, and it couldn't do it. Neither could I. What heights we have reached!
😁38🔥6💩1
I'm unironically asking myself this question, by the way
💩1
But HOW, for fuck's sake??!!
👍12❤‍🔥4😁4💩1
#prog #article #performancetrap

The 'premature optimization is evil' myth

I have heard the “premature optimization is the root of all evil” statement used by programmers of varying experience at every stage of the software lifecycle, to defend all sorts of choices, ranging from poor architectures, to gratuitous memory allocations, to inappropriate choices of data structures and algorithms, to complete disregard for variable latency in latency-sensitive situations, among others.

Mostly this quip is used to defend sloppy decision-making, or to justify the indefinite deferral of decision-making. In other words, laziness. It is safe to say that the very mention of this oft-misquoted phrase causes an immediate visceral reaction to commence within me… and it’s not a pleasant one.

In this short article, we’ll look at some important principles that are counter to what many people erroneously believe this statement to be saying. To save you time and suspense, I will summarize the main conclusions: I do not advocate contorting oneself in order to achieve a perceived minor performance gain.
<...> What I do advocate is thoughtful and intentional performance tradeoffs being made as every line of code is written. Always understand the order of magnitude that matters, why it matters, and where it matters. And measure regularly! <...> Given the choice between two ways of writing a line of code, both with similar readability, writability, and maintainability properties, and yet interestingly different performance profiles, don’t be a bozo: choose the performant approach. Eschew redundant work, and poorly written code. And lastly, avoid gratuitously abstract, generalized, and allocation-heavy code, when slimmer, more precise code will do the trick.
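
A made-up Rust example of the kind of line-level choice he is describing: the two filters read the same, but one allocates a fresh String on every element.

```rust
fn main() {
    let names = ["alice", "bob", "carol"];

    // Similar readability, writability, and maintainability --
    // but the first variant allocates a String per comparison.
    let hits_slow = names.iter().filter(|n| n.to_string() == "bob").count();
    let hits_fast = names.iter().filter(|n| **n == "bob").count();

    assert_eq!(hits_slow, hits_fast);
    println!("{hits_fast} match(es)");
}
```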

<...>

These kinds of “peanut butter” problems add up in a hard to identify way. Your performance profiler may not obviously point out the effect of such a bad choice so that it’s staring you in your face. Rather than making one routine 1000% slower, you may have made your entire program 3% slower. Make enough of these sorts of decisions, and you will have dug yourself a hole deep enough to take a considerable percentage of the original development time just digging out.

<...>

There aren’t many ways to introduce a multisecond delay into your program at a moment’s notice. But I/O can do just that.

Code with highly variable latency is dangerous, because it can have dramatically different performance characteristics depending on numerous variables, many of which are driven by environmental conditions outside of your program’s control. As such it is immensely important to document where such variable latency can occur, and to program defensively against it happening.
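
A minimal sketch of "program defensively against it happening" in Rust terms (my example; the address and durations are placeholder values): bound the variable-latency I/O with explicit timeouts instead of waiting indefinitely.

```rust
use std::io::Read;
use std::net::TcpStream;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    // Placeholder address; in real code this comes from configuration.
    let addr = "127.0.0.1:8080".parse().expect("valid socket address");

    // Bound connection establishment instead of blocking indefinitely.
    let mut stream = TcpStream::connect_timeout(&addr, Duration::from_secs(2))?;

    // Bound every read too: the peer, the network and the OS are all outside
    // this program's control, so each read has highly variable latency.
    stream.set_read_timeout(Some(Duration::from_secs(1)))?;

    let mut buf = [0u8; 1024];
    match stream.read(&mut buf) {
        Ok(n) => println!("read {n} bytes"),
        Err(e) => eprintln!("read failed or timed out: {e}"),
    }
    Ok(())
}
```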

<...>

I can’t tell you how many times I’ve seen programmers employ unsafe pointer arithmetic to avoid the automatic bounds checking generated by the CLR JIT compiler. It is true that in some circumstances this can be a win. But it is also true that most programmers who do this never bothered to crack open the resulting assembly to see that the JIT compiler does a fairly decent job at automatic bounds check hoisting. This is an example where the cost of the optimization outweighs the benefits in most circumstances. The cost to pin memory, the risk of heap corruption due to a failure to properly pin memory or an offset error, and the complication in the code, are all just not worth it. Unless you really have actually measured and found the routine to be a problem.
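
The Rust analogue of this trade-off (my sketch; the article is talking about the CLR): prefer code the compiler can already optimize over hand-rolled unsafe shortcuts, unless a profiler has actually flagged the routine.

```rust
fn sum_indexed(xs: &[u64]) -> u64 {
    // Indexed access carries a bounds check, but the optimizer can usually
    // hoist or elide it for a loop shaped like this.
    let mut total = 0;
    for i in 0..xs.len() {
        total += xs[i];
    }
    total
}

fn sum_iter(xs: &[u64]) -> u64 {
    // Iterator form: no index, so there is no bounds check to worry about.
    xs.iter().sum()
}

fn sum_unchecked(xs: &[u64]) -> u64 {
    // The "clever" version: unsafe, harder to review, and a win nobody has
    // measured -- exactly the kind of optimization the article warns about.
    let mut total = 0;
    for i in 0..xs.len() {
        total += unsafe { *xs.get_unchecked(i) };
    }
    total
}

fn main() {
    let xs = vec![1, 2, 3, 4];
    assert_eq!(sum_indexed(&xs), sum_iter(&xs));
    assert_eq!(sum_iter(&xs), sum_unchecked(&xs));
    println!("sum = {}", sum_iter(&xs));
}
```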

<...>

I’m not saying Knuth didn’t have a good point. He did. But the “premature optimization is the root of all evil” pop-culture and witty statement is not a license to ignore performance altogether. It’s not justification to be sloppy about writing code.
👍4🥴41❤‍🔥1💩1