Tags: #decreasing #subsequence #problem #algorithm
https://www.techiedelight.com/longest-decreasing-subsequence-problem/
Techie Delight
Longest Decreasing Subsequence Problem | Techie Delight
The longest decreasing subsequence problem is to find a subsequence of a given sequence in which the subsequence's elements are in sorted order, highest to lowest, and in which the subsequence is as long as possible.
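A minimal dynamic-programming sketch of the idea (an O(n²) version; the function name and sample input are mine, not from the article):

#include <algorithm>
#include <iostream>
#include <vector>

// Length of the longest strictly decreasing subsequence, O(n^2) DP:
// lds[i] = length of the longest decreasing subsequence ending at index i.
int longestDecreasingSubsequence(const std::vector<int>& a) {
    if (a.empty()) return 0;
    std::vector<int> lds(a.size(), 1);
    for (std::size_t i = 1; i < a.size(); ++i) {
        for (std::size_t j = 0; j < i; ++j) {
            if (a[j] > a[i]) {                          // a[i] can extend the run ending at j
                lds[i] = std::max(lds[i], lds[j] + 1);
            }
        }
    }
    return *std::max_element(lds.begin(), lds.end());
}

int main() {
    std::vector<int> a{9, 4, 3, 2, 5, 4, 3, 2};
    std::cout << longestDecreasingSubsequence(a) << '\n';   // 5, e.g. {9, 5, 4, 3, 2}
}

The same O(n log n) technique used for the longest increasing subsequence also applies here after negating (or reversing the comparison on) the elements.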
Tags: #modern #tools #diagnostics
https://blog.ndcconferences.com/modern-diagnostics-tools-for-c-applications/
NDC Blog
Modern diagnostics tools for C++ applications
Introduction: Profiling, debugging, and investigating C++ applications doesn't have to be insanely hard. If you have been a C++ developer for many years, you might be used to memory tracing tools like Valgrind (with a potential 10x overhead), instrumentation…
Tags: #library #upgrades
https://developers.redhat.com/blog/2017/03/13/cc-library-upgrades-and-opaque-data-types-in-process-shared-memory/
Red Hat Developer
C/C++ library upgrades and opaque data types in process shared memory | Red Hat Developer
The problem: C/C++ libraries expect to be able to change the internal implementation details of opaque data types from release to release, since such a change has no external ABI consequences.
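A generic sketch of the opaque-handle pattern the excerpt refers to (my own example type, not code from the article): only the library sees the layout, so it can add or reorder fields between releases without breaking the external ABI.

#include <iostream>

// Public view (would live in the library header): callers only ever hold a
// pointer, so the struct's size and fields stay private to the library.
struct widget;                                   // opaque forward declaration
widget* widget_create(int value);
int     widget_value(const widget* w);
void    widget_destroy(widget* w);

// Private view (the library implementation, free to change each release).
struct widget {
    int value;                                   // later releases may add fields here
};
widget* widget_create(int value) { return new widget{value}; }
int     widget_value(const widget* w) { return w->value; }
void    widget_destroy(widget* w) { delete w; }

int main() {
    widget* w = widget_create(42);
    std::cout << widget_value(w) << '\n';        // 42
    widget_destroy(w);
}

The guarantee only holds while a single library version touches the object; once such an object lives in process-shared memory, another process linked against a different library release (with a different internal layout) may access the same bytes, which is the scenario the article examines.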
Tags: #neural #network
https://blog.demofox.org/2017/03/13/neural-network-gradients-backpropagation-dual-numbers-finite-differences/
https://blog.demofox.org/2017/03/15/neural-network-recipe-recognize-handwritten-digits-with-95-accuracy/
The blog at the bottom of the sea
Neural Network Gradients: Backpropagation, Dual Numbers, Finite Differences
In the post How to Train Neural Networks With Backpropagation I said that you could also calculate the gradient of a neural network by using dual numbers or finite differences. By special request, …
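For the finite-differences part, a minimal sketch of the central-difference estimate f'(x) ≈ (f(x + h) - f(x - h)) / (2h), applied to a sigmoid activation (function names and numbers are mine, not the post's):

#include <cmath>
#include <functional>
#include <iostream>

// Numerical derivative of a scalar function via central differences.
double finiteDifferenceGradient(const std::function<double(double)>& f,
                                double x, double h = 1e-5) {
    return (f(x + h) - f(x - h)) / (2.0 * h);
}

int main() {
    // Logistic sigmoid, a common neuron activation.
    auto sigmoid = [](double x) { return 1.0 / (1.0 + std::exp(-x)); };

    double x = 0.7;
    double numeric  = finiteDifferenceGradient(sigmoid, x);
    double analytic = sigmoid(x) * (1.0 - sigmoid(x));    // known closed-form derivative

    std::cout << "numeric:  " << numeric  << '\n'
              << "analytic: " << analytic << '\n';         // the two should agree closely
}

In a network this estimate needs two extra forward passes per weight, which is why backpropagation (or dual numbers) is preferred in practice; finite differences are mostly useful for checking a gradient implementation.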
The Daily C++ via @vote
What is your relationship with C++?
anonymous poll
Student – 166 (42%)
Learning – 107 (27%)
Full Time Job – 76 (19%)
Part Time Job – 26 (7%)
Other – 20 (5%)
395 people voted so far.
Tags: #bool #ternary #stackoverflow
https://stackoverflow.com/questions/43139144/why-is-the-ternary-operator-used-to-define-1-and-0-in-a-macro
Stackoverflow
Why is the ternary operator used to define 1 and 0 in a macro?
I'm using an SDK for an embedded project. In this source code I found some code which at least I found peculiar. In many places in the SDK there is source code in this format:
#define ATCI_IS_LOWER(
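The pattern in question wraps a condition in (...) ? 1 : 0. A hypothetical reconstruction of such a macro (the ATCI_IS_LOWER body is truncated above, so this is my own guess at the shape, not the SDK's code): the ternary pins the result to exactly 0 or 1, although for a plain relational expression it is arguably redundant, since comparisons in C and C++ already evaluate to 0 or 1.

#include <iostream>

// Hypothetical macros in the style of the SDK code quoted in the question.
// The `? 1 : 0` forces the result to be exactly 0 or 1 even when the wrapped
// expression is some other non-zero value; with a pure relational expression
// the two variants behave identically.
#define IS_LOWER(c)            (((c) >= 'a' && (c) <= 'z') ? 1 : 0)
#define IS_LOWER_NO_TERNARY(c) ((c) >= 'a' && (c) <= 'z')

int main() {
    std::cout << IS_LOWER('q') << ' ' << IS_LOWER('Q') << '\n';                       // 1 0
    std::cout << IS_LOWER_NO_TERNARY('q') << ' ' << IS_LOWER_NO_TERNARY('Q') << '\n'; // 1 0
}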