Added cluster for my Docker-like solution, native to macOS.
GitHub - Okerew/osxiec: Native Docker-like solution for macOS developed by Okerew. It has its own containers. It leverages native macOS features to provide containerization capabilities, albeit with some limitations…
Lazywal Terminal CLI – use video loops as your desktop background.
GitHub - BuddhiLW/lazywal: CLI to facilitate the use of video-loops as desktop background
AWS Dual Stack VPCs with IPAM and Auto Routing in Terraform.
GitHub - terraform-main/dual_stack_networking_trifecta_demo at main · JudeQuintana/terraform-main
Show HN: I built a Terraform State remote backend on Cloudflare Workers.
GitHub - willswire/estado: Terraform State remote backend on Cloudflare Workers
Cloc: Count lines (blank, comment, and physical) of source code.
GitHub - AlDanial/cloc: cloc counts blank lines, comment lines, and physical lines of source code in many programming languages.
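As a rough illustration of the three counts cloc reports, here is a minimal, hypothetical classifier for a single file. It only recognizes whole-line comments with one prefix; real cloc understands many languages, block comments, and mixed lines.

```python
def count_lines(text, comment_prefix="#"):
    """Classify each physical line as blank, comment, or code.

    Simplified sketch: cloc itself handles per-language comment
    syntax, block comments, and much more.
    """
    blank = comment = code = 0
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped:
            blank += 1
        elif stripped.startswith(comment_prefix):
            comment += 1
        else:
            code += 1
    return {"blank": blank, "comment": comment, "code": code}

sample = "# demo\n\nx = 1\n\n# done\ny = x + 1\n"
print(count_lines(sample))  # {'blank': 2, 'comment': 2, 'code': 2}
```

The sum of the three buckets is the physical line count cloc shows in its totals column.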
Show HN: CMS and other features built with Laravel and Livewire.
GitHub - oitcode/Cbswei: CMS and other features. Created with Laravel and Livewire.
It's quite fascinating – I fit into a 17.6 MB text file.
GitHub - 0x77dev/dna: It's quite fascinating - I fit into a 17.6 MB text file, or at least part of me does*
Show HN: Minigrad – a small neural network lib in Golang.
GitHub - 0verread/minigrad: Andrej Karpathy's Micrograd in Go
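The repo ports Karpathy's micrograd to Go. For readers unfamiliar with the idea, here is a minimal scalar autograd sketch in the micrograd style (written in Python and mirroring the original micrograd, not the Go code in the linked repo):

```python
class Value:
    """Scalar value that records enough of the computation graph
    to backpropagate gradients, micrograd-style."""

    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._children = children

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Build a topological order, then apply the chain rule
        # from the output node back to the leaves.
        order, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for child in v._children:
                    build(child)
                order.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x = Value(3.0)
y = x * x + x          # y = x^2 + x, so dy/dx = 2x + 1
y.backward()
print(y.data, x.grad)  # 12.0 7.0
```

A neural network library like minigrad builds neurons and layers out of exactly these scalar operations, then calls `backward()` on the loss.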
A simple, efficient, and lightweight dual-screen X window manager.
GitHub - Eliyaan/tinyvvm: My everyday window manager. Simple and lightweight (<1 MB of RAM with -prod & -autofree)
Cradle framework: a first attempt at General Computer Control (GCC).
GitHub - BAAI-Agents/Cradle: The Cradle framework is a first attempt at General Computer Control (GCC). Cradle supports agents to ace any computer task by enabling strong reasoning abilities, self-improvement, and skill curation…
A highly underrated programming technique is the power of wishful thinking.
GitHub - blog/2021/todo-abc.md at main · dabeaz/blog
David Beazley's blog.
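The technique, as commonly described: write the top-level code first, calling helpers you wish existed, and let the undefined names tell you what to build next. A minimal sketch (the function names are illustrative, not taken from the linked post):

```python
# Wishful thinking: express the high-level flow in terms of helpers
# that don't exist yet, then stub them so unfinished work fails
# loudly instead of silently.

def summarize_report(path):
    records = load_records(path)      # wished-for helper
    totals = tally_by_user(records)   # wished-for helper
    return format_summary(totals)     # wished-for helper

# Stubs keep the program importable while marking the remaining TODOs.
def load_records(path):
    raise NotImplementedError("TODO: parse the file at `path`")

def tally_by_user(records):
    raise NotImplementedError("TODO: aggregate records per user")

def format_summary(totals):
    raise NotImplementedError("TODO: render totals as text")

try:
    summarize_report("report.csv")
except NotImplementedError as todo:
    print("next task:", todo)  # next task: TODO: parse the file at `path`
```

The payoff is that the top-level function stays readable, and each stub is a concrete, independently implementable unit of work.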
vLLM, a fast and easy-to-use library for LLM inference and serving.
GitHub - vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs