DLeX: AI Python
22.1K subscribers
Artificial intelligence and programming

Twitter:

https://twitter.com/NaviDDariya

Advertising coordination and rates: @navidviola
Forwarded from IAAA.AI
🍉 Yalda Festival 🍉

🏆 The country's largest AI competition, held in collaboration with the National Elites Foundation, Post Bank of Iran, Mahak, and IEEE.

📌 Prizes:
▫️ Over one billion tomans in cash prizes for each challenge
▫️ National Elites Foundation elite credit, military-service assignment (amriyeh), and other benefits

🍉 Yalda festival discount code: YLD04
💳 Payment in 4 installments via Snapp Pay
⌛️ Deadline: Azar 30 (late December)

🌐 Registration and further information
🔗 Instagram
☎️ Phone: 021-91096992
📱 Telegram support: 09103445843

🟣 Iran Annual Artificial Intelligence Award (iAAA)
@iaaa_ai
Supabase has also been offering its own MCP server for all code editors for a while now, making it easier to interact with.
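A typical MCP client configuration (the exact file and layout vary by editor, e.g. `mcp.json` in Cursor) looks roughly like the sketch below. The package name and environment variable are given as best recalled from Supabase's docs, so treat them as assumptions and verify against the official setup guide:

```json
{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": ["-y", "@supabase/mcp-server-supabase@latest"],
      "env": {
        "SUPABASE_ACCESS_TOKEN": "<personal-access-token>"
      }
    }
  }
}
```

The `<personal-access-token>` placeholder is generated from the Supabase dashboard; the editor starts the server via `npx` and talks to it over MCP.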

👉 @ai_python ✍️
Forwarded from DLeX: AI Python (NaviD DariYa)
👉 @ai_python ✍️

Types of compute resources available in Azure Machine Learning:


1️⃣ Compute instance: Behaves similarly to a virtual machine and is primarily used to run notebooks. It's ideal for experimentation.

2️⃣ Compute clusters: Multi-node clusters of virtual machines that automatically scale up or down to meet demand. A cost-effective way to run scripts that need to process large volumes of data. Clusters also allow you to use parallel processing to distribute the workload and reduce the time it takes to run a script.

3️⃣ Kubernetes clusters: Clusters based on Kubernetes technology, giving you more control over how the compute is configured and managed. You can attach a self-managed Azure Kubernetes Service (AKS) cluster for cloud compute, or an Arc-enabled Kubernetes cluster for on-premises workloads.

4️⃣ Attached compute: Allows you to attach existing compute like Azure virtual machines or Azure Databricks clusters to your workspace.

5️⃣ Serverless compute: A fully managed, on-demand compute you can use for training jobs.
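The autoscaling compute cluster from option 2️⃣ can be declared with the Azure ML CLI v2 and created via `az ml compute create -f cluster.yml`. A minimal sketch, where the name, VM size, and limits are placeholder assumptions:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json
name: cpu-cluster          # placeholder name
type: amlcompute
size: STANDARD_DS3_v2      # pick a VM size that fits your workload
min_instances: 0           # scale to zero when idle (cost-effective)
max_instances: 4           # upper bound for automatic scale-out
idle_time_before_scale_down: 120   # seconds before idle nodes are released
```

With `min_instances: 0` the cluster costs nothing while no jobs are queued, which is what makes clusters the economical choice for bursty, large-data scripts.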
The paper on Nested Learning argues that current deep learning models suffer from an “anterograde amnesia” problem—unable to gradually consolidate new experiences into long-term memory—and proposes a new framework where learning occurs across multiple nested optimization levels operating at different frequencies. Drawing inspiration from the brain’s multi-scale processing, it reframes optimizers as memory systems and shows that architectures like Transformers and MLPs are fundamentally uniform when viewed through this lens.

👉 @ai_python ✍️


The key promise is continual learning without catastrophic forgetting, shifting focus from stacking layers to designing nested optimization processes across time scales.

https://www.k-a.in/nl.html
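The multi-frequency idea can be sketched in a toy form: a "fast" parameter is updated on every example, while a "slow" long-term parameter consolidates it only every K steps. This is an illustrative sketch of nested update frequencies only, not the paper's actual algorithm; all names and constants are assumptions.

```python
import random

random.seed(0)
K, LR, BETA = 8, 0.1, 0.5   # consolidation period, learning rate, EMA factor

w_fast = 0.0   # short-term memory: updated at high frequency
w_slow = 0.0   # long-term memory: updated at low frequency
target = 3.0   # ground-truth parameter generating the data stream

for step in range(1, 201):
    x = random.uniform(-1.0, 1.0)
    y = target * x
    grad = (w_fast * x - y) * x   # gradient of squared error w.r.t. w_fast
    w_fast -= LR * grad           # fast level: every step
    if step % K == 0:             # slow level: only every K-th step
        w_slow = BETA * w_slow + (1 - BETA) * w_fast  # consolidate via EMA

print(round(w_fast, 2), round(w_slow, 2))
```

Both parameters end up close to the target 3.0, but the slow level changes only 1/K as often, smoothing over the fast level's noise—the same separation of time scales the framework generalizes to whole optimization levels.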