NVIDIA CUDA Virtual Connect With Experts Event Information
We are starting a monthly virtual event where CUDA enthusiasts can connect directly with a panel of real-life CUDA developers at NVIDIA. This is a chance to meet some of the people designing and implementing your favorite CUDA drivers, kernels, libraries, and tools, and to ask questions related to your project or work.
Event Information:
When?
Friday, September 27, 2024, 10:00–11:30 am PT
What?
This month, we will be discussing the CUDA Python ecosystem. You will have the opportunity to ask questions via chat about topics like CuPy, Numba, and other Python libraries to our LIVE panel of CUDA Python experts.
Who can Attend?
CUDA and Python developers of all levels and backgrounds.
Where?
Join us on NVIDIA’s Microsoft Teams Instance here
Lesser-known features of C
https://jorenar.com/blog/less-known-c
Jorenar: Lesser-known tricks, quirks, and features of C
How AI is Changing Coding and Education
FREE STANFORD WEBINAR
October 9, 2024 | 11:00-11:45 am PT
Registration
Installing Llama locally
Video
https://ollama.com/library/llama3.2
```bash
# allow Ollama's default port (11434) through the firewall
sudo ufw allow 11434/tcp
curl -fsSL https://ollama.com/install.sh | sh
# v3.1, 8 billion parameters
ollama pull llama3.1:8b
ollama run llama3.1
# v3.2, 3 billion parameters
ollama pull llama3.2
ollama run llama3.2
# open http://127.0.0.1:11434/ to check that Ollama is running
```
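Once the server is up, Ollama exposes a REST API on port 11434. As a minimal sketch (assuming the default endpoint and the `llama3.2` model pulled above), you can query it from Python with nothing but the standard library:

```python
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint.

    stream=False asks the server for one complete JSON reply instead
    of a stream of partial chunks.
    """
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def generate(model: str, prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the reply text."""
    req = build_request(model, prompt)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the Ollama server to be running):
# print(generate("llama3.2", "Say hello in one short sentence."))
```

The same endpoint works for any pulled model; just change the `model` field.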
DeepLearning.AI - Learning Platform
Introducing Multimodal Llama 3.2
Try out the features of the new Llama 3.2 models to build AI applications with multimodality.
NotebookLM from Google has a new feature that makes a podcast from provided resources; it's pretty cool to try.
Video for more information.
YouTube
NotebookLM: Will Instant Podcasts Transform Learning?
Exploring Google's Notebook LM and Illuminate: A New Era of AI-Powered Learning Tools
In this video, I'll be demonstrating Google's Notebook LM and Illuminate experiment, both of which have gained attention on social media due to their innovative features.…
Daily_Dose_Of_Data_Science_Full_Archive.pdf
88.3 MB
Daily dose of data science archive 2024
How to use Open WebUI to run LLM models locally:
The cleanest method is to use Docker. If you have Docker installed on your local machine and have also installed Ollama, you just need to pull and run the image with the following command:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
Find more options here.
I uploaded a 360-page book and asked some questions; it seems to work fine.
Everything runs locally, with no internet connection needed after downloading the models.
Demo credit: Tarfandoon
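The `-p 3000:8080` flag maps the container's port 8080 to port 3000 on the host, so the UI appears at http://127.0.0.1:3000. A small sketch (my own helper, not part of Open WebUI) for checking that the port is up before opening the browser:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Handy for checking that Open WebUI (host port 3000 via the
    `-p 3000:8080` mapping) has finished starting.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After `docker run ...` has started the container:
# port_open("127.0.0.1", 3000) returns True once the UI is ready
```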
An equivalent, or even easier, option:
https://lmstudio.ai/
I am still trying this one; I had some issues reading articles in PDF (it only read the citations section :| ).
LM Studio: Local AI on your computer. Run local AI models like gpt-oss, Llama, Gemma, Qwen, and DeepSeek privately on your computer.
https://centuri-livingsystems.org/recruitment/
PhD and postdoc Applications Now Open
The deadline for submission is January 27.
CENTURI, Marseille, France
#j2p: A simple Python package to convert Jupyter notebooks to Python scripts.
If you find yourself needing to convert a notebook to a Python script, you likely turn to nbconvert. However, this often results in a script with annoying cell separators, which you then have to remove by hand to get at the code itself.
This tiny package provides a cleaner solution.
## Installation
pip install ju2py
## Usage
j2p example.ipynb [output.py]
The output name is optional.
P.S.: There is already a package (not by me) for the reverse action:
pip install p2j
p2j example.py
GitHub: https://github.com/Ziaeemehr/j2p
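Conceptually, such a converter is simple: a .ipynb file is just JSON, and the script is the concatenation of its code cells. A minimal sketch of the idea (not j2p's actual implementation):

```python
import json

def notebook_to_script(nb_json: str) -> str:
    """Extract the code cells from a Jupyter notebook (as JSON text)
    and join them into a plain Python script, skipping markdown cells
    and leaving out the cell-separator comments nbconvert would add."""
    nb = json.loads(nb_json)
    chunks = []
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            # a cell's source is a list of line strings
            chunks.append("".join(cell.get("source", [])))
    return "\n\n".join(chunks) + "\n"

# A minimal notebook with one markdown cell and one code cell:
nb = json.dumps({
    "cells": [
        {"cell_type": "markdown", "source": ["# Title"]},
        {"cell_type": "code", "source": ["x = 1\n", "print(x)"]},
    ],
    "nbformat": 4,
})
script = notebook_to_script(nb)
# script == "x = 1\nprint(x)\n"
```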
Global Brain Reconfiguration After Local Neural Manipulation.wav
37.2 MB
Our new research article from PNAS investigates how localized brain manipulations, such as lesions or silencing, impact the entire brain's functional connectivity in mice. Combining fMRI data with computational modeling, the study reveals that these targeted interventions lead to widespread network reconfigurations, sometimes decreasing and other times increasing connectivity. We used personalized brain simulations to explore the underlying mechanisms of this phenomenon, known as diaschisis, finding that alterations in local neuronal excitability drive these global changes. The findings offer insights into understanding the broad effects of focal brain disruptions and could inform the development of more precise therapeutic strategies targeting brain dynamics. The data and analysis tools are publicly available.
https://www.pnas.org/doi/10.1073/pnas.2405706122
Multiple sclerosis (MS) is an autoimmune disease that affects the central nervous system, producing lesions in the myelin sheath. This damage to the myelin slows the conduction velocity of neural signals, which is referred to as a conduction delay.
The main goal of this work was to estimate, for each patient, the relationship between the severity of myelin lesions and the resulting increase in conduction delays across the affected neural pathways. How exactly the severity of structural myelin lesions translates into slowed conduction delays was previously unknown.
In this study, we used data from 38 participants (20 healthy controls and 18 MS patients), including brain activity recorded with magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to analyze brain structure and white-matter lesions.
We also used large-scale computational brain models and a technique called Simulation-Based Inference (SBI).
Continued …
LinkedIn: Mapping Brain Lesions to Conduction Delays | Abolfazl Ziaeemehr