ML track at YaTalks 2022
YaTalks, Yandex’s main conference for the IT community, will be held on December 3 and 4. More than 100 tech experts from around the globe will gather to discuss technology and life in today’s ever-changing world. In the program, there are tracks about backend, frontend, mobile development, and, of course, machine learning.
Speakers will discuss:
• what significant events have happened in machine learning over the last 10 years;
• how neural-network-driven translation works;
• how generative neural networks create pictures, and whether they can replace illustrators;
• and many other topical issues.
This year YaTalks will be streamed simultaneously in two languages — Russian and English — using neural network-driven voice-over translation technologies. The conference is online, so you can join it from anywhere in the world.
Learn more and register on the website
yatalks.yandex.ru
Forwarded from Spark in me (Alexander)
Best Python Concurrency Guides
- https://superfastpython.com/multiprocessing-in-python/
- https://superfastpython.com/python-asyncio/
- https://superfastpython.com/multiprocessing-pool-python/
- https://superfastpython.com/threadpool-python/
They are a bit bloated and explain the same concepts many times over, but they cover some of Python's least-explored corners in detail, in plain language, with examples.
You can just read the intros and examples.
Good stuff.
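The split these guides keep returning to — process pools for CPU-bound work, thread pools for I/O-bound work — can be sketched with the standard library alone. The toy `cpu_bound`/`io_bound` functions below are illustrative, not from the guides:

```python
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def cpu_bound(n):
    # CPU-heavy work: a process pool sidesteps the GIL
    return sum(i * i for i in range(n))

def io_bound(n):
    # I/O-ish work: threads suffice, since blocking calls release the GIL
    time.sleep(0.01)
    return n

if __name__ == "__main__":  # guard required for process pools on Windows/macOS
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(cpu_bound, [10, 100, 1000])))
    with ThreadPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(io_bound, [1, 2, 3])))
```

Note the `__main__` guard: process pools re-import the module in child processes, which the guides cover at length.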
Forwarded from Kier from TOP
AI-assistant tool for a slides deck generation
Stumbled upon a new startup, Tome, which creates a slide deck from a text prompt, e.g. "AI-assistant tool in the creator economy".
The emergence of such a service was only a matter of time given the advances of Midjourney, DALL-E, and GPT-3.
Tools like this will drastically improve the quality of presentations and reduce the time required to create a good deck.
Website: https://beta.tome.app/
Example of a deck: https://tome.app/kir/unlocking-the-creative-economy-with-ai-assistant-tools-clbxrl6r808cd813csocuomwi
Dear all,
Our friends are organizing the AI & Natural Language conference in Yerevan next year, April 21-22, 2023. The organizers are open to collaboration: if you want to run a workshop or a challenge on a trending topic, please contact them. All the info is in their channel: https://t.iss.one/ainlconf
Forwarded from Kier from TOP
Some might have wondered what applications #Midjourney and #ChatGPT will have.
What products will creators build with them?
Here is one example of such human-AI collaboration: a short illustrated story on TikTok with millions of views.
https://vt.tiktok.com/ZS8MENP51/
#AI_tools
Top Python libraries '22
by @tryolabs
link: https://tryolabs.com/blog/2022/12/26/top-python-libraries-2022
#python #tools
The left picture was generated by #Midjourney with the request
bell curve with mu = 18 sigma = 4
The right one was generated with the request
bell curve with mu = 18 sigma = 1
Looks like Midjourney is not aware of the concept of distributions yet.
#AI #AGI #visualization
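For reference, what those prompts actually describe: two normal densities with the same mean but different spreads. A stdlib-only sketch (plotting omitted):

```python
import math

def gauss_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# Same mean, different sigma: the narrow curve (sigma = 1) peaks four times
# higher than the wide one (sigma = 4), because the peak height is
# 1 / (sigma * sqrt(2 * pi)).
peak_wide = gauss_pdf(18, 18, 4)
peak_narrow = gauss_pdf(18, 18, 1)
print(peak_narrow / peak_wide)  # ratio ≈ 4
```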
Forwarded from Kier from TOP
GPT-3 for self-therapy
Just came across an interesting article about using #GPT-3 to analyze past journal entries and summarize therapy sessions, gaining new perspectives on personal struggles. Dan Shipper loaded his personal journal into the model so he could ask it different questions, including about his own Myers-Briggs personality type (INTJ, for those wondering).
It's a powerful example of how AI tools can help individuals become more productive, effective, and happy. As we continue to see the integration of #AI in various industries, it's important for modern blue-collar workers to learn how to work with these tools properly in order to stay at peak efficiency.
Let's embrace the future and learn to use AI to our advantage rather than spread FUD about AI replacing the workforce. It won't, but it will enable some people to achieve more and be far more productive.
Link: https://every.to/chain-of-thought/can-gpt-3-explain-my-past-and-tell-me-my-future
#aiusecase #toolsnotactors
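A hedged sketch of the workflow the article describes — pack recent journal entries into a prompt, then ask the model a question about them. `build_prompt` and the entry format are my assumptions, not Shipper's code, and the actual call to a GPT-3-style completion endpoint is left out:

```python
def build_prompt(entries, question, max_chars=4000):
    """Pack the most recent entries that fit the budget, then append the question."""
    picked, used = [], 0
    for entry in reversed(entries):            # walk newest-first
        if used + len(entry) > max_chars:
            break
        picked.append(entry)
        used += len(entry)
    context = "\n\n".join(reversed(picked))    # restore chronological order
    return f"Journal entries:\n{context}\n\nQuestion: {question}\nAnswer:"

entries = [
    "2022-11-01: Felt anxious about the product launch all week.",
    "2022-11-08: Launch went fine. I seem to worry far too early.",
]
prompt = build_prompt(entries, "What patterns do you see in my worries?")
# `prompt` would then be sent to a completion endpoint of your choice.
print(prompt)
```

The character budget stands in for the model's context window; real pipelines count tokens instead.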
StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis
In this paper, the authors propose StyleGAN-T, a model designed for large-scale text-to-image synthesis. With its large capacity, stable training on diverse datasets, strong text alignment, and controllable variation-text alignment tradeoff, StyleGAN-T outperforms previous GANs and even surpasses distilled diffusion models, the previous frontrunners in fast text-to-image synthesis in terms of sample quality and speed.
StyleGAN-T achieves a better zero-shot MS COCO FID than current state-of-the-art diffusion models at a resolution of 64×64. At 256×256, StyleGAN-T halves the zero-shot FID previously achieved by a GAN but continues to trail SOTA diffusion models.
Paper: https://arxiv.org/abs/2301.09515
Project link: https://sites.google.com/view/stylegan-t?pli=1
A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-stylegan-t
#deeplearning #cv #gan #styletransfer
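FID, the metric quoted above, is the Fréchet distance between Gaussian fits to real and generated image features (Inception activations in practice). A minimal sketch, with random vectors standing in for the features:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, cov1, mu2, cov2):
    """Frechet distance between N(mu1, cov1) and N(mu2, cov2)."""
    diff = mu1 - mu2
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):  # drop tiny imaginary parts from numerical noise
        covmean = covmean.real
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 8))         # stand-in for real-image features
fake = rng.normal(size=(1000, 8)) + 0.5   # "generated" features, mean-shifted
mu_r, cov_r = real.mean(0), np.cov(real, rowvar=False)
mu_f, cov_f = fake.mean(0), np.cov(fake, rowvar=False)
print(fid(mu_r, cov_r, mu_f, cov_f))      # lower is better; 0 means identical stats
```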
Forwarded from Machinelearning
🔥 Dreamix: Video Diffusion Models are General Video Editors
Google's new text-guided video editing model.
Given a small collection of images showing the same subject, Dreamix can generate new videos with the subject in motion.
From just a few images or a video clip, Google's new Dreamix model generates video from a text description!
In the clip, Dreamix turns a monkey into a dancing bear with the prompt "The bear dances and jumps to upbeat music, moving its whole body."
⭐️ Project: https://dreamix-video-editing.github.io/
✅️ Paper: https://arxiv.org/pdf/2302.01329.pdf
⭐️ Video: https://www.youtube.com/watch?v=xcvnHhfDSGM
ai_machinelearning_big_data
Cut and Learn for Unsupervised Object Detection and Instance Segmentation
CutLER (Cut-and-LEaRn) is a new approach for training unsupervised object detection and segmentation models without using any human labels. It uses a combination of a MaskCut approach to generate object masks and a robust loss function to learn a detector. The model is simple and compatible with different detection architectures and can detect multiple objects. It is a zero-shot detector, meaning it performs well without additional in-domain data and is robust against domain shifts across various types of images. CutLER can also be used as a pretrained model for supervised detection and improves performance on few-shot benchmarks. Results show improved performance over previous work, including being a zero-shot unsupervised detector and surpassing other low-shot detectors with finetuning.
Paper: https://arxiv.org/abs/2301.11320
Code link: https://github.com/facebookresearch/CutLER
A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-cutler
#deeplearning #cv #objectdetection #imagesegmentation
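To illustrate the "robust loss" idea: CutLER's loss ignores predictions that overlap none of the pseudo ground-truth regions, so the detector isn't punished for discovering objects the pseudo-labels missed. The box-based form and threshold below are illustrative, not the paper's exact formulation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def drop_loss(per_pred_losses, pred_boxes, pseudo_boxes, tau=0.01):
    """Sum per-prediction losses, dropping predictions that match no pseudo-label."""
    total = 0.0
    for loss, pb in zip(per_pred_losses, pred_boxes):
        best = max((iou(pb, gb) for gb in pseudo_boxes), default=0.0)
        if best > tau:  # keep the loss only where a pseudo-label supports it
            total += loss
    return total

# One prediction matches the single pseudo-label; the other is a "new" object
# and contributes no loss, leaving the detector free to keep proposing it.
preds = [[0, 0, 10, 10], [100, 100, 110, 110]]
pseudo = [[0, 0, 10, 10]]
print(drop_loss([1.0, 1.0], preds, pseudo))
```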