Color2Style: Real-Time Exemplar-Based Image Colorization with Self-Reference Learning and Deep Feature Modulation
arXiv: https://arxiv.org/pdf/2106.08017.pdf
#colorization #dl
Semi-Autoregressive Transformer for Image Captioning
Current state-of-the-art image captioning models use autoregressive decoders: they generate one word after another, which leads to high latency during inference. Non-autoregressive models predict all the words in parallel; however, they suffer from quality degradation because they discard word dependencies entirely.
The authors suggest a semi-autoregressive approach to image captioning that improves the trade-off between speed and quality: the model keeps the autoregressive property globally but generates words in parallel locally. Experiments on MSCOCO show that SATIC achieves a better trade-off without bells and whistles.
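The idea can be sketched as follows (a hypothetical toy, not the paper's code): decoding is autoregressive across groups of words but parallel within each group of size K.

```python
def semi_autoregressive_decode(predict_group, max_len=12, k=3):
    """Generate a caption K words at a time: autoregressive across
    groups, parallel within a group. `predict_group` maps the prefix
    generated so far to the next K words (a stand-in for the decoder)."""
    words = []
    while len(words) < max_len:
        words.extend(predict_group(words)[:k])
    return words[:max_len]

# Toy "decoder" that just labels positions, to show the group-by-group flow
caption = semi_autoregressive_decode(
    lambda prefix: [f"w{len(prefix) + i}" for i in range(3)]
)
```

Each loop iteration is one sequential step, so a caption of 12 words takes 4 decoder calls instead of 12, which is where the latency saving comes from.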
Paper: https://arxiv.org/abs/2106.09436
A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-satic
#imagecaptioning #deeplearning #transformer
Forwarded from Spark in me (Alexander)
Transformer Module Optimization
Article on how to apply different methods to make your transformer network up to 10x smaller and faster:
- Plain model optimization and PyTorch tricks;
- How and why to use FFT instead of self-attention;
- Model Factorization and quantization;
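To give a flavour of the FFT point, here is an FNet-style sketch in NumPy (my assumption of the approach, not the article's code): the learned self-attention token mixing is replaced by a parameter-free 2D Fourier transform.

```python
import numpy as np

def fft_token_mixing(x):
    # FNet-style mixing: FFT over the sequence and hidden dimensions,
    # keeping only the real part. No learned parameters, O(n log n)
    # instead of attention's O(n^2) in sequence length.
    return np.fft.fft2(x, axes=(-2, -1)).real

tokens = np.random.rand(2, 8, 16)   # (batch, seq_len, hidden)
mixed = fft_token_mixing(tokens)
```

The output has the same shape as the input, so the block can drop into a transformer layer in place of the attention sublayer.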
https://habr.com/ru/post/563778/
#deep_learning
Habr
Compressing transformers: simple, universal, and practical ways to make them compact and fast
Forwarded from Towards NLP🇺🇦
DocNLI
Natural Language Inference (NLI) is the task of determining whether a “hypothesis” is true (entailment), false (contradiction), or undetermined (neutral) given a “premise”.
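A toy illustration of the three labels (my own examples, not taken from the dataset):

```python
# One premise, three hypotheses, one per NLI label
premise = "A man is playing a guitar on stage."
examples = {
    "entailment": "A person is making music.",    # must be true given the premise
    "contradiction": "The stage is empty.",       # must be false given the premise
    "neutral": "The concert is sold out.",        # cannot be determined either way
}
```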
Previously, this task was mostly solved at the sentence level. A new work, "DocNLI: A Large-scale Dataset for Document-level Natural Language Inference", to appear at ACL 2021, presents a study of document- and paragraph-level NLI:
https://arxiv.org/abs/2106.09449v1
In the GitHub repo you can find the data and pretrained RoBERTa weights:
https://github.com/salesforce/DocNLI
For a HuggingFace release we will probably have to wait...
P.S. I am already waiting to test this setup for fake news detection🙃
Article on how to use #XGBoost for #timeseries forecasting
Link: https://machinelearningmastery.com/xgboost-for-time-series-forecasting/
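The core trick in the article is framing forecasting as supervised learning on lagged values; a minimal sketch (function name and lag count are my own):

```python
import numpy as np

def make_supervised(series, n_lags=3):
    # Each row of X holds the previous n_lags observations;
    # y is the value that immediately follows them.
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = np.array(series[n_lags:])
    return X, y

X, y = make_supervised(list(range(10)))
# An XGBRegressor (or any regressor) can then be fit on (X, y)
# and rolled forward one step at a time to produce a forecast.
```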
Forwarded from Denis Sexy IT 🇬🇧
Recently I found the Instagram of an artist from Tomsk, Evgeny Schwenk, who redraws characters from Soviet cartoons as if they were real people. I applied the neural.love neural network, which made his drawings even more realistic. Just a bit of Photoshop (mainly for the hats) and here we go.
I guess Karlsson-on-the-Roof is my best result.
RL + NLP + Minecraft = Awesomeness
The video from Data Fest Online 2021 about the IGLU Competition, accepted to the NeurIPS 2021 competition track.
Link: https://youtu.be/mbDY8uxk9bs
YouTube
Data Fest Online 2021 | IGLU Competition @ NeurIPS 2021
Data Fest Online 2021 https://fest.ai/2021/
RL + Catalyst track https://ods.ai/tracks/catalyst-and-rl-df2021
New Coding Assistant Tool From OpenAI and Microsoft
GitHub announced a new tool for improving the coding experience: GitHub Copilot, developed with help from Microsoft and OpenAI. This looks really promising, at least from the announcement: imagine just typing convert_datetime_to_date and getting a function for that. Looking forward to the actual demo.
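For instance, the kind of helper one might expect from that prompt (my guess at plausible output, not an actual Copilot suggestion):

```python
from datetime import datetime, date

def convert_datetime_to_date(dt: datetime) -> date:
    # Drop the time component, keeping only the calendar date.
    return dt.date()

d = convert_datetime_to_date(datetime(2021, 6, 29, 15, 30))
```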
Project: https://copilot.github.com
Blog entry: https://github.blog/2021-06-29-introducing-github-copilot-ai-pair-programmer/
CNBC news post: https://www.cnbc.com/2021/06/29/microsoft-github-copilot-ai-offers-coding-suggestions.html
#OpenAI #microsoft #coding #CS #computerlanguageunderstanding #CLU #Github
MMPX Style-Preserving Pixel Art Magnification
Work on #pixel art resolution upscaling. Hopefully we will get all the classic games auto-remastered someday.
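MMPX's rule set is too large for a snippet, but the baseline it improves on, nearest-neighbour 2x magnification, is easy to sketch (a pure-Python toy):

```python
def nearest_neighbor_2x(img):
    # Replicate every pixel into a 2x2 block (rows and columns doubled).
    # MMPX instead applies shape-aware rules on the pixel neighbourhood
    # to keep lines, curves, and dithering patterns crisp.
    return [[px for px in row for _ in (0, 1)] for row in img for _ in (0, 1)]

big = nearest_neighbor_2x([[1, 2], [3, 4]])
```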
Publication: https://www.jcgt.org/published/0010/02/04/
Article: https://www.jcgt.org/published/0010/02/04/paper.pdf
#CV #superresolution #upscale
Habitat 2.0: Training home assistant robots with faster simulation and new benchmarks
Facebook released a new simulation platform for training robots: virtual robots in virtual environments, which can be replicas of real spaces. This work brings us closer to the domestic use of assistive robots.
Project website: https://ai.facebook.com/blog/habitat-20-training-home-assistant-robots-with-faster-simulation-and-new-benchmarks
Paper: https://ai.facebook.com/research/publications/habitat-2.0-training-home-assistants-to-rearrange-their-habitat
#Facebook #DigitalTwin #VR #RL #assistiverobots
Cloud-Native MLOps Framework
In this video, Artem Koval, Big Data and Machine Learning Practice Lead at Clear Scale, analyses the requirements for modern MLOps and the main trends: Human-Centered AI, Fairness, Explainability, Model Monitoring, and Human Augmented AI.
Link: https://youtu.be/K8s6dD7TPH4
YouTube
Artem Koval | Cloud-Native MLOps Framework
Data Fest Online 2021 https://fest.ai/2021/
ML REPA track https://ods.ai/tracks/ml-repa-df2021
Presentation: https://yadi.sk/i/a25573AB8IZUyw