Data Science by ODS.ai 🦜
First Telegram Data Science channel. Covering all technical and popular stuff about anything related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math and the applications of the former. To reach editors contact: @malev
Experimenting with CLIP+VQGAN to Create AI Generated Art

Tips and tricks on prompts to #vqclip. TLDR:

* Adding "rendered in unreal engine", "trending on artstation", "top of /r/art" improves image quality significantly (see the sketch below).
* Splitting a prompt with the pipe character into separate prompts, each of which the image is steered toward independently, may be counterproductive.
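For the first tip, a minimal sketch of what this looks like in practice (the base prompt and the joining format below are illustrative, not taken from the article):

```python
# Append quality-boosting modifiers to a base prompt before feeding it
# to the CLIP+VQGAN notebook. The base prompt is made up for illustration.
base_prompt = "a lighthouse in a storm"
modifiers = ["rendered in unreal engine", "trending on artstation", "top of /r/art"]

prompt = ", ".join([base_prompt] + modifiers)
print(prompt)
# a lighthouse in a storm, rendered in unreal engine, trending on artstation, top of /r/art
```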

Article: https://blog.roboflow.com/ai-generated-art/
Colab Notebook: https://colab.research.google.com/drive/1go6YwMFe5MX6XM9tv-cnQiSTU50N9EeT

#visualization #gan #generation #generativeart #vqgan #clip
Forwarded from Towards NLP🇺🇦
RoBERTa English Toxicity Classifier

We have released our fine-tuned RoBERTa-based toxicity classifier for the English language on 🤗:
https://huggingface.co/SkolkovoInstitute/roberta_toxicity_classifier

The model was trained on a merge of the English parts of the three Jigsaw datasets. The classifier performs strongly on the test set of the first Jigsaw competition, reaching an AUC-ROC of 0.98 and an F1-score of 0.76.
So, you can use it now conveniently for any of your research or industrial tasks☺️
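A minimal usage sketch with the 🤗 transformers library (the label names and their order are an assumption; check the model card for the exact id2label mapping):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "SkolkovoInstitute/roberta_toxicity_classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("You are a wonderful person!", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

# Two classes (roughly: neutral vs. toxic); see the model card for the exact labels.
print(dict(zip(model.config.id2label.values(), probs[0].tolist())))
```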
It's All in the Heads: Using Attention Heads as a Baseline for Cross-Lingual Transfer in Commonsense Reasoning

Researchers from #Yandex have discovered that the reasoning capabilities of cross-lingual Transformers are concentrated in a small set of attention heads. A new multilingual dataset could encourage research on commonsense reasoning in Russian, French, Chinese and other languages.

Link: https://research.yandex.com/news/a-few-attention-heads-for-reasoning-in-multiple-languages

ArXiV: https://arxiv.org/abs/2106.12066

#transformer #nlu #nlp
Forwarded from Silero News (Alexander)
We Have Published a Model For Text Repunctuation and Recapitalization

The model works with SINGLE sentences (albeit long ones) and:

- Inserts capital letters and basic punctuation marks (dot, comma, hyphen, question mark, exclamation mark, dash for Russian);
- Works for 4 languages (Russian, English, German, Spanish) and can be extended;
- By design is domain agnostic and is not based on any hard-coded rules;
- Has non-trivial metrics and succeeds in the task of improving text readability;

Links:

- Model repo - https://github.com/snakers4/silero-models#text-enhancement
- Colab notebook - https://colab.research.google.com/github/snakers4/silero-models/blob/master/examples_te.ipynb
- Russian article - https://habr.com/ru/post/581946/
- English article - https://habr.com/ru/post/581960/
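A quick usage sketch via torch.hub, following the entry-point names given in the repo README (treat the exact names and the `lan` argument as assumptions and check the README or Colab notebook if they have changed):

```python
import torch

# Loads the text-enhancement model; 'silero_te' and the returned apply_te helper
# are the entry points described in the repo README.
model, example_texts, languages, punct, apply_te = torch.hub.load(
    repo_or_dir="snakers4/silero-models", model="silero_te")

print(apply_te("can you please call me back tomorrow at ten am", lan="en"))
# Expected output: recapitalized, punctuated text.
```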
AI for Earth Monitoring course

The course is about how to apply data science to datasets of Earth images collected by satellites. It would benefit people interested in jumping into real-world applications and working with real Earth observation image data.

Start date: 18 Oct. 2021
Duration: 6 weeks
Cost: Free
Link: https://bit.ly/3lerMti
Entropy and complexity unveil the landscape of memes evolution

Sunday research about how memes evolved from 2011 to the present.
TLDR: memes are getting more complex and require more contextual knowledge to understand.

Link: https://www.nature.com/articles/s41598-021-99468-6
Data: https://github.com/cdcslab/MemesEvolution

#memes #openresearch
​​A Recipe For Arbitrary Text Style Transfer with Large Language Models

Text style transfer is rewriting text to incorporate additional or alternative stylistic elements while preserving the overall semantics and structure.

Large language models are trained only for continuation, but recently many approaches have shown that it is possible to perform other NLP tasks by expressing them as prompts that encourage the model to output the desired answer as the continuation.

The authors present a new prompting method (augmented zero-shot learning), which frames style transfer as a sentence rewriting task and requires only natural language instruction.

There are many great examples in the paper and on the project page - both formal and informal.
For example, "include the word 'oregano'" and "in the style of a pirate".
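A rough sketch of the augmented zero-shot prompt format described above: a few sentence-rewriting demonstrations in unrelated styles, followed by the target instruction. The demonstration sentences below are made up, and the exact template wording in the paper may differ:

```python
def build_prompt(sentence: str, instruction: str) -> str:
    # Demonstrations use styles unrelated to the target instruction.
    demos = [
        "Here is some text: {That is an ugly dress}. Here is a rewrite of the text, "
        "which is more positive: {That dress has a very distinctive look}.",
        "Here is some text: {The meeting went fine}. Here is a rewrite of the text, "
        "which is more melodramatic: {The meeting was an absolute triumph against all odds}.",
    ]
    query = (f"Here is some text: {{{sentence}}}. Here is a rewrite of the text, "
             f"which is {instruction}: {{")
    return "\n".join(demos + [query])

prompt = build_prompt("I ordered the pasta", "in the style of a pirate")
print(prompt)
# Feed `prompt` to a large language model; its continuation up to the closing "}"
# is taken as the rewritten sentence.
```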

Paper: https://arxiv.org/abs/2109.03910
Code: https://storage.googleapis.com/style-transfer-paper-123/index.html

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-llmdialog

#deeplearning #nlp #styletransfer
​​🔥Alias-Free Generative Adversarial Networks (StyleGAN3) release

The King is dead, long live the King! #StyleGAN2 was #SOTA and the default standard for generating images. #Nvidia has released an updated version, which will lead to more realistic images generated by the community.

Article: https://nvlabs.github.io/stylegan3/
GitHub: https://github.com/NVlabs/stylegan3
Colab: https://colab.research.google.com/drive/1BXNHZBai-pXtP-ncliouXo_kUiG1Pq7M

#GAN #dl
Dear advertisers who spammed @opendatasciencebot: you are kindly welcome to advertise on this channel for 1 ETH (~$4,300).

This might seem unreasonably overpriced, and don't be fooled: it is. We do not promote anything we wouldn't post here for free, because we are privileged and blessed to work in a sphere with :goodenough: compensation, which lets us put a higher price tag on it.

😌
Forwarded from Gradient Dude
​​On Neural Rendering

What is Neural Rendering? In a nutshell, neural rendering is when we take classic algorithms for image rendering from computer graphics and replace a part of the pipeline with neural networks (stupid, but effective). Neural rendering learns to render and represent a scene from one or more input photos by simulating the physical process of a camera that captures the scene.

A key property of 3D neural rendering is the disentanglement of the camera capturing process (i.e., the projection and image formation) and the representation of a 3D scene during training. That is, we learn an explicit (voxels, point clouds, parametric surfaces) or an implicit (signed distance function) representation of a 3D scene.

For training, we use observations of the scene from several camera viewpoints. The network is trained on these observations by rendering the estimated 3D scene from the training viewpoints and minimizing the difference between the rendered and observed images. This learned scene representation can then be rendered from any virtual camera in order to synthesize novel views. It is important for learning that the entire rendering pipeline is differentiable.
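To make that training recipe concrete, here is a toy, runnable sketch: a small MLP as an implicit scene representation, a crude differentiable "renderer" that averages predicted colors along rays, and an MSE loss against observed pixels. Everything here (the fake rays, the random target colors, the averaging instead of proper alpha compositing) is illustrative, not any specific paper's method:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Implicit scene representation: maps a 3D point to an RGB color.
scene = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(scene.parameters(), lr=1e-3)

def render(scene, ray_origins, ray_dirs, n_samples=16):
    # Sample points along each ray and average their predicted colors:
    # a stand-in for real volume/surface rendering, but fully differentiable.
    t = torch.linspace(0.1, 1.0, n_samples).view(1, n_samples, 1)
    points = ray_origins[:, None, :] + t * ray_dirs[:, None, :]
    return scene(points).mean(dim=1)          # (n_rays, 3)

for step in range(100):
    # In a real setup these come from a known camera pose and an observed photo.
    rays_o = torch.zeros(128, 3)
    rays_d = F.normalize(torch.randn(128, 3), dim=-1)
    target_rgb = torch.rand(128, 3)
    loss = F.mse_loss(render(scene, rays_o, rays_d), target_rgb)
    optimizer.zero_grad()
    loss.backward()                           # gradients flow through the renderer
    optimizer.step()
```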

You may have noticed that the topic of neural rendering, including all sorts of nerfs-schmerfs, is now a big hype in computer vision. You might say that neural rendering is very slow, and you'd be right. A typical training session on a small scene with ~50 input photos takes about 5.5 hours for the fastest method on a single GPU, but neural rendering methods have made significant progress in the last year, improving both fidelity and efficiency. To catch up on all the recent developments in this direction, I highly recommend reading this SOTA report "Advances in Neural Rendering".

The gif is from the Volume Rendering of Neural Implicit Surfaces paper.
READ//ABLE NLP competition(s)

Registration for the Satellites technology contests is open.
These contests are aimed at a wide range of junior developers interested in natural language processing (NLP).
The satellite contests are held on a schedule separate from the main READ//ABLE competition.
They are a series of text-analysis competitions, and participating teams will be able to use the results of their work as a basis for participating in the main competition.

Fund: ~14,000 USD per sub-competition
Deadline: December 1
Link: https://ai.upgreat.one/satellites/

#NLP #contest
​​Swin Transformer V2: Scaling Up Capacity and Resolution

The authors present techniques for scaling Swin Transformer up to 3 billion parameters and making it capable of training with images of up to 1,536×1,536 resolution.

Vision models have the following difficulties when trying to scale them up: instability issues at scale, high GPU memory consumption for high-resolution images, and the fact that downstream tasks usually require high-resolution images/windows, while the models are pretrained on lower resolutions and the transfer isn't always efficient.

The authors introduce the following techniques to circumvent those problems:
- a post-normalization technique and a scaled cosine attention approach to improve the stability of large vision models (see the sketch below);
- a log-spaced continuous position bias technique to effectively transfer models pre-trained on low-resolution images and windows to their higher-resolution counterparts.
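As a rough illustration of the scaled cosine attention idea (attention logits are cosine similarities between queries and keys divided by a learnable temperature, rather than scaled dot products), here is a hedged sketch; the official implementation lives in the linked repo:

```python
import torch
import torch.nn.functional as F

def scaled_cosine_attention(q, k, v, tau):
    # q, k, v: (batch, heads, tokens, head_dim); tau: learnable per-head temperature.
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    logits = (q @ k.transpose(-2, -1)) / tau   # cosine similarity scaled by 1/tau
    return logits.softmax(dim=-1) @ v

q = k = v = torch.randn(1, 4, 16, 32)
tau = torch.tensor(0.1)
out = scaled_cosine_attention(q, k, v, tau)    # (1, 4, 16, 32)
```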

In addition, they share how they were able to decrease GPU consumption significantly.

Swin Transformer V2 sets new records on four representative vision benchmarks: 84.0% top-1 accuracy on ImageNet-V2 image classification, 63.1 / 54.4 box / mask mAP on COCO object detection, 59.9 mIoU on ADE20K semantic segmentation, and 86.8% top-1 accuracy on Kinetics-400 video action classification.

Paper: https://arxiv.org/abs/2111.09883
Code: https://github.com/microsoft/Swin-Transformer

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-swin-v2

#deeplearning #cv #transformer
Acquisition of Chess Knowledge in AlphaZero

A 69-page paper analyzing how #AlphaZero plays chess. TLDR: many of the concepts self-learned by the neural network can be mapped to human concepts.

This means that, generally speaking, we can train neural networks to do some task and then learn something from them. The opposite is also true: we might imagine teaching neural networks some human concepts in order to make them more efficient.

Post: https://en.chessbase.com/post/acquisition-of-chess-knowledge-in-alphazero
Paper: https://arxiv.org/pdf/2111.09259.pdf

#RL
Augmented Reality for Haptic Teleoperation of a Robot with an Event-Based Soft Tactile Sensor

This paper presents a new teleoperation approach using an augmented reality-based interface combined with optimized haptic feedback to finely manipulate visually occluded objects.

The dynamic growth of emerging Augmented Reality (AR) interfaces holds high potential for robotic telemanipulation of objects under limited visibility conditions. In the user's view, the real-world environment is overlaid with virtual images of the robot end-effector and the object. To optimize the user experience in teleoperation, the visual augmentation is accompanied by a haptic stimulus: the rendered contact-force signal is transmitted both visually and haptically. The contact force is measured by an optical event-based tactile sensor (E-BTS) with a soft pad, vibrotactile stimuli are generated by a hand-held device (HHD), and the AR is projected in a head-mounted device (HMD).

The authors demonstrate their approach experimentally on a teleoperated robot arm puncturing an occluded non-rigid membrane placed in a vertical row with similar membranes. A comparative study with 10 subjects was carried out to quantify the impact of AR in a force-control task with a human in the control loop. The results of the experiment show a promising potential application to cable insertion in an industrial assembly task.

Video: YouTube

#AR
​​NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion

In this paper, Microsoft Research Asia and Peking University researchers share a unified multimodal (texts, images, videos, sketches) pre-trained model called NÜWA that can generate new or manipulate existing visual data for various visual synthesis tasks. Furthermore, they have designed a 3D transformer encoder-decoder framework with a 3D Nearby Attention (3DNA) mechanism to consider the nature of the visual data and reduce the computational complexity.

NÜWA achieves state-of-the-art results on text-to-image generation, text-to-video generation, video prediction, and several other tasks and demonstrates good results on zero-shot text-guided image and video manipulation tasks.
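As a toy illustration of the "nearby attention" idea (each token in a 3D spatio-temporal grid attends only to tokens in a small neighborhood), here is a hedged sketch; the mask-based implementation below is for clarity only and is not how the paper implements 3DNA:

```python
import torch

T, H, W, D = 4, 8, 8, 32                       # a small (time, height, width) token grid
x = torch.randn(T * H * W, D)

# Coordinates of every token in the grid.
coords = torch.stack(torch.meshgrid(
    torch.arange(T), torch.arange(H), torch.arange(W), indexing="ij"), dim=-1).reshape(-1, 3)

# Allow attention only within a 3x3x3 neighborhood (Chebyshev distance <= 1).
dist = (coords[:, None, :] - coords[None, :, :]).abs().max(dim=-1).values
mask = dist <= 1

# Single-head attention without projections, for brevity.
logits = (x @ x.T) / D ** 0.5
logits = logits.masked_fill(~mask, float("-inf"))
out = logits.softmax(dim=-1) @ x               # each token only sees its 3D neighborhood
```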

Paper: https://arxiv.org/abs/2111.12417
Code: https://github.com/microsoft/NUWA

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-nuwa

#deeplearning #cv #transformer #pretraining
Forwarded from Machinelearning
📹 End-to-End Referring Video Object Segmentation with Multimodal Transformers

Github: https://github.com/mttr2021/MTTR

Paper: https://arxiv.org/abs/2111.14821v1

Dataset: https://kgavrilyuk.github.io/publication/actor_action/

@ai_machinelearning_big_data
YaTalks — Yandex's conference for the IT community.

Yandex will host its traditional conference on 3-4 December (starting tomorrow). Registration is open.

One of the tracks is devoted to Machine/Deep Learning with a focus on content generation.

Featured reports:

📚 How to train a text model on a minimal corpus
🎙️ How Yandex.Browser Machine Translation works
🤖 Facial Expressions Animation

Conference website: https://yatalks.yandex.ru/?from=tg_opendatascience

#conference #mt #nlu