Tokyo Olympics Alternative medals table
Article on how teams performed relative to the behavior expected by a regression model.
Link: https://ig.ft.com/tokyo-olympics-alternative-medal-table/
Virtual fitting room launched by our friends
#in3D launched a 3D virtual fitting room with the Replicant fashion house. 30+ designers, 60+ looks.
A great example of an AI-driven product!
Desktop: https://www.replicant.fashion/digitaltwin
iPhone: https://apps.apple.com/us/app/in3d-3d-body-scanning/id1467153183
#aiproduct #fitting #metaverse
Domain-Aware Universal Style Transfer
Style transfer aims to reproduce content images with the styles from reference images. Modern style transfer methods can successfully apply arbitrary styles to images in either an artistic or a photo-realistic way. However, due to their structural limitations, they can do it only within a specific domain: the degrees of content preservation and stylization depend on a predefined target domain. As a result, both photo-realistic and artistic models have difficulty performing the desired style transfer for the other domain.
The authors propose Domain-aware Style Transfer Networks (DSTN) that transfer not only the style but also the property of the domain (i.e., domainness) from a given reference image. Furthermore, they design a novel domainness indicator (based on texture and structural features) and introduce a unified framework with domain-aware skip connections to adaptively transfer the stroke and palette to the input contents, guided by the domainness indicator.
Extensive experiments validate that their model produces better qualitative results and outperforms previous methods in terms of proxy metrics on both artistic and photo-realistic stylizations.
Paper: https://arxiv.org/abs/2108.04441
Code: https://github.com/Kibeom-Hong/Domain-Aware-Style-Transfer
A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-dstn
#deeplearning #cv #styletransfer
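The domainness-guided blending can be sketched as follows — a toy NumPy illustration, not the authors' implementation; the AdaIN-like artistic path and the mean-shift photo-realistic path are deliberate simplifications:

```python
import numpy as np

def domain_aware_skip(content_feat, style_feat, domainness):
    """Blend an artistic and a photo-realistic transfer path by a scalar
    domainness in [0, 1] (1 = photo-realistic, 0 = artistic).
    Toy stand-in for DSTN's domain-aware skip connection."""
    # Artistic path: match the style's feature statistics (AdaIN-like).
    c_mean, c_std = content_feat.mean(), content_feat.std() + 1e-5
    s_mean, s_std = style_feat.mean(), style_feat.std() + 1e-5
    artistic = (content_feat - c_mean) / c_std * s_std + s_mean
    # Photo-realistic path: keep content structure, shift the palette only.
    photo = content_feat + (s_mean - c_mean)
    # The domainness indicator decides how much of each path to use.
    return domainness * photo + (1.0 - domainness) * artistic
```

In the real model the indicator is predicted per-image from texture and structural features rather than passed in by hand.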
14 seconds of #Nvidia CEO's April keynote speech were generated in silico
Why this is important: demand for the 3080 and newer GPU models might also be pumped by CGI artists and researchers working in VR / AR tech.
And this raises the bar for #speechsynthesis / #speechgeneration, and definitely for the rendering of photorealistic pictures.
YouTube making of video: https://www.youtube.com/watch?v=1qhqZ9ECm70&t=1430s
Vice article on the subject: https://www.vice.com/en/article/88nbpa/nvidia-reveals-its-ceo-was-computer-generated-in-keynote-speech
Program Synthesis with Large Language Models
The paper compares models used for program synthesis in general-purpose programming languages against two new benchmarks, MBPP (Mostly Basic Programming Problems) and MathQA-Python, in both the few-shot and fine-tuning regimes.
MBPP contains 974 programming tasks designed to be solvable by entry-level programmers. The MathQA-Python benchmark contains 23,914 problems that evaluate the ability of the models to synthesize code from more complex text.
The largest fine-tuned model achieves 83.8% accuracy on the latter benchmark.
Why this is interesting: better models for code / problem understanding mean improved search for coding tasks and improvements to coding-assistant projects like #TabNine or #Copilot
ArXiV: https://arxiv.org/abs/2108.07732
#DL #NLU #codewritingcode #benchmark
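Evaluation on MBPP is functional: a sampled program counts as solved only if it passes the task's assert statements. A minimal sketch of that check — the task and tests below are hypothetical illustrations, not items from the benchmark:

```python
def passes_tests(program_src: str, test_srcs: list) -> bool:
    """MBPP-style functional check: execute a candidate program, then run
    its assert statements; the sample is solved only if all of them pass."""
    env = {}
    try:
        exec(program_src, env)   # define the candidate function
        for t in test_srcs:
            exec(t, env)         # each test is a bare assert
        return True
    except Exception:
        return False

# Hypothetical MBPP-like task: "Write a function to reverse a string."
candidate = "def reverse_string(s):\n    return s[::-1]"
tests = ["assert reverse_string('abc') == 'cba'",
         "assert reverse_string('') == ''"]
solved = passes_tests(candidate, tests)
```

Pass@k metrics are then computed by running this check over k samples per task.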
Structure-aware Interactive Graph Neural Networks for the Prediction of Protein-Ligand Binding Affinity
#Baidu research proposed a structure-aware interactive graph neural network ( #SIGN ) to better learn representations of protein-ligand complexes, since drug discovery relies on the successful prediction of protein-ligand binding affinity.
Link: https://dl.acm.org/doi/10.1145/3447548.3467311
#biolearning #deeplearning
Forwarded from Находки в опенсорсе (Finds in Open Source)
⚡️Breaking news!
Big project, first public release! From the creator of FastAPI and Typer: SQLModel.
SQLModel is a library for interacting with SQL databases from Python code, with Python objects. It is designed to be intuitive, easy to use, highly compatible, and robust.
SQLModel is based on Python type annotations, and powered by Pydantic and SQLAlchemy.
SQLModel is, in fact, a thin layer on top of Pydantic and SQLAlchemy, carefully designed to be compatible with both.
The key features are:
- Intuitive to write: Great editor support. Completion everywhere. Less time debugging. Designed to be easy to use and learn. Less time reading docs.
- Easy to use: It has sensible defaults and does a lot of work underneath to simplify the code you write.
- Compatible: It is designed to be compatible with FastAPI, Pydantic, and SQLAlchemy.
- Extensible: You have all the power of SQLAlchemy and Pydantic underneath.
- Short: Minimize code duplication. A single type annotation does a lot of work. No need to duplicate models in SQLAlchemy and Pydantic.
https://github.com/tiangolo/sqlmodel
Forwarded from Silero News (Alexander)
New German V4 Model and English V5 Models
New and improved models in Silero Models! Community edition versions are available here: https://github.com/snakers4/silero-models
Huge performance improvements for two new models:
- English V5 (quality)
- German V3 (quality)
The models are currently available in the following flavors:
- English V5: jit (small), onnx (small), jit_q (small, quantized), jit_xlarge, onnx_xlarge
- German V3: jit_large, onnx_large
The xsmall model family for English is on the way.
The quality growth visualization:
iRobot with poop detection
iRobot (a company building house-cleaning robots) had a problem with robots running over pet poop. So they built a special detection model, along with physical models of poop to test the product.
iRobot official YouTube: https://www.youtube.com/watch?v=2rj3VUmRNnU
TechCrunch: https://techcrunch.com/2021/09/09/actuator-4/
#aiproduct #marketinggurus
New attempt at proving P ≠ NP
Martin Dowd published a 5-page paper claiming to contain a proof that P ≠ NP. This is a fundamental question, comparing quickly checkable problems against quickly solvable ones.
Basically, proving P != NP would mean unlimited demand for AlphaGo-like solutions in different spheres, because it would establish (as a scientific fact) that there are problems without fast [enough] analytical solutions.
ResearchGate: https://www.researchgate.net/publication/354423778_P_Does_Not_Equal_NP
Wiki on the problem: https://en.wikipedia.org/wiki/P_versus_NP_problem
#fundamental #pnenp #computerscience
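The "quickly checkable vs. quickly solvable" distinction is easy to see on subset sum, a classic NP-complete problem: verifying a proposed answer (a certificate) takes polynomial time, while the only known general solvers take exponential time in the worst case:

```python
from collections import Counter
from itertools import combinations

def verify_subset_sum(nums, target, certificate):
    """Verification is cheap: sum the certificate and confirm it really is
    a sub-multiset of nums. This is the 'quickly checkable' side of NP."""
    cert, pool = Counter(certificate), Counter(nums)
    return sum(certificate) == target and all(cert[x] <= pool[x] for x in cert)

def solve_subset_sum(nums, target):
    """Finding a solution is hard: brute force over all 2^n subsets.
    P != NP would mean no polynomial-time algorithm exists for this."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None
```

Checking a certificate is linear in its length, while the search space the solver walks doubles with every extra element.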
Counting Happiness and Where it Comes From
Researchers asked 10,000 Mechanical Turk participants to name 10 things that make them happy, resulting in the creation of HappyDB.
And since that DB is open, Nathan Yau analyzed and visualized it in terms of subjects and actions, producing an interesting visualization.
Hope that daily reading of @opendatascience makes you at least content, if not happy.
Happiness reasons visualization link: https://flowingdata.com/2021/07/29/counting-happiness
HappyDB link: https://megagon.ai/projects/happydb-a-happiness-database-of-100000-happy-moments/
#dataset #emotions #visualization
SwinIR: Image Restoration Using Swin Transformer
Image restoration is a long-standing low-level vision problem that aims to restore high-quality images from low-quality images (e.g., downscaled, noisy, and compressed images). While state-of-the-art image restoration methods are based on convolutional neural networks, few attempts have been made with Transformers, which show impressive performance on high-level vision tasks.
The authors propose SwinIR, a model based on Swin Transformers. Experimental results demonstrate that SwinIR outperforms state-of-the-art methods on different tasks (image super-resolution, image denoising, and JPEG compression artifact reduction) by up to 0.14~0.45 dB, while the total number of parameters can be reduced by up to 67%.
Paper: https://arxiv.org/abs/2108.10257
Code: https://github.com/JingyunLiang/SwinIR
A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-swinir
#deeplearning #cv #transformer #superresolution #imagerestoration
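The dB figures above refer to PSNR, the standard image-restoration metric; for context, a minimal implementation of it:

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and its
    restored version; restoration papers report gains like +0.14~0.45 dB
    on this metric. Higher is better."""
    diff = reference.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Because the scale is logarithmic, a fraction of a dB corresponds to a measurable drop in mean squared error.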
Summarizing Books with Human Feedback
#OpenAI fine-tuned #GPT3 to summarize books well enough to be human-readable. Main approach: recursively split text into parts and then meta-summarize summaries.
This is really important, because once there is a great summarization #SOTA, we won't need editors to write posts for you. And researchers will ultimately have some assistance in interpreting models' results.
BlogPost: https://openai.com/blog/summarizing-books/
ArXiV: https://arxiv.org/abs/2109.10862
#summarization #NLU #NLP
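The recursive scheme can be sketched like this — a toy version where `summarize` stands in for the fine-tuned GPT-3 model:

```python
def recursive_summarize(text, summarize, chunk_size=1000):
    """Split the text into chunks, summarize each, then summarize the
    concatenated summaries, recursing until the result fits in one chunk.
    `summarize` is any shortening function (a placeholder for the model)."""
    if len(text) <= chunk_size:
        return summarize(text)
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    merged = " ".join(summarize(c) for c in chunks)
    return recursive_summarize(merged, summarize, chunk_size)
```

The recursion terminates as long as `summarize` reliably shrinks its input; OpenAI additionally trains the model with human feedback at every level of the tree.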
AI Generated Pokemon Sprites with GPT-2
The author trained a #GPT2 model to generate #pokemon sprites, encoding them as lines of characters (including color). Surprisingly, the results were decent, which leaves us wondering whether #GPT3 results would be better.
YouTube: https://www.youtube.com/watch?v=Z9K3cwSL6uM
GitHub: https://github.com/MatthewRayfield/pokemon-gpt-2
Article: https://matthewrayfield.com/articles/ai-generated-pokemon-sprites-with-gpt-2/
Example: https://matthewrayfield.com/projects/ai-pokemon/
#NLU #NLP #generation #neuralart
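The encoding idea — turning a pixel grid into lines of characters a language model can consume — can be sketched like this (the palette and character choices are illustrative, not the author's exact scheme):

```python
# Map a small color palette to printable characters so a sprite becomes
# plain text suitable for training a character-level language model.
PALETTE = {0: ".", 1: "#", 2: "o", 3: "*"}  # color index -> character
INVERSE = {ch: idx for idx, ch in PALETTE.items()}

def sprite_to_text(grid):
    """Encode a 2D grid of color indices as newline-separated text."""
    return "\n".join("".join(PALETTE[px] for px in row) for row in grid)

def text_to_sprite(text):
    """Decode generated text back into a grid of color indices."""
    return [[INVERSE[ch] for ch in line] for line in text.splitlines()]

sprite = [[0, 1, 1, 0],
          [1, 2, 2, 1],
          [0, 3, 3, 0]]
encoded = sprite_to_text(sprite)
```

Generation then becomes ordinary text sampling, with decoding applied to whatever the model emits.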
This Olesya doesn't exist
The author trained a StyleGAN2-ADA network on 2445 personal photos to generate a new photo on the site each time there is a refresh or click.
Website: https://thisolesyadoesnotexist.glitch.me
Olesya's personal site: https://monolesan.com
#StyleGAN2 #StyleGAN2ADA #generation #thisXdoesntexist
Real numbers, data science and chaos: How to fit any dataset with a single parameter
A gentle reminder that the unit of information is the bit, and that a single parameter can contain more information than multiple parameters.
ArXiV: https://arxiv.org/abs/1904.12320
#cs #bits #math
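The paper's actual construction packs the whole dataset into the binary expansion of one real parameter (via f_α(x) = sin²(2^(xτ) arcsin √α)); a toy decimal-digit version shows the same information-packing idea:

```python
def encode(samples, digits=3):
    """Pack values from [0, 1) into one integer 'parameter' by
    concatenating their leading decimal digits (a toy, finite-precision
    stand-in for the paper's construction)."""
    alpha = 0
    for s in samples:
        alpha = alpha * 10**digits + int(s * 10**digits)
    return alpha  # a single number carrying every sample

def decode(alpha, n_samples, digits=3):
    """Recover the samples back from the single parameter."""
    out = []
    for _ in range(n_samples):
        out.append((alpha % 10**digits) / 10**digits)
        alpha //= 10**digits
    return out[::-1]
```

The "single parameter" carries as many bits as all the samples combined — which is exactly why parameter count alone says nothing about model capacity.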
Interesting idea for using GitHub panes for data #visualization
Source: https://twitter.com/levelsio/status/1443133071230791680
Live: https://nomadlist.com/open
Experimenting with CLIP+VQGAN to Create AI Generated Art
Tips and tricks on prompts for #vqclip. TL;DR:
* Adding "rendered in unreal engine", "trending on artstation", or "top of /r/art" improves image quality significantly.
* Using the pipe to split a prompt into separate prompts that are steered towards independently may be counterproductive.
Article: https://blog.roboflow.com/ai-generated-art/
Colab Notebook: https://colab.research.google.com/drive/1go6YwMFe5MX6XM9tv-cnQiSTU50N9EeT
#visualization #gan #generation #generativeart #vqgan #clip
Forwarded from Towards NLP 🇺🇦
RoBERTa English Toxicity Classifier
We have released our fine-tuned RoBERTa-based toxicity classifier for the English language on 🤗 Hugging Face:
https://huggingface.co/SkolkovoInstitute/roberta_toxicity_classifier
The model was trained on the merged English parts of the three Jigsaw datasets. The classifiers perform closely on the test set of the first Jigsaw competition, reaching an AUC-ROC of 0.98 and an F1-score of 0.76.
So you can now conveniently use it for any of your research or industrial tasks ☺️