We got lots of fine messages and feedback; let's discuss the most notable papers and news of 2021 to assemble the Community 2021 WrapUp in our chat:
https://t.iss.one/datascience_chat
2021 WrapUps and Summaries
These are two technical posts summarizing the progress made during 2021.
Papers with Code 2021: A Year in Review, a post by Papers with Code
Medium: https://medium.com/paperswithcode/papers-with-code-2021-a-year-in-review-de75d5a77b8b
Post on KDnuggets
Post: https://www.kdnuggets.com/2021/12/2021-year-review-amazing-ai-papers.html
#summary
The Illustrated Retrieval Transformer
by @jayalammar
The latest batch of language models can be much smaller yet achieve GPT-3-like performance by being able to query a database or search the web for information. A key implication is that building larger and larger models is not the only way to improve performance.
https://jalammar.github.io/illustrated-retrieval-transformer/
#nlp #gpt3 #retro #deepmind
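To make the retrieval idea concrete, here is a minimal, hedged sketch of the general retrieval-augmented generation pattern: look up the nearest neighbours of a query in a text database and prepend them to the prompt. This is not DeepMind's RETRO (which feeds retrieved chunks into the model through cross-attention); the toy lexical similarity below stands in for the frozen-BERT embeddings the real system uses, and the database entries are made up for illustration.

```python
# Hedged sketch of generic retrieval augmentation, not DeepMind's RETRO itself.
database = [
    "RETRO retrieves from a database of roughly 2 trillion tokens.",
    "AlphaFold predicts a protein's 3D structure from its amino-acid sequence.",
    "GPT-3 has 175 billion parameters and no external memory.",
]

def similarity(a: str, b: str) -> float:
    """Toy word-overlap similarity (real systems use dense neural embeddings)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))

def retrieve(query: str, k: int = 1) -> list:
    """Return the k database entries most similar to the query."""
    return sorted(database, key=lambda doc: similarity(query, doc), reverse=True)[:k]

query = "How many tokens does the RETRO database contain?"
context = "\n".join(retrieve(query, k=1))
augmented_prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# A (much smaller) language model would now complete `augmented_prompt`,
# conditioning on retrieved facts instead of memorising them in its weights.
print(augmented_prompt)
```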
Hi!
We are the first Telegram Data Science channel.
The channel started as a collection of notable papers, news and releases shared with members of the Open Data Science (ODS) community. Over the years of just keeping the thing going, we have grown into an independent online media outlet supporting the principles of free and open access to information related to Data Science.
Ultimate Posts
* Where to start learning more about Data Science. https://github.com/open-data-science/ultimate_posts/tree/master/where_to_start
* @opendatascience channel audience research. https://github.com/open-data-science/ods_channel_stats_eda
Open Data Science
ODS.ai is an international community of people involved in Data Science in any capacity.
Website: https://ods.ai
Hashtags
Over the years we have accumulated a large collection of materials, most of them accompanied by hashtags.
#deeplearning #DL – posts about deep neural networks (> 1 layer)
#cv – posts related to Computer Vision: pictures and videos
#nlp #nlu – Natural Language Processing and Natural Language Understanding: texts and sequences
#audiolearning #speechrecognition – related to audio information processing
#ar – augmented-reality-related content
#rl – Reinforcement Learning (agents, bots and neural networks capable of playing games)
#gan #generation #generativeart #neuralart – neural art and image generation
#transformer #vqgan #vae #bert #clip #StyleGAN2 #Unet #resnet #keras #Pytorch #GPT3 #GPT2 – related to specific architectures or frameworks
#coding #CS – content related to software engineering
#OpenAI #microsoft #Github #DeepMind #Yandex #Google #Facebook #huggingface – hashtags related to particular companies
#productionml #sota #recommendation #embeddings #selfdriving #dataset #opensource #analytics #statistics #attention #machine #translation #visualization
Chats
- Data Science Chat https://t.iss.one/datascience_chat
- ODS Slack, via the invite form on the website
ODS resources
* Main website: https://ods.ai
* ODS Community Telegram Channel (in Russian): @ods_ru
* ML trainings Telegram Channel: @mltrainings
* ODS Community Twitter: https://twitter.com/ods_ai
Feedback and Contacts
You are welcome to reach the administration through the Telegram bot: @opendatasciencebot
Highly accurate protein structure prediction with AlphaFold
Anna Potapenko @DeepMind
Predicting a protein's structure from its primary sequence has been a grand challenge in biology for the past 50 years, holding the promise to bridge the gap between the pace of genomics discovery and resulting structural characterization. In this talk, we will describe work at DeepMind to develop AlphaFold, a new deep learning-based system for structure prediction that achieves high accuracy across a wide range of targets. We demonstrated our system in the 14th biennial Critical Assessment of Protein Structure Prediction (CASP14) across a wide range of difficult targets, where the assessors judged our predictions to be at an accuracy "competitive with experiment" for approximately 2/3rds of proteins. The talk will cover both the underlying machine learning ideas and the implications for biological research.
https://youtu.be/oD34Q1qeMII
Forwarded from Code Mining
Using public datasets in commercial software
Great paper: Can I use this publicly available dataset to build commercial AI software? Most likely not.
An extremely important question for the Data Science community: it turns out that not all publicly available datasets can be used to build commercial solutions.
The authors examine the license agreements of six popular Computer Vision datasets (CIFAR-10, ImageNet, Cityscapes, FFHQ, VGGFace2 and MS COCO) and conclude that, at the very least, models trained on these data cannot be commercialized.
An example of the results of the CIFAR-10 dataset's license analysis is shown in the screenshot.
It was logical to assume this, but for many community members it may come as the discovery of the century.
Prepared by @codemining for ods.ai. Subscribe!
There have been fewer posts than usual, as you might have noticed, only because the editor-in-chief's (my) attention has been directed to the DeFi space in general and NFTs in particular.
However, once involved with the beauty of AI and art, one can't just walk away, so I've been working on an NFT collection drop as an exercise to study the field. Because, as Feynman taught us, 'What I cannot create, I do not understand'.
After getting some early results with #StyleGAN2, I realised I needed to study how drop mechanics and everything else work, and asked a very talented friend to help generate the art for the collection. On my humble scale, he is a top 0.1% researcher in the field (though he might argue with that, or you might, but let's wait for his scientific papers published here to judge), so I was sure the art would be dope.
And it turned out so great that we decided to make a real NFT drop instead of a purely experimental one. Soon we will publish all the interesting technical details on the architecture and approach we used to generate beautiful toadz. So stay tuned for the post on how to generate astonishing art, and subscribe to the collection's Twitter so you don't miss any details.
Twitter: https://twitter.com/toadverseNFT
Scholarships in UAE
MBZUAI in Abu Dhabi is a modern and innovative university focused on Artificial Intelligence. All admitted candidates are granted a full scholarship, which includes tuition fees, accommodation, health insurance, and round-trip air transportation. Additionally, students get a monthly stipend of 8,000 DH (USD 2,200) for MSc and 10,000 DH (USD 2,700) for PhD.
So, if you are interested, you are welcome to join an MBZUAI info session with current MBZUAI students from Ukraine and Kazakhstan: Friday, Jan 21 at 5 pm Moscow time / 4 pm Kyiv time. (The session is today.)
MBZUAI website: https://mbzuai.ac.ae/
Info session Zoom: https://tinyurl.com/MBZUAI-Ukraine
What we know about the beginning of February
* AlphaCode -- a system that can compete at an average human level in competitive programming contests such as Codeforces; an exciting leap in AI problem-solving capabilities, combining many advances in machine learning
link: https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode
* Formal Mathematics Statement Curriculum Learning -- a neural network that solved two problems from the International Mathematical Olympiad
link: https://openai.com/blog/formal-math/
* "hyperdetailed render of my bizarro acid trip, detailed architectural render. I can't believe how detailed this is"
link: https://twitter.com/Somnai_dreams/status/1489108710962384896
Forwarded from Alexey Smirnov
AIModel-Mutator: Finding Vulnerabilities in TensorFlow
Another recent study on the security of machine learning models, and on how bugs in frameworks (such as TensorFlow) can affect it. For example, from 2019 to 2021 the number of CVEs for TF increased 15-fold.
Qian Feng, a senior security researcher at Baidu Security, talks about the important work they did with their colleagues.
As we know, it's pretty easy to corrupt models: they are distributed freely and without any additional checks. A short deep dive into the problem is in this video: https://www.youtube.com/watch?v=7QqbJRZ6CxU.
Prepared by @codemining.
GLIDE for image augmentation aka ToadVerse technical details
Technical details on how we used GLIDE for image augmentation.
Article Link: https://mirror.xyz/kefirski.eth/XN1cV27uHcAjN_tPSc_ckgSz4B3Nfh5l5HH9lRs9xEE
#GAN #StyleGAN2 #GLIDE #art #art_generation
Simple book about #ML – Machine Learning Simplified
The main purpose of the book is to build an intuitive understanding of how algorithms work through basic examples. In order to understand the presented material, it is enough to know basic mathematics and linear algebra.
After reading this book, you will know the basics of supervised learning, understand complex mathematical models, understand the entire pipeline of a typical ML project, and also be able to share your knowledge with colleagues from related industries and with technical professionals.
And for those who find the theoretical part insufficient, the book is supplemented with a GitHub repository containing Python implementations of all the methods and algorithms described in the chapters.
The book is absolutely free to read.
Link: themlsbook.com
#wheretostart #book
AlphaCode Explained: AI Code Generation
AlphaCode is DeepMind's new massive language model for generating code. It is similar to OpenAI Codex, except that the paper provides a bit more analysis. The field of NLP within AI and ML has exploded, with more papers coming out all the time. This video can help you understand how AlphaCode works and what some of the key takeaways are.
youtube: https://www.youtube.com/watch?v=t3Yh56efKGI
blog post: https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode
paper: https://storage.googleapis.com/deepmind-media/AlphaCode/competition_level_code_generation_with_alphacode.pdf
Forwarded from Machinelearning
OCTIS: Optimizing and Comparing Topic Models is Simple!
Github: https://github.com/mind-Lab/octis
Paper: https://arxiv.org/abs/2202.07631v1
Dataset: https://paperswithcode.com/dataset/20-newsgroups
@ai_machinelearning_big_data
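A quick usage sketch, based on the project's README: train an LDA baseline on the bundled 20NewsGroup dataset and score it with a topic-diversity metric. Treat the exact class and method names (Dataset.fetch_dataset, LDA.train_model, TopicDiversity.score) as assumptions to verify against the installed OCTIS version.

```python
# Hedged sketch based on the OCTIS README; verify the API against your installed version.
# pip install octis
from octis.dataset.dataset import Dataset
from octis.models.LDA import LDA
from octis.evaluation_metrics.diversity_metrics import TopicDiversity

# Load one of the preprocessed datasets that ships with OCTIS.
dataset = Dataset()
dataset.fetch_dataset("20NewsGroup")

# Train a simple LDA baseline; the output contains the topics and topic-document matrices.
model = LDA(num_topics=20)
model_output = model.train_model(dataset)

# Score how diverse the top-10 words of the discovered topics are (1.0 = no overlap).
metric = TopicDiversity(topk=10)
print("topic diversity:", metric.score(model_output))
```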
Forwarded from Silero News (Alexander)
One Voice Detector to Rule Them All
A brief English article about our VAD got released on The Gradient!
Please follow the link to learn:
- Which values we pursued;
- Why we decided to create our own VAD;
- Which criteria and metrics we optimized;
- A brief overview of what is available in general;
- How it compares with well-established solutions of a similar class;
Links:
- The article https://thegradient.pub/one-voice-detector-to-rule-them-all/
- The VAD is always available on GitHub (please give us a star) here - https://github.com/snakers4/silero-vad
PS
- New features will probably be reserved for later quarters, but you can vote here
- You can also find a Russian article here
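For reference, a minimal usage sketch along the lines of the repository's README at the time of posting; argument names and the set of returned utilities may differ in newer releases, so check the repo before relying on it.

```python
# Hedged sketch following the silero-vad README; check the repo for the current API.
import torch

model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad', model='silero_vad')
(get_speech_timestamps, save_audio, read_audio, VADIterator, collect_chunks) = utils

# Read a mono 16 kHz wav and get start/end sample indices of the detected speech segments.
wav = read_audio('example.wav', sampling_rate=16000)
speech_timestamps = get_speech_timestamps(wav, model, sampling_rate=16000)
print(speech_timestamps)  # e.g. [{'start': 1024, 'end': 20480}, ...]
```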
Forwarded from Self Supervised Boy
How Useful is Self-Supervised Pretraining for Visual Tasks?
A relatively old paper (CVPR 2020) by the fast-moving standards of our field. Nevertheless, it has a pair of practical takeaways.
The authors created a synthetic dataset with several degrees of freedom to vary difficulty, ranging from almost monochrome objects to randomized textures and object positioning in the image.
The goal was to compare how well different self-supervised approaches help tuning for different downstream tasks, from classification to depth estimation.
Two practical takeaways are:
1. The utility of a self-supervised method depends wildly on the task, the amount of labeled data, and even the data complexity.
2. The linear evaluation score, so popular in papers, has almost no correlation with actual fine-tuning results.
The authors found that self-supervised pre-training brings no improvement when lots of labeled data are available (which has become fairly well known since then). Based on this, they hypothesise that the improvement from SSL pre-training acts more like regularization than optimization: SSL pre-training helps to find a wider optimum, not a better one. Though, to support this claim, some kind of loss-landscape investigation would be more convincing.
Source: here
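To make takeaway 2 concrete, here is a hedged PyTorch sketch (ours, not code from the paper) of the two evaluation protocols being compared: linear evaluation freezes the pretrained backbone and trains only a linear head, while fine-tuning updates every parameter. The ResNet-50 backbone and the optimizer settings are illustrative assumptions.

```python
# Hedged illustration of linear evaluation vs. full fine-tuning (not code from the paper).
import torch
import torchvision

num_classes = 10
backbone = torchvision.models.resnet50()   # assume SSL-pretrained weights get loaded here
backbone.fc = torch.nn.Identity()          # strip the supervised classification head
head = torch.nn.Linear(2048, num_classes)  # 2048 = ResNet-50 feature dimension

# Protocol 1: linear evaluation -- freeze the backbone, train only the linear head.
for p in backbone.parameters():
    p.requires_grad = False
linear_eval_opt = torch.optim.SGD(head.parameters(), lr=0.1, momentum=0.9)

# Protocol 2: fine-tuning -- every parameter is trainable (what the paper says actually matters).
for p in backbone.parameters():
    p.requires_grad = True
finetune_opt = torch.optim.SGD(
    list(backbone.parameters()) + list(head.parameters()), lr=0.01, momentum=0.9
)

# In both protocols the forward pass is the same: logits = head(backbone(images)).
images = torch.randn(4, 3, 224, 224)
logits = head(backbone(images))
print(logits.shape)  # torch.Size([4, 10])
```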
Forwarded from Towards NLP
Graph-based models are quite important for many tasks in NLP. Here is an overview from Michael Bronstein of what he expects for the Graph ML field in the upcoming year:
1. Geometry becomes increasingly important in ML.
2. Message passing is still the dominant paradigm in GNNs (a minimal sketch of one message-passing step follows the list).
3. Differential equations give rise to new GNN architectures.
4. Old ideas from Signal Processing, Neuroscience, and Physics get a new life.
5. Modeling complex systems requires going beyond graphs.
6. Reasoning, axiomatisation, and generalisation are still big open questions in Graph ML.
7. Graphs become increasingly popular in Reinforcement Learning, but probably still have a way to go.
8. AlphaFold 2 is a triumph of Geometric ML and a paradigm shift in structural biology.
9. Drug discovery and design benefits from GNNs and their confluence with Transformers.
10. AI-first drug discovery is increasingly using Geometric and Graph ML.
11. Quantum ML benefits from graph-based methods.
[link]
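As an aside from us (not part of the original overview), here is what one message-passing step from point 2 amounts to, sketched in plain PyTorch on a toy graph so that no GNN library is assumed.

```python
# Hedged toy example of a single message-passing step (our sketch, not from the overview).
import torch

num_nodes, dim = 4, 8
x = torch.randn(num_nodes, dim)                          # node features
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 0]])   # directed edges as (source, destination)

W_msg = torch.nn.Linear(dim, dim)    # transforms messages sent along edges
W_self = torch.nn.Linear(dim, dim)   # transforms each node's own state

def message_passing_step(x):
    messages = W_msg(x[edges[:, 0]])                     # one message per edge, from the source node
    aggregated = torch.zeros_like(x)
    aggregated.index_add_(0, edges[:, 1], messages)      # sum the messages arriving at each node
    return torch.relu(W_self(x) + aggregated)            # update node states

x = message_passing_step(x)
print(x.shape)  # torch.Size([4, 8])
```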
Detection of COVID-19 using multimodal data from a wearable device: results from the first TemPredict Study
Some time ago, in a different world, one of the channel editors shared permission to use data from the sleep & activity tracker Oura Ring to develop an algorithm for COVID-19 prediction.
Results of this study continue to arrive. Today the team shared the second manuscript from the first TemPredict Study in Nature Scientific Reports. This manuscript details an algorithm designed to detect COVID-19 using data from the Oura Ring. Algorithm publication: www.nature.com/articles/s41598-022-07314-0
The first publication from the first TemPredict Study will continue to be available online for you to access at any time, at this link: https://www.nature.com/articles/s41598-020-78355-6
The first publication from the second TemPredict Study (correlations between data from the Oura Ring and data from a LabCorp antibody blood test) will also continue to be available online for you to access at any time, at this link: https://www.mdpi.com/2076-393X/10/2/264
That's the power of international collaboration.
#oura #covid #biolearning #medical #health