Using AI to generate recipes from food images
Facebook developed an image-to-recipe architecture.
Paper: https://research.fb.com/publications/inverse-cooking-recipe-generation-from-food-images/
Code: https://github.com/facebookresearch/inversecooking
Link: https://ai.facebook.com/blog/inverse-cooking/
#CV #DL
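The paper describes a two-stage pipeline: an image encoder, an ingredient predictor, and a transformer decoder that generates the cooking instructions conditioned on both the image and the predicted ingredients. A rough PyTorch sketch of that idea follows; layer names and sizes are illustrative assumptions, not the released inversecooking model (causal masking and the set-prediction details are omitted for brevity).
```python
import torch
import torch.nn as nn
from torchvision import models

class Image2Recipe(nn.Module):
    """Illustrative two-stage sketch: image -> ingredients -> instructions."""

    def __init__(self, n_ingredients=1000, vocab_size=20000, d_model=512):
        super().__init__()
        resnet = models.resnet50()
        self.image_encoder = nn.Sequential(*list(resnet.children())[:-1])
        self.ingredient_head = nn.Linear(2048, n_ingredients)   # multi-label logits
        self.ingredient_proj = nn.Linear(n_ingredients, d_model)
        self.image_proj = nn.Linear(2048, d_model)
        self.word_embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.instruction_decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.word_head = nn.Linear(d_model, vocab_size)

    def forward(self, images, instruction_tokens):
        feats = self.image_encoder(images).flatten(1)            # (B, 2048)
        ingredient_logits = self.ingredient_head(feats)          # (B, n_ingredients)
        # Condition the text decoder on image features and predicted ingredients.
        memory = torch.stack(
            [self.image_proj(feats),
             self.ingredient_proj(torch.sigmoid(ingredient_logits))], dim=1)
        out = self.instruction_decoder(self.word_embed(instruction_tokens), memory)
        return ingredient_logits, self.word_head(out)
```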
AlphaStar: Mastering the Game of StarCraft II
Talk by David Silver: https://slideslive.com/38916905/alphastar-mastering-the-game-of-starcraft-ii
#ArtificialIntelligence #DeepLearning #ReinforcementLearning
The Challenge of Open Source MT
https://www.cms-connected.com/News-Archive/June-2019/The-Challenge-of-Open-Source-Machine-Translation#.XQjjqDevkh4.twitter
Stand-Alone Self-Attention in Vision Models
Ramachandran et al.: https://arxiv.org/abs/1906.05909
#ArtificialIntelligence #DeepLearning #MachineLearning
From CVPR 2019: Turning Doodles into Stunning, Photorealistic Landscapes.
NVIDIA research harnesses generative adversarial networks to create highly realistic scenes.
Artists can use paintbrush and paint-bucket tools to design their own landscapes with labels like river, rock, and cloud.
https://www.profillic.com/paper/arxiv:1903.07291
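The core mechanism in that paper (arXiv:1903.07291, known as SPADE) is a spatially-adaptive normalization layer: the painted label map, rather than a global style vector, produces the per-pixel scale and shift applied after normalization. A minimal PyTorch sketch of such a layer, with illustrative names and sizes that are not NVIDIA's code:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatiallyAdaptiveNorm(nn.Module):
    """Sketch of SPADE-style normalization conditioned on a semantic label map."""

    def __init__(self, feature_channels, n_labels, hidden=128):
        super().__init__()
        self.norm = nn.BatchNorm2d(feature_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(n_labels, hidden, 3, padding=1), nn.ReLU()
        )
        self.gamma = nn.Conv2d(hidden, feature_channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, feature_channels, 3, padding=1)

    def forward(self, x, label_map):
        # label_map: one-hot semantic map, (B, n_labels, H', W')
        normalized = self.norm(x)
        seg = F.interpolate(label_map, size=x.shape[2:], mode="nearest")
        h = self.shared(seg)
        return normalized * (1 + self.gamma(h)) + self.beta(h)
```
In practice, each class an artist paints (river, rock, cloud, ...) becomes one channel of the one-hot label_map, and a layer like this replaces the normalization at every resolution of the generator.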
STEAL, a new algorithm developed by NVIDIA Research and being presented this week at #CVPR2019, automatically refines the boundaries of objects in training datasets, making them more exact.
Link to the blog: https://news.developer.nvidia.com/nvidia-research-released-at-cvpr-helps-developers-create-better-visual-datasets/
Best Paper Award at the AI for Social Good Workshop at #ICML2019: https://medium.com/@jasonphang/deep-neural-networks-improve-radiologists-performance-in-breast-cancer-screening-565eb2bd3c9f
[code] https://github.com/nyukat/breast_cancer_classifier
[preprint] https://arxiv.org/pdf/1903.08297.pdf
[data specs] https://cs.nyu.edu/~kgeras/reports/datav1.0.pdf
[ICML '19] https://aiforsocialgood.github.io/icml2019/acceptedpapers.htm
New Paper:
Stand-Alone Self-Attention in Vision Models
https://arxiv.org/abs/1906.05909
Can attention work as a stand-alone primitive for vision models?
We develop a pure self-attention model by replacing the spatial convolutions in a ResNet with a simple, local self-attention layer.
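A minimal sketch of what such a layer can look like, assuming single-head attention over a k x k neighbourhood and omitting the relative position embeddings and multi-head structure used in the paper (all names are illustrative, not the authors' code):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalSelfAttention2d(nn.Module):
    """Sketch of a local self-attention layer as a drop-in for a spatial conv."""

    def __init__(self, in_channels, out_channels, kernel_size=7):
        super().__init__()
        self.k = kernel_size
        self.query = nn.Conv2d(in_channels, out_channels, 1)
        self.key = nn.Conv2d(in_channels, out_channels, 1)
        self.value = nn.Conv2d(in_channels, out_channels, 1)

    def forward(self, x):
        b, _, h, w = x.shape
        pad = self.k // 2
        q = self.query(x)                                         # (b, c, h, w)
        k = F.unfold(self.key(x), self.k, padding=pad)            # (b, c*k*k, h*w)
        v = F.unfold(self.value(x), self.k, padding=pad)          # (b, c*k*k, h*w)
        c = q.shape[1]
        k = k.view(b, c, self.k * self.k, h * w)
        v = v.view(b, c, self.k * self.k, h * w)
        q = q.view(b, c, 1, h * w)
        # Each pixel attends over its k x k neighbourhood.
        attn = torch.softmax((q * k).sum(1, keepdim=True) / c ** 0.5, dim=2)
        out = (attn * v).sum(2)                                   # (b, c, h*w)
        return out.view(b, c, h, w)
```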
MIT neuroscientists have performed the most rigorous testing yet of computational models that mimic the brain’s visual cortex.
Using their current best model of the brain’s visual neural network, the researchers designed a new way to precisely control individual neurons and populations of neurons in the middle of that network. In an animal study, the team then showed that the information gained from the computational model enabled them to create images that strongly activated specific brain neurons of their choosing.
The findings suggest that the current versions of these models are similar enough to the brain that they could be used to control brain states in animals. The study also helps to establish the usefulness of these vision models, which have generated vigorous debate over whether they accurately mimic how the visual cortex works, says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds, and Machines, and the senior author of the study.
Full article: https://news.mit.edu/2019/computer-model-brain-visual-cortex-0502
Science paper: https://science.sciencemag.org/content/364/6439/eaav9436
Biorxiv (open access): https://www.biorxiv.org/content/10.1101/461525v1
https://t.iss.one/ArtificialIntelligenceArticles
Can deep neural networks help cognitive scientists understand the brain?
Radoslaw M. Cichy & Daniel Kaiser discuss the value of DNNs as scientific models in a new TICS Opinion piece.
https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(19)30034-8 https://t.iss.one/ArtificialIntelligenceArticles
The future depends on some graduate student who is deeply suspicious of everything I have said. - Geoffrey Hinton https://t.iss.one/ArtificialIntelligenceArticles
Long Short-Term Memory: From Zero to Hero with PyTorch
https://blog.floydhub.com/long-short-term-memory-from-zero-to-hero-with-pytorch/
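For orientation, here is a minimal self-contained LSTM classifier in PyTorch in the spirit of that tutorial; sizes and names are illustrative, not taken from the post:
```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Embed token ids, run an LSTM, classify from the final hidden state."""

    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, n_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, n_classes)

    def forward(self, token_ids):
        x = self.embedding(token_ids)          # (B, T, embed_dim)
        _, (h_n, _) = self.lstm(x)             # h_n: (1, B, hidden_dim)
        return self.fc(h_n[-1])                # (B, n_classes)

model = LSTMClassifier()
logits = model(torch.randint(0, 10000, (4, 32)))   # batch of 4 sequences, length 32
```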
Efficient Exploration via State Marginal Matching
Lee et al.
Blog: https://sites.google.com/view/state-marginal-matching
Paper: https://arxiv.org/abs/1906.05274
Code: https://github.com/RLAgent/state-marginal-matching
#ArtificialIntelligence #MachineLearning #ReinforcementLearning
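The paper's objective is to match the policy's state marginal ρ_π(s) to a target distribution p*(s), which reduces to maximizing an intrinsic reward of the form log p*(s) - log ρ_π(s). A rough illustration of that reward, using a kernel density estimate in place of the learned density models from the paper (function names are hypothetical):
```python
import numpy as np
from scipy.stats import gaussian_kde

def smm_intrinsic_reward(visited_states, target_log_density, query_states):
    """Hypothetical sketch of r(s) = log p*(s) - log rho_pi(s)."""
    visited = np.asarray(visited_states, dtype=float)
    queries = np.asarray(query_states, dtype=float)
    # Estimate the policy's state marginal rho_pi from visited states
    # (the paper learns a density model; a KDE keeps this sketch short).
    rho_pi = gaussian_kde(visited.T)
    log_rho = rho_pi.logpdf(queries.T)
    log_target = np.array([target_log_density(s) for s in queries])
    return log_target - log_rho

# Example: uniform (unnormalized) target over the state space.
rewards = smm_intrinsic_reward(
    visited_states=np.random.randn(500, 2),
    target_log_density=lambda s: 0.0,
    query_states=np.random.randn(10, 2),
)
```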
Learning the Depths of Moving People by Watching Frozen People
Li et al.: https://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Learning_the_Depths_of_Moving_People_by_Watching_Frozen_People_CVPR_2019_paper.pdf
#ArtificialIntelligence #DeepLearning #MachineLearning
Theoretical Physics for Deep Learning workshop at #ICML2019
Slides and videos: https://sites.google.com/view/icml2019phys4dl/schedule?authuser=0
#Physics #DeepLearning