Researchers at Google DeepMind work on some of the most complex and interesting challenges in AI. Their world-class research has resulted in hundreds of peer-reviewed papers, including in Nature and Science. It's a great resource for following AI research!
https://deepmind.com/
Google DeepMind
Artificial intelligence could be one of humanity’s most useful inventions. We research and build safe artificial intelligence systems. We're committed to solving intelligence, to advance science and …
Now you can train an AI to swipe Tinder for you (Auto-Tinder)
https://www.marktechpost.com/2019/10/27/now-you-can-train-an-ai-to-swipe-tinder-for-you-auto-tinder/
MarkTechPost
If you've ever used a dating app, you may know the name “Tinder”. It's a swiping app for selecting and showing interest in someone's profile card via a right swipe. Auto-Tinder was developed to automate the swiping process and spare your thumb. Auto…
Neuroscientists at University College London started with a simple question: does the visual cortex represent stimuli with many different response patterns, or does it reuse similar patterns over and over? The answer revealed a surprising mathematical rule at work: a power law governing how the brain encodes sensory inputs as neural activity, tuned to keep our perceptions in balance. If the drop-off in neural responses were faster, important details would be lost; if it were slower, trivia would overwhelm us.
https://www.quantamagazine.org/a-power-law-keeps-the-brains-perceptions-balanced-20191022/
Quanta Magazine
A Power Law Keeps the Brain’s Perceptions Balanced
Researchers have discovered a surprising mathematical relationship in the brain’s representations of sensory information, with possible applications to AI
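The finding is that the variance carried by the n-th principal component of neural activity falls off roughly as n^(-α), with α just above 1. A minimal sketch of what that claim means, using a synthetic spectrum rather than the paper's recordings, and a log-log least-squares fit to recover the exponent:

```python
import math

def power_law_spectrum(n_dims, alpha):
    """Synthetic variance spectrum: the n-th principal component
    carries variance proportional to n**(-alpha)."""
    return [n ** (-alpha) for n in range(1, n_dims + 1)]

def fit_exponent(spectrum):
    """Estimate the decay exponent by least squares on log-log axes."""
    xs = [math.log(n) for n in range(1, len(spectrum) + 1)]
    ys = [math.log(v) for v in spectrum]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope  # spectrum ~ n**(-alpha)  =>  log-log slope = -alpha

spec = power_law_spectrum(1000, 1.05)  # close to the ~1/n law reported for visual cortex
print(round(fit_exponent(spec), 2))    # -> 1.05
```

On real recordings the same fit is applied to the empirical eigenspectrum of the population activity; the balance argument above is about where that fitted exponent sits relative to 1.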
Few-Shot Unsupervised Image-to-Image Translation
paper https://arxiv.org/pdf/1905.01723.pdf
code https://github.com/NVlabs/FUNIT
GitHub
GitHub - NVlabs/FUNIT: Translate images to unseen domains in the test time with few example images.
Translate images to unseen domains in the test time with few example images. - NVlabs/FUNIT
From NeurIPS 2019, and particularly helpful for fighting wildfires: real-time segmentation of a fire perimeter from aerial full-motion infrared video
https://www.profillic.com/paper/arxiv:1910.06407
FireNet: Real-time Segmentation of Fire Perimeter from Aerial Video
Profillic
FireNet: Real-time Segmentation of Fire Perimeter from Aerial Video - Profillic
Explore state-of-the-art in machine learning, AI, and robotics. Browse models, source code, papers by topics and authors. Connect with researchers and engineers working on related problems in machine learning, deep learning, natural language processing, robotics…
Pytorch-Struct
Fast, general, and tested differentiable structured prediction in PyTorch. By Harvard NLP : https://github.com/harvardnlp/pytorch-struct
#PyTorch #DeepLearning #ArtificialIntelligence
GitHub
GitHub - harvardnlp/pytorch-struct: Fast, general, and tested differentiable structured prediction in PyTorch
Fast, general, and tested differentiable structured prediction in PyTorch - harvardnlp/pytorch-struct
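Libraries like pytorch-struct make dynamic programs over structured outputs differentiable end to end. As a reference point for what such a dynamic program computes, here is a plain-Python forward algorithm for a linear-chain model; this is the underlying recursion, not the library's API:

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def forward_log_partition(log_potentials):
    """Forward algorithm for a linear-chain model.

    log_potentials[t][i][j] scores moving from state i at position t
    to state j at position t+1 (emission scores folded in).
    Returns log Z, the log-partition over all state sequences.
    """
    T = len(log_potentials)          # number of transitions
    K = len(log_potentials[0])       # number of states
    alpha = [0.0] * K                # uniform start (score 0 for every state)
    for t in range(T):
        alpha = [logsumexp([alpha[i] + log_potentials[t][i][j] for i in range(K)])
                 for j in range(K)]
    return logsumexp(alpha)

# Sanity check: all-zero potentials give Z = K**(T+1) equally weighted paths.
uniform = [[[0.0] * 3 for _ in range(3)] for _ in range(2)]  # 3 states, 2 transitions
print(round(forward_log_partition(uniform), 4))  # -> log(3**3) = 3.2958
```

In a differentiable framework, gradients of log Z with respect to the potentials recover the edge marginals, which is what makes structured layers trainable.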
Capacity, Bandwidth, and Compositionality in Emergent Language Learning
Resnick et al.: https://arxiv.org/abs/1910.11424
#ArtificialIntelligence #MachineLearning #MultiagentSystems
Credit Risk Analysis Using Machine Learning and Deep Learning Models, by Peter Martey Addo, Dominique Guegan, and Bertrand Hassani.
Github: https://github.com/brainy749/CreditRiskPaper
Paper/Article: https://www.mdpi.com/2227-9091/6/2/38/htm
https://t.iss.one/ArtificialIntelligenceArticles
GitHub
brainy749/CreditRiskPaper
Codes for replication and implementation of techniques in our credit risk article - brainy749/CreditRiskPaper
Kaggle:
"Competition Launch: TensorFlow 2.0 Question Answering"
More: https://www.kaggle.com/c/tensorflow2-question-answering
"Competition Launch: TensorFlow 2.0 Question Answering"
More: https://www.kaggle.com/c/tensorflow2-question-answering
Kaggle
TensorFlow 2.0 Question Answering
Identify the answers to real user questions about Wikipedia page content
A deep learning framework for neuroscience - Blake A. Richards et al.
@ArtificialIntelligenceArticles
https://www.nature.com/articles/s41593-019-0520-2
Nature
A deep learning framework for neuroscience
Nature Neuroscience - A deep network is best understood in terms of components used to design it—objective functions, architecture and learning rules—rather than unit-by-unit...
Deep Learning Drizzle
Drench yourself in Deep Learning, Reinforcement Learning, Machine Learning, Computer Vision, and NLP by learning from these exciting lectures!
GitHub by Marimuthu Kalimuthu: https://github.com/kmario23/deep-learning-drizzle
Webpage: https://deep-learning-drizzle.github.io
#artificialintelligence #deeplearning #machinelearning #reinforcementlearning
GitHub
GitHub - kmario23/deep-learning-drizzle: Drench yourself in Deep Learning, Reinforcement Learning, Machine Learning, Computer Vision…
Drench yourself in Deep Learning, Reinforcement Learning, Machine Learning, Computer Vision, and NLP by learning from these exciting lectures!! - kmario23/deep-learning-drizzle
Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning
Gupta et al.: https://arxiv.org/abs/1910.11956
Website : https://relay-policy-learning.github.io
#ReinforcementLearning #MachineLearning #Robotics
arXiv.org
Relay Policy Learning: Solving Long-Horizon Tasks via Imitation...
We present relay policy learning, a method for imitation and reinforcement learning that can solve multi-stage, long-horizon robotic tasks. This general and universally-applicable, two-phase...
Deep learning is good at finding patterns in reams of data, but can't explain how they're connected. Turing Award winner Yoshua Bengio wants to change that.
https://www.wired.com/story/ai-pioneer-algorithms-understand-why/
#DeepLearning #AI
WIRED
An AI Pioneer Wants His Algorithms to Understand the 'Why'
Deep causal representation learning for unsupervised domain adaptation
Moraffah et al.: https://arxiv.org/abs/1910.12417
#DeepLearning #MachineLearning #UnsupervisedLearning
arXiv.org
Deep causal representation learning for unsupervised domain adaptation
Studies show that the representations learned by deep neural networks can be transferred to similar prediction tasks in other domains for which we do not have enough labeled data. However, as we...
Neural Network Distiller: A Python Package For DNN Compression Research
Zmora et al.: https://arxiv.org/abs/1910.12232
#DeepLearning #MachineLearning #Python
arXiv.org
Neural Network Distiller: A Python Package For DNN Compression Research
This paper presents the philosophy, design and feature-set of Neural Network Distiller, an open-source Python package for DNN compression research. Distiller is a library of DNN compression...
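The simplest compression technique a toolkit like Distiller supports is element-wise magnitude pruning: zero out the weights with the smallest absolute values. A minimal pure-Python sketch of that baseline (an illustration of the idea, not Distiller's actual API):

```python
def magnitude_prune(weights, sparsity):
    """Zero out roughly the smallest-magnitude fraction `sparsity`
    of the weights, keeping the rest unchanged."""
    k = int(len(weights) * sparsity)  # how many weights to remove
    if k == 0:
        return list(weights)
    # Threshold = k-th smallest absolute value; everything at or below it goes.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
print(magnitude_prune(w, 0.5))  # -> [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

Real frameworks layer scheduling on top of this, e.g. ramping sparsity up gradually during fine-tuning instead of pruning in one shot.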
Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes
Greg Yang : https://arxiv.org/abs/1910.12478
#ArtificialIntelligence #DeepLearning #MachineLearning
arXiv.org
Tensor Programs I: Wide Feedforward or Recurrent Neural Networks...
Wide neural networks with random weights and biases are Gaussian processes, as originally observed by Neal (1995) and more recently by Lee et al. (2018) and Matthews et al. (2018) for deep...
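The paper's starting point can be checked empirically. For a one-hidden-layer ReLU net with i.i.d. N(0, 1) weights and a 1/sqrt(width) readout scale, the NNGP view predicts that the output at a unit-norm input has variance E[relu(z)^2] = 1/2 for z ~ N(0, 1). A toy Monte-Carlo sketch (stdlib only, nothing like the paper's tensor-program machinery):

```python
import math
import random

random.seed(0)

def wide_relu_net_output(x, width):
    """Output of a random one-hidden-layer ReLU net at input x,
    with N(0,1) weights and a 1/sqrt(width) readout scale."""
    total = 0.0
    for _ in range(width):
        pre = sum(random.gauss(0, 1) * xi for xi in x)  # hidden pre-activation
        total += random.gauss(0, 1) * max(pre, 0.0)     # readout weight * ReLU
    return total / math.sqrt(width)

x = [0.6, 0.8]  # unit-norm input, so each pre-activation is N(0, 1)
samples = [wide_relu_net_output(x, 100) for _ in range(2000)]
var = sum(s * s for s in samples) / len(samples)
print(round(var, 2))  # empirical variance, close to the predicted 1/2
```

The paper's contribution is proving this Gaussian-process limit for essentially any architecture expressible as a tensor program, not just the single-layer case sketched here.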