ArtificialIntelligenceArticles
For those who have a passion for:
1. #ArtificialIntelligence
2. Machine Learning
3. Deep Learning
4. #DataScience
5. #Neuroscience
6. #ResearchPapers
7. Related Courses and Ebooks
AtomNet: A Deep Convolutional Neural Network for Bioactivity Prediction in Structure-based Drug Discovery
Wallach et al.: https://arxiv.org/abs/1510.02855
#MachineLearning #DeepLearning #Biomolecules
There are quotes from Yoshua Bengio, Samy Bengio, Richard S. Sutton, Pieter Abbeel, Sergey Levine, David Cox, and me.
Some of my quotes: “My money is on self-supervised learning,” he said, referring to computer systems that ingest huge amounts of unlabeled data and make sense of it all without supervision or reward. He is working on models that learn by observation, accumulating enough background knowledge that some sort of common sense can emerge.
“Imagine that you give the machine a piece of input, a video clip, for example, and ask it to predict what happens next,” Dr. LeCun said in his office at New York University, decorated with stills from the movie “2001: A Space Odyssey.” “For the machine to train itself to do this, it has to develop some representation of the data. It has to understand that there are objects that are animate and others that are inanimate. The inanimate objects have predictable trajectories, the other ones don’t.”
After a self-supervised computer system “watches” millions of YouTube videos, he said, it will distill some representation of the world from them. Then, when the system is asked to perform a particular task, it can draw on that representation — in other words, it can teach itself.

https://www.nytimes.com/2020/04/08/technology/ai-computers-learning-supervised-unsupervised.html
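The prediction task LeCun describes can be sketched in miniature. The setup below is an illustrative assumption, not one of his models: unlabeled 1-D trajectories of objects moving at constant velocity stand in for video, and the "label" for training is just a future slice of the input itself, so no human annotation is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretraining" corpus: unlabeled trajectories of objects moving at
# constant velocity (a toy stand-in for raw video frames).
def make_trajectory(n_steps=10):
    x0, v = rng.uniform(-1, 1, size=2)
    return x0 + v * np.arange(n_steps) + rng.normal(0, 0.01, n_steps)

trajectories = [make_trajectory() for _ in range(200)]

# Self-supervised task: from two consecutive positions, predict the next.
# The target is simply a later part of the same unlabeled input.
X = np.array([[t[i], t[i + 1]] for t in trajectories for i in range(8)])
y = np.array([t[i + 2] for t in trajectories for i in range(8)])

# Least-squares linear predictor (a minimal stand-in for a deep net).
W, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

# Without being told the physics, the fit recovers the constant-velocity
# rule next = 2*b - a: positions a=3, b=5 should predict roughly 7.
pred = np.array([3.0, 5.0, 1.0]) @ W
print(pred)
```

The point of the sketch is the "predictable trajectories" idea from the quote: the predictor only succeeds by internalizing some representation of how the objects move.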

Friston said he always assumed his ideas about how neurons organize would be used to build more efficient neuromorphic computer chips—hardware that tries to mimic how the brain processes information much more closely than today’s standard computer chips do. The idea of trying to integrate biological neurons with semiconductors is not, Friston said, an idea he’d anticipated.
“But to my surprise and delight they have gone straight for the real thing,” he said of Cortical Labs’ use of real biological neurons. “What this group has been able to do is, to my mind, the right way forward to making these ideas work in practice.”

https://fortune.com/2020/03/30/startup-human-neurons-computer-chips/
RLlib: Scalable Reinforcement Learning
RLlib is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications.
RLlib natively supports TensorFlow, TensorFlow Eager, and PyTorch, but most of its internals are framework agnostic.
The Ray Team: https://ray.readthedocs.io/en/latest/rllib.html
#ReinforcementLearning #PyTorch #TensorFlow
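The "unified API" RLlib exposes is built around the Gym-style environment interface. As a minimal sketch (this is not RLlib code, and the environment and policy here are made up for illustration), any object with `reset()` and `step()` returning an observation, reward, done flag, and info dict can be driven by a generic rollout loop:

```python
import random

class CoinFlipEnv:
    """Toy environment: guess a coin flip; +1 reward per correct guess."""
    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return 0  # dummy observation

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == random.randint(0, 1) else 0.0
        done = self.t >= self.horizon
        return 0, reward, done, {}  # obs, reward, done, info

def rollout(env, policy):
    """Run one episode with a policy function and return the total reward."""
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total

random.seed(0)
ret = rollout(CoinFlipEnv(), policy=lambda obs: 1)  # always guess heads
print(ret)
```

In RLlib itself, an environment following this interface is what you hand to a trainer, and the library takes care of scaling the rollout and optimization work across workers regardless of whether the model underneath is TensorFlow or PyTorch.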