In his HLF Laureate Portrait, #ACMTuringAward recipient Yoshua Bengio discusses what attracted him to AI and why we need to allow people to try and fail: trial and error is fundamental to the scientific endeavor and to innovation.
https://www.youtube.com/watch?v=PHhFI8JexLg
YouTube
HLF Laureate Portraits: Yoshua Bengio
The Heidelberg Laureate Forum Foundation presents the HLF Laureate Portraits: Yoshua Bengio; ACM A.M. Turing Award, 2018.
Interview recorded in 2020.
In this series, join us as we meet with the top mathematicians and computer scientists – recipients of…
“You don’t understand something until you’ve written it down carefully — carefully enough to explain it to somebody else. And if you haven’t done that, you’re just thinking you understand it,” says Leslie Lamport.
@HLForum
2019
https://www.zmescience.com/science/difference-programming-coding-15112019/amp/
ZME Science
The difference between programming and coding with Leslie Lamport
Coding is the easy part of programming. Leslie Lamport, 2013 Turing Award Laureate and creator of LaTeX, explains why the two are fundamentally different.
A Probabilistic Framework for Imitating Human Race Driver Behavior. https://arxiv.org/abs/2001.08255
How Much Position Information Do Convolutional Neural Networks Encode? https://arxiv.org/abs/2001.08248
Deep Depth Prior for Multi-View Stereo. https://arxiv.org/abs/2001.07791
Deep Metric Structured Learning For Facial Expression Recognition. https://arxiv.org/abs/2001.06612
How Much Position Information Do Convolutional Neural Networks Encode?
Islam et al.: https://arxiv.org/abs/2001.08248
#ArtificialIntelligence #DeepLearning #NeuralNetworks
Google's Dataset Search is out of beta
Think of it as Google Scholar, but for datasets: it is now out of beta, with 25 million datasets indexed. Dataset owners can have their data indexed by publishing it on their websites, described according to open standards.
https://datasetsearch.research.google.com/
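The "open standards" here refer to schema.org structured data: a site owner marks up a dataset page with a schema.org/Dataset description, typically as JSON-LD, and Dataset Search picks it up when crawling. A minimal illustrative snippet (all field values are placeholders, not a real dataset):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org/",
  "@type": "Dataset",
  "name": "Example weather observations",
  "description": "Hourly temperature readings from a hypothetical station network.",
  "url": "https://example.com/datasets/weather",
  "license": "https://creativecommons.org/licenses/by/4.0/",
  "creator": {
    "@type": "Organization",
    "name": "Example Research Lab"
  }
}
</script>
```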
Learning to adapt class-specific features across domains for semantic segmentation. https://arxiv.org/abs/2001.08311
DNNs as Layers of Cooperating Classifiers
Davel et al.: https://arxiv.org/abs/2001.06178
#DeepLearning #MachineLearning #NeuralNetworks
Deep Java Library: New Deep Learning Toolkit for Java Developers
https://www.infoq.com/news/2020/01/deep-java-library/
InfoQ
Deep Java Library: New Deep Learning Toolkit for Java Developers
Amazon released Deep Java Library (DJL), an open-source library with Java APIs to simplify training, testing, deploying, and making predictions with deep-learning models. DJL is framework agnostic; it abstracts away commonly used deep-learning functions,…
UNDERSTANDING MACHINE LEARNING
From Theory to Algorithms
Download: https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf
If you are looking for inspiration for your next research or AI project, here is a big lake to swim in: Google Research gives you access to more than 5,700 publications dating back to 1998.
Google Research is a division of Google that tackles challenges that define the technology of today and tomorrow, and you can have access to its research work as they publish it. https://medium.com/tech-cult-heartbeat/do-you-know-you-can-have-access-to-5779-publications-from-google-research-right-now-b42b2313c325
Medium
You can have access to 5779 publications from Google Research right now.
If you are looking for inspiration for your next research or AI project, here is a big lake to swim in: Google Research!
List of Open-Source RL Algorithms
By Sergey Kolesnikov, https://docs.google.com/spreadsheets/d/1EeFPd-XIQ3mq_9snTlAZSsFY7Hbnmd7P5bbT8LPuMn0/edit#gid=0
#ArtificialIntelligence #DeepLearning #ReinforcementLearning
Google Docs
Open-source RL
Behaviour Suite for Reinforcement Learning, or ‘bsuite’
Osband et al. https://arxiv.org/abs/1908.03568v2
A collection of carefully-designed experiments that investigate core capabilities of RL agents
#ArtificialIntelligence #DeepLearning #ReinforcementLearning
Deepfakes may be a useful tool for spies https://www.technologyreview.com/f/613778/deepfakes-spies-espionage/
A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications
https://arxiv.org/abs/2001.06937
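For context on what such a review covers: the original GAN formulation is a two-player minimax game between a generator G and a discriminator D. A minimal sketch of the value function V(D, G), computed in pure Python over hypothetical scalar discriminator outputs (illustrative only, not an implementation from the paper):

```python
import math

def gan_value(d_real, d_fake):
    """Estimate V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]
    from batches of discriminator outputs in (0, 1).
    D is trained to maximize V; G is trained to minimize the second term."""
    term_real = sum(math.log(d) for d in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)
    return term_real + term_fake

# At the theoretical optimum, D outputs 0.5 everywhere and V = log(1/4):
print(gan_value([0.5, 0.5], [0.5, 0.5]))  # -1.3862943611198906
```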