Congrats to Dr. Rahaf Aljundi on receiving her PhD from KU Leuven (advised by Prof. Tinne Tuytelaars). I am happy about our fruitful collaboration on continual learning and that it was part of her well-deserved PhD.
Please see her PhD thesis at the link below: seasoned continual-learning research, ranging from the use of unlabeled data leveraged by MAS (our ECCV 2018 collaboration, also inspired by Hebbian learning theory), to the use of language (ACCV 2018), to her later work on task-free continual learning and making it more online (CVPR 2019 and NeurIPS 2019, at MILA).
https://arxiv.org/abs/1910.02718
PyTorch 1.3 is now available with iOS / Android support, quantization, named tensors, type promotion, and more: bit.ly/2OCfNpR
pytorch.org
An open source deep learning platform that provides a seamless path from research prototyping to production deployment.
The State of Machine Learning Frameworks in 2019
By Horace He : https://thegradient.pub/state-of-ml-frameworks-2019-pytorch-dominates-research-tensorflow-dominates-industry/
#MachineLearning #PyTorch #TensorFlow
The Gradient
The State of Machine Learning Frameworks in 2019
Since deep learning regained prominence in 2012, many machine learning frameworks have clamored to become the new favorite among researchers and industry practitioners. From the early academic outputs Caffe and Theano to the massive industry-backed PyTorch…
Practical Posterior Error Bounds from Variational Objectives
Jonathan H. Huggins, Mikołaj Kasprzak, Trevor Campbell, Tamara Broderick : https://arxiv.org/abs/1910.04102
#MachineLearning #StatisticsTheory #VariationalInference
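As a toy illustration of the kind of question this paper addresses (not the paper's actual bounds), the error of a variational approximation against a known Gaussian posterior can be measured in closed form; the sketch below uses the standard KL divergence between two univariate Gaussians, with all parameter values chosen purely for illustration.

```python
import math

def kl_gauss(m_q, s_q, m_p, s_p):
    """KL(q || p) in nats between univariate Gaussians
    q = N(m_q, s_q^2) and p = N(m_p, s_p^2), closed form."""
    return (math.log(s_p / s_q)
            + (s_q ** 2 + (m_q - m_p) ** 2) / (2 * s_p ** 2)
            - 0.5)

# A variational fit q that matches the mean of the "true" posterior p
# but underestimates its spread: the KL quantifies the residual error.
kl = kl_gauss(m_q=0.0, s_q=0.8, m_p=0.0, s_p=1.0)
```

A perfect fit gives KL = 0; the paper's contribution is turning quantities computable from the variational objective into practical bounds on posterior error, which this closed-form toy does not attempt.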
NGBoost: Natural Gradient Boosting for Probabilistic Prediction
Duan et al.: https://arxiv.org/pdf/1910.03225v1.pdf
#MachineLearning #NaturalGradientBoosting
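A minimal sketch of the idea named in the title: precondition the gradient of the negative log-likelihood by the inverse Fisher information (the "natural gradient") when fitting the parameters of a predicted distribution. This is an illustrative toy for a single Gaussian, not the paper's boosting algorithm; the (mu, log sigma) parameterization and the learning rate are assumptions.

```python
import math

def gaussian_nll_grad(y, mu, log_sigma):
    # Ordinary gradient of -log N(y | mu, sigma^2) w.r.t. (mu, log_sigma).
    sigma2 = math.exp(2 * log_sigma)
    d_mu = -(y - mu) / sigma2
    d_log_sigma = 1.0 - (y - mu) ** 2 / sigma2
    return d_mu, d_log_sigma

def natural_grad(y, mu, log_sigma):
    # Precondition by the inverse Fisher information: for N(mu, sigma^2)
    # under (mu, log_sigma), the Fisher matrix is diag(1/sigma^2, 2).
    d_mu, d_ls = gaussian_nll_grad(y, mu, log_sigma)
    sigma2 = math.exp(2 * log_sigma)
    return sigma2 * d_mu, 0.5 * d_ls

# One step from N(0, 1) toward an observation y = 2 with learning rate 0.1.
mu, log_sigma, y, lr = 0.0, 0.0, 2.0, 0.1
n_mu, n_ls = natural_grad(y, mu, log_sigma)
mu_new, log_sigma_new = mu - lr * n_mu, log_sigma - lr * n_ls
```

The point of the preconditioning is that a fixed-size step corresponds to a fixed-size move in distribution space (KL) rather than in raw parameter space, which is what makes the updates well-scaled across parameters.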
Benchmarking Every Open Source Model
By Papers With Code : https://sotabench.com
#DeepLearning #PyTorch #TensorFlow https://t.iss.one/ArtificialIntelligenceArticles
Yoshua Bengio:
Gary Marcus likes to cite me when I talk about my current research program, which studies the weaknesses of current deep learning systems in order to devise systems stronger in higher-level cognition and with greater combinatorial (and systematic) generalization, including handling of causality and reasoning.

He disagrees with the view that Yann LeCun, Geoff Hinton and I have expressed that neural nets can indeed be a "universal solvent" for incorporating further cognitive abilities in computers. He prefers to think of deep learning as limited to perception and needing to be combined in a hybrid with symbolic processing. I disagree in a subtle way with this view. I agree that the goals of GOFAI (like the ability to perform the sequential reasoning characteristic of system 2 cognition) are important, but I believe that they can be achieved while staying in a deep learning framework, albeit one which makes heavy use of attention mechanisms (hence my 'consciousness prior' research program) and the injection of new architectural ideas (e.g. modularity) and training frameworks (e.g. meta-learning and an agent-based view).

What I bet is that a simple hybrid in which the outputs of the deep net are discretized and then passed to a GOFAI symbolic processing system will not work. Why? Many reasons: (1) you need learning in the system 2 component as well as in the system 1 part; (2) you need to represent uncertainty there as well; (3) brute-force search (the main inference tool of symbol-processing systems) does not scale; instead, humans use unconscious (system 1) processing to guide the search involved in reasoning, so system 1 and system 2 are very tightly integrated; and (4) your brain is a neural net all the way ;-)
@ArtificialIntelligenceArticles
AttoNets: A New AI That Is Faster & More Efficient for Edge Computing (paper link included)
https://www.marktechpost.com/2019/10/11/attonets-a-new-ai-that-is-faster-efficient-for-edge-computing/
MarkTechPost
AttoNets: A New AI That Is Faster & More Efficient for Edge Computing
An AI team at the University of Waterloo, Canada, developed a new family of compact deep neural networks (AttoNets) that can run even on smartphones, tablets, and other mobile devices. The main problem with available neural networks is that they require…
Videos for the Machine Learning for Physics and the Physics of Learning fall long program are now available on our YouTube page! Watch them via this link: https://www.youtube.com/playlist?list=PLHyI3Fbmv0SfQfS1rknFsr_UaaWpJ1EKA #MLP2019
The State of Transfer Learning in NLP
By Sebastian Ruder : https://ruder.io/state-of-transfer-learning-in-nlp/
#TransferLearning #NaturalLanguageProcessing #NLP
ruder.io
The State of Transfer Learning in NLP
This post expands on the NAACL 2019 tutorial on Transfer Learning in NLP. It highlights key insights and takeaways and provides updates based on recent work.
OpenSpiel: A Framework for Reinforcement Learning in Games
"OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games."
Lanctot et al.: https://arxiv.org/pdf/1908.09453v4.pdf
#ArtificialIntelligence #DeepLearning #ReinforcementLearning
Learn to Explain Efficiently via Neural Logic Inductive Learning
Yuan Yang and Le Song : https://arxiv.org/abs/1910.02481
#ArtificialIntelligence #DeepLearning #MachineLearning
ICYMI: NADS-Net: Driver and Seat Belt Detection via Convolutional Neural Network!
https://www.profillic.com/paper/arxiv:1910.03695
Profillic
Profillic: AI models, code & research to supercharge your projects
Explore state-of-the-art in machine learning, AI, and robotics research. Browse models, source code, papers by topics and authors. Connect with researchers and engineers working on related problems in machine learning, deep learning, natural language processing…
"The current state of AI and Deep Learning: A reply to Yoshua Bengio"
By Gary Marcus : https://medium.com/@GaryMarcus/the-current-state-of-ai-and-deep-learning-a-reply-to-yoshua-bengio-77952ead7970