Yoshua Bengio:
Gary Marcus likes to cite me when I talk about my current research program, which studies the weaknesses of current deep learning systems in order to devise systems stronger in higher-level cognition and greater combinatorial (and systematic) generalization, including handling of causality and reasoning. He disagrees with the view that Yann LeCun, Geoff Hinton and I have expressed that neural nets can indeed be a "universal solvent" for incorporating further cognitive abilities in computers. He prefers to think of deep learning as limited to perception and needing to be combined in a hybrid with symbolic processing. I disagree in a subtle way with this view. I agree that the goals of GOFAI (like the ability to perform sequential reasoning characteristic of system 2 cognition) are important, but I believe that they can be achieved while staying in a deep learning framework, albeit one which makes heavy use of attention mechanisms (hence my 'consciousness prior' research program) and the injection of new architectural elements (e.g. modularity) and training frameworks (e.g. meta-learning and an agent-based view). What I bet is that a simple hybrid, in which the outputs of the deep net are discretized and then passed to a GOFAI symbolic processing system, will not work. Why? Many reasons: (1) you need learning in the system 2 component as well as in the system 1 part, (2) you need to represent uncertainty there as well, (3) brute-force search (the main inference tool of symbol-processing systems) does not scale; instead, humans use unconscious (system 1) processing to guide the search involved in reasoning, so system 1 and system 2 are very tightly integrated, and (4) your brain is a neural net all the way ;-)
@ArtificialIntelligenceArticles
AttoNets, A New AI That is Faster & Efficient For Edge Computing (Paper link included)
https://www.marktechpost.com/2019/10/11/attonets-a-new-ai-that-is-faster-efficient-for-edge-computing/
MarkTechPost
AttoNets, A New AI That is Faster & Efficient For Edge Computing
An AI team at the University of Waterloo, Canada, developed a new family of compact deep neural networks (AttoNets) that can run even on smartphones, tablets, and other mobile devices. The main problem with available neural networks is they require…
Videos for the Machine Learning for Physics and the Physics of Learning fall long program are now available on our YouTube page! Watch them via this link: https://www.youtube.com/playlist?list=PLHyI3Fbmv0SfQfS1rknFsr_UaaWpJ1EKA&fbclid=IwAR3WCSjcjDDekd7kgA9Usl_May3DpSorfNzkO-miYviROCllxeb5lsGrGMY #MLP2019 https://t.iss.one/ArtificialIntelligenceArticles
The State of Transfer Learning in NLP
By Sebastian Ruder : https://ruder.io/state-of-transfer-learning-in-nlp/
#TransferLearning #NaturalLanguageProcessing #NLP
ruder.io
The State of Transfer Learning in NLP
This post expands on the NAACL 2019 tutorial on Transfer Learning in NLP. It highlights key insights and takeaways and provides updates based on recent work.
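A minimal sketch (my own illustration, not from the post) of the sequential-transfer pattern discussed there: take a pretrained encoder and fine-tune it with a small task-specific head. The model name and the Hugging Face `transformers` API are assumptions, not something the post prescribes.

```python
# Hypothetical fine-tuning sketch: pretrained BERT encoder + classification head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

batch = tokenizer(["a great read", "not worth the time"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

outputs = model(**batch, labels=labels)  # forward pass returns loss and logits
outputs.loss.backward()                  # gradients flow into the pretrained weights too
torch.optim.AdamW(model.parameters(), lr=2e-5).step()
```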
OpenSpiel: A Framework for Reinforcement Learning in Games
"OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games."
Lanctot et al.: https://arxiv.org/pdf/1908.09453v4.pdf
#ArtificialIntelligence #DeepLearning #ReinforcementLearning
"OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games."
Lanctot et al.: https://arxiv.org/pdf/1908.09453v4.pdf
#ArtificialIntelligence #DeepLearning #ReinforcementLearning
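A minimal sketch of using OpenSpiel's Python API to play out one game with a random policy, assuming the `open_spiel` package is installed and exposes `pyspiel`:

```python
# Load a registered game, step through it with random legal actions, and read the returns.
import random
import pyspiel

game = pyspiel.load_game("tic_tac_toe")        # any registered game name works
state = game.new_initial_state()
while not state.is_terminal():
    action = random.choice(state.legal_actions())  # uniform-random policy
    state.apply_action(action)
print(state)             # final board
print(state.returns())   # per-player returns, e.g. [1.0, -1.0]
```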
Learn to Explain Efficiently via Neural Logic Inductive Learning
Yuan Yang and Le Song : https://arxiv.org/abs/1910.02481
#ArtificialIntelligence #DeepLearning #MachineLearning
ICYMI: NADS-Net: Driver and Seat Belt Detection via Convolutional Neural Network!
https://www.profillic.com/paper/arxiv:1910.03695
Profillic
Profillic: AI models, code & research to supercharge your projects
Explore state-of-the-art in machine learning, AI, and robotics research. Browse models, source code, papers by topics and authors. Connect with researchers and engineers working on related problems in machine learning, deep learning, natural language processing…
"The current state of AI and Deep Learning: A reply to Yoshua Bengio"
By Gary Marcus : https://medium.com/@GaryMarcus/the-current-state-of-ai-and-deep-learning-a-reply-to-yoshua-bengio-77952ead7970 https://t.iss.one/ArtificialIntelligenceArticles
Yann LeCun
When someone makes statements about the current limits of AI and gives vague proposals on how to lift them, check their past contributions to the field (i.e. well-cited articles in peer-reviewed AI journals or conferences).
If they don't have any, don't believe them until they do.
Limitations are pretty obvious. You can tell how obvious by how many people work on lifting them.
Vague proposals are a dime a dozen. Reducing them to practice and showing that they work is the hard part.
Lucid
A collection of infrastructure and tools for research in neural network interpretability : https://github.com/tensorflow/lucid
#Tensorflow #Interpretability #Visualization #MachineLearning #Colab
@ArtificialIntelligenceArticles
GitHub
GitHub - tensorflow/lucid: A collection of infrastructure and tools for research in neural network interpretability.
A collection of infrastructure and tools for research in neural network interpretability. - tensorflow/lucid
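A minimal sketch of Lucid's feature-visualization quickstart (the model and neuron name below follow the repository's README example; TensorFlow 1.x is assumed):

```python
# Optimize an input image that maximally activates one channel of InceptionV1's mixed4a layer.
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render

model = models.InceptionV1()
model.load_graphdef()  # downloads the frozen GoogLeNet graph

_ = render.render_vis(model, "mixed4a_pre_relu:476")  # renders the visualization inline
```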
Siraj Raval, the popular AI tutor and Columbia grad, is a fraud! I have been telling this to my interns all along; they wouldn't listen.
https://twitter.com/AndrewM_Webb/status/1183150368945049605?s=19
Top 10 Best Datasets for Applied ML
For the development of AI, machine learning, and data science projects, it is important to gather relevant data. Below are 10 of the best machine learning datasets; you can download each one and use it to build your own machine learning project.
1. ImageNet
ImageNet is one of the best-known datasets for machine learning, used mainly in computer vision. The image dataset was developed by Fei-Fei Li and other researchers working on computer vision. See her TED talk here: https://www.youtube.com/watch?v=40riCqvRoMs
https://www.image-net.org/download-faq
2. Pima Indians Diabetes Dataset
If you want to apply machine learning to health care, you can use the Pima Indians Diabetes dataset. Diabetes is one of the most common dangerous diseases, and this dataset, from the National Institute of Diabetes and Digestive and Kidney Diseases, can be used in a diabetes detection system. The objective is to predict whether or not a patient has diabetes based on specific diagnostic measurements.
https://www.kaggle.com/uciml/pima-indians-diabetes-database
3. Boston House Price Dataset
Do you want to practice regression algorithms? Then you can use this dataset in your machine learning problem. The data were collected from the Boston, Massachusetts area.
https://www.kaggle.com/vikrishnan/boston-house-prices
4. HotpotQA
Do you want to work with natural language processing? NLP covers a wide range of problems in machine learning, and if you want to develop an NLP-based system, this question-answering dataset is for you. It was collected by a team of NLP researchers at Carnegie Mellon University and Stanford University.
https://hotpotqa.github.io/
5. LabelMe
Image processing is one of the most exciting areas of machine learning. If you are interested in developing an image processing system, you can use the LabelMe dataset in your project. It is a large dataset of annotated images.
https://labelme2.csail.mit.edu/Release3.0/browserTools/php/dataset.php
6. Facial Image Dataset
You can use this interesting machine learning dataset for your computer vision project. It is a standard dataset and free to use, and it contains variation in background, scale, and facial expression, which helps in evaluating a system precisely.
https://cswww.essex.ac.uk/mv/allfaces/faces94.html
7. Chars74K Dataset
Optical character recognition is one of the classic classification problems in pattern recognition. This dataset consists of 64 classes (0–9, A–Z, a–z): 7,705 characters taken from natural images, 3,410 hand-drawn characters, and 62,992 characters synthesized from computer fonts.
https://www.ee.surrey.ac.uk/CVSSP/demos/chars74k/#download
8. YouTube Dataset
Are you doing machine learning research or do you want to work on video classification? Then this dataset might help you. Google has shared a labeled dataset (YouTube-8M) of 8 million classified YouTube videos and their IDs.
https://research.google.com/youtube8m/.
9. Amazon Reviews Dataset
Natural language processing is all about text data, and to build a real-world application you need a suitable dataset. This Amazon reviews dataset contains about 35 million reviews spanning 18 years (up to March 2013).
https://snap.stanford.edu/data/web-Amazon.html
10. xView
If you are experienced in machine learning and can handle a tricky problem or project, I suggest you use this dataset. xView is one of the standard datasets for overhead-imagery problems and one of the most extensive public datasets of its kind.
https://xviewdataset.org/#dataset
CLOSING WORDS:
Datasets are an integral part of machine learning applications, and they come in many different formats, such as .txt, .csv, and more. A short loading example follows below.
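A minimal sketch of loading one of the datasets above (#2, Pima Indians Diabetes) and fitting a simple classifier; the file name and the "Outcome" column layout are assumptions about the Kaggle CSV:

```python
# Load the Pima Indians Diabetes CSV and train/evaluate a logistic-regression baseline.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("diabetes.csv")                     # downloaded from the Kaggle link above
X, y = df.drop(columns=["Outcome"]), df["Outcome"]   # "Outcome" = 1 if the patient has diabetes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```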
YouTube
How we teach computers to understand pictures | Fei Fei Li
When a very young child looks at a picture, she can identify simple elements: "cat," "book," "chair." Now, computers are getting smart enough to do that too. What's next? In a thrilling talk, computer vision expert Fei-Fei Li describes the state of the art…
A Generalized Framework for Population Based Training
Li et al.: https://arxiv.org/abs/1902.01894
#ArtificialIntelligence #ClusterComputing #MachineLearning
arXiv.org
A Generalized Framework for Population Based Training
Population Based Training (PBT) is a recent approach that jointly optimizes neural network weights and hyperparameters, periodically copying the weights of the best performers and mutating...
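A toy sketch of the generic PBT exploit/explore loop (my own illustration, not the paper's framework): each worker trains briefly, then poor performers copy the weights and hyperparameters of a top performer and perturb ("mutate") the hyperparameters.

```python
# Toy population: each worker has weights, a learning rate, and a score on a fake objective.
import copy
import random

population = [{"weights": [0.0], "lr": 10 ** random.uniform(-4, -1), "score": 0.0}
              for _ in range(8)]

def train_step(w):
    # Stand-in for a few epochs of real training; pretend the target weight is 1.0.
    w["weights"][0] += w["lr"]
    w["score"] = -abs(w["weights"][0] - 1.0)

for generation in range(20):
    for w in population:
        train_step(w)
    population.sort(key=lambda w: w["score"], reverse=True)
    top, bottom = population[:2], population[-2:]
    for w in bottom:
        src = random.choice(top)
        w["weights"] = copy.deepcopy(src["weights"])        # exploit: copy a top performer
        w["lr"] = src["lr"] * random.choice([0.8, 1.2])     # explore: mutate its hyperparameter

print("best score:", population[0]["score"], "lr:", population[0]["lr"])
```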
Causality and deceit: Do androids watch action movies? https://arxiv.org/abs/1910.04383
Geoff Hinton's favorite Gary Marcus quote.
https://www.cs.toronto.edu/~hinton/marcusquote.html
@ArtificialIntelligenceArticles
Rules of Machine Learning: Best Practices for ML Engineering
By Martin Zinkevich: https://martin.zinkevich.org/rules_of_ml/rules_of_ml.pdf
#ArtificialIntelligence #MachineLearning
Deep RL Bootcamp
By Pieter Abbeel, Rocky Duan, Peter Chen, Andrej Karpathy et al.: https://sites.google.com/view/deep-rl-bootcamp/lectures
#100DaysOfMLCode #ArtificialIntelligence #DeepLearning #MachineLearning #NeuralNetworks #ReinforcementLearning