ArtificialIntelligenceArticles
For those who have a passion for:
1. #ArtificialIntelligence
2. Machine Learning
3. Deep Learning
4. #DataScience
5. #Neuroscience
6. #ResearchPapers
7. Related Courses and Ebooks
News classification using classic machine learning tools (TF-IDF) and a modern NLP approach based on transfer learning (ULMFiT), deployed on GCP

By Imad El Hanafi

Live version: https://nlp.imadelhanafi.com/

Github: https://github.com/imadelh/NLP-news-classification

Blog: https://imadelhanafi.com/posts/text_classification_ulmfit/

#DeepLearning #MachineLearning #NLP
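
For readers who want to try the classic baseline mentioned above, here is a minimal sketch of TF-IDF text classification with scikit-learn. The headlines and category labels below are made-up placeholders, not data from the linked project, which trains on a real news dataset.

```python
# Minimal TF-IDF news classification sketch (scikit-learn).
# The training headlines and labels are placeholder examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Stocks rally as markets react to interest rate cut",
    "Local team wins championship in overtime thriller",
    "New smartphone announced with improved camera",
    "Central bank signals further monetary easing",
]
labels = ["business", "sports", "tech", "business"]

# TF-IDF turns each headline into a sparse weighted bag-of-words
# vector; logistic regression learns one weight per (term, class).
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["Quarterly earnings beat analyst expectations"]))
```

The ULMFiT approach from the repo replaces this pipeline with a pretrained language model fine-tuned on the target corpus, which typically wins when labeled data is scarce.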
How your brain invents morality: a fantastic interview with neurophilosopher Patricia Churchland on the neuro-evolutionary origins of morality. https://www.vox.com/future-perfect/2019/7/8/20681558/conscience-patricia-churchland-neuroscience-morality-empathy-philosophy
Patricia Churchland: I am baffled that in 2019, so many intellectuals are still offended by the idea that the brain is a machine, and that everything it does is some sort of computation, including emotions, morality, etc.
I'm baffled that people still believe that if that is the case, we should think less of humans. We should not.
Science has been bringing humans down from their pedestal for centuries. One should be used to it by now.
Whatever happened to rational thought?
Using electrode implants that feed data into computational models known as neural networks, scientists reconstructed words and sentences from brain activity that were, in some cases, intelligible to human listeners.

https://www.sciencemag.org/news/2019/01/artificial-intelligence-turns-brain-activity-speech

@ArtificialIntelligenceArticles
Probabilistic Logic Neural Networks for Reasoning
Meng Qu, Jian Tang: https://arxiv.org/abs/1906.08495
#MachineLearning #ArtificialIntelligence #NeuralNetworks
My position is very similar to Yoshua's.
Making sequential reasoning compatible with gradient-based learning is one of the challenges of the next decade.
But gradient-based learning applied to networks of parameterized modules (aka "deep learning") is part of the solution.


Gary Marcus likes to cite me when I talk about my current research program, which studies the weaknesses of current deep learning systems in order to devise systems that are stronger in higher-level cognition and greater combinatorial (and systematic) generalization, including the handling of causality and reasoning. He disagrees with the view that Yann LeCun, Geoff Hinton and I have expressed that neural nets can indeed be a "universal solvent" for incorporating further cognitive abilities in computers. He prefers to think of deep learning as limited to perception and needing to be combined in a hybrid with symbolic processing. I disagree in a subtle way with this view. I agree that the goals of GOFAI (like the ability to perform the sequential reasoning characteristic of system 2 cognition) are important, but I believe they can be achieved while staying in a deep learning framework, albeit one which makes heavy use of attention mechanisms (hence my 'consciousness prior' research program) and the injection of new architectural elements (e.g. modularity) and training frameworks (e.g. meta-learning and an agent-based view).

What I bet is that a simple hybrid, in which the outputs of the deep net are discretized and then passed to a GOFAI symbolic processing system, will not work. Why? Many reasons:
1. You need learning in the system 2 component as well as in the system 1 part.
2. You need to represent uncertainty there as well.
3. Brute-force search (the main inference tool of symbol-processing systems) does not scale; instead, humans use unconscious (system 1) processing to guide the search involved in reasoning, so system 1 and system 2 are very tightly integrated.
4. Your brain is a neural net all the way.
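
Since the post leans so heavily on attention mechanisms as the bridge to system 2 cognition, here is a minimal sketch of scaled dot-product attention in plain NumPy. It illustrates the generic mechanism, not any specific architecture from the post; all shapes and values are arbitrary examples.

```python
# Minimal scaled dot-product attention sketch in NumPy.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each query attends to all keys; the softmax weights softly
    # select which values to read -- a differentiable lookup.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_q, n_k) similarity scores
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 queries, dim 4
K = rng.normal(size=(5, 4))   # 5 keys, dim 4
V = rng.normal(size=(5, 3))   # 5 values, dim 3
print(attention(Q, K, V).shape)  # (2, 3)
```

The point of the soft, differentiable selection is that gradient-based learning can reach "pointer-like" operations that would otherwise require discrete symbol manipulation, which is the crux of the argument above.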

Seeing What a GAN Cannot Generate
https://ganseeing.csail.mit.edu/