Using electrode implants that feed data into computational models known as neural networks, scientists reconstructed words and sentences from brain activity that were, in some cases, intelligible to human listeners.
https://www.sciencemag.org/news/2019/01/artificial-intelligence-turns-brain-activity-speech
@ArtificialIntelligenceArticles
Science
Artificial intelligence turns brain activity into speech
Fed data from invasive brain recordings, algorithms reconstruct heard and spoken sounds
This TensorFlow-based Python library, ‘Spleeter’, splits vocals from finished tracks
GitHub: https://github.com/deezer/spleeter
https://www.marktechpost.com/2019/11/10/this-tensorflow-based-python-library-spleeter-splits-vocals-from-finished-tracks/
GitHub
GitHub - deezer/spleeter: Deezer source separation library including pretrained models.
Deezer source separation library including pretrained models. - deezer/spleeter
Probabilistic Logic Neural Networks for Reasoning
Meng Qu, Jian Tang : https://arxiv.org/abs/1906.08495
#MachineLearning #ArtificialIntelligence #NeuralNetworks
US National Security Commission on Artificial Intelligence
Interim Report for Congress, November 2019
#AI #ArtificialIntelligence #Security #NSCAI
https://www.nationaldefensemagazine.org/-/media/sites/magazine/03_linkedfiles/nscai-interim-report-for-congress.ashx?la=en
My position is very similar to Yoshua's.
Making sequential reasoning compatible with gradient-based learning is one of the challenges of the next decade.
But gradient-based learning applied to networks of parameterized modules (aka "deep learning") is part of the solution.
Gary Marcus likes to cite me when I talk about my current research program, which studies the weaknesses of current deep learning systems in order to devise systems stronger in higher-level cognition and greater combinatorial (and systematic) generalization, including handling of causality and reasoning.

He disagrees with the view that Yann LeCun, Geoff Hinton and I have expressed that neural nets can indeed be a "universal solvent" for incorporating further cognitive abilities in computers. He prefers to think of deep learning as limited to perception, needing to be combined in a hybrid with symbolic processing. I disagree in a subtle way with this view. I agree that the goals of GOFAI (like the ability to perform sequential reasoning characteristic of system 2 cognition) are important, but I believe that they can be achieved while staying in a deep learning framework, albeit one which makes heavy use of attention mechanisms (hence my 'consciousness prior' research program) and the injection of new architectural elements (e.g. modularity) and training frameworks (e.g. meta-learning and an agent-based view).

What I bet is that a simple hybrid, in which the outputs of the deep net are discretized and then passed to a GOFAI symbolic processing system, will not work. Why? Many reasons:
(1) you need learning in the system 2 component as well as in the system 1 part;
(2) you need to represent uncertainty there as well;
(3) brute-force search (the main inference tool of symbol-processing systems) does not scale; instead, humans use unconscious (system 1) processing to guide the search involved in reasoning, so system 1 and system 2 are very tightly integrated;
(4) your brain is a neural net all the way.
https://t.iss.one/ArtificialIntelligenceArticles
Telegram
ArtificialIntelligenceArticles
For those who have a passion for:
1. #ArtificialIntelligence
2. Machine Learning
3. Deep Learning
4. #DataScience
5. #Neuroscience
6. #ResearchPapers
7. Related Courses and Ebooks
Optimizing Millions of Hyperparameters by Implicit Differentiation
Lorraine et al.: https://arxiv.org/abs/1911.02590
#ArtificialIntelligence #MachineLearning
arXiv.org
Optimizing Millions of Hyperparameters by Implicit Differentiation
We propose an algorithm for inexpensive gradient-based hyperparameter optimization that combines the implicit function theorem (IFT) with efficient inverse Hessian approximations. We present...
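The core trick can be illustrated on a one-dimensional toy problem. This is a sketch of the implicit function theorem (IFT) hypergradient, not the paper's algorithm: the inner (training) loss is (w - 1)^2 + lam * w^2, the outer (validation) loss is (w - 0.5)^2, and the IFT gives the derivative of the inner optimum with respect to the hyperparameter without unrolling the inner optimization.

```python
# Toy IFT hypergradient for a 1-D quadratic problem (illustrative sketch,
# not the paper's inverse-Hessian-approximation algorithm).

def inner_solution(lam):
    # w*(lam) minimizes the training loss (w - 1)^2 + lam * w^2,
    # so 2(w - 1) + 2*lam*w = 0  =>  w* = 1 / (1 + lam).
    return 1.0 / (1.0 + lam)

def val_loss(lam):
    # Outer (validation) objective evaluated at the inner optimum.
    w = inner_solution(lam)
    return (w - 0.5) ** 2

def hypergradient(lam):
    w = inner_solution(lam)
    # IFT: dw*/dlam = -(d2L/dw dlam) / (d2L/dw2) = -(2w) / (2 + 2*lam).
    dw_dlam = -w / (1.0 + lam)
    # Chain rule through the validation loss (w - 0.5)^2.
    return 2.0 * (w - 0.5) * dw_dlam

# Sanity check against a central finite difference.
lam, eps = 0.3, 1e-6
fd = (val_loss(lam + eps) - val_loss(lam - eps)) / (2 * eps)
print(abs(hypergradient(lam) - fd) < 1e-5)
```

The point of the IFT route, which this sketch shows in miniature, is that the hypergradient needs only the inner optimum and local second derivatives, not the optimization trajectory; the paper's contribution is making the inverse-Hessian term cheap at neural-network scale.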
Seeing What a GAN Cannot Generate
https://ganseeing.csail.mit.edu/
Free linear algebra textbook with solutions: https://joshua.smcvt.edu/linearalgebra/#current_version
A neural network that transforms a design mock-up into a static website
GitHub by Emil Wallner : https://github.com/emilwallner/Screenshot-to-code
#ArtificialIntelligence #DeepLearning #MachineLearning
GitHub
GitHub - emilwallner/Screenshot-to-code: A neural network that transforms a design mock-up into a static website.
A neural network that transforms a design mock-up into a static website. - emilwallner/Screenshot-to-code
Story Realization: Expanding Plot Events into Sentences
Ammanabrolu et al.: https://arxiv.org/abs/1909.03480
#ArtificialIntelligence #DeepLearning #MachineLearning
Knowledge Distillation for Incremental Learning in Semantic Segmentation. https://arxiv.org/abs/1911.03462
AIM 2019 Challenge on Image Demoireing: Methods and Results. https://arxiv.org/abs/1911.03461
Towards Domain Adaptation from Limited Data for Question Answering Using Deep Neural Netw... https://arxiv.org/abs/1911.02655
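The first paper's central tool, knowledge distillation, can be sketched generically. This is the standard Hinton-style temperature-scaled loss in pure Python, an illustration rather than the paper's per-pixel segmentation variant:

```python
# Generic knowledge-distillation loss: cross-entropy between the teacher's
# and student's temperature-softened output distributions.
import math

def softmax(logits, T=1.0):
    # Temperature T > 1 softens the distribution, exposing "dark knowledge"
    # in the teacher's non-argmax classes.
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy of the student's tempered distribution under the teacher's.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_t, p_s))

loss = distillation_loss([2.0, 0.5, -1.0], [1.8, 0.6, -0.9])
print(loss)
```

In the incremental-learning setting, a frozen copy of the old model plays teacher on old classes while the student learns new ones, which is how distillation counters catastrophic forgetting.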
Worst Cases Policy Gradients
Yichuan Charlie Tang, Jian Zhang, Ruslan Salakhutdinov : https://arxiv.org/abs/1911.03618
#MachineLearning #DeepLearning #ArtificialIntelligence
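The "worst cases" objective can be illustrated with the conditional value-at-risk (CVaR) of episode returns, i.e. the mean of the worst alpha-fraction of outcomes. A toy sketch of the risk measure only, not the paper's distributional policy-gradient algorithm:

```python
# Empirical CVaR: instead of the mean return, score a policy by the
# average of its worst alpha-fraction of episode returns.

def cvar(returns, alpha=0.1):
    k = max(1, int(len(returns) * alpha))
    worst = sorted(returns)[:k]   # lowest returns = worst outcomes
    return sum(worst) / k

episode_returns = [10.0, 12.0, 11.0, -50.0, 9.0, 13.0, 8.0, 12.5, 11.5, 10.5]
print(cvar(episode_returns, alpha=0.1))  # -50.0: the worst 10% dominates
```

A policy with a high mean return but a rare catastrophic episode (the -50.0 here) scores badly under CVaR, which is exactly the risk-averse behavior the title refers to.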
Teaching a neural network to use a calculator
Blog by Reiichiro Nakano : https://reiinakano.com/2019/11/12/solving-probability.html
#ArtificialIntelligence #DeepLearning #MachineLearning
reiinakano’s blog
Teaching a neural network to use a calculator
This article explores a seq2seq architecture for solving simple probability problems in Deepmind’s Mathematics Dataset. A transformer is used to map questions to intermediate steps, while an external symbolic calculator evaluates intermediate expressions.…
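The division of labor described in the blurb can be sketched in a few lines. The function names here are hypothetical stand-ins (the real system is a trained transformer over DeepMind's Mathematics Dataset); the point is that the model only emits expression strings while an external calculator does the arithmetic exactly:

```python
# Sketch of the model-plus-calculator split: the "model" generates
# intermediate expressions as text; an external evaluator computes them.
from fractions import Fraction

def untrained_model(question):
    # Stand-in for the seq2seq transformer: returns intermediate
    # expression strings instead of a final numeric answer.
    return ["4 * 3 * 2 * 1", "Fraction(1, 24)"]

def calculator(expr):
    # External symbolic evaluator; Fraction keeps probabilities exact.
    return eval(expr, {"Fraction": Fraction})

steps = untrained_model("Probability of guessing a 4-item ordering?")
total_orderings = calculator(steps[0])   # 24
answer = calculator(steps[1])            # Fraction(1, 24)
print(total_orderings, answer)
```

Offloading evaluation this way means the network never has to learn multiplication itself, only which expressions to write down, which is why the approach works on probability word problems that defeat pure seq2seq baselines.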