A curated list of gradient boosting research papers from the last 25 years, with implementations. It covers NeurIPS, ICML, ICLR, KDD, ICDM, CIKM, AAAI, and other venues.
https://github.com/benedekrozemberczki/awesome-gradient-boosting-papers
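For readers new to the topic, the core idea behind this family of papers fits in a few lines: repeatedly fit a weak learner to the residuals (the negative gradient of the loss) and add it to the ensemble with a shrinkage factor. A minimal sketch with decision stumps and squared error (all names here are illustrative, not taken from any library in the list):

```python
import numpy as np

def fit_stump(x, residual):
    """Find the threshold split on 1-D x that best fits the residual."""
    best = None
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        lm, rm = left.mean(), right.mean()
        err = ((left - lm) ** 2).sum() + ((right - rm) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda z: np.where(z <= t, lm, rm)

def gradient_boost(x, y, n_rounds=50, lr=0.1):
    # Start from the constant prediction; for squared error the
    # negative gradient is simply the residual y - pred.
    pred = np.full_like(y, y.mean(), dtype=float)
    for _ in range(n_rounds):
        stump = fit_stump(x, y - pred)
        pred += lr * stump(x)  # shrinkage: small step toward the residual fit
    return pred

x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x)
pred = gradient_boost(x, y)
print(np.mean((y - pred) ** 2))  # small training error, well under Var(y) ~ 0.5
```

A real implementation would also keep the fitted stumps so the ensemble can score new data; this sketch only tracks the in-sample prediction.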
Very interesting work applying machine learning to higher-order logic and theorem proving. This could eventually change how we understand and program many different things.
https://arxiv.org/abs/1904.03241
HOList: An Environment for Machine Learning of Higher-Order Theorem Proving
We present an environment, benchmark, and deep learning driven automated theorem prover for higher-order logic. Higher-order interactive theorem provers enable the formalization of arbitrary...
DeepMind Made a Math Test For Neural Networks
https://arxiv.org/abs/1904.01557
Analysing Mathematical Reasoning Abilities of Neural Models
Mathematical reasoning---a core ability within human intelligence---presents some unique challenges as a domain: we do not come to understand and solve mathematical problems primarily on the back...
Should AI Research Try to Model the Human Brain?
https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(19)30061-0
Reinforcement Learning, Fast and Slow
Deep reinforcement learning (RL) methods have driven impressive advances in artificial
intelligence in recent years, exceeding human performance in domains ranging from
Atari to Go to no-limit poker. This progress has drawn the attention of cognitive
scientists…
Attentive Generative Adversarial Network for Raindrop Removal from A Single Image
Abstract: "Raindrops adhered to a glass window or camera lens can severely hamper the visibility of a background scene and degrade an image considerably. In this paper, we address the problem by visually removing raindrops, and thus transforming a raindrop degraded image into a clean one. The problem is intractable, since first the regions occluded by raindrops are not given. Second, the information about the background scene of the occluded regions is completely lost for most part. To resolve the problem, we apply an attentive generative network using adversarial training (...)."
Qian et al.: https://arxiv.org/pdf/1711.10098.pdf
#artificialintelligence #deeplearning #generativeadversarialnetwork
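One way to picture the attentive-generator idea (a loose sketch, not the paper's actual architecture): if a network predicts a soft attention map over raindrop regions, the final output can blend a reconstruction with the input so that unoccluded background pixels pass through unchanged:

```python
import numpy as np

def attentive_blend(image, reconstruction, attention):
    """attention in [0, 1]: 1 = raindrop pixel to replace, 0 = keep the input."""
    return attention * reconstruction + (1.0 - attention) * image

rng = np.random.default_rng(0)
image = rng.random((4, 4))          # degraded input (toy values)
clean = rng.random((4, 4))          # hypothetical generator reconstruction
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                # hypothetical raindrop region

out = attentive_blend(image, clean, mask)
# Outside the mask the input is preserved exactly.
print(np.allclose(out[0], image[0]))  # True
```

In the paper the attention map also guides the discriminator toward raindrop regions; this snippet only illustrates the compositing intuition.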
DLTK – Deep Learning Toolkit for Medical Image Analysis, built on top of TensorFlow
Fast prototyping with a low entry threshold & reproducibility in image analysis applications, with a particular focus on medical imaging.
https://github.com/DLTK/DLTK
News & Views: In Nature Methods, two research teams report substantial improvements in the accurate prediction of fragment ion spectra by deep neural networks.
https://www.nature.com/articles/s41592-019-0428-5.epdf
Deep learning adds an extra dimension to peptide fragmentation
The interpretation of fragmentation patterns in tandem mass spectrometry is crucial for peptide sequencing, but the relative intensities of these patterns are difficult to predict computationally. Two groups have applied deep neural networks to address this…
This Review, published in Nature Reviews Genetics, describes different deep learning techniques and how they can be applied to extract biologically relevant information from large, complex genomic data sets.
https://www.nature.com/articles/s41576-019-0122-6.epdf
Deep learning: new computational modelling techniques for genomics
Discovering Neural Wirings (https://arxiv.org/abs/1906.00586)
In past years, developing deep neural architectures has required either manual design (e.g. AlexNet, ResNet, MobileNet) or an expensive search over predefined block structures of layers (NAS, MnasNet, DARTS, ...). What if we see a neural network as a completely unstructured graph, where each node runs a simple operation over a single data point or channel (e.g. a 2D filter) and all the nodes are massively wired together? In this paper we explain how to discover a good wiring of a neural network that minimizes the loss function with a limited amount of computation. We relax the typical notion of layers and instead let channels form connections independently of each other, which allows a much larger space of possible networks. The wiring of our network is not fixed during training: as we learn the network parameters, we also learn the structure itself.
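A toy sketch of the wiring idea (not the paper's actual training procedure): treat channels as graph nodes with a dense set of learnable candidate edges, but on each forward pass wire up only the k largest-magnitude edges, so that updating the weights also changes which edges exist:

```python
import numpy as np

def top_k_wiring(weights, k):
    """Zero out all but the k largest-magnitude candidate edges."""
    flat = np.abs(weights).ravel()
    threshold = np.sort(flat)[-k]          # k-th largest magnitude
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(1)
w = rng.normal(size=(6, 6))       # 6 channel-nodes, dense candidate edges
active = top_k_wiring(w, k=8)     # only 8 edges carry signal this pass
print(np.count_nonzero(active))   # 8 (assuming no magnitude ties)
```

As gradient updates change the magnitudes in `w`, different edges enter and leave the active set, which is the sense in which the structure is learned alongside the parameters.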
MelNet: A Generative Model for Audio in the Frequency Domain
Sean Vasquez and Mike Lewis: https://arxiv.org/abs/1906.01083
Blog: https://sjvasquez.github.io/blog/melnet/
#ArtificialIntelligence #DeepLearning #MachineLearning
A summary of the debate on human-level AI organized by the World Science Festival last Friday.
I shared the stage with Garry Kasparov, Shannon Vallor, Hod Lipson, and moderator Daniel Sieberg.
https://www.zdnet.com/article/artificial-general-intelligence-is-a-rorschach-test-do-we-need-orangutans/
Artificial general intelligence is a Rorschach Test: Perhaps we need orangutans?
A panel discussion between Facebook’s Yann LeCun and fellow AI thinkers debates whether the term artificial general intelligence even means anything. Perhaps the answer is machines more like orangutans.
Functional Adversarial Attacks
Cassidy Laidlaw and Soheil Feizi: https://arxiv.org/abs/1906.00001
#ArtificialIntelligence #DeepLearning #MachineLearning
Deep Learning and the Game of Go
GitHub: https://github.com/maxpumperla/deep_learning_and_the_game_of_go
#artificialintelligence #machinelearning #reinforcementlearning
IoT Network Security from the Perspective of Adversarial Deep Learning. arxiv.org/abs/1906.00076
GRAM: Scalable Generative Models for Graphs with Graph Attention Mechanism
Kawai et al.: https://arxiv.org/abs/1906.01861
#ArtificialIntelligence #GenerativeModels #MachineLearning
Graphs are ubiquitous real-world data structures, and generative models that
approximate distributions over graphs and derive new samples from them have
significant importance. Among the known...
Unsupervised Object Segmentation by Redrawing
Chen et al.: https://arxiv.org/abs/1905.13539
#ArtificialIntelligence #DeepLearning #MachineLearning
Nice collection of PyTorch (and some TensorFlow) Jupyter notebooks for everything deep learning, by Sebastian Raschka.
https://github.com/rasbt/deeplearning-models
Wogrammer has a new story about a female software engineer in Iran, Melika Farahani.
https://medium.com/wogrammer/how-melika-farahani-builds-her-confidence-and-a-path-to-success-41d10026a442
How Melika Farahani Builds Her Confidence and a Path to Success
As soon as she showed an interest in technology, Melika Farahani’s family encouraged her to pursue that path. Despite being a young girl…
Out With The Old and In With The New: How Samira Korani promotes Artificial Intelligence & Tech in Iran
https://medium.com/wogrammer/out-with-the-old-and-in-with-the-new-how-samira-korani-promotes-artificial-intelligence-tech-in-bd00ae7d4f92