Inferring the quantum density matrix with machine learning
Cranmer et al.: https://arxiv.org/abs/1904.05903
#QuantumPhysics #Physics #ArtificialIntelligence #MachineLearning
Linguistic Knowledge and Transferability of Contextual Representations
Liu et al.: https://arxiv.org/abs/1903.08855
#ArtificialIntelligence #DeepLearning #MachineLearning
CS294-158 Deep Unsupervised Learning
Ilya Sutskever @ilyasut guest lecture on GPT-2: https://youtu.be/X-B3nAN7YRM
#DeepLearning #MachineLearning #UnsupervisedLearning
There's a new convolution operation in town!
For CNNs, the proposed "Octave Convolution" (OctConv) can be used as a drop-in replacement for plain vanilla convolutions without any changes to the network architecture.
The idea behind OctConv is that image information is conveyed at different spatial frequencies: high frequencies carry fine details, while low frequencies carry global structure.
OctConv therefore factorizes the feature maps into high-frequency and low-frequency groups and reduces the spatial resolution of the low-frequency maps by an octave (a factor of two). This not only lowers memory and compute cost but also improves evaluation results such as accuracy on image classification.
Paper: https://export.arxiv.org/pdf/1904.05049
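The factorization step described above can be sketched in a few lines of NumPy. This is a toy illustration of the idea only, not the paper's implementation: the function name `octave_split`, the channel ratio `alpha`, and the use of 2x2 average pooling for the octave reduction are my own assumptions for the sketch.

```python
import numpy as np

def octave_split(x, alpha=0.5):
    """Split a feature map x of shape (C, H, W) into high- and
    low-frequency parts. The low-frequency part gets alpha*C channels
    stored at half spatial resolution (one octave lower), produced
    here by 2x2 average pooling."""
    c = x.shape[0]
    c_low = int(alpha * c)
    x_high = x[c_low:]                       # (C - c_low, H, W), full resolution
    low = x[:c_low]
    h, w = low.shape[1] // 2, low.shape[2] // 2
    # 2x2 average pooling via reshape + mean over the pooling windows
    x_low = low[:, :2 * h, :2 * w].reshape(c_low, h, 2, w, 2).mean(axis=(2, 4))
    return x_high, x_low

x = np.random.rand(64, 32, 32)
hi, lo = octave_split(x, alpha=0.5)
print(hi.shape, lo.shape)                    # (32, 32, 32) (32, 16, 16)
saving = 1 - (hi.size + lo.size) / x.size
print(f"memory saved: {saving:.1%}")         # 37.5%
```

With `alpha=0.5`, half the channels drop to quarter area, so the factorized representation uses 37.5% less memory than the original map — this is the source of the cost reduction the post mentions (the actual OctConv layer then runs four convolution paths between the high- and low-frequency maps).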
Evolved Art with Transparent, Overlapping, and Geometric Shapes
Berg et al.: https://arxiv.org/abs/1904.06110
#NeuralComputing #EvolutionaryComputing #ArtificialIntelligence
Full Stack Deep Learning Bootcamp
(Most of the) lectures from Day 1: https://fullstackdeeplearning.com/march2019
Happy learning!
#ArtificialIntelligence #DeepLearning #MachineLearning
L2 Regularization and Batch Norm
Blog by David Wu: https://blog.janestreet.com/l2-regularization-and-batch-norm/
#artificialintelligence #datascience #machinelearning
Google releases massive visual databases for machine learning
https://www.datasciencecentral.com/profiles/blogs/google-releases-massive-visual-databases-for-machine-learning
From Data Science Central, written by Richard Lawler: millions of images and YouTube videos, linked and tagged to teach computers what a spoon is.
Visual-Inertial Mapping with Non-Linear Factor Recovery. https://arxiv.org/abs/1904.06504
Towards Self-similarity Consistency and Feature Discrimination for Unsupervised Domain https://arxiv.org/abs/1904.06490
YouTube UGC Dataset for Video Compression Research. https://arxiv.org/abs/1904.06457
Patch redundancy in images: a statistical testing framework and some applications. https://arxiv.org/abs/1904.06428
Towards Accurate One-Stage Object Detection with AP-Loss. https://arxiv.org/abs/1904.06373
The iWildCam 2018 Challenge Dataset. https://arxiv.org/abs/1904.05986
Best Paper Awards in Computer Science (since 1996)
A well-maintained list: https://jeffhuang.com/best_paper_awards.html
#artificialintelligence #award #machinelearning #papers #research @ArtificialIntelligenceArticles
Disney is trying to automate animation
https://arxiv.org/pdf/1904.05440.pdf
Visualizing Attention in Transformer-Based Language Representation Models
Jesse Vig: https://arxiv.org/abs/1904.02679
#ArtificialIntelligence #MachineLearning #NaturalLanguageProcessing
Natural Language Semantics With Pictures: Some Language & Vision Datasets and Potential U... https://arxiv.org/abs/1904.07318