InterpretML: A Unified Framework for Machine Learning Interpretability. https://arxiv.org/abs/1909.09223
arXiv.org
InterpretML: A Unified Framework for Machine Learning Interpretability
InterpretML is an open-source Python package which exposes machine learning interpretability algorithms to practitioners and researchers. InterpretML exposes two types of interpretability -...
Learning to Conceal: A Deep Learning Based Method for Preserving Privacy and Avoiding Pre... https://arxiv.org/abs/1909.09156
arXiv.org
Learning to Conceal: A Deep Learning Based Method for Preserving...
In this paper, we introduce a learning model able to conceal personal
information (e.g. gender, age, ethnicity, etc.) from an image, while
maintaining any additional information present in the...
8 Solid Career Tips We Can Take From Andrew Ng’s New Webinar
https://analyticsindiamag.com/8-solid-career-tips-we-can-take-from-andrew-ngs-new-webinar/
Analytics India Magazine
8 Solid Career Tips We Can Take From Andrew Ng’s New Webinar
Chinese-American computer scientist and statistician Andrew Ng is one of the most popular researchers among millennials for his work in artificial intelligence...
The Curious Case of Neural Text Degeneration
Holtzman et al.: https://arxiv.org/abs/1904.09751#
#ArtificialIntelligence #DeepLearning #MachineLearning
arXiv.org
The Curious Case of Neural Text Degeneration
Despite considerable advancements with deep neural language models, the enigma of neural text degeneration persists when these models are tested as text generators. The counter-intuitive empirical...
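The decoding fix this paper proposes is Nucleus (top-p) Sampling: keep only the smallest set of tokens whose cumulative probability exceeds p, renormalize, and sample from that set. A minimal pure-Python sketch (the toy probability vector is made up for illustration):

```python
import random

def nucleus_sample(probs, p=0.9, rng=random):
    """Top-p (nucleus) sampling: restrict sampling to the smallest
    high-probability set of tokens whose mass exceeds p."""
    # Token indices sorted by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, total = [], 0.0
    for i in order:
        nucleus.append(i)
        total += probs[i]
        if total >= p:
            break
    # Renormalize over the nucleus and draw one token.
    r = rng.random() * total
    acc = 0.0
    for i in nucleus:
        acc += probs[i]
        if r <= acc:
            return i
    return nucleus[-1]

# Example: a peaked 5-token distribution; with p=0.9 only the top
# three tokens (mass 0.5 + 0.25 + 0.15 = 0.90) can ever be sampled.
probs = [0.5, 0.25, 0.15, 0.07, 0.03]
sample = nucleus_sample(probs, p=0.9)
```

Shrinking p makes generation more conservative: at p=0.4 the nucleus above collapses to the single most likely token.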
Handbook of Graphical Models
Marloes Maathuis, Mathias Drton, Steen Lauritzen, Martin Wainwright : https://stat.ethz.ch/~maathuis/papers/Handbook.pdf…
#ArtificialIntelligence #GraphicalModels #Handbook
Mathematics for Machine Learning
https://gwthomas.github.io/docs/math4ml.pdf
Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch
By 🤗 Hugging Face : https://huggingface.co/transformers
#Transformers #MachineLearning #NLP
Recurrent Independent Mechanisms
Goyal et al.: https://arxiv.org/abs/1909.10893
#MachineLearning #DeepLearning #ArtificialIntelligence
arXiv.org
Recurrent Independent Mechanisms
Learning modular structures which reflect the dynamics of the environment can lead to better generalization and robustness to changes which only affect a few of the underlying causes. We propose...
Cleaning tedious data is not simple as it seems
https://www.youtube.com/watch?v=MiiWzJE0fEA
YouTube
"Probabilistic scripts for automating common-sense tasks" by Alexander Lew
As engineers, we love automating tedious tasks. But when those tasks require common-sense reasoning, automation can be difficult. Consider, for example, cleaning a messy dataset full of typos, NULL values, numbers in the wrong units, and other problems. People…
A Roundup Review of the Best Deep Learning Books
https://blog.soshace.com/en/python/a-roundup-review-of-the-best-deep-learning-books/
Soshace
A Roundup Review of the Best Deep Learning Books - Soshace
If you’re interested in starting out or expanding your knowledge in neural networks and deep learning, then this roundup review of the best deep learning books might be a good starting point.
Here's Lex Fridman's conversation with Leonard Susskind, a professor of theoretical physics at Stanford, one of the fathers of string theory, and one of the greatest physicists of our time, both as a researcher and an educator: https://lnkd.in/eP9pR4d https://t.iss.one/ArtificialIntelligenceArticles
Attention? Attention!
Blog by Lilian Weng : https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html
#machinelearning #neuralnetwork #transformers
Lil'Log
Attention? Attention!
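The core operation the post builds up to is scaled dot-product attention: each query scores every key, the scores are softmax-normalized, and the output is the resulting weighted average of the values. A toy pure-Python sketch (tiny hand-picked matrices, single head, no batching):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V.
    Q: (n_q, d), K: (n_k, d), V: (n_k, d_v), as plain lists of lists."""
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# One query attending over two key/value pairs; the query matches the
# first key more closely, so the output leans toward the first value.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
result = attention(Q, K, V)
```

Because softmax weights sum to one, each output coordinate always lies between the corresponding value coordinates, interpolated by relevance.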
Robot Sound Interpretation: Combining Sight and Sound in Learning-Based Control. https://arxiv.org/abs/1909.09172
DeepView: Visualizing the behavior of deep neural networks in a part of the data space. https://arxiv.org/abs/1909.09154
Timage -- A Robust Time Series Classification Pipeline. https://arxiv.org/abs/1909.09149
HYPE: A Benchmark for Human eYe Perceptual Evaluation of Generative Models
Human evaluation of generative models has been ad hoc.
The authors propose a standard human benchmark for generative realism, grounded in psychophysics research on perception.
https://arxiv.org/abs/1904.01121
https://hype.stanford.edu/
arXiv.org
HYPE: A Benchmark for Human eYe Perceptual Evaluation of Generative Models
Generative models often use human evaluations to measure the perceived quality of their outputs. Automated metrics are noisy indirect proxies, because they rely on heuristics or pretrained...
BagNet: Berkeley Analog Generator with Layout Optimizer Boosted with Deep Neural Networks
Hakhamaneshi et al.: https://arxiv.org/abs/1907.10515
#SignalProcessing #MachineLearning #NeuralComputing
Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation
https://www.frontiersin.org/articles/10.3389/fncom.2017.00024/full
Frontiers
Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation
We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction…
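In symbols, the two phases contrast two fixed points of the network state $s$ under an energy $E$ and a cost $C$ that nudges outputs toward the target (a sketch of the rule the paper derives, with $\beta$ the nudging strength):

```latex
% Free phase: relax to a fixed point of the energy alone.
s^{0} = \arg\min_{s} E(\theta, s)
% Nudged phase: weakly clamp the outputs toward the target.
s^{\beta} = \arg\min_{s} \big( E(\theta, s) + \beta\, C(s) \big)
% Update: contrast the two phases; as \beta \to 0 this follows
% the gradient of the cost at the free fixed point.
\Delta\theta \;\propto\; -\frac{1}{\beta}
  \left( \frac{\partial E}{\partial \theta}(\theta, s^{\beta})
       - \frac{\partial E}{\partial \theta}(\theta, s^{0}) \right)
```

Both phases use the same neural computation, which is the paper's bridge between energy-based relaxation and backpropagation-style credit assignment.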