Continuous Adaptation for Machine Learning System to Data Changes
https://blog.tensorflow.org/2021/12/continuous-adaptation-for-machine.html
blog.tensorflow.org
Learn how ML models can continuously adapt as the world changes, avoid issues, and take advantage of new realities in this guest blog post.
Improving Vision Transformer Efficiency and Accuracy by Learning to Tokenize
https://ai.googleblog.com/2021/12/improving-vision-transformer-efficiency.html
research.google
Posted by Michael Ryoo, Research Scientist, Robotics at Google and Anurag Arnab, Research Scientist, Google Research Transformer models consistentl...
General and Scalable Parallelization for Neural Networks
https://ai.googleblog.com/2021/12/general-and-scalable-parallelization.html
research.google
Posted by Yuanzhong Xu and Yanping Huang, Software Engineers; Google Research, Brain Team Scaling neural networks, whether it be the amount of trai...
More Efficient In-Context Learning with GLaM
https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html
research.google
Posted by Andrew M Dai and Nan Du, Research Scientists, Google Research, Brain Team Large language models (e.g., GPT-3) have many significant capab...
A Fast WordPiece Tokenization System
https://ai.googleblog.com/2021/12/a-fast-wordpiece-tokenization-system.html
research.google
Posted by Xinying Song, Staff Software Engineer and Denny Zhou, Senior Staff Research Scientist, Google Research Tokenization is a fundamental pre-...
Interpretable Deep Learning for Time Series Forecasting
https://ai.googleblog.com/2021/12/interpretable-deep-learning-for-time.html
blog.research.google
Training Machine Learning Models More Efficiently with Dataset Distillation
https://ai.googleblog.com/2021/12/training-machine-learning-models-more.html
research.google
Posted by Timothy Nguyen, Research Engineer and Jaehoon Lee, Senior Research Scientist, Google Research For a machine learning (ML) algorithm to b...
A Scalable Approach for Partially Local Federated Learning
https://ai.googleblog.com/2021/12/a-scalable-approach-for-partially-local.html
Googleblog
Google Research: Themes from 2021 and Beyond
https://ai.googleblog.com/2022/01/google-research-themes-from-2021-and.html
research.google
Posted by Jeff Dean, Senior Fellow and SVP of Google Research, on behalf of the entire Google Research community Over the last several decades, I'v...
Scaling Vision with Sparse Mixture of Experts
https://ai.googleblog.com/2022/01/scaling-vision-with-sparse-mixture-of.html
Googleblog
Learning to Route by Task for Efficient Inference
https://ai.googleblog.com/2022/01/learning-to-route-by-task-for-efficient.html
Googleblog
Introducing StylEx: A New Approach for Visual Explanation of Classifiers
https://ai.googleblog.com/2022/01/introducing-stylex-new-approach-for.html
Googleblog
LaMDA: Towards Safe, Grounded, and High-Quality Dialog Models for Everything
https://ai.googleblog.com/2022/01/lamda-towards-safe-grounded-and-high.html
research.google
Posted by Heng-Tze Cheng, Senior Staff Software Engineer and Romal Thoppilan, Senior Software Engineer, Google Research, Brain Team Language models...