(https://axnegar.fahares.com/axnegar/KNFemZUOJBWQVo/4928764.jpg)
#paper
#artificial_intelligence
🌎Learning about the world through video
At TwentyBN, we build AI systems that enable a human-like visual understanding of the world. Today, we are releasing two large-scale video datasets (256,591 labeled videos) to teach machines visual common sense. The first dataset allows machines to develop a fine-grained understanding of basic actions that occur in the physical world. The second dataset of dynamic hand gestures enables robust cognition models for human-computer interaction.
Paper:
https://arxiv.org/abs/1706.04261
Dataset used in the paper:
https://www.twentybn.com/datasets
(https://axnegar.fahares.com/axnegar/UkPVNkELLZovxY/4928996.jpg)
#paper
#artificial_intelligence
🔵A simple neural network module for relational reasoning
Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering using the bAbI suite of tasks; and complex reasoning about dynamic physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Our work shows how a deep learning architecture equipped with an RN module can implicitly discover and learn to reason about entities and their relations.
https://arxiv.org/abs/1706.01427
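To make the plug-and-play idea concrete, here is a minimal PyTorch sketch of an RN module following the paper's formulation RN(O) = f_phi(sum over i,j of g_theta(o_i, o_j)); the layer sizes, object count, and object dimension are illustrative assumptions, not the paper's exact configuration (which also conditions g_theta on a question embedding for CLEVR).

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    def __init__(self, obj_dim, hidden_dim=256, out_dim=10):
        super().__init__()
        # g_theta scores every ordered pair of objects
        self.g = nn.Sequential(
            nn.Linear(2 * obj_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # f_phi maps the aggregated relation vector to the task output
        self.f = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, objects):                 # objects: (batch, n_obj, obj_dim)
        b, n, d = objects.shape
        # Build all ordered pairs (o_i, o_j) by broadcasting and concatenation.
        o_i = objects.unsqueeze(2).expand(b, n, n, d)
        o_j = objects.unsqueeze(1).expand(b, n, n, d)
        pairs = torch.cat([o_i, o_j], dim=-1).reshape(b, n * n, 2 * d)
        relations = self.g(pairs).sum(dim=1)    # sum over all pairs
        return self.f(relations)

# Usage: 8 "objects" (e.g., CNN feature-map cells) of dimension 64.
rn = RelationNetwork(obj_dim=64)
out = rn(torch.randn(2, 8, 64))                 # -> (2, 10)
```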
(https://axnegar.fahares.com/axnegar/J9nr9ikO1gKiMs/4933478.jpg)
#artificial_intelligence
#book_introduction
#machine_learning
#python
A Practical Implementation Guide to Predictive Data Analytics Using Python
Covers basic to advanced topics in an easy, step-oriented manner
Concise on theory, with a strong focus on a practical, hands-on approach
Explores advanced topics such as hyperparameter tuning, deep natural language processing, neural networks, and deep learning
Describes state-of-the-art best practices for model tuning to achieve better model accuracy
🔵About The Book:
This book is a practical guide that takes you from novice to master in machine learning with Python in six steps. The six-step path is inspired by the "six degrees of separation" theory, which states that everyone and everything is at most six steps away; note that the theory concerns the quality of connections rather than their mere existence. Accordingly, the six steps are designed to move gradually from fundamentals to advanced topics, helping a beginner with little or no knowledge of machine learning in Python become a master practitioner. The book is also helpful for current machine learning practitioners who want to learn advanced topics such as hyperparameter tuning, various ensemble techniques, Natural Language Processing (NLP), deep learning, and the basics of reinforcement learning.
🌎Who This Book Is For:
This book will serve as a great resource for learning machine learning concepts and implementation techniques for:
Python developers or data engineers looking to expand their knowledge or career into machine learning.
Current non-Python (R, SAS, SPSS, MATLAB, or any other language) machine learning practitioners looking to expand their implementation skills in Python.
Novice machine learning practitioners looking to learn advanced topics such as hyperparameter tuning, various ensemble techniques, Natural Language Processing (NLP), deep learning, and the basics of reinforcement learning.
https://www.datasciencecentral.com/profiles/blogs/book-mastering-machine-learning-with-python-in-six-steps?utm_content=buffer4efbf&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
https://www.apress.com/us/book/9781484228654
(https://axnegar.fahares.com/axnegar/ICLotCHTZgb6sn/4933655.jpg)
#news
#machine_learning
#brain
🔵Machine Learning and the Language of the Brain
For years, researchers have been trying to figure out how the human brain organizes language – what happens in the brain when a person is presented with a word or an image. The work has academic rewards of its own, given the ongoing push by researchers to better understand the myriad ways in which the human brain works.
At the same time, ongoing studies can help doctors and scientists learn how to better treat people with aphasia or other brain disorders caused by strokes, tumors or trauma that impair a person’s ability to communicate – to speak, read, write and listen.
Tom Mitchell, the E. Fredkin University Professor at Carnegie Mellon University, helps lead a neurosemantics research team. For the past several years he has been marrying brain imaging technologies such as functional MRI (fMRI) and magnetoencephalography (MEG) with machine learning techniques to model how the brain understands what it reads and sees, and to answer the array of questions that cascade from that: whether neural representations are similar from one person to another, whether anything changes depending on language, and how the brain handles not only single words but adjective-noun combinations, verbs, phrases, and full sentences.
https://www.nextplatform.com/2017/06/26/machine-learning-language-brain/
(https://axnegar.fahares.com/axnegar/QUgiPqlBTi0vAR/4936535.jpg)
#news
#neural_network
🔵Draw Together with a Neural Network
We made an interactive web experiment that lets you draw together with a recurrent neural network model called sketch-rnn. We taught this neural net to draw by training it on millions of doodles collected from the Quick, Draw! game. Once you start drawing an object, sketch-rnn will come up with many possible ways to continue drawing this object based on where you left off. Try the first demo.
In the above demo, you are instructed to start drawing a particular object. Once you stop doodling, the neural network takes over and attempts to guess the rest of your doodle. You can take over drawing again and continue where you left off. We trained around 100 models you can choose to experiment with, and some models are trained on multiple categories.
https://magenta.tensorflow.org/sketch-rnn-demo
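The "take over and guess the rest" loop can be sketched in a few lines of PyTorch: condition a recurrent model on the user's strokes, then sample a continuation autoregressively. `StrokeRNN` and the (dx, dy, pen_lifted) stroke format below are illustrative stand-ins for the actual sketch-rnn model, which samples from a mixture-density output rather than the greedy prediction used here for brevity.

```python
import torch
import torch.nn as nn

class StrokeRNN(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(3, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)       # predicts next (dx, dy, pen_lifted)

    def forward(self, strokes, state=None):
        out, state = self.rnn(strokes, state)
        return self.head(out), state

def continue_doodle(model, user_strokes, n_steps=50):
    """Warm up on the user's strokes, then sample a continuation."""
    with torch.no_grad():
        pred, state = model(user_strokes)       # condition on what was drawn
        step = pred[:, -1:, :]                  # last prediction seeds the loop
        continuation = []
        for _ in range(n_steps):
            continuation.append(step)
            step, state = model(step, state)    # feed the prediction back in
    return torch.cat(continuation, dim=1)

model = StrokeRNN()
user = torch.randn(1, 20, 3)                    # 20 stroke steps drawn so far
rest = continue_doodle(model, user)             # -> (1, 50, 3)
```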
(https://axnegar.fahares.com/axnegar/RWW7qkbo5JBqL7/4936702.jpg)
#paper
#artificial_intelligence
🔵Deep Semantics-Aware Photo Adjustment
Automatic photo adjustment aims to mimic the photo retouching style of professional photographers and to adjust photos to the learned style automatically. There have been many attempts to model tone and color adjustment globally with low-level color statistics, and spatially varying photo adjustment methods have been studied that exploit high-level features and semantic label maps. Those methods are semantics-aware, since the color mapping depends on the high-level semantic context. However, their performance is limited by pre-computed hand-crafted features, and it is hard to reflect the user's preferences in the adjustment. In this paper, we propose a deep neural network that models semantics-aware photo adjustment. The proposed network exploits bilinear models, i.e., multiplicative interactions of the color and the contextual features.
https://arxiv.org/abs/1706.08260v1
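The multiplicative interaction the abstract mentions can be written as a bilinear form: one interaction matrix per output channel combines a per-pixel color feature with a semantic context feature. The sketch below is a minimal illustration of that idea only; the feature dimensions and the surrounding architecture are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class BilinearAdjust(nn.Module):
    def __init__(self, color_dim=8, ctx_dim=16, out_dim=3):
        super().__init__()
        # One (color_dim x ctx_dim) interaction matrix per output channel.
        self.W = nn.Parameter(torch.randn(out_dim, color_dim, ctx_dim) * 0.01)

    def forward(self, color_feat, ctx_feat):
        # color_feat: (n_pixels, color_dim), ctx_feat: (n_pixels, ctx_dim)
        # out[n, o] = sum_{c, k} color[n, c] * W[o, c, k] * ctx[n, k]
        return torch.einsum('nc,ock,nk->no', color_feat, self.W, ctx_feat)

adjust = BilinearAdjust()
rgb_out = adjust(torch.randn(100, 8), torch.randn(100, 16))  # -> (100, 3)
```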
🔵Deep learning with Microsoft Cognitive Toolkit
Explore the toolkit we use to build AI tools and train your own deep learning algorithms to learn like the human brain.
https://www.youtube.com/watch?v=OEnGqsvw52E
(https://axnegar.fahares.com/axnegar/l1Oeh9iV3jMUNt/4948024.jpg)
#paper
#deep_learning
🔵Awesome Deep learning papers and other resources
A list of recent papers on deep learning and deep reinforcement learning, sorted by date so the most recent papers appear first. I will keep the list up to date and add notes to the papers.
Papers and software flagged with a star are the more important or popular ones.
https://github.com/endymecy/awesome-deeplearning-resources
(https://axnegar.fahares.com/axnegar/SC2x8mg3M7lCUY/4948139.jpg)
#tutorial
#computer_vision
🔵Analyzing The Papers Behind Facebook's Computer Vision Approach
You know that company called Facebook? Yeah, the one that has 1.6 billion people hooked on their website. Take all of the happy birthday posts, embarrassing pictures of you as a little kid, that one family relative that likes every single one of your statuses, and you have a whole lot of data to analyze.
https://adeshpande3.github.io/adeshpande3.github.io/Analyzing-the-Papers-Behind-Facebook's-Computer-Vision-Approach/
#news
🔵7 Real-Life Use Cases for Google DeepMind’s Machine Learning Systems
The artificial intelligence company has already revealed some intriguing applications of its technology.
https://www.computerworlduk.com/galleries/data/ways-organisations-are-using-google-deepminds-machine-learning-algorithms-3647513/
🔵TimeNet: Pre-trained deep recurrent neural network for time series classification
Inspired by the tremendous success of deep Convolutional Neural Networks as generic feature extractors for images, we propose TimeNet: a deep recurrent neural network (RNN) trained on diverse time series in an unsupervised manner, using sequence-to-sequence (seq2seq) models to extract features from time series. Rather than relying on data from the problem domain, TimeNet attempts to generalize time series representation across domains by ingesting time series from several domains simultaneously. Once trained, TimeNet can be used as a generic off-the-shelf feature extractor for time series. The representations or embeddings given by a pre-trained TimeNet are found to be useful for time series classification (TSC). For several publicly available datasets from the UCR TSC Archive and industrial telematics sensor data from vehicles, we observe that a classifier learned over the TimeNet embeddings yields significantly better performance than (i) a classifier learned over the embeddings given by a domain-specific RNN, and (ii) a nearest-neighbor classifier based on Dynamic Time Warping.
https://arxiv.org/abs/1706.08838
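A minimal PyTorch sketch of the TimeNet recipe: train an RNN encoder-decoder to reconstruct time series, then reuse the encoder's final hidden state as a generic embedding for any downstream classifier. The GRU choice, layer sizes, and simplified decoding (the paper decodes the reversed sequence with teacher forcing) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Seq2SeqAE(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(1, hidden, batch_first=True)
        self.decoder = nn.GRU(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def embed(self, x):                 # x: (batch, seq_len, 1)
        _, h = self.encoder(x)
        return h[-1]                    # (batch, hidden): the TimeNet-style feature

    def forward(self, x):
        _, h = self.encoder(x)
        # Decode from the encoder state (reversal and teacher forcing omitted).
        dec_in = torch.zeros_like(x)
        y, _ = self.decoder(dec_in, h)
        return self.out(y)

model = Seq2SeqAE()
series = torch.randn(32, 100, 1)
recon = model(series)                   # train with MSE(recon, series), no labels
features = model.embed(series)          # (32, 64) -> feed to any off-the-shelf classifier
```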
(https://axnegar.fahares.com/axnegar/UYs2z9cy5SSvU7/4950668.jpg)
#paper
#activity_recognition
🔵Recurrent Residual Learning for Action Recognition
Action recognition is a fundamental problem in computer vision with many potential applications, such as video surveillance, human-computer interaction, and robot learning. Given pre-segmented videos, the task is to recognize the actions happening within them. Historically, hand-crafted video features were used to address the task. With the success of deep ConvNets as an image analysis method, many extensions of standard ConvNets have been proposed to process variable-length video data. In this work, we propose a novel recurrent ConvNet architecture called recurrent residual networks to address the task of action recognition. The approach extends ResNet, a state-of-the-art model for image classification. While the original formulation of ResNet aims at learning spatial residuals in its layers, we extend the approach by introducing recurrent connections that allow the network to learn a spatio-temporal residual. In contrast to fully recurrent networks, our temporal connections only allow a limited range of preceding frames to contribute to the output for the current frame, enabling efficient training and inference as well as limiting the temporal context to a reasonable local range around each frame. On a large-scale action recognition dataset, we show that our model improves over both the standard ResNet architecture and a ResNet extended by a fully recurrent layer.
https://arxiv.org/abs/1706.08807
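A minimal sketch of the core idea: a ResNet-style residual F(x_t) plus a recurrent term computed from the previous frame's activations, so the block learns a spatio-temporal residual with a deliberately limited temporal window. The exact placement of the recurrent connection and the layer shapes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RecurrentResBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.temporal = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x_t, h_prev=None):
        res = self.spatial(x_t)
        if h_prev is not None:                  # limited temporal context:
            res = res + self.temporal(h_prev)   # only the previous frame here
        return torch.relu(x_t + res)            # identity shortcut, as in ResNet

block = RecurrentResBlock(16)
frames = torch.randn(8, 4, 16, 32, 32)          # (batch, time, C, H, W)
h, outputs = None, []
for t in range(frames.shape[1]):
    h = block(frames[:, t], h)
    outputs.append(h)
```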
👁🗨#introducing_AI_researchers
#introducing_AI_researchers_worldwide
You know what artificial intelligence is, and that it has spread into every field, but you may not be familiar with AI researchers and technologists. On this channel, every Friday, we plan to introduce AI researchers and other notable figures in AI from Iran and around the world. Stay with us.
🔵ANDREW NG
If you have seen Coursera's machine learning courses, you must know Andrew Ng. He is one of the driving figures in advancing machine learning, and the course mentioned above has so far taught the fundamentals of machine learning to over 100,000 people.
His specialty is deep learning, and through this expertise he led Google's large Google Brain project, in which the implemented algorithm managed to learn the concept of a cat without any guidance.
He is also a professor at Stanford University, and in recent years he has been pursuing even more remarkable work in deep learning at Baidu Research.
Andrew Ng is VP & Chief Scientist of Baidu; Co-Chairman and Co-Founder of Coursera; and an Adjunct Professor at Stanford University.
In 2011 he led the development of Stanford University’s main MOOC (Massive Open Online Courses) platform and also taught an online Machine Learning class to over 100,000 students, leading to the founding of Coursera. Ng’s goal is to give everyone in the world access to a great education, for free. Today, Coursera partners with some of the top universities in the world to offer high quality online courses, and is the largest MOOC platform in the world.
Ng also works on machine learning with an emphasis on deep learning. He founded and led the “Google Brain” project, which developed massive-scale deep learning algorithms. This resulted in the famous “Google cat” result, in which a massive neural network with 1 billion parameters learned from unlabeled YouTube videos to detect cats. More recently, he has continued to work on deep learning and its applications to computer vision and speech.
Personal website:
https://www.andrewng.org/
https://online.stanford.edu/instructors/andrew-ng
🔹https://en.wikipedia.org/wiki/Andrew_Ng
🔵His talks on "AI is the new electricity":
https://www.youtube.com/watch?v=21EiKfQYZXc
https://tedxboston.us/speaker/ng-2016
https://www.youtube.com/watch?v=AY4ajbu_G3k
Machine learning course:
https://www.coursera.org/instructor/andrewng
🔹On Google Scholar:
https://scholar.google.nl/scholar?hl=nl&q=Andrew+Ng&btnG=&lr=
LinkedIn:
https://www.linkedin.com/in/andrewyng/
Twitter:
https://twitter.com/AndrewYNg
(https://axnegar.fahares.com/axnegar/BuDYfSJbReVMyt/4951821.jpg)
#tutorial
#deep_learning
🌎Deep Learning CNN’s in Tensorflow with GPUs
In this tutorial, you’ll learn the architecture of a convolutional neural network (CNN), how to create a CNN in Tensorflow, and how to make predictions on image labels. Finally, you’ll learn how to run the model on a GPU so you can spend your time creating better models, not waiting for them to converge.
https://hackernoon.com/deep-learning-cnns-in-tensorflow-with-gpus-cba6efe0acc2
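As a rough taste of what the tutorial covers, here is a minimal CNN in TensorFlow/Keras; the layer sizes and the MNIST example are illustrative, not the tutorial's exact model. With a GPU build of TensorFlow installed, Keras places the model on the GPU automatically.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),   # label predictions
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Example: train on MNIST digits.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0                   # add channel dim, scale to [0, 1]
model.fit(x_train, y_train, epochs=1, batch_size=64)
```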
(https://axnegar.fahares.com/axnegar/ay1PVFtx7OswKm/4954035.jpg)
#artificial_intelligence
#challenge
🔵Challenges of #ArtificialIntelligence
Until a few years ago, #ArtificialIntelligence (#AI) resembled nuclear fusion in its unfulfilled promise: it had been around a long time but had not reached the spectacular heights foreseen in its early stages. Now, however, artificial intelligence is no longer the future. It is here and now, realizing its potential to achieve human-like capabilities, so it is the right time to ask: how can business leaders adapt AI to take advantage of the specific strengths of man and machine?
AI is swiftly becoming the foundational technology in areas as diverse as self-driving cars, financial trading, and connected homes. Self-learning algorithms are now routinely embedded in mobile and online services. Researchers have leveraged massive gains in processing power and the data streaming from digital devices and connected sensors to improve AI performance, and the resulting progress in robotics, self-driving cars, speech processing, and natural language understanding is impressive.
But for all the advantages AI can offer, there are still challenges for companies that want to adopt #AI. As AI is a vast domain, listing every challenge is impossible, but we've listed a few generic challenges of artificial intelligence below: AI's situated approach in the real world; learning with human intervention; access to other disciplines; multitasking; and validation and certification of AI systems.
https://www.xorlogics.com/2017/06/26/challenges-of-artificialintelligence/?utm_content=buffereb35e&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
(https://axnegar.fahares.com/axnegar/t2s8ljz4pzloBl/4954190.jpg)
#deep_learning
#artificial_intelligence
#neural_network
#course
🔵Deep Learning: Artificial Neural Networks with Python
This online course, taught by two machine learning and data science experts, is designed to teach you how to create deep learning algorithms in Python. Templates are included. The course is split into 32 sections covering over 179 artificial neural network topics in video format, with a certificate of completion at the end. Online learning is very flexible (expiry dates may vary from course to course depending on the provider).
https://how-to-learn-online.com/artificial-neural-network-with-python
Course:
https://www.udemy.com/deeplearning/?siteID=9PxUyjpjRL8-WXOxTjtgjAAjSexNfWoxZA&LSNPUBID=9PxUyjpjRL8
(https://axnegar.fahares.com/axnegar/CbOOpo1ePkVIbw/4954463.jpg)
🔵Detecting Small Signs from Large Images
In the past decade, Convolutional Neural Networks (CNNs) have proven successful for object detection. However, the size of the network input is limited by the amount of memory available on GPUs, and performance degrades when detecting small objects. To reduce memory usage and improve the detection of small traffic signs, we propose an approach for detecting small traffic signs in large images under real-world conditions. In particular, large images are broken into small patches that are fed to a Small-Object-Sensitive CNN (SOS-CNN), modified from the Single Shot Multibox Detector (SSD) framework with a VGG-16 base network, to produce patch-level detection results. Scale invariance is achieved by applying the SOS-CNN on an image pyramid. Image-level detections are then obtained by projecting all patch-level detection results back to the image at the original scale. Experimental results on a real-world traffic sign dataset demonstrate the effectiveness of the proposed method in terms of detection accuracy and recall, especially for small signs.
https://arxiv.org/pdf/1706.08574.pdf
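A sketch of the patch-then-project step the abstract describes: tile the large image into overlapping patches, detect per patch, and shift each detection back to image coordinates. Here `detect_patch` is a hypothetical stand-in for the patch-level detector (the paper's SOS-CNN); the tiling parameters are illustrative, and the image pyramid and final NMS are omitted.

```python
import numpy as np

def detect_small_objects(image, detect_patch, patch=512, stride=384):
    """Run a patch-level detector over a large image and merge results."""
    h, w = image.shape[:2]
    detections = []                         # (x1, y1, x2, y2, score) tuples
    for top in range(0, max(h - patch, 0) + 1, stride):
        for left in range(0, max(w - patch, 0) + 1, stride):
            crop = image[top:top + patch, left:left + patch]
            for (x1, y1, x2, y2, score) in detect_patch(crop):
                # Project patch coordinates back to the original image.
                detections.append((x1 + left, y1 + top,
                                   x2 + left, y2 + top, score))
    return detections                       # in practice: pad edges, then apply NMS

# Dummy detector for demonstration: one fake box per patch.
fake = lambda crop: [(10, 10, 40, 40, 0.9)]
boxes = detect_small_objects(np.zeros((1200, 1600, 3)), fake)
```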
(https://axnegar.fahares.com/axnegar/OLp413Uquydc7T/4960789.jpg)
#artificial_intelligence
#paper
🔵Perceptual Adversarial Networks for Image-to-Image Transformation
Chaoyue Wang, Chang Xu, Chaohui Wang, Dacheng Tao
(Submitted on 28 Jun 2017)
In this paper, we propose a principled Perceptual Adversarial Network (PAN) for image-to-image transformation tasks. Unlike existing application-specific algorithms, PAN provides a generic framework for learning the mapping between paired images (Fig. 1), such as mapping a rainy image to its de-rained counterpart, object edges to a photo, or semantic labels to a scene image. The proposed PAN consists of two feed-forward convolutional neural networks (CNNs): the image transformation network T and the discriminative network D. By combining the generative adversarial loss with the proposed perceptual adversarial loss, the two networks can be trained alternately to solve image-to-image transformation tasks. The hidden layers and output of the discriminative network D are trained to continually and automatically discover the discrepancy between the transformed image and the corresponding ground truth, while the image transformation network T is simultaneously trained to minimize the discrepancy explored by D. Through this adversarial training process, T continually narrows the gap between transformed images and ground-truth images. Experiments on several image-to-image transformation tasks (e.g., image de-raining and image inpainting) show that the proposed PAN outperforms many related state-of-the-art methods.
https://arxiv.org/abs/1706.09138
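A minimal sketch of the perceptual adversarial loss idea: besides the usual GAN objective, the transformation network T is penalized for discrepancies between the discriminator's hidden-layer responses to the transformed image and to the ground truth. The L1 distance, the layer choices, and the weights are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def perceptual_adversarial_loss(disc_feats_fake, disc_feats_real, weights=None):
    """L1 discrepancy between discriminator hidden features of T(x) and y."""
    weights = weights or [1.0] * len(disc_feats_fake)
    return sum(w * F.l1_loss(f_fake, f_real)
               for w, f_fake, f_real in zip(weights, disc_feats_fake, disc_feats_real))

# During training, assuming D exposes its hidden activations:
#   feats_fake = D.hidden(T(x)); feats_real = D.hidden(y)
#   loss_T = gan_loss + lambda_p * perceptual_adversarial_loss(feats_fake, feats_real)
# while D is trained to *increase* this discrepancy, keeping the two in opposition.
```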
(https://axnegar.fahares.com/axnegar/HlEhn8VsrH81nx/4960815.jpg)
🔵Tools for Making Machine Learning Easier and Smoother
Learn new methods for using deep learning to gain actionable insights from rich, complex data.
During the past decade, enterprises have begun using machine learning (ML) to collect and analyze large amounts of data to obtain a competitive advantage. Now some are looking to go even deeper – using a subset of machine learning techniques called deep learning (DL), they are seeking to delve into the more esoteric properties hidden in the data. The goal is to create predictive applications for such areas as fraud detection, demand forecasting, click prediction, and other data-intensive analyses.
https://data-informed.com/tools-for-making-machine-learning-easier-and-smoother/?utm_content=55415932&utm_medium=social&utm_source=twitter