ArtificialIntelligenceArticles
For anyone with a passion for:
1. #ArtificialIntelligence
2. Machine Learning
3. Deep Learning
4. #DataScience
5. #Neuroscience
6. #ResearchPapers
7. Related Courses and Ebooks
‍ (https://axnegar.fahares.com/axnegar/SC2x8mg3M7lCUY/4948139.jpg)
#Tutorial
#ComputerVision

🔵Analyzing The Papers Behind Facebook's Computer Vision Approach

You know that company called Facebook? Yeah, the one that has 1.6 billion people hooked on their website. Take all of the happy birthday posts, embarrassing pictures of you as a little kid, that one family relative that likes every single one of your statuses, and you have a whole lot of data to analyze.

https://adeshpande3.github.io/adeshpande3.github.io/Analyzing-the-Papers-Behind-Facebook's-Computer-Vision-Approach/
🔵TimeNet: Pre-trained deep recurrent neural network for time series classification

Inspired by the tremendous success of deep Convolutional Neural Networks as generic feature extractors for images, we propose TimeNet: a deep recurrent neural network (RNN) trained on diverse time series in an unsupervised manner using sequence-to-sequence (seq2seq) models to extract features from time series. Rather than relying on data from the problem domain, TimeNet attempts to generalize time series representation across domains by ingesting time series from several domains simultaneously. Once trained, TimeNet can be used as a generic off-the-shelf feature extractor for time series. The representations or embeddings given by a pre-trained TimeNet are found to be useful for time series classification (TSC). For several publicly available datasets from the UCR TSC Archive and industrial telematics sensor data from vehicles, we observe that a classifier learned over the TimeNet embeddings yields significantly better performance compared to (i) a classifier learned over the embeddings given by a domain-specific RNN, as well as (ii) a nearest neighbor classifier based on Dynamic Time Warping.

https://arxiv.org/abs/1706.08838
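
The paper itself is at the link above; for orientation only, a seq2seq autoencoder of the kind TimeNet builds on can be sketched in a few lines of PyTorch. The GRU layers, hidden size, and training step below are illustrative assumptions, not the published TimeNet configuration.

```python
# Minimal sketch of the TimeNet idea: an unsupervised seq2seq autoencoder
# whose encoder state is reused as a generic time-series embedding.
# Hyperparameters are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class Seq2SeqAutoencoder(nn.Module):
    def __init__(self, input_dim=1, hidden_dim=64):
        super().__init__()
        self.encoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):                       # x: (batch, seq_len, input_dim)
        _, h = self.encoder(x)                  # h: (1, batch, hidden_dim)
        dec_in = torch.zeros_like(x)            # teacher forcing omitted for brevity
        dec_out, _ = self.decoder(dec_in, h)    # decode back from the fixed-length state
        return self.out(dec_out), h.squeeze(0)  # reconstruction, embedding

model = Seq2SeqAutoencoder()
series = torch.randn(8, 100, 1)                 # 8 univariate series of length 100
recon, embedding = model(series)
loss = nn.functional.mse_loss(recon, series)    # unsupervised reconstruction loss
# After pre-training on many domains, `embedding` feeds a simple downstream classifier.
```
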
‍ (https://axnegar.fahares.com/axnegar/UYs2z9cy5SSvU7/4950668.jpg)

#Paper
#ActionRecognition

🔵Recurrent Residual Learning for Action Recognition

Action recognition is a fundamental problem in computer vision with many potential applications such as video surveillance, human-computer interaction, and robot learning. Given pre-segmented videos, the task is to recognize the actions happening within them. Historically, hand-crafted video features were used to address the task. With the success of deep ConvNets as an image analysis method, many extensions of standard ConvNets were proposed to process variable-length video data. In this work, we propose a novel recurrent ConvNet architecture, recurrent residual networks, to address the task of action recognition. The approach extends ResNet, a state-of-the-art model for image classification. While the original formulation of ResNet learns spatial residuals in its layers, we introduce recurrent connections that allow the network to learn a spatio-temporal residual. In contrast to fully recurrent networks, our temporal connections only allow a limited range of preceding frames to contribute to the output for the current frame, enabling efficient training and inference while limiting the temporal context to a reasonable local range around each frame. On a large-scale action recognition dataset, we show that our model improves over both the standard ResNet architecture and a ResNet extended by a fully recurrent layer.

https://arxiv.org/abs/1706.08807
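
For intuition only, here is a hedged sketch of the idea described above: a ResNet-style block whose residual for the current frame also receives contributions from a limited window of preceding frames. The layer sizes, the 1x1 temporal convolution, and the window length are assumptions, not the architecture from the paper.

```python
# Rough sketch of a recurrent residual block: the residual for frame t also
# mixes in features from a limited window of preceding frames.
# Layer sizes and the temporal window are illustrative assumptions.
import torch
import torch.nn as nn

class RecurrentResidualBlock(nn.Module):
    def __init__(self, channels=64, window=2):
        super().__init__()
        self.window = window
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.temporal = nn.Conv2d(channels, channels, 1)  # mixes previous-frame features

    def forward(self, frames):                  # frames: (time, batch, C, H, W)
        outputs, history = [], []
        for x in frames:                        # iterate over frames in order
            res = self.spatial(x)               # spatial residual, as in ResNet
            for prev in history[-self.window:]:
                res = res + self.temporal(prev) # bounded temporal context only
            y = torch.relu(x + res)             # identity shortcut + residual
            history.append(y)
            outputs.append(y)
        return torch.stack(outputs)

block = RecurrentResidualBlock()
clip = torch.randn(5, 2, 64, 16, 16)            # 5 frames, batch of 2
features = block(clip)
```
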
Andrew Ng
👁‍🗨#Introducing_AI_Researchers

#Introducing_AI_Researchers_Worldwide

You know what artificial intelligence is and how it has spread into every field, but you may not be familiar with AI researchers and technologists. On this channel, every Friday we plan to introduce AI researchers and notable figures from Iran and around the world. Stay with us.

🔵ANDREW NG

If you have taken the Coursera machine learning course, you already know Andrew Ng. He is one of the driving forces behind machine learning, and the course mentioned above has so far taught the fundamentals of machine learning to more than 100,000 people.
His specialty is deep learning, and through that expertise he led Google's large Google Brain project, in which the implemented algorithm managed to learn the concept of a cat without any supervision.
He is also a professor at Stanford University, and in recent years he has been pursuing even more ambitious deep learning work at Baidu Research, where he serves as Chief Scientist.

Andrew Ng is VP & Chief Scientist of Baidu; Co-Chairman and Co-Founder of Coursera; and an Adjunct Professor at Stanford University.

In 2011 he led the development of Stanford University’s main MOOC (Massive Open Online Courses) platform and also taught an online Machine Learning class to over 100,000 students, leading to the founding of Coursera. Ng’s goal is to give everyone in the world access to a great education, for free. Today, Coursera partners with some of the top universities in the world to offer high quality online courses, and is the largest MOOC platform in the world.

Ng also works on machine learning with an emphasis on deep learning. He founded and led the “Google Brain” project, which developed massive-scale deep learning algorithms. This resulted in the famous “Google cat” result, in which a massive neural network with 1 billion parameters learned from unlabeled YouTube videos to detect cats. More recently, he has continued to work on deep learning and its applications to computer vision and speech.

Personal website:
https://www.andrewng.org/

https://online.stanford.edu/instructors/andrew-ng


🔹https://en.wikipedia.org/wiki/Andrew_Ng

🔵His talk on why AI is the new electricity:

https://www.youtube.com/watch?v=21EiKfQYZXc

https://tedxboston.us/speaker/ng-2016

https://www.youtube.com/watch?v=AY4ajbu_G3k


Machine learning course:

https://www.coursera.org/instructor/andrewng

🔹On Google Scholar:
https://scholar.google.nl/scholar?hl=nl&q=Andrew+Ng&btnG=&lr=


LinkedIn:
https://www.linkedin.com/in/andrewyng/

Twitter:
https://twitter.com/AndrewYNg
‍ (https://axnegar.fahares.com/axnegar/BuDYfSJbReVMyt/4951821.jpg)

#Tutorial
#DeepLearning


🌎Deep Learning CNN’s in Tensorflow with GPUs

In this tutorial, you’ll learn the architecture of a convolutional neural network (CNN), how to create a CNN in TensorFlow, and how to make predictions on image labels. Finally, you’ll learn how to run the model on a GPU so you can spend your time creating better models rather than waiting for them to converge.

https://hackernoon.com/deep-learning-cnns-in-tensorflow-with-gpus-cba6efe0acc2
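
The tutorial’s full walk-through is at the link above; for a quick taste, a minimal CNN in the tf.keras API looks roughly like the sketch below. MNIST and the layer sizes are stand-ins rather than the tutorial’s exact model; TensorFlow places the computation on a GPU automatically when one is visible.

```python
# Minimal tf.keras CNN for image classification; runs on a GPU automatically
# if TensorFlow can see one. Shapes and layers are illustrative only.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0            # add channel dim, scale to [0, 1]

print("GPUs visible:", tf.config.list_physical_devices("GPU"))
model.fit(x_train, y_train, epochs=1, batch_size=128)
```
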
‍ (https://axnegar.fahares.com/axnegar/ay1PVFtx7OswKm/4954035.jpg)

#ArtificialIntelligence
#Challenge


🔵Challenges of #ArtificialIntelligence

Until a few years ago, #ArtificialIntelligence (#AI) resembled nuclear fusion in its unfulfilled promise: it had been around a long time but had not reached the spectacular heights foreseen in its early stages. Now, however, artificial intelligence is no longer the future. It is here and now. It is realizing its potential for achieving human-like capabilities, so it is the right time to ask: how can business leaders adopt AI to take advantage of the specific strengths of man and machine?

AI is swiftly becoming the foundational technology in areas as diverse as self-driving cars, financial trading, and connected homes. Self-learning algorithms are now routinely embedded in mobile and online services. Researchers have leveraged massive gains in processing power and the data streaming from digital devices and connected sensors to improve AI performance. As a result, progress in robotics, self-driving cars, speech processing, and natural language understanding is quite impressive.

But with all the advantages AI can offer, there are still challenges for companies that want to adopt #AI. Since AI is a vast domain, listing every challenge is impossible, but the article lists a few generic ones: AI's situated approach in the real world; learning with human intervention; access to other disciplines; multitasking; and validation and certification of AI systems.


https://www.xorlogics.com/2017/06/26/challenges-of-artificialintelligence/?utm_content=buffereb35e&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
‍ (https://axnegar.fahares.com/axnegar/t2s8ljz4pzloBl/4954190.jpg)

#DeepLearning
#ArtificialIntelligence
#NeuralNetworks
#Course


🔵Deep Learning: Artificial Neural Networks with Python

This online course is designed to teach you how to create deep learning algorithms in Python, taught by two Machine Learning & Data Science experts; templates are included. The course is split into 32 sections covering over 179 artificial neural network topics in video format, and you receive a certificate of completion at the end. Online learning is very flexible (expiry dates may vary from course to course depending on the provider).



https://how-to-learn-online.com/artificial-neural-network-with-python

Course:

https://www.udemy.com/deeplearning/?siteID=9PxUyjpjRL8-WXOxTjtgjAAjSexNfWoxZA&LSNPUBID=9PxUyjpjRL8
‍ (https://axnegar.fahares.com/axnegar/CbOOpo1ePkVIbw/4954463.jpg)

🔵Detecting Small Signs from Large Images


In the past decade, Convolutional Neural Networks (CNNs) have been demonstrated to be successful for object detection. However, the size of the network input is limited by the amount of memory available on GPUs, and performance degrades when detecting small objects. To reduce memory usage and improve the performance of detecting small traffic signs, we propose an approach for detecting small traffic signs from large images under real-world conditions. In particular, large images are broken into small patches as input to a Small-Object-Sensitive CNN (SOS-CNN), modified from the Single Shot Multibox Detector (SSD) framework with a VGG-16 base network, to produce patch-level object detection results. Scale invariance is achieved by applying the SOS-CNN on an image pyramid. Image-level object detection is then obtained by projecting all the patch-level detection results onto the image at its original scale. Experimental results on a real-world traffic sign dataset demonstrate the effectiveness of the proposed method in terms of detection accuracy and recall, especially for small-sized signs.

https://arxiv.org/pdf/1706.08574.pdf
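
As a hedged sketch of the strategy in the abstract (not the authors' code), the tiling-plus-pyramid logic could look like the snippet below. `detect_patch` is a placeholder for the SOS-CNN detector, and the patch size, stride, and scales are assumptions.

```python
# Rough sketch of the tiling + image-pyramid strategy from the abstract.
# `detect_patch` stands in for the SOS-CNN (SSD/VGG-16) detector, which is not
# implemented here; the patch size, stride, and scales are assumptions.
import cv2

def detect_patch(patch):
    """Placeholder for patch-level detection: [(x1, y1, x2, y2, score), ...]."""
    return []

def detect_small_signs(image, patch=512, stride=384, scales=(1.0, 0.5, 0.25)):
    detections = []
    for s in scales:                                    # image pyramid
        h, w = int(image.shape[0] * s), int(image.shape[1] * s)
        scaled = cv2.resize(image, (w, h))
        for top in range(0, max(h - patch, 1), stride): # break image into patches
            for left in range(0, max(w - patch, 1), stride):
                crop = scaled[top:top + patch, left:left + patch]
                for (x1, y1, x2, y2, score) in detect_patch(crop):
                    # project patch-level boxes back to the original image scale
                    detections.append((int((left + x1) / s), int((top + y1) / s),
                                       int((left + x2) / s), int((top + y2) / s),
                                       score))
    return detections  # non-maximum suppression would follow in practice
```
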
‍ (https://axnegar.fahares.com/axnegar/OLp413Uquydc7T/4960789.jpg)
#ArtificialIntelligence
#Paper


🔵Perceptual Adversarial Networks for Image-to-Image Transformation

Chaoyue Wang, Chang Xu, Chaohui Wang, Dacheng Tao
(Submitted on 28 Jun 2017)
In this paper, we propose principled Perceptual Adversarial Networks (PAN) for image-to-image transformation tasks. Unlike existing application-specific algorithms, PAN provides a generic framework for learning the mapping between paired images, such as mapping a rainy image to its de-rained counterpart, object edges to a photo, or semantic labels to a scene image. The proposed PAN consists of two feed-forward convolutional neural networks (CNNs): the image transformation network T and the discriminative network D. By combining the generative adversarial loss and the proposed perceptual adversarial loss, these two networks can be trained alternately to solve image-to-image transformation tasks. The hidden layers and output of the discriminative network D are upgraded to continually and automatically discover the discrepancy between the transformed image and the corresponding ground truth. Simultaneously, the image transformation network T is trained to minimize the discrepancy explored by the discriminative network D. Through this adversarial training process, the image transformation network T continually narrows the gap between transformed images and ground-truth images. Experiments on several image-to-image transformation tasks (e.g., image de-raining and image inpainting) show that the proposed PAN outperforms many related state-of-the-art methods.

https://arxiv.org/abs/1706.09138
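
To make the loss structure above concrete, here is a hedged PyTorch sketch of combining a GAN loss with a perceptual adversarial term over the discriminator's hidden features. `T` and `D` are placeholder networks (D is assumed to return a logit plus a list of hidden feature maps), and the margin and weights are assumptions rather than the paper's exact formulation.

```python
# Sketch of the loss structure described in the abstract: a GAN loss plus a
# "perceptual adversarial" term that matches hidden-layer features of the
# discriminator D between transformed and ground-truth images. T, D, the
# margin, and the weights are placeholders/assumptions.
import torch
import torch.nn.functional as F

def pan_losses(T, D, x, y, adv_weight=1.0, perc_weight=1.0, margin=1.0):
    y_fake = T(x)                                    # image transformation network T
    logit_fake, feats_fake = D(y_fake)               # D returns (logit, hidden features)
    logit_real, feats_real = D(y)

    # Perceptual adversarial term: L1 gap between D's hidden features.
    gap = sum(F.l1_loss(f, r) for f, r in zip(feats_fake, feats_real))

    # T is trained to fool D and to shrink the feature gap.
    loss_T = adv_weight * F.binary_cross_entropy_with_logits(
        logit_fake, torch.ones_like(logit_fake)) + perc_weight * gap

    # D classifies real vs. fake and keeps the feature gap above a margin
    # (fake branch detached so D's update does not backprop into T).
    logit_fake_d, feats_fake_d = D(y_fake.detach())
    gap_d = sum(F.l1_loss(f, r) for f, r in zip(feats_fake_d, feats_real))
    loss_D = (F.binary_cross_entropy_with_logits(logit_real, torch.ones_like(logit_real))
              + F.binary_cross_entropy_with_logits(logit_fake_d,
                                                   torch.zeros_like(logit_fake_d))
              + perc_weight * F.relu(margin - gap_d))
    return loss_T, loss_D                            # T and D are optimized alternately
```
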
‍ (https://axnegar.fahares.com/axnegar/HlEhn8VsrH81nx/4960815.jpg)

🔵Tools for Making Machine Learning Easier and Smoother

Learn new methods for using deep learning to gain actionable insights from rich, complex data.

During the past decade, enterprises have begun using machine learning (ML) to collect and analyze large amounts of data to obtain a competitive advantage. Now some are looking to go even deeper – using a subset of machine learning techniques called deep learning (DL), they are seeking to delve into the more esoteric properties hidden in the data. The goal is to create predictive applications for such areas as fraud detection, demand forecasting, click prediction, and other data-intensive analyses.

https://data-informed.com/tools-for-making-machine-learning-easier-and-smoother/?utm_content=55415932&utm_medium=social&utm_source=twitter
Many papers from the YouTube-8M challenge; you can see which methods are commonly used for video understanding:
https://arxiv.org/find/all/1/OR+au:YouTube_8M+all:+EXACT+YouTube_8M/0/1/0/all/0/1
‍ (https://axnegar.fahares.com/axnegar/OK1VSo5Hxn3OzU/4960973.jpg)

#Paper

🔵A Parameterized Approach to Personalized Variable Length Summarization of Soccer Matches

Mohak Sukhwani, Ravi Kothari
(Submitted on 28 Jun 2017)
We present a parameterized approach to produce personalized variable length summaries of soccer matches. Our approach is based on temporally segmenting the soccer video into 'plays', associating a user-specifiable 'utility' for each type of play and using 'bin-packing' to select a subset of the plays that add up to the desired length while maximizing the overall utility (volume in bin-packing terms). Our approach systematically allows a user to override the default weights assigned to each type of play with individual preferences and thus see a highly personalized variable length summarization of soccer matches. We demonstrate our approach based on the output of an end-to-end pipeline that we are building to produce such summaries.

https://arxiv.org/abs/1706.09193
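
The selection step described above is essentially a knapsack over play segments; a hedged sketch is below. The play types, default utilities, and durations are invented for illustration, and the paper's actual segmentation and bin-packing formulation are not reproduced.

```python
# Hedged sketch of the selection step: given temporally segmented "plays" with
# durations and user-specifiable utilities per play type, pick a subset that
# fits the target length while maximizing total utility (a 0/1 knapsack).
# Play types, utilities, and durations below are made up for illustration.

def select_plays(plays, utility, budget_s):
    """plays: list of (play_type, duration_seconds); budget_s: target length."""
    budget = int(budget_s)
    items = [(int(d), utility.get(t, 0.0)) for t, d in plays]
    best = [0.0] * (budget + 1)                   # best utility for each total length
    keep = [[False] * len(items) for _ in range(budget + 1)]
    for i, (d, u) in enumerate(items):
        for cap in range(budget, d - 1, -1):      # classic 0/1 knapsack DP
            if best[cap - d] + u > best[cap]:
                best[cap] = best[cap - d] + u
                keep[cap] = keep[cap - d].copy()
                keep[cap][i] = True
    cap = max(range(budget + 1), key=lambda c: best[c])
    return [plays[i] for i, used in enumerate(keep[cap]) if used]

# Default weights a user could override for a personalized summary:
utility = {"goal": 10.0, "shot": 5.0, "foul": 1.0}
plays = [("goal", 40), ("shot", 25), ("foul", 15), ("shot", 30)]
print(select_plays(plays, utility, budget_s=60))  # -> [("goal", 40), ("foul", 15)]
```
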
‍ (https://axnegar.fahares.com/axnegar/wvM2MjxlPzaIf2/4961077.jpg)

#News

🔵Consumer-goods giant Unilever has been hiring employees using brain games and artificial intelligence — and it's a huge success


• Unilever has used artificial intelligence to screen all entry-level employees for the past year.

• Candidates play neuroscience-based games to measure inherent traits, then have recorded interviews analyzed by AI.

• The company considers the experiment a big success and will continue it indefinitely.

For the past year, the Dutch-British consumer-goods giant Unilever has been using artificial intelligence to hire entry-level employees, and the company says it has dramatically increased diversity and cost-efficiency.

"We were going to campus the same way I was recruited over 20 years ago," Mike Clementi, VP of human resources for North America, told Business Insider. "Inherently, something didn't feel right."

https://uk.businessinsider.com/unilever-artificial-intelligence-hiring-process-2017-6?r=US&IR=T
‍ (https://axnegar.fahares.com/axnegar/aCXiJywwrTQ7Zu/4962180.jpg)

#News


🔵Scientists made an AI that can read minds
This new deep learning algorithm can analyze brain scans to predict thoughts.


Whether it's using AI to help organize a Lego collection or relying on an algorithm to protect our cities, deep learning neural networks seemingly become more impressive and complex each day. Now, however, some scientists are pushing the capabilities of these algorithms to a whole new level - they're trying to use them to read minds.

https://www.engadget.com/2017/06/29/scientists-made-an-ai-that-can-read-minds/
‍ (https://axnegar.fahares.com/axnegar/bSUKcRqot6xEkR/4963861.jpg)

#News
#ArtificialIntelligence
#Paper


🔵Artificially intelligent painters invent new styles of art


Now and then, a painter like Claude Monet or Pablo Picasso comes along and turns the art world on its head. They invent new aesthetic styles, forging movements such as impressionism or abstract expressionism. But could the next big shake-up be the work of a machine?

An artificial intelligence has been developed that produces images in unconventional styles – and much of its output has already been given the thumbs up by members of the public.

The idea is to make art that is “novel, but not too novel”, says Marian Mazzone, an art historian at the College of Charleston in South Carolina who worked on the system.

https://www.newscientist.com/article/2139184-artificially-intelligent-painters-invent-new-styles-of-art/?utm_campaign=RSS%7CNSNS&utm_source=NSNS&utm_medium=RSS&utm_content=news&campaign_id=RSS%7CNSNS-news


Paper:

https://arxiv.org/abs/1706.07068
🔵Proceedings (9 papers) from the First International Workshop on Deep Learning and Music 🎶



https://arxiv.org/html/1706.08675 Great stuff there! 😍 #ML #AI
"Bayesian Semisupervised Learning with Deep Generative Models": Toward semi-supervised Bayesian active learning https://arxiv.org/abs/1706.09751
‍ (https://axnegar.fahares.com/axnegar/Z2oBgZuEVZtFWt/4978601.jpg)

#ArtificialIntelligence
#DeepLearning
#MachineLearning
#Paper



🔵A guide to emotion recognition


🔵Recognizing Emotions using Artificial Intelligence


Machine learning and deep learning are now being used to detect emotions and facial expressions by analyzing images and videos. Here’s what you need to know.

Machine Learning and Deep Learning are growing and diverse fields of Artificial Intelligence (AI) that study algorithms capable of automatically learning from data and making predictions based on it. They are two of the most exciting technological areas of AI today. Each week there are new advancements, new technologies, new applications, and new opportunities. It’s inspiring, but also overwhelming. That’s why I created this guide to help you keep pace with all of these exciting developments.

https://blog.produvia.com/recognizing-emotions-using-artificial-intelligence-62b2ea7928a7
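
As a rough illustration of the kind of pipeline such guides describe (not code from the article), one common pattern is to detect a face with OpenCV and classify the crop with a small CNN. The CNN below is untrained, and its 48x48 grayscale input and emotion labels are assumptions in the style of FER2013; a real system would load trained weights, and the image path is a placeholder.

```python
# Hedged sketch of an image-based emotion recognition pipeline: detect a face
# with OpenCV, then classify the crop with a small CNN. The CNN is untrained
# and its architecture is an illustrative assumption, not a published model.
import cv2
import numpy as np
from tensorflow import keras

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

classifier = keras.Sequential([
    keras.layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(len(EMOTIONS), activation="softmax"),
])  # in practice, load trained weights here

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photo.jpg")                     # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
    probs = classifier.predict(face[None, ..., None] / 255.0)[0]
    print(EMOTIONS[int(np.argmax(probs))])          # predicted emotion per face
```
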