ArtificialIntelligenceArticles
For those who have a passion for:
1. #ArtificialIntelligence
2. Machine Learning
3. Deep Learning
4. #DataScience
5. #Neuroscience

6. #ResearchPapers

7. Related Courses and Ebooks

#Article

🔵A Parameterized Approach to Personalized Variable Length Summarization of Soccer Matches

Mohak Sukhwani, Ravi Kothari
(Submitted on 28 Jun 2017)
We present a parameterized approach to produce personalized variable length summaries of soccer matches. Our approach is based on temporally segmenting the soccer video into 'plays', associating a user-specifiable 'utility' for each type of play and using 'bin-packing' to select a subset of the plays that add up to the desired length while maximizing the overall utility (volume in bin-packing terms). Our approach systematically allows a user to override the default weights assigned to each type of play with individual preferences and thus see a highly personalized variable length summarization of soccer matches. We demonstrate our approach based on the output of an end-to-end pipeline that we are building to produce such summaries.

https://arxiv.org/abs/1706.09193
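The selection step described in the abstract amounts to a 0/1 knapsack: choose plays whose durations fit within the requested summary length while maximizing total utility. The sketch below is illustrative only; the play types, durations, and utilities are invented and are not from the authors' pipeline.

```python
# Illustrative sketch of the play-selection step as a 0/1 knapsack.
# Play names, durations, and utilities are invented examples.

def select_plays(plays, budget):
    """plays: list of (name, duration_sec, utility); budget: target summary length in seconds.
    Returns the subset of play names maximizing total utility within the duration budget."""
    # dp[t] = (best utility achievable with total duration <= t, chosen indices)
    dp = [(0, [])] * (budget + 1)
    for i, (name, dur, util) in enumerate(plays):
        # iterate durations downward so each play is used at most once
        for t in range(budget, dur - 1, -1):
            cand = dp[t - dur][0] + util
            if cand > dp[t][0]:
                dp[t] = (cand, dp[t - dur][1] + [i])
    best_util, chosen = dp[budget]
    return [plays[i][0] for i in chosen], best_util

plays = [("goal", 30, 10), ("save", 20, 6), ("corner", 15, 3), ("foul", 10, 2)]
names, util = select_plays(plays, 50)  # pick plays filling at most 50 seconds
```

With a 50-second budget, the "goal" and "save" plays fill the budget exactly and give the highest total utility; a user who down-weights saves would get a different selection, which is the personalization knob the paper describes.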

#News

🔵Consumer-goods giant Unilever has been hiring employees using brain games and artificial intelligence — and it's a huge success


• Unilever has used artificial intelligence to screen all entry-level employees for the past year.

• Candidates play neuroscience-based games to measure inherent traits, then have recorded interviews analyzed by AI.

• The company considers the experiment a big success and will continue it indefinitely.

For the past year, the Dutch-British consumer-goods giant Unilever has been using artificial intelligence to hire entry-level employees, and the company says it has dramatically increased diversity and cost-efficiency.

"We were going to campus the same way I was recruited over 20 years ago," Mike Clementi, VP of human resources for North America, told Business Insider. "Inherently, something didn't feel right."

https://uk.businessinsider.com/unilever-artificial-intelligence-hiring-process-2017-6?r=US&IR=T

#News


🔵Scientists made an AI that can read minds
This new deep learning algorithm can analyze brain scans to predict thoughts.


Whether it's using AI to help organize a Lego collection or relying on an algorithm to protect our cities, deep learning neural networks seemingly become more impressive and complex each day. Now, however, some scientists are pushing the capabilities of these algorithms to a whole new level - they're trying to use them to read minds.

https://www.engadget.com/2017/06/29/scientists-made-an-ai-that-can-read-minds/

#News
#ArtificialIntelligence
#Article


🔵Artificially intelligent painters invent new styles of art


Now and then, a painter like Claude Monet or Pablo Picasso comes along and turns the art world on its head. They invent new aesthetic styles, forging movements such as impressionism or abstract expressionism. But could the next big shake-up be the work of a machine?

An artificial intelligence has been developed that produces images in unconventional styles – and much of its output has already been given the thumbs up by members of the public.

The idea is to make art that is “novel, but not too novel”, says Marian Mazzone, an art historian at the College of Charleston in South Carolina who worked on the system.

https://www.newscientist.com/article/2139184-artificially-intelligent-painters-invent-new-styles-of-art/?utm_campaign=RSS%7CNSNS&utm_source=NSNS&utm_medium=RSS&utm_content=news&campaign_id=RSS%7CNSNS-news


Paper

https://arxiv.org/abs/1706.07068
🔵Proceedings (9 papers) from the First International Workshop on Deep Learning and Music 🎶



https://arxiv.org/html/1706.08675 Great stuff there! 😍 #ML #AI
"Bayesian Semisupervised Learning with Deep Generative Models": Toward semi-supervised Bayesian active learning https://arxiv.org/abs/1706.09751

#ArtificialIntelligence
#DeepLearning
#MachineLearning
#Article



🔵A guide to emotion recognition


🔵Recognizing Emotions using Artificial Intelligence


Machine Learning and Deep Learning are now being used to detect emotions and facial expressions by analyzing images and videos. Here’s what you need to know.

Machine Learning and Deep Learning are growing and diverse fields of Artificial Intelligence (AI) that study algorithms capable of automatically learning from data and making predictions based on it. They are two of the most exciting technological areas of AI today. Each week there are new advancements, new technologies, new applications, and new opportunities. It’s inspiring, but also overwhelming. That’s why I created this guide to help you keep pace with all of these exciting developments.

https://blog.produvia.com/recognizing-emotions-using-artificial-intelligence-62b2ea7928a7

#BookIntroduction

🔵FREE R MACHINE LEARNING BOOK

Discover nearly 400 pages of in-depth tutorials, best practices, and more, and learn how to use R to its fullest potential in the world of machine learning.

Packed with everything you need to understand the world of machine learning and how to break into it with the power of R, this FREE 396-page eBook is the perfect guide to turning your data into actionable insight that benefits your business today.

• Understand the basic terminology of machine learning and how to differentiate among various machine learning approaches
• Classify data using nearest neighbor methods
• Learn about Bayesian methods for classifying data
• Predict values using decision trees, rules, and support vector machines
• Model data using neural networks
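The nearest-neighbor method listed above can be sketched in a few lines. Python is used here purely for illustration (the book itself works in R), and the toy data is invented:

```python
# Minimal 1-nearest-neighbor classifier: label a query point with the
# label of its closest training point by Euclidean distance.
import math

def nearest_neighbor(train, query):
    """train: list of (features, label) pairs; query: a feature tuple."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(train, key=lambda pt: dist(pt[0], query))[1]

train = [((1.0, 1.0), "A"), ((5.0, 5.0), "B")]
label = nearest_neighbor(train, (1.5, 0.8))  # closest to the "A" point
```

The same idea generalizes to k nearest neighbors by taking a majority vote over the k closest points instead of the single closest one.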


https://www.packtpub.com/packt/free-ebook/r-machine-learning

#News
#MachineLearning


🔵Top Machine Learning Interview Questions and Answers for 2017

According to a list released by the popular job portal Indeed.com of the 30 fastest-growing jobs in technology:

• Data science and machine learning jobs dominated the list of top tech jobs.
• Data scientist job postings saw an increase of 135%, while machine learning engineer job postings saw an increase of 191% in 2017.
• 3 out of the top 10 tech job positions went to AI- and data-related positions, with machine learning jobs scoring a strong second place in the list.
• More than 10% of jobs in the UK this year have been tech jobs demanding data science, machine learning, and AI skills.

https://www.dezyre.com/article/top-machine-learning-interview-questions-and-answers-for-2017/357
🔵Lecture Collection - Natural Language Processing with #DeepLearning (Winter 2017) [Stanford]

Natural language processing (NLP) deals with the key artificial intelligence technology of understanding complex human language communication. This lecture series provides a thorough introduction to the cutting-edge research in deep learning applied to NLP, an approach that has recently obtained very high performance across many different NLP tasks including question answering and machine translation.

https://www.youtube.com/playlist?list=PL3FW7Lu3i5Jsnh1rnUwq_TcylNr7EkRe6&utm_content=buffer26aab&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
🔵Lost in translation

This week Google South Africa announced major upgrades to its Google Translate function – used in 103 languages by more than 500 million users worldwide. For the first time since Google began in 1998, the Google Translate app will now include full isiZulu, isiXhosa and Kiswahili translation functionality. "Machine learning", Google's artificial intelligence team explained, is also being used to drastically improve existing Google search services. The major announcement, slipped in quite casually at a press conference held at Google's local headquarters in Johannesburg, was included in an explanation of machine learning by Blaise Aguera y Arcas, principal scientist for machine learning and artificial intelligence at Google.

https://m.news24.com/news24/SouthAfrica/News/lost-in-translation-20170701-3
🔵Ray Kurzweil: Our Brain Is a Blueprint for the Master Algorithm


Ray Kurzweil is an inventor, thinker, and futurist famous for forecasting the pace of technology and predicting the world of tomorrow. In this video, Kurzweil suggests the blueprint for the master algorithm (a single, general-purpose learning algorithm) is hidden in the brain. The brain, according to Kurzweil, consists of repeating modules that self-organize into hierarchies that build simple patterns into complex concepts. We don't have a complete understanding of how this process works yet, but Kurzweil believes that as we study the brain more and reverse-engineer what we find, we'll learn to write the master algorithm.

https://singularityhub.com/2017/06/30/ray-kurzweil-our-brain-is-a-blueprint-for-the-master-algorithm/


🔵Adobe and Stanford just taught AI to edit videos — with impressive results


Just one minute of video typically takes several hours of editing, but Stanford and Adobe researchers have developed an artificial intelligence (AI) program that partially automates the editing process while still giving the user creative control over the final result. For example, the idiom "avoid jump cuts" can be applied to avoid jump cuts, or inverted to intentionally add them wherever possible. The editor can drag over multiple idioms to instruct the program on an editing style. To edit the video in a completely different, fast-paced style, the researchers instead dragged over idioms for including jump cuts, using fast performance, and keeping the zoom consistent.

https://www.digitaltrends.com/photography/adobe-stanford-ai-video-editor/amp/?utm_content=buffer7434f&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer

Paper

https://graphics.stanford.edu/papers/roughcut/


#ArtificialIntelligence
#News
#Article


🔵Smarter AI with artificial synapses

An international team of researchers has used a neural network model to develop a new type of artificial synapse for artificial intelligence (AI) systems.
In artificial neural networks, computing systems are designed to mimic the workings of the human brain: digital neurons and synapses simulate the function of their biological counterpart, the brain. Synapses act as gateways (artificial or biological) through which information and signals pass from one neuron to another.

Synapses are the connective tissue of biological and artificial neural networks. The human nervous system is estimated to contain 100 trillion synapses.

While scientists have achieved remarkable successes in developing artificial neural networks, the development of AI systems has run into certain limitations.

In the mammalian brain, synapses can process two types of signals simultaneously, inhibitory and excitatory; but artificial synapses built from nanoscopic electronic components can only process one type of signal at a time. As a result, AI systems can perform only half the work of real synapses.

In a recent study, Chinese and American researchers succeeded in developing an artificial synapse capable of processing and managing both types of signals simultaneously.
Han Wang, one of the study's authors and a researcher at the University of Southern California, said that like real synapses, these new synapses make it possible to tune inhibitory and excitatory states, a capability previously unavailable in artificial synaptic devices. This functional flexibility is crucial for enabling complex artificial neural networks.

According to Wang, in the human brain excitatory responses arouse and alert the brain, while inhibitory responses calm it.

The new artificial synapses enable similar functions in computer systems. Where the nervous system uses biological synapses to process chemical and electrical signals, artificial neural networks use artificial synapses to process digital information.

The project was funded by the U.S. National Science Foundation and the Army Research Office. The results were published in a recent issue of ACS Nano.

Translated by Masoumeh Sohani


Source:
https://www.livescience.com/59671-artificial-synapses-could-lead-to-smarter-ai.html


Original paper:
https://pubs.acs.org/doi/abs/10.1021/acsnano.7b03033

👁‍🗨#IntroducingAIResearchers

#IntroducingAIResearchersWorldwide

You probably know what artificial intelligence is, and that it has spread into every field, but you may not be familiar with AI researchers and technologists. On this channel we plan to introduce AI researchers and notable figures from Iran and around the world; stay with us. We previously introduced Andrew Ng.


2. Andrej Karpathy



Andrej Karpathy is a Research Scientist at OpenAI who likes to, in his words, “train Deep Neural Nets on large datasets,” is “on a quest to solve intelligence,” and, in his spare time, likes to watch Santa Clarita Diet.

The OpenAI Blog is super interesting, with articles like “Attacking machine learning with adversarial examples” breaking complex issues down to the point where non-programmers can understand them.

As a CS Ph.D. student at Stanford, Andrej built a Javascript library for training Neural Networks called ConvNetJS.

Follow Andrej on Twitter for AI industry gossip like Alphabet’s Waymo suit against Uber for allegedly stealing self-driving car secrets. Or check out his Github.



Related links

https://cs.stanford.edu/people/karpathy/

Bio:

I am the Director of AI at Tesla, currently focused on perception for the Autopilot. Previously, I was a Research Scientist at OpenAI working on Deep Learning in Computer Vision, Generative Modeling and Reinforcement Learning. I received my PhD from Stanford, where I worked with Fei-Fei Li on Convolutional/Recurrent Neural Network architectures and their applications in Computer Vision, Natural Language Processing and their intersection. Over the course of my PhD I squeezed in two internships at Google where I worked on large-scale feature learning over YouTube videos, and in 2015 I interned at DeepMind and worked on Deep Reinforcement Learning. Together with Fei-Fei, I designed and taught a new Stanford class on Convolutional Neural Networks for Visual Recognition (CS231n). The class was the first Deep Learning course offering at Stanford and has grown from 150 enrolled in 2015 to 330 students in 2016, and 750 students in 2017.

On a side for fun I blog, tweet, and maintain several Deep Learning libraries written in Javascript (e.g. ConvNetJS, RecurrentJS, REINFORCEjs, t-sneJS). I am also sometimes jokingly referred to as the reference human for ImageNet (post :)). I also recently expanded on this with arxiv-sanity.com, which lets you search and sort through ~30,000 Arxiv papers on Machine Learning over the last 3 years in the same pretty format.


https://cs.stanford.edu/people/karpathy/convnetjs/

https://www.openai.com/

Twitter:
https://twitter.com/karpathy


LinkedIn
https://www.linkedin.com/in/andrej-karpathy-9a650716/de

GitHub
https://karpathy.github.io/


YouTube channel
https://www.youtube.com/channel/UCPk8m_r6fkUSYmvgCBwq-sw

https://blog.openai.com/adversarial-example-research/


Talk:
Deep Learning for Computer Vision
https://www.youtube.com/watch?v=u6aEYuemt0M&feature=youtu.be

#ArtificialIntelligence
#DeepLearning



🔵7 Steps for Becoming a Deep Learning Expert


One of the frequent questions we get about our work is, "Where to start learning Deep Learning?" Lots of courses and tutorials are available free online, but it gets overwhelming for the uninitiated. We have curated a few resources below which may help you begin your trip down the Deep Learning rabbit hole.

1. The first step is to understand Machine Learning, the best resource for which is Andrew Ng's (ex-Google, Stanford, Baidu) online course on Coursera. Going through the lectures is enough to understand the basics, but the assignments take your understanding to another level.

https://www.coursera.org/learn/machine-learning

2. The next step is to develop intuition for Neural Networks. So go forth, write your first Neural Network, and play with it.

https://iamtrask.github.io/2015/07/12/basic-python-network/
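The linked post builds a tiny network in plain numpy. A comparable minimal sketch (my own, not the post's exact code) trains a single sigmoid layer to learn a simple mapping, where the target is just the first input column:

```python
# Minimal single-layer neural network trained with gradient descent (numpy).
import numpy as np

np.random.seed(1)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])  # 4 training inputs
y = np.array([[0, 0, 1, 1]]).T  # target: equals the first input column

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W = 2 * np.random.random((3, 1)) - 1  # weights initialized in [-1, 1)
for _ in range(10000):
    out = sigmoid(X @ W)                 # forward pass
    grad = (y - out) * out * (1 - out)   # error scaled by sigmoid derivative
    W += X.T @ grad                      # gradient ascent on prediction accuracy

pred = sigmoid(X @ W)  # predictions converge toward y
```

Printing `pred` after training shows values close to 0 for the first two rows and close to 1 for the last two; playing with the inputs, targets, and iteration count is exactly the kind of experimentation step 2 recommends.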

3. Understanding Neural Networks is important, but simple Neural Networks are not sufficient to solve the most interesting problems. A variation, Convolutional Neural Networks, works really well for visual tasks. Stanford lecture notes and slides on the topic are here: CS231n Convolutional Neural Networks for Visual Recognition (notes), and CS231n: Convolutional Neural Networks for Visual Recognition (lecture slides). Also here and here https://www.youtube.com/watch?v=bEUX_56Lojc are two great videos on CNNs.

https://cs231n.github.io/

https://cs231n.stanford.edu/syllabus.html


Update: Stanford is releasing video lectures for CS231n - Convolutional Neural Networks for Visual Recognition. Here is the link.
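The core operation behind the CNNs covered in CS231n can be sketched directly. The following is a minimal 2D convolution in numpy (an illustrative sketch, not taken from the lecture materials):

```python
# The core CNN operation: a 2D convolution with no padding and stride 1.
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` and sum elementwise products at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1  # output height ("valid" convolution)
    ow = image.shape[1] - kw + 1  # output width
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

edge = np.array([[1.0, -1.0]])  # a simple horizontal edge-detecting kernel
img = np.array([[0.0, 0.0, 1.0, 1.0],
                [0.0, 0.0, 1.0, 1.0]])
resp = conv2d(img, edge)  # responds only where pixel values change
```

A CNN learns many such kernels per layer rather than hand-coding them, but the sliding-window computation is the same.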
4. The next step is to set up the following to run your first CNN on your own PC:

Buy a GPU and install CUDA
Install Caffe and its GUI wrapper DIGITS
Install BOINC (this will not help you with Deep Learning, but lets other researchers use your GPU in its idle time, for science)

5. DIGITS provides a few algorithms, such as LeNet for character recognition and GoogLeNet for image classification. You need to download the dataset for LeNet and the dataset for GoogLeNet to run these algorithms. You may modify the algorithms and try other fun visual image recognition tasks, like we did here.

6. For various Natural Language Processing (NLP) tasks, RNNs (Recurrent Neural Networks) are really the best. The best place to learn about RNNs is the Stanford lecture videos here: https://cs224d.stanford.edu/syllabus.html . You can download TensorFlow and use it to build RNNs.
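The recurrence that defines a vanilla RNN is compact enough to sketch directly (an illustrative numpy sketch; the lectures above cover the full training story):

```python
# One step of a vanilla RNN cell: the new hidden state is a function of
# the current input and the previous hidden state.
import numpy as np

def rnn_step(x, h_prev, Wxh, Whh, bh):
    """h_t = tanh(Wxh @ x_t + Whh @ h_{t-1} + bh), the recurrence at the heart of an RNN."""
    return np.tanh(Wxh @ x + Whh @ h_prev + bh)

rng = np.random.default_rng(0)
d_in, d_h = 3, 4  # toy input and hidden-state sizes
Wxh = rng.standard_normal((d_h, d_in)) * 0.1  # input-to-hidden weights
Whh = rng.standard_normal((d_h, d_h)) * 0.1   # hidden-to-hidden (recurrent) weights
bh = np.zeros(d_h)

h = np.zeros(d_h)  # initial hidden state
for x in [np.ones(d_in), np.zeros(d_in)]:  # a toy two-step input sequence
    h = rnn_step(x, h, Wxh, Whh, bh)  # state carries information across steps
```

Because `h` is fed back in at every step, the state after the second input still depends on the first input, which is what lets RNNs model sequences.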

7. Now go ahead and choose a Deep Learning problem, ranging from face detection to speech recognition to a self-driving car, and solve it.

If you are through with all the above steps, congratulations!


https://www.linkedin.com/pulse/7-steps-becoming-deep-learning-expert-ankit-agarwal
"Learning by Association - A versatile semi-supervised training method for neural networks": Walk & Visit loss https://arxiv.org/abs/1706.00909

#ArtificialIntelligence
#Article
#MachineLearning
#fMRI
#Brain


🔵 ‘Mind reading’ technology identifies complex thoughts, using machine learning and fMRI
CMU aims to map all types of knowledge in the brain

By combining machine-learning algorithms with fMRI brain imaging technology, Carnegie Mellon University (CMU) scientists have discovered, in essence, how to “read minds.”

The researchers used functional magnetic resonance imaging (fMRI) to view how the brain encodes various thoughts (based on blood-flow patterns in the brain). They discovered that the mind’s building blocks for constructing complex thoughts are formed, not by words, but by specific combinations of the brain’s various sub-systems.

Following up on previous research, the findings, published in Human Brain Mapping (open-access preprint here) and funded by the U.S. Intelligence Advanced Research Projects Activity (IARPA), provide new evidence that the neural dimensions of concept representation are universal across people and languages.

Source:

https://www.kurzweilai.net/mind-reading-technology-identifies-complex-thoughts-using-machine-learning-and-fmri

Journal:
https://www.ccbi.cmu.edu/reprints/Wang_Just_HBM-2017_Journal-preprint.pdf