ArtificialIntelligenceArticles
For those who have a passion for:
1. #ArtificialIntelligence
2. Machine Learning
3. Deep Learning
4. #DataScience
5. #Neuroscience
6. #ResearchPapers
7. Related Courses and Ebooks


#ArtificialIntelligence
#News
#Paper


🔵 Smarter artificial intelligence with artificial synapses

An international team of researchers has used a neural-network model to develop a new type of artificial synapse for artificial intelligence (AI) systems.
In artificial neural networks, computing systems are designed to mimic the workings of the human brain: digital neurons and synapses simulate the behavior of their biological counterpart, the brain. Synapses act as gateways, whether artificial or biological, through which information and signals pass from one neuron to the next.

Synapses are the connective tissue of biological and artificial neural networks alike. The human nervous system is estimated to contain about 100 trillion synapses.

While scientists have made impressive progress in developing artificial neural networks, AI systems still face certain limitations.

In the mammalian brain, synapses can process two kinds of signals at the same time: inhibitory and excitatory. Artificial synapses built from nanoscale electronic components, however, can handle only one kind of signal at a time. As a result, AI systems can do only half the work of real synapses.

In a recent study, Chinese and American researchers developed an artificial synapse that can process and manage both types of signals simultaneously.
Han Wang, a co-author of the study and a researcher at the University of Southern California, said that, like real synapses, the new devices allow switching between inhibitory and excitatory modes, a capability that earlier artificial synaptic devices lacked. This functional flexibility is essential for enabling complex artificial neural networks.

According to Wang, excitatory responses in the human brain arouse and alert it, while inhibitory responses calm it down.

The new artificial synapses make similar behavior possible in computing systems. Where the nervous system uses biological synapses to process chemical and electrical signals, artificial neural networks use artificial synapses to process digital information.
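A rough way to picture the excitatory/inhibitory distinction in code, not the device physics from the paper, is to treat it as the sign of a synaptic weight in a toy neuron model; the numbers below are arbitrary illustrations:

```python
import numpy as np

# Toy neuron: each synapse has a weight whose sign determines whether it is
# excitatory (+) or inhibitory (-). This is only a conceptual sketch, not the
# memristive device described in the ACS Nano paper.
weights = np.array([0.8, 0.5, -0.6])   # two excitatory synapses, one inhibitory
inputs = np.array([1.0, 1.0, 1.0])     # presynaptic activity (1 = active)

membrane_potential = np.dot(weights, inputs)
threshold = 0.5
fires = membrane_potential > threshold

print(f"membrane potential = {membrane_potential:.2f}, neuron fires: {fires}")
```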

The project was funded by the U.S. National Science Foundation and the Army Research Office. The results were published in a recent issue of ACS Nano.

Translated by Masoumeh Sohani


Source:
https://www.livescience.com/59671-artificial-synapses-could-lead-to-smarter-ai.html


Original paper:
https://pubs.acs.org/doi/abs/10.1021/acsnano.7b03033

👁‍🗨 #Introducing_AI_Researchers

#Introducing_AI_Researchers_Worldwide

You probably know what artificial intelligence is, and that it has spread into virtually every field, but you may not be familiar with the researchers and technologists behind it. In this channel we will introduce AI researchers and leading figures from Iran and around the world; stay with us. We previously introduced Andrew Ng.


2. Andrej Karpathy



Andrej Karpathy is a Research Scientist at OpenAI who likes to, in his words, “train Deep Neural Nets on large datasets,” and is “on a quest to solve intelligence.” In his spare time, he likes to watch Santa Clarita Diet.

The OpenAI Blog is super interesting, with articles like “Attacking machine learning with adversarial examples” breaking complex issues down to the point where non-programmers can understand them.

As a CS Ph.D. student at Stanford, Andrej built a Javascript library for training Neural Networks called ConvNetJS.

Follow Andrej on Twitter for AI industry gossip like Alphabet’s Waymo suit against Uber for allegedly stealing self-driving car secrets. Or check out his Github.



Related links

https://cs.stanford.edu/people/karpathy/

Bio:

I am the Director of AI at Tesla, currently focused on perception for the Autopilot. Previously, I was a Research Scientist at OpenAI working on Deep Learning in Computer Vision, Generative Modeling and Reinforcement Learning. I received my PhD from Stanford, where I worked with Fei-Fei Li on Convolutional/Recurrent Neural Network architectures and their applications in Computer Vision, Natural Language Processing and their intersection. Over the course of my PhD I squeezed in two internships at Google where I worked on large-scale feature learning over YouTube videos, and in 2015 I interned at DeepMind and worked on Deep Reinforcement Learning. Together with Fei-Fei, I designed and taught a new Stanford class on Convolutional Neural Networks for Visual Recognition (CS231n). The class was the first Deep Learning course offering at Stanford and has grown from 150 enrolled in 2015 to 330 students in 2016, and 750 students in 2017.

On a side for fun I blog, tweet, and maintain several Deep Learning libraries written in Javascript (e.g. ConvNetJS, RecurrentJS, REINFORCEjs, t-sneJS). I am also sometimes jokingly referred to as the reference human for ImageNet (post :)). I also recently expanded on this with arxiv-sanity.com, which lets you search and sort through ~30,000 Arxiv papers on Machine Learning over the last 3 years in the same pretty format.


https://cs.stanford.edu/people/karpathy/convnetjs/

https://www.openai.com/

Twitter:
https://twitter.com/karpathy


LinkedIn:
https://www.linkedin.com/in/andrej-karpathy-9a650716/de

GitHub:
https://karpathy.github.io/


YouTube channel:
https://www.youtube.com/channel/UCPk8m_r6fkUSYmvgCBwq-sw

https://blog.openai.com/adversarial-example-research/


Talk:
Deep Learning for Computer Vision
https://www.youtube.com/watch?v=u6aEYuemt0M&feature=youtu.be

#ArtificialIntelligence
#DeepLearning


7 steps to becoming a deep learning expert

🔵7 Steps for becoming Deep Learning Expert

One of the questions we are asked most often is, "Where should I start learning deep learning?" This post can be a useful starting point for getting into deep learning.

One of the frequent questions we get about our work is: "Where to start learning Deep Learning?" Lots of courses and tutorials are available freely online, but it gets overwhelming for the uninitiated. We have curated a few resources below which may help you begin your trip down the Deep Learning rabbit hole.

1. The first step is to understand machine learning; the best resource for this is Andrew Ng's (ex-Google, Stanford, Baidu) online course on Coursera. Going through the lectures is enough to understand the basics, but the assignments take your understanding to another level.

https://www.coursera.org/learn/machine-learning

2. The next step is to develop intuition for neural networks. So go forth, write your first neural network and play with it (a minimal sketch follows the link below).

https://iamtrask.github.io/2015/07/12/basic-python-network/
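As a minimal sketch in the spirit of the tutorial linked above (layer sizes and the number of training steps are illustrative, not taken from the post), a tiny two-layer network can be trained on a toy problem with nothing but numpy:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dataset: the target is the XOR of the first two input bits.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0], [1], [1], [0]])

np.random.seed(1)
w0 = 2 * np.random.random((3, 4)) - 1   # input -> hidden weights
w1 = 2 * np.random.random((4, 1)) - 1   # hidden -> output weights

for step in range(10000):
    # Forward pass
    hidden = sigmoid(X @ w0)
    output = sigmoid(hidden @ w1)

    # Backward pass: gradient of the squared error through both sigmoids
    output_delta = (y - output) * output * (1 - output)
    hidden_delta = (output_delta @ w1.T) * hidden * (1 - hidden)

    # Simple full-batch weight update (implicit learning rate of 1)
    w1 += hidden.T @ output_delta
    w0 += X.T @ hidden_delta

print(output.round(2))  # should approach [[0], [1], [1], [0]]
```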

3. Understanding neural networks is important, but simple neural networks are not sufficient to solve the most interesting problems. One variation, Convolutional Neural Networks (CNNs), works really well for visual tasks (a small code sketch follows this step's links). The Stanford lecture notes and slides on the topic are here: CS231n Convolutional Neural Networks for Visual Recognition (notes) and CS231n: Convolutional Neural Networks for Visual Recognition (lecture slides). There are also two great videos on CNNs; one is here: https://www.youtube.com/watch?v=bEUX_56Lojc

https://cs231n.github.io/

https://cs231n.stanford.edu/syllabus.html


Update: Stanford is releasing video lectures for CS231n - Convolutional Neural Networks for Visual Recognition. Here is the link.
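To get a feel for what such a network looks like in code, here is a hedged sketch of a small CNN using tf.keras; the layer sizes are illustrative and are not taken from the CS231n assignments, which build these networks from scratch in numpy:

```python
import tensorflow as tf

# A minimal convolutional network for 32x32 RGB images (CIFAR-10-sized input).
# Layer sizes are illustrative only.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```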
4. The next step is to set up the following in order to run your first CNN on your own PC:

Buy a GPU and install CUDA
Install Caffe and its GUI wrapper DIGITS
Install BOINC (this will not help you with deep learning, but it lets other researchers use your GPU in its idle time, for science)

5. DIGITS provides a few networks out of the box, such as LeNet for character recognition and GoogLeNet for image classification. You need to download the LeNet and GoogLeNet datasets to run them. You can also modify the networks and try other fun visual recognition tasks, like we did here.

6. For various Natural Language Processing (NLP) tasks, RNNs (Recurrent Neural Networks) are really the best. The best place to learn about RNNs is the Stanford lecture videos here: https://cs224d.stanford.edu/syllabus.html. You can download TensorFlow and use it to build RNNs (a small sketch follows).
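As a small, hedged sketch of an RNN for text built with tf.keras (the CS224d lectures derive such models from scratch; the vocabulary and sequence sizes below are made up for illustration):

```python
import tensorflow as tf

vocab_size = 128    # e.g. ASCII characters (illustrative)
seq_len = 40        # characters of context per training example

# Predict the next character from the previous seq_len characters.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len,)),
    tf.keras.layers.Embedding(vocab_size, 64),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(vocab_size, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```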

7. Now go ahead and choose a Deep Learning problem ranging from facial detection to speech recognition to a self-driving car, and solve it.

If you have made it through all the above steps: congratulations!


https://www.linkedin.com/pulse/7-steps-becoming-deep-learning-expert-ankit-agarwal
"Learning by Association - A versatile semi-supervised training method for neural networks": Walk & Visit loss https://arxiv.org/abs/1706.00909

#ArtificialIntelligence
#Paper
#MachineLearning
#fMRI
#Brain


🔵 ‘Mind reading’ technology identifies complex thoughts, using machine learning and fMRI
CMU aims to map all types of knowledge in the brain

By combining machine-learning algorithms with fMRI brain imaging technology, Carnegie Mellon University (CMU) scientists have discovered, in essence, how to “read minds.”

The researchers used functional magnetic resonance imaging (fMRI) to view how the brain encodes various thoughts (based on blood-flow patterns in the brain). They discovered that the mind’s building blocks for constructing complex thoughts are formed, not by words, but by specific combinations of the brain’s various sub-systems.

Following up on previous research, the findings, published in Human Brain Mapping (open-access preprint here) and funded by the U.S. Intelligence Advanced Research Projects Activity (IARPA), provide new evidence that the neural dimensions of concept representation are universal across people and languages.
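In very simplified form, the decoding setup can be sketched as a classifier trained on voxel activation patterns. The arrays below are random placeholders, not CMU's data, and the actual study uses much richer regression models over semantic features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: 240 fMRI scans, 5000 voxel activations each, labelled
# with which of 4 sentences the participant was reading at the time.
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 5000))       # voxel activation patterns (placeholder)
y = rng.integers(0, 4, size=240)       # sentence labels (placeholder)

# Decode the stimulus from brain activity and estimate accuracy by cross-validation.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("decoding accuracy per fold:", scores)
```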

Source:

https://www.kurzweilai.net/mind-reading-technology-identifies-complex-thoughts-using-machine-learning-and-fmri

Journal:
https://www.ccbi.cmu.edu/reprints/Wang_Just_HBM-2017_Journal-preprint.pdf
Great new upgrade to #pix2pix! Perceptual Adversarial Networks for Image-to-Image Transformation https://arxiv.org/abs/1706.09138 #AI #ML
New automated method helps explain the inner workings of neural networks for machine vision: https://bit.ly/2tsueTb

#ArtificialIntelligence
#Brain
#Consciousness
#CognitiveScience

🔵Ned Block: Why AI Approaches to Cognition Won’t Work for Consciousness


In this Talk at Google, Ned Block talks about how current AI approaches to cognition won’t work for creating consciousness.


https://youtu.be/6lHHxcxurhQ
Video: #DeepLearning and the Future of #ArtificialIntelligence(AI) | Facebook AI Director Yann LeCun https://youtu.be/wbcYG9wOvRc

#ArtificialIntelligence
#Education
#DeepLearning


🔵Learning Artificial Intelligence(AI) and Tensorflow Without a PhD by Google's Martin Görner

Published on Jun 19, 2017
Google has recently open-sourced its framework for machine learning and neural networks called TensorFlow. With this new tool, deep machine learning transitions from an area of research into mainstream software engineering. In this session, we will teach you how to choose the right neural network for your problem and how to make it behave. Familiarity with differential equations is no longer required. Instead, a couple of lines of TensorFlow Python and a bag of "tricks of the trade" will do the job. No previous Python knowledge is required.

This university session will cover the basics of deep learning, without any assumptions about the level of the participants. Machine learning beginners are welcome. We will cover:
- fully connected neural networks
- convolutional neural networks
- regularization techniques: dropout, learning rate decay, batch normalization
- recurrent neural networks
- natural language analysis, word embedding
- transfer learning
- image analysis
- image generation
- and many examples
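As a hedged approximation of the "couple of lines of TensorFlow Python" the session refers to (the original talk uses 2017-era low-level TensorFlow, so this tf.keras sketch only captures its spirit), here is a fully connected MNIST classifier with dropout, one of the regularization tricks listed above:

```python
import tensorflow as tf

# Fully connected classifier for 28x28 grayscale digits, with dropout as the
# regularization trick mentioned in the session description.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(200, activation="relu"),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```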

Martin Görner is passionate about science, technology, coding, algorithms and everything in between. He graduated from Mines ParisTech, enjoyed his first engineering years in the computer architecture group of STMicroelectronics, and then spent the next 11 years shaping the nascent ebook market, starting with the Mobipocket startup, which later became the software part of the Amazon Kindle and its mobile variants. He joined Google Developer Relations in 2011 and now focuses on parallel processing and machine learning.

https://www.youtube.com/watch?v=wDxDhvLLNuE
"Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes": cuz loss function & minima https://arxiv.org/abs/1706.10239
DeepMind’s Relational Reasoning Networks - Demystified. https://buff.ly/2sybylc #BigData #DeepLearning #MachineLearning #DataScience #AI
"wrapper AI" replaces portions of analyzed images with white noise to help assess original https://www.newscientist.com/article/2139396-peering-inside-an-ais-brain-will-help-us-trust-its-decisions
The paper behind the post above:

🔵Latent Attention Networks

Christopher Grimm, Dilip Arumugam, Siddharth Karamcheti, David Abel, Lawson L.S. Wong, Michael L. Littman
(Submitted on 2 Jun 2017)
Deep neural networks are able to solve tasks across a variety of domains and modalities of data. Despite many empirical successes, we lack the ability to clearly understand and interpret the learned internal mechanisms that contribute to such effective behaviors or, more critically, failure modes. In this work, we present a general method for visualizing an arbitrary neural network's inner mechanisms and their power and limitations. Our dataset-centric method produces visualizations of how a trained network attends to components of its inputs.


https://arxiv.org/abs/1706.00536
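A generic way to probe which parts of an input a trained network relies on, related in spirit to the white-noise substitution described in the linked article but not the method of this paper (which trains a separate attention network), is occlusion-style scoring. A minimal sketch, assuming a hypothetical `model` object with a Keras-style `predict` method and image dimensions divisible by the patch size:

```python
import numpy as np

def occlusion_saliency(model, image, true_class, patch=8):
    """Score each patch by how much replacing it with white noise lowers the
    model's confidence in the true class. `model` is assumed (hypothetically)
    to expose a Keras-style predict() returning class probabilities."""
    h, w, c = image.shape
    base = model.predict(image[None])[0][true_class]
    saliency = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            noisy = image.copy()
            # Replace one patch with white noise, as in the occlusion idea.
            noisy[i:i + patch, j:j + patch] = np.random.rand(patch, patch, c)
            prob = model.predict(noisy[None])[0][true_class]
            saliency[i // patch, j // patch] = base - prob
    return saliency  # large values = regions the network relies on most
```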
#ArtificialIntelligence
#Paper
#DeepNeuralNetworks
#CognitivePsychology


🔵Interpreting Deep Neural Networks using Cognitive Psychology

Deep neural networks have learnt to do an amazing array of tasks - from recognising and reasoning about objects in images to playing Atari and Go at super-human levels. As these tasks and network architectures become more complex, the solutions that neural networks learn become more difficult to understand.
This is known as the ‘black-box’ problem, and it is becoming increasingly important as neural networks are used in more and more real world applications.

https://deepmind.com/blog/cognitive-psychology/


Paper:
https://arxiv.org/abs/1706.08606
🔵 All of Elon Musk's projects

Businessweek, a subsidiary of Bloomberg, has built an interactive web page where you can follow the progress of each of Elon Musk's projects.

https://www.bloomberg.com/features/elon-musk-goals/