A petition asking organizers to move computer science conference venues out of the United States to other parts of the world, so that all researchers can attend.
Please sign if you can:
https://www.change.org/p/conference-organizers-abandon-united-states-as-the-venue-for-computer-science-conferences-4187daa3-e4f2-42a1-85dc-fe49641c4026?recruiter=false&utm_source=share_petition&utm_medium=twitter&utm_campaign=psf_combo_share_initial&utm_term=share_petition&recruited_by_id=0a1dae20-6dbb-11e9-9fac-5b67a3a1a8bb&share_bandit_exp=initial-15283214-en-US&share_bandit_var=v1
Google-Landmarks-v2: An Improved Dataset for Landmark Recognition & Retrieval
https://ai.googleblog.com/2019/05/announcing-google-landmarks-v2-improved.html
ICLR 2019: Overcoming limited data – Towards Data Science
https://towardsdatascience.com/iclr-2019-overcoming-limited-data-382cd19db6d2
The World’s First All On-Device, Deep Neural Net, and Realtime Video Recognition App
Amazon Alexa, Apple Siri, and Google Lens are still imperfect in one respect: users' data has to travel from their smartphones to the providers' cloud.
Sensifai has launched a real-time, end-to-end, on-device deep-neural-network video recognizer built on the Huawei Kirin 980 chipset, elevating the AI capabilities of Huawei flagship smartphones to a whole new level.
https://consumer.huawei.com/en/community/details/?topicId=7444
The computer vision group at Columbia is looking for a postdoctoral fellow. Come wrangle pixels with us in the big city. More info:
https://www.cs.columbia.edu/~vondrick/postdoc2019.pdf
Separate the generation and confirmation of hypotheses:
Come up with an exciting research question
Write a paper proposal without confirmatory experiments
After the paper is accepted, run the experiments and report your results
https://preregister.vision/
Deep learning channel pinned «New group link: https://t.iss.one/joinchat/A3HTSj3_zWMqpGQbN10ivw»
Deep learning tutorial video (with a discount code)
A 5-minute teaser of the deep learning #tutorial_video in Python and Keras:
https://aparat.com/v/Cv2fR
The first two and a half hours of the Deep Learning course, so prospective buyers can evaluate it:
https://aparat.com/v/0xgm5
Watch the teaser in higher quality.
Discount code for Eid al-Fitr (valid through Saturday):
Eid-e-Fetr
More information and purchase:
https://class.vision/deeplearning-keras/
FIRST INTERNATIONAL WORKSHOP ON LARGE SCALE HOLISTIC VIDEO UNDERSTANDING
In Conjunction with ICCV 2019, Seoul, Korea
https://holistic-video-understanding.github.io/workshops/iccv2019.html
In recent years we have seen tremendous progress in the ability of computer systems to classify video clips from the Internet and to analyze human actions in video. Much of the work in video recognition focuses on specific understanding tasks such as action recognition or scene understanding. While these tasks have seen great achievements, holistic video understanding has not received comparable attention as a problem in its own right. Current systems are experts in narrow slices of the general video understanding problem, yet real-world applications, such as analyzing the multiple concepts in a video for search engines and media-monitoring systems, or describing the surroundings of a humanoid robot, require combining today's state-of-the-art methods. In this workshop we therefore introduce holistic video understanding as a new challenge for the video understanding community. The challenge focuses on recognizing scenes, objects, actions, attributes, and events in real-world, user-generated videos. To support these tasks, we also introduce a new dataset, Holistic Video Understanding (HVU), organized hierarchically in a semantic taxonomy for holistic video understanding. Almost all existing real-world video datasets target human action or sport recognition, so the new dataset can help the vision community and draw attention to more interesting solutions for holistic video understanding. The workshop is tailored to bringing together ideas around multi-label and multi-task recognition of different semantic concepts in real-world videos, and research efforts can be evaluated on the new dataset.
Speakers: Rahul Sukthankar (Google AI), Kristen Grauman (U of Texas at Austin), Carl Vondrick (Columbia U.), Manohar Paluri (Facebook AI)
Organizers: Vivek Sharma, Mohsen Fayyaz, Ali Diba, Luc Van Gool, Juergen Gall, Rainer Stiefelhagen, Manohar Paluri
Sponsors: Facebook AI Research, Sensifai
Holistic Video Understanding is a joint project of KU Leuven, the University of Bonn, KIT, ETH, and the HVU team.
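The multi-label, multi-task recognition the workshop centers on can be sketched with a per-concept binary cross-entropy loss, where each video carries a multi-hot label vector over scenes, objects, actions, attributes, and events. A minimal NumPy illustration follows; the function name and parameterization are my own, not taken from any HVU codebase:

```python
import numpy as np

def multilabel_bce(logits, targets, eps=1e-7):
    """Mean binary cross-entropy over independent concept labels.

    logits:  (batch, n_concepts) raw scores from a video model.
    targets: (batch, n_concepts) multi-hot ground truth, e.g. a clip
             tagged simultaneously with a scene, an object, and an action.
    """
    p = 1.0 / (1.0 + np.exp(-logits))  # independent sigmoid per concept
    return float(-np.mean(targets * np.log(p + eps)
                          + (1 - targets) * np.log(1 - p + eps)))
```

Unlike a softmax classifier, independent sigmoids allow several concepts to be active at once in the same clip, which is exactly what holistic recognition across scenes, objects, actions, attributes, and events requires.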
AAISS Application Form.docx
33.3 KB
Call for talk and/or workshop proposals
Amirkabir University of Technology plans to hold a summer Artificial Intelligence symposium, covering various topics related to the field, during the last week of Tir and the first week of Mordad 1398 (mid-to-late July 2019).
Those interested in giving a talk and/or workshop are invited to submit their proposal, using the provided form and accompanied by the presenter(s)' CV, no later than 24 Khordad 1398 (June 14, 2019), to [email protected] with the email subject "AAISS Application".
#Symposium #AI
#Tehran #AUT
A very efficient neural network architecture can be designed based on the Fast Fourier Transform (FFT) algorithm.
Butterfly Transform: An Efficient FFT Based Neural Architecture Design
https://arxiv.org/pdf/1906.02256.pdf
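To make the FFT analogy concrete, here is a minimal sketch of a butterfly-structured linear layer: log2(n) stages, each mixing pairs of channels at a growing stride with learned 2x2 matrices, for O(n log n) multiply-adds instead of the O(n^2) of a dense matrix. The parameterization below is illustrative only, not the paper's exact implementation:

```python
import numpy as np

def butterfly_transform(x, weights):
    """Apply an FFT-style butterfly network of log2(n) stages.

    x:       (n,) input vector, n a power of two.
    weights: list of log2(n) arrays, each of shape (n//2, 2, 2) holding
             a learned 2x2 mixing matrix per channel pair (hypothetical
             layout, chosen for clarity).
    """
    n = x.size
    assert len(weights) == int(np.log2(n))
    y = x.astype(float).copy()
    for s, w in enumerate(weights):
        stride = 1 << s            # pair distance doubles each stage
        out = np.empty_like(y)
        p = 0                      # running pair index into w
        for i in range(n):
            if (i // stride) % 2 == 0:   # i is the "top" of a pair
                j = i + stride
                a, b = y[i], y[j]
                out[i] = w[p, 0, 0] * a + w[p, 0, 1] * b
                out[j] = w[p, 1, 0] * a + w[p, 1, 1] * b
                p += 1
        y = out
    return y
```

With every 2x2 block set to [[1, 1], [1, -1]] this reduces to the unnormalized Hadamard transform, which is the same butterfly wiring the FFT uses; learning the 2x2 blocks instead is what turns the structure into a trainable channel-mixing layer.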
Deep learning channel pinned «New group link: https://t.iss.one/joinchat/A3HTSj3_zWNXN0oApJf8rg»
PyRobot is a framework and ecosystem that enables AI researchers and students to get up and running with a robot in just a few hours, without specialized knowledge of the hardware or of details such as device drivers, control, and planning. PyRobot will help Facebook AI advance our long-term robotics research, which aims to develop embodied AI systems that can learn efficiently by interacting with the physical world. We are now open-sourcing PyRobot to help others in the AI and robotics community as well.
https://ai.facebook.com/blog/open-sourcing-pyrobot-to-accelerate-ai-robotics-research/
https://github.com/facebookresearch/pyrobot
https://www.pyrobot.org/
List of the program committee of the Holistic Video Understanding Workshop in ICCV 2019:
Cees Snoek (UvA)
Mubarak Shah (UCF)
Jan van Gemert (TU Delft)
Ivan Laptev (INRIA)
Dima Damen (University of Bristol)
Du Tran (Facebook Research)
Hilde Kuehne (MIT-IBM Watson Lab)
Angela Yao (National University of Singapore)
Jakub Tomczak (Qualcomm AI Research)
Hakan Bilen (University of Edinburgh)
Noureldien Hussein (UvA)
Silvia-Laura Pintea (TU Delft)
Jack Valmadre (University of Oxford)
Suman Shah (ETH Zürich)
Efstratios Gavves (UvA)
Hamed Pirsiavash (UMBC)
Christoph Feichtenhofer (Facebook Research)
Chen Huang (Apple)
Makarand Tapaswi (INRIA)
Limin Wang (Nanjing University)
Tinne Tuytelaars (KU Leuven)
Saquib Sarfraz (KIT)
Yale Song (Microsoft Cloud & AI)
Miguel Angel Bautista (Apple)
David Ross (Google Research)
Sourish Chaudhuri (Google Research)
Chen Sun (Google Research)
Joao Carreira (Google DeepMind)
Andrew Owens (UC Berkeley)
Basura Fernando (A*STAR Singapore)
Matt Feiszli (Facebook Research)
Philippe Weinzaepfel (NAVER Labs)
Josh Susskind (Apple)
Ross Goroshin (Google Brain)
https://holistic-video-understanding.github.io/workshops/iccv2019.html
A short note on "VideoBERT: A Joint Model for Video and Language Representation Learning"
By Chen Sun, Austin Myers, @cvondrick, Kevin Murphy and Cordelia Schmid
https://twitter.com/yassersouri/status/1144283614953267201?s=19
A short note on "Deep Set Prediction Networks"
By: Yan Zhang et al.
https://twitter.com/yassersouri/status/1146066000669896705