When "ImageNet: A Large-Scale Hierarchical Image Database" was published in 2009, it showed how large-scale datasets could transform neural network algorithms. Now its author, HAI co-director Dr. Fei-Fei Li, has won the #CVPR2019 award for the retrospectively most impactful paper. #AI
✴️ @AI_Python_EN
The #CVPR2019 Low-Power Image Recognition Challenge (LPIRC) winning teams come from Amazon, Alibaba, Expasoft, Tsinghua, MIT, and Qualcomm. Learn more about the challenge at
https://rebootingcomputing.ieee.org/lpirc
✴️ @AI_Python_EN
Presented at #CVPR2019: "Sim-to-Real via Sim-to-Sim: Data-efficient Robotic Grasping via Randomized-to-Canonical Adaptation Networks" (RCAN).
✴️ @AI_Python_EN
We have released the code and data for our #CVPR2019 paper on hand-object reconstruction.
https://www.di.ens.fr/willow/research/obman/
✴️ @AI_Python_EN
The #CVPR2019 Google booth will host demos featuring work on increasing AR realism using lighting
https://goo.gle/2KwK5ce
and teaching people how to dance with the Dance Like app.
https://goo.gle/2X18ddS
✴️ @AI_Python_EN
arXiv.org
DeepLight: Learning Illumination for Unconstrained Mobile Mixed Reality
We present a learning-based method to infer plausible high dynamic range (HDR), omnidirectional illumination given an unconstrained, low dynamic range (LDR) image from a mobile phone camera with a...
Waymo just announced the release of a large open dataset at #CVPR2019.
https://waymo.com/open
✴️ @AI_Python_EN
Artificial intelligence can write creative & convincingly human-like captions for any image. Great work by IBM Research at #CVPR2019. To ensure the generated captions did not sound unnatural, the work employed conditional GAN training. Read:
https://arxiv.org/pdf/1805.00063.pdf
✴️ @AI_Python_EN
This is incredible. This paper from the MIT Computer Science & Artificial Intelligence Lab, presented at #CVPR2019, shows how to reconstruct a face from speech patterns.
https://speech2face.github.io
✴️ @AI_Python_EN
All the datasets (there are a lot) released at #CVPR2019 are now indexed at
https://visualdata.io — check them out!
#computervision #machinelearning #dataset
✴️ @AI_Python_EN
Prof. Chris Manning, Director of StanfordAILab & founder of Stanfordnlp, shared inspiring thoughts on research trends and challenges in #computervision and #NLP at #CVPR2019. View the full interview:
https://bit.ly/2KR21hO
✴️ @AI_Python_EN
