ODSC is bringing you a blockbuster workshop in Quantitative Finance + Data Science, absolutely FREE. Three presenters from diverse domains are coming together to deliver it on June 29th. Hurry up, limited seats only!
Pankaj is a Quantitative Finance researcher at State Street and a CFA Level II candidate.
Abinash Panda is the CEO and founder of Prodios and a founding member of the well-known pgmpy package. He has also written two books for Packt Publishing on Probabilistic Graphical Models and Markov Models.
Usha Rengaraju is an expert in Quantitative Finance and Bayesian Networks.
The workshop will be followed by an AMA session with Swiggy's Data Science leaders.
RSVP here: https://bit.ly/2IiAzGc
#datascience #odsc #openai #neuralnetworks #ml #deeplearning #analytics #machinelearning #ai #artificialintelligence
@Machine_learn
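To give a taste of the workshop's Bayesian-network topic (the focus of pgmpy, which Abinash helped found), here is a minimal exact-inference sketch. The two-node network and its probabilities are hypothetical, and plain Python is used rather than pgmpy's API:

```python
# A tiny Bayesian network: Rain -> WetGrass, with made-up CPDs.
# P(Rain=True) = 0.2
# P(WetGrass=True | Rain=True) = 0.9, P(WetGrass=True | Rain=False) = 0.1

p_rain = {True: 0.2, False: 0.8}
p_wet_given_rain = {True: 0.9, False: 0.1}  # keyed by the value of Rain

def posterior_rain_given_wet():
    """P(Rain=True | WetGrass=True) by enumeration + Bayes' rule."""
    # Joint probability P(Rain=r, WetGrass=True) for each value of Rain.
    joint = {r: p_rain[r] * p_wet_given_rain[r] for r in (True, False)}
    evidence = sum(joint.values())  # P(WetGrass=True)
    return joint[True] / evidence

print(round(posterior_rain_given_wet(), 4))  # → 0.6923
```

Observing wet grass raises the probability of rain from the prior 0.2 to about 0.69; pgmpy automates this kind of computation for much larger networks.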
Title: Rebooting AI: Building Artificial Intelligence We Can Trust, by Gary Marcus and Ernest Davis.
#book #AI
@Machine_learn
4_5823350061324568102.pdf
16.6 MB
Apress.Explainable.AI.Recipes.pdf
8.2 MB
Explainable AI Recipes: Implement Solutions to Model Explainability and Interpretability with Python (2023)
Author: Pradeepta Mishra
#XAI #AI #DL #Python
#2023
@Machine_learn
Artificial Intelligence Class 10 (2023).pdf
20.8 MB
Book: Artificial Intelligence (Subject Code 417), Class 10
Authors: Orange Education Pvt Ltd
ISBN: N/A
Year: 2023
Pages: 619
Tags: #AI
@Machine_learn
Wiley_Artificial_Intelligence_Programming_with_Python_From_Zero.pdf
37.2 MB
Book: Artificial Intelligence Programming with Python: From Zero to Hero
Author: Perry Xiao
ISBN: 978-1-119-82094-9 (ebk)
Year: 2022
Pages: 716
Tags: #AI #DL
@Machine_learn
Introduction to Generative AI.pdf
12.5 MB
Book: 📚 Introduction to Generative AI
Authors: Numa Dhamani and Maggie Engler
ISBN: N/A
Year: 2023
Pages: 318
Tags: #AI
@Machine_learn
MiniCPM-V: A GPT-4V Level MLLM on Your Phone
The recent surge of Multimodal Large Language Models (MLLMs) has fundamentally reshaped the landscape of #AI research and industry, shedding light on a promising path toward the next AI milestone. However, significant challenges remain preventing MLLMs from being practical in real-world applications. The most notable challenge comes from the huge cost of running an MLLM with a massive number of parameters and extensive computation. As a result, most MLLMs need to be deployed on high-performing cloud servers, which greatly limits their application scopes such as mobile, offline, energy-sensitive, and privacy-protective scenarios. In this work, we present MiniCPM-V, a series of efficient #MLLMs deployable on end-side devices. By integrating the latest MLLM techniques in architecture, pretraining and alignment, the latest MiniCPM-Llama3-V 2.5 has several notable features: (1) Strong performance, outperforming GPT-4V-1106, Gemini Pro and Claude 3 on OpenCompass, a comprehensive evaluation over 11 popular benchmarks, (2) strong #OCR capability and 1.8M pixel high-resolution #image perception at any aspect ratio, (3) trustworthy behavior with low hallucination rates, (4) multilingual support for 30+ languages, and (5) efficient deployment on mobile phones. More importantly, MiniCPM-V can be viewed as a representative example of a promising trend: The model sizes for achieving usable (e.g., GPT-4V) level performance are rapidly decreasing, along with the fast growth of end-side computation capacity. This jointly shows that GPT-4V level MLLMs deployed on end devices are becoming increasingly possible, unlocking a wider spectrum of real-world AI applications in the near future.
Paper: https://arxiv.org/pdf/2408.01800v1.pdf
Code:
https://github.com/OpenBMB/MiniCPM-o
https://github.com/openbmb/minicpm-v
Datasets: Video-MME
@Machine_learn