AI, Python, Cognitive Neuroscience
Credit Risk Analysis Using #MachineLearning and #DeepLearning Models

Lovely paper by Peter Martey Addo, Dominique Guegan and Bertrand Hassani

Code on #Github (it's in #R)

https://github.com/brainy749/CreditRiskPaper

✴️ @AI_Python_EN
A very interesting paper in which the authors solve the three-body problem using deep neural networks in a tremendously more computationally efficient way. While there is a lot of talk about current deep learning not leading toward human-like intelligence, it is worth thinking more deeply about all the fantastic areas, applications, and fields where current deep learning can be game-changing right now and can lead to a new era of human-machine collaboration.
#deeplearning
#solvingproblems

https://arxiv.org/abs/1910.07291
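
To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of the general approach: a small feed-forward network is trained to map initial conditions and a time point to the bodies' positions, standing in for an expensive numerical integrator. The architecture, dimensions, and data below are placeholders, not the paper's actual setup.

```python
import torch
import torch.nn as nn

# Hypothetical setup: inputs are initial conditions of the three bodies plus a
# time t; targets are the positions at time t, which in practice would come
# from a conventional numerical integrator.
class ThreeBodyNet(nn.Module):
    def __init__(self, in_dim=7, hidden=128, out_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

model = ThreeBodyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random placeholder data standing in for integrator-generated trajectories.
x = torch.randn(1024, 7)   # initial conditions + time
y = torch.randn(1024, 6)   # body positions at time t

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# Once trained, a single forward pass replaces a full numerical integration.
```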

✴️ @AI_Python_EN
Facebook: Pushing state-of-the-art in 3D content understanding

#DataScience #MachineLearning #ArtificialIntelligence

https://bit.ly/2JBaYK5

✴️ @AI_Python_EN
A Prior of a Googol Gaussians: a Tensor Ring Induced Prior for Generative Models

https://arxiv.org/abs/1910.13148

#MachineLearning #NeurIPS #NeurIPS2019

✴️ @AI_Python_EN
Books on categorical data analysis I've found very helpful:
- Generalized Linear Models and Extensions (Hardin and Hilbe)
- Categorical Data Analysis (Agresti)
- Analyzing Categorical Data (Simonoff)
- Regression Models for Categorical Dependent Variables (Long and Freese)
- Applied Logistic Regression (Hosmer and Lemeshow)
- Logistic Regression Models (Hilbe)
- Analysis of Ordinal Categorical Data (Agresti)
- Applied Ordinal Logistic Regression (Liu)
Surveys also collect data in the form of counts ("How many times have you...").
Analyzing count data with methods designed for continuous data is usually unwise; here are a couple of excellent books on that topic (a short code sketch follows the list):
- Modeling Count Data (Hilbe)
- Negative Binomial Regression (Hilbe)
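
As a rough illustration of the point above (not taken from any of these books), here is a minimal Python sketch of a count-data regression with statsmodels; the survey data frame and columns are made up, and if the counts were overdispersed, swapping in a negative binomial family would be the natural next step.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical survey data: weekly snack purchases (a count) plus two predictors.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "purchases": rng.poisson(2, size=500),
    "age": rng.integers(18, 70, size=500),
    "income": rng.normal(50, 15, size=500),
})

# Poisson GLM for the count outcome; for overdispersed counts, use
# sm.families.NegativeBinomial() instead.
model = smf.glm("purchases ~ age + income",
                data=df,
                family=sm.families.Poisson()).fit()
print(model.summary())
```
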
There are also numerous "machine learners" I have used over the years but, in general, they tell us less about the why, which is essential in consumer research. And though predictive analytics is not typically the purpose of consumer surveys, statistical methods, used competently, are also highly competitive with machine learning in terms of predictive accuracy.

❇️ @AI_Python_EN
1) TensorFlow World 2019 Keynote
https://www.youtube.com/watch?v=MunFeX-0MD8

2) Introduction to TensorFlow 2.0: Easier for beginners, and more powerful for experts (TF World '19)
https://www.youtube.com/watch?v=5ECD8J3dvDQ&t=966s

3) Swift for TensorFlow (TF World '19)
https://www.youtube.com/watch?v=9FWsSGD6V8Q

4) Building models with tf.text (TF World '19)
https://www.youtube.com/watch?v=iu_OSAg5slY&t=1s

5) Performant, scalable models in TensorFlow 2 with tf.data, tf.function & tf.distribute (TF World '19)
https://www.youtube.com/watch?v=yH1cF7GnoIo&t=5s

6) Getting involved in the TensorFlow Community (TF World '19)
https://www.youtube.com/watch?v=UbWGYcTUPyI&t=16s

7) TensorFlow World 2019 | Day 1 Livestream
https://www.youtube.com/watch?v=MgrTRK5bbsg

8) Great TensorFlow Research Cloud projects from around the world (TF World '19)
https://www.youtube.com/watch?v=rkqukapSmwQ&t=13s

9) TensorFlow Lite: Solution for running ML on-device (TF World '19)
https://www.youtube.com/watch?v=0SpZy7iouFU

10) TensorFlow Model Optimization: Quantization and Pruning (TF World '19)
https://www.youtube.com/watch?v=3JWRVx1OKQQ&t=1s

11) TFX: Production ML Pipelines with TensorFlow (TF World '19)
https://www.youtube.com/watch?v=TA5kbFgeUlk&t=1452s

12) TensorFlow World 2019 | Day 2 Livestream PM
https://www.youtube.com/watch?v=gy6v-Vc_P0U

13) Unlocking the power of ML for your JavaScript applications with TensorFlow.js (TF World '19)
https://www.youtube.com/watch?v=kKp7HLnPDxc

14) Day 2 Keynote (TF World '19)
https://www.youtube.com/watch?v=zxd3Q2gdArY
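
For anyone who wants to try the TF 2.0 workflow these talks cover, here is a minimal sketch (toy data, hypothetical model) combining a tf.data pipeline with a tf.function-compiled training step:

```python
import numpy as np
import tensorflow as tf

# Toy data standing in for a real dataset.
x = np.random.rand(1000, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("int32")

# tf.data input pipeline (see talk 5).
ds = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(1000).batch(32)

# A small Keras model (see talk 2).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
opt = tf.keras.optimizers.Adam()

# tf.function compiles the training step into a graph (see talk 5).
@tf.function
def train_step(xb, yb):
    with tf.GradientTape() as tape:
        loss = loss_fn(yb, model(xb, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for xb, yb in ds:
    train_step(xb, yb)
```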

❇️ @AI_Python_EN
Spiking Neural Network (SNN) with PyTorch: towards bridging the gap between deep learning and the human brain

https://guillaume-chevalier.com/spiking-neural-network-snn-with-pytorch-where-backpropagation-engenders-stdp-hebbian-learning/


❇️ @AI_Python_EN
If you have technical skills in machine learning, data science, natural language processing, deep learning, etc., and are interested in paid (remote) mini-projects and side gigs, this is a good opportunity to get compensated while further sharpening your skills on your own schedule.

IMHO also useful if you're a grad student, have student loans, or just want to build up your portfolio.

If you're interested, please opt in here:

Feel free to email [email protected] for any questions.
ICCV 2019 | Best Paper Award: SinGAN: Learning a Generative Model from a Single Natural Image
https://lnkd.in/fS3ZBAP

ICCV 2019 | Best Student Paper Award: PLMP — Point-Line Minimal Problems in Complete Multi-View Visibility
https://lnkd.in/f7CDuq2

ICCV 2019 | Best Paper Honorable Mentions
Paper: Asynchronous Single-Photon 3D Imaging
https://lnkd.in/fMpQPCj

Paper: Specifying Object Attributes and Relations in Interactive Scene Generation
https://lnkd.in/fmjk9eZ

You can find all papers on the ICCV 2019 open access website:
https://lnkd.in/gaBwvS4

Source: Synced

#machinelearning #deeplearning #computervision #iccv2019

❇️ @AI_Python_EN
20 Popular Machine Learning Metrics. Part 1: Classification & Regression Evaluation Metrics
#DataScience #MachineLearning #ArtificialIntelligence

https://bit.ly/34aNEKN
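
As a quick companion to the article, here is a small scikit-learn sketch computing a few common classification metrics on a toy problem (the data and model are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

# Toy binary classification problem.
X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]

print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("recall   :", recall_score(y_te, pred))
print("F1       :", f1_score(y_te, pred))
print("ROC AUC  :", roc_auc_score(y_te, proba))
```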

❇️ @AI_Python_EN
What's the purpose of statistics?

"Do you think the purpose of existence is to pass out of existence is the purpose of existence?" - Ray Manzarek

The former Doors organist poses some fundamental questions to which definitive answers remain elusive. Happily, the purpose of statistics is easier to fathom, since humans are its creators. Put simply, it is to enhance decision making.

These decisions could be those made by scientists, businesspeople, politicians and other government officials, by medical and legal professionals, or even by religious authorities. In informal ways, ordinary folks also use statistics to help make better decisions.

How does it do this?

One way is by providing basic information, such as how many, how much and how often. The "stat" in statistics derives from the word state, as in nation state; as the field emerged as a formal discipline, describing nations quantitatively (e.g., population size, number of citizens working in manufacturing) became a fundamental purpose. Frequencies, means, medians and standard deviations are now familiar to nearly everyone.

Often we must rely on samples to make inferences about our population of interest. From a consumer survey, for example, we might estimate mean annual household expenditures on snack foods. This is known as inferential statistics, and confidence intervals will be familiar to anyone who has taken an introductory course in statistics. So will methods such as t-tests and chi-squared tests, which can be used to make population inferences about groups (e.g., are males more likely than females to eat pretzels?).
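
For instance, the pretzel question maps onto a chi-squared test of independence; here is a minimal SciPy sketch with made-up survey counts:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table of survey counts:
# rows = male / female, columns = eats pretzels / does not.
table = np.array([[120,  80],
                  [ 90, 110]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```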

Another way statistics helps us make decisions is by exploring relationships among variables through the use of cross tabulations, correlations and data visualizations. Exploratory data analysis (EDA) can also take on more complex forms and draw upon methods such as principal components analysis, regression and cluster analysis. EDA is often used to develop hypotheses which will be assessed more rigorously in subsequent research.

These hypotheses are often causal in nature, for example, why some people avoid snacks. Randomized experiments are generally considered the best approach to causal analysis but are not always possible or appropriate; see Why experiment? for some more thoughts on this subject. Hypotheses can also be further developed and refined on the same data rather than simply tested through null hypothesis significance testing, though this has traditionally been frowned upon because the same data are then being used for multiple purposes.

Many statisticians are actively involved in designing research, not merely using secondary data. This is a large subject but briefly summarized in Preaching About Primary Research.

Making classifications, predictions and forecasts is another traditional role of statistics. In a data science context, the first two are often called predictive analytics and employ methods such as random forests and standard (OLS) regression. Forecasting sales for the next year is a different matter and normally requires the use of time-series analysis. There is also unsupervised learning, which aims to find previously unknown patterns in unlabeled data. Using K-means clustering to partition consumer survey respondents into segments based on their attitudes is an example of this.
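
A minimal sketch of that last example, using scikit-learn's KMeans on made-up attitude ratings:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical attitude ratings from 500 survey respondents on 8 items (1-7 scale).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(500, 8)).astype(float)

# Standardize, then partition respondents into four segments.
X = StandardScaler().fit_transform(ratings)
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(segments))  # respondents per segment
```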

Quality control, operations research, what-if simulations and risk assessment are other areas where statistics plays a key role. There are many others, as this page illustrates.

The fuzzy buzzy term analytics is frequently used interchangeably with statistics, an offense to which I also plead guilty.

"The best thing about being a statistician is that you get to play in everyone's backyard." - John Tukey

#ai #artificialintelligence #ml #statistics #bigdata #machinelearning
#datascience

❇️ @AI_Python_EN