5 useful Python automation scripts
1. Download YouTube videos
2. Automate WhatsApp messages
3. Google search with Python
4. Download Instagram posts
5. Extract audio from video files
1. Download YouTube videos
pip install pytube
from pytube import YouTube
# Specify the URL of the YouTube video
video_url = "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
# Create a YouTube object
yt = YouTube(video_url)
# Select the highest resolution stream
stream = yt.streams.get_highest_resolution()
# Define the output path for the downloaded video
output_path = "path/to/output/directory/"
# Download the video
stream.download(output_path)
print("Video downloaded successfully!")
2. Automate WhatsApp messages
pip install pywhatkit
import pywhatkit
# Set the target phone number (with country code) and the message
phone_number = "+1234567890"
message = "Hello, this is an automated WhatsApp message!"
# Schedule the message to be sent at a specific time (24-hour format)
hour = 13
minute = 30
# Send the scheduled message
pywhatkit.sendwhatmsg(phone_number, message, hour, minute)
3. Google search with Python
pip install googlesearch-python
from googlesearch import search
# Define the query you want to search
query = "Python programming"
# Specify the number of search results you want to retrieve
num_results = 5
# Perform the search and retrieve the results
search_results = search(query, num_results=num_results, lang='en')
# Print the search results
for result in search_results:
    print(result)
4. Download Instagram posts
pip install instaloader
import instaloader
# Create an instance of Instaloader
loader = instaloader.Instaloader()
# Define the target Instagram profile
target_profile = "instagram"
# Download posts from the profile
loader.download_profile(target_profile, profile_pic=False, fast_update=True)
print("Posts downloaded successfully!")
5. Extract audio from video files
pip install moviepy
from moviepy.editor import VideoFileClip
# Define the path to the video file
video_path = "path/to/video/file.mp4"
# Create a VideoFileClip object
video_clip = VideoFileClip(video_path)
# Extract the audio from the video
audio_clip = video_clip.audio
# Define the output audio file path
output_audio_path = "path/to/output/audio/file.mp3"
# Write the audio to the output file
audio_clip.write_audiofile(output_audio_path)
# Close the clips
video_clip.close()
audio_clip.close()
print("Audio extracted successfully!")
https://t.iss.one/DataScienceT
NAUTILUS: boosting Bayesian importance nested sampling with deep learning
A novel approach to boost the efficiency of the importance nested sampling (INS) technique for Bayesian posterior and evidence estimation using deep learning.
Install:
pip install nautilus-sampler
import corner
import numpy as np
from nautilus import Prior, Sampler
from scipy.stats import multivariate_normal

# Define the prior over the three parameters 'a', 'b', 'c'
prior = Prior()
for key in 'abc':
    prior.add_parameter(key)

# Log-likelihood: a narrow multivariate Gaussian centred at (0.4, 0.5, 0.6)
def likelihood(param_dict):
    x = [param_dict[key] for key in 'abc']
    return multivariate_normal.logpdf(x, mean=[0.4, 0.5, 0.6], cov=0.01)

# Run the sampler and plot the weighted posterior samples
sampler = Sampler(prior, likelihood)
sampler.run(verbose=True)
points, log_w, log_l = sampler.posterior()
corner.corner(points, weights=np.exp(log_w), labels='abc')
Github: https://github.com/johannesulf/nautilus
Docs: https://nautilus-sampler.readthedocs.io/
Paper: https://arxiv.org/abs/2306.16923v1
https://t.iss.one/DataScienceT
GlOttal-flow LPC Filter (GOLF)
A DDSP-based neural vocoder.
Github: https://github.com/yoyololicon/golf
Paper: https://arxiv.org/abs/2306.17252v1
Demo: https://yoyololicon.github.io/golf-demo/
https://t.iss.one/DataScienceT
SAM-PT: Segment Anything + Tracking
SAM-PT is the first method to utilize sparse point propagation for Video Object Segmentation (VOS).
Review: https://t.ly/QLMG
Paper: arxiv.org/pdf/2307.01197.pdf
Project: www.vis.xyz/pub/sam-pt/
Code: github.com/SysCV/sam-pt
https://t.iss.one/DataScienceT
The Drunkard's Odometry: Estimating Camera Motion in Deforming Scenes
Github: https://github.com/UZ-SLAMLab/DrunkardsOdometry
Paper: https://arxiv.org/pdf/2306.16917v1.pdf
Dataset: https://paperswithcode.com/dataset/drunkard-s-dataset
https://t.iss.one/DataScienceT
Making a web app generator with open ML models
Github: https://github.com/huggingface/blog/blob/main/text-to-webapp.md
Hugging Face: https://huggingface.co/blog/text-to-webapp
Demo: https://huggingface.co/spaces/jbilcke-hf/webapp-factory-wizardcoder
https://t.iss.one/DataScienceT
Filtered-Guided Diffusion
Github: https://github.com/jaclyngu/filteredguideddiffusion
Paper: https://arxiv.org/pdf/2306.17141v1.pdf
Dataset: https://paperswithcode.com/dataset/afhq
https://t.iss.one/DataScienceT
DISCO: Human Dance Generation
NTU (+ #Microsoft) unveils DISCO: a big step towards Human Dance Generation.
Review: https://t.ly/cNGX
Paper: arxiv.org/pdf/2307.00040.pdf
Project: disco-dance.github.io/
Code: github.com/Wangt-CN/DisCo
https://t.iss.one/DataScienceT
Building an Image Recognition API using Flask.
Step 1: Set up the project environment
1. Create a new directory for your project and navigate to it.
2. Create a virtual environment (optional but recommended):
(Image 1.)
3. Install the necessary libraries (image 2; a setup sketch follows below).
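For reference, a minimal setup sketch for points 2 and 3; the package list (Flask plus TensorFlow, Pillow and NumPy for the model side) is an assumption, not the contents of the original images:
python -m venv venv
source venv/bin/activate   # on Windows: venv\Scripts\activate
pip install flask tensorflow pillow numpy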
Step 2: Create a Flask Web Application
Create a new file called app.py in the project directory (image 3; a minimal sketch follows below).
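Since the original code is in image 3, here is a minimal, hedged sketch of what such an app.py could look like. The pretrained model (MobileNetV2 on ImageNet), the upload field name "image" and the top-3 response format are assumptions, not the original implementation:

import io

import numpy as np
from flask import Flask, jsonify, request
from PIL import Image
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, decode_predictions, preprocess_input)

app = Flask(__name__)
# Load a pretrained classifier once at startup (assumption: MobileNetV2 / ImageNet)
model = MobileNetV2(weights="imagenet")

@app.route("/predict", methods=["POST"])
def predict():
    if "image" not in request.files:
        return jsonify({"error": "no image file provided"}), 400
    # Read the uploaded file and resize it to the model's 224x224 input size
    image = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    image = image.resize((224, 224))
    batch = preprocess_input(np.expand_dims(np.array(image, dtype=np.float32), axis=0))
    # Run inference and return the top-3 ImageNet labels with their scores
    predictions = decode_predictions(model.predict(batch), top=3)[0]
    return jsonify([{"label": label, "score": float(score)}
                    for _, label, score in predictions])

if __name__ == "__main__":
    app.run(debug=True)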
Step 3: Launch the Flask Application
Save the changes and run the Flask application (image 4.)
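With the sketch above, this amounts to running python app.py (or flask run); the development server then listens on http://127.0.0.1:5000 by default.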
Step 4: Test the API
Your API is now up and running and you can send images to /predict via HTTP POST requests.
You can use tools such as curl or Postman to test the API.
• An example of using curl (image 5.)
• An example using the Python requests library (image 6; a hedged version of both follows below)
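Since images 5 and 6 are not reproduced here, a hedged equivalent follows. With curl this would be something like curl -X POST -F "image=@cat.jpg" http://127.0.0.1:5000/predict, and with the Python requests library (the field name "image" and the file name are assumptions matching the sketch above):

import requests

# Post a local image file to the running API and print the JSON response
with open("cat.jpg", "rb") as f:
    response = requests.post("http://127.0.0.1:5000/predict", files={"image": f})
print(response.json())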
https://t.iss.one/DataScienceT
Hierarchical Open-vocabulary Universal Image Segmentation
Decoupled text-image fusion mechanism and representation learning modules for both "things" and "stuff".
Github: https://github.com/berkeley-hipie/hipie
Paper: https://arxiv.org/abs/2307.00764v1
Project: https://people.eecs.berkeley.edu/~xdwang/projects/HIPIE/
Dataset: https://paperswithcode.com/dataset/pascal-panoptic-parts
https://t.iss.one/DataScienceT
Foundation Model for Endoscopy Video Analysis
Github: https://github.com/med-air/endo-fm
Paper: https://arxiv.org/pdf/2306.16741v1.pdf
Dataset: https://paperswithcode.com/dataset/kumc
https://t.iss.one/DataScienceT
Some time ago we launched a special bot for downloading scientific, software, and mathematics books. The bot contains more than thirty million books, with new releases added first, and it also lets you download articles and scientific papers for free.
To request a subscription: talk to @Hussein_Sheikho
Making ML-powered web games with Transformers.js
The goal of this tutorial is to show you how easy it is to make your own ML-powered web game.
Github: https://github.com/xenova/doodle-dash
Hugging Face: https://huggingface.co/blog/ml-web-games
Code: https://github.com/xenova/doodle-dash
Demo: https://huggingface.co/spaces/Xenova/doodle-dash
Dataset: https://huggingface.co/datasets/Xenova/quickdraw-small
https://t.iss.one/DataScienceT
Focused Transformer: Contrastive Training for Context Scaling
LongLLaMA, a large language model capable of handling long contexts of 256k tokens or even more. A minimal loading sketch is given after the links below.
Github: https://github.com/cstankonrad/long_llama
Paper: https://arxiv.org/abs/2307.03170v1
Colab: https://colab.research.google.com/github/CStanKonrad/long_llama/blob/main/long_llama_colab.ipynb
Dataset: https://paperswithcode.com/dataset/pg-19
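For reference, a minimal loading sketch with Hugging Face transformers along the lines of the repository README; the checkpoint name syzymon/long_llama_3b is an assumption, so check the repo for the released model IDs:

import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

checkpoint = "syzymon/long_llama_3b"  # assumed model ID, see the repo README
tokenizer = LlamaTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, torch_dtype=torch.float32, trust_remote_code=True)

# Encode a prompt and generate a short continuation; long inputs work the same way
input_ids = tokenizer("My name is Julien and I like to", return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))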
https://t.iss.one/DataScienceT