🚶♂️ MotionGPT: Human Motion as Foreign Language
MotionGPT consists of a motion tokenizer that converts raw motion data into discrete motion tokens, plus a motion-aware language model, built on a pre-trained large language model, that learns to understand those motion tokens through their corresponding textual descriptions.
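For intuition, a toy sketch of the tokenize-then-language-model idea (made-up dimensions and token names, not the official MotionGPT API):

import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))        # 512 learned motion "words", 64-dim each

def tokenize_motion(frames):
    """Map each motion frame to the index of its nearest codebook entry (VQ)."""
    dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)              # one discrete token per frame

motion = rng.normal(size=(196, 64))          # e.g. 196 frames of pose features
motion_tokens = tokenize_motion(motion)
# The language model then consumes a mixed text/motion sequence such as:
prompt = "a person walks forward: " + " ".join(f"<motion_{t}>" for t in motion_tokens[:5])
print(prompt)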
⏩ Project: https://motion-gpt.github.io/
🖥 Github: https://github.com/openmotionlab/motiongpt
📕 Paper: https://arxiv.org/pdf/2306.14795.pdf
🔗Dataset: https://paperswithcode.com/dataset/amass
https://t.iss.one/DataScienceT
🖥 Free Courses on Large Language Models
▪ChatGPT Prompt Engineering for Developers
▪LangChain for LLM Application Development
▪Building Systems with the ChatGPT API
▪Google Cloud Generative AI Learning Path
▪Introduction to Large Language Models with Google Cloud
▪LLM University
▪Full Stack LLM Bootcamp
https://t.iss.one/DataScienceT
PANet: LiDAR Panoptic Segmentation with Sparse Instance Proposal and Aggregation
🖥 Github: https://github.com/jieqianyu/panet
⏩ Paper: https://arxiv.org/pdf/2306.15348v1.pdf
💨 Dataset: https://paperswithcode.com/dataset/kitti
https://t.iss.one/DataScienceT
💬 3D-Speaker: A Large-Scale Multi-Device, Multi-Distance, and Multi-Dialect Corpus for Speech Representation Disentanglement
A large-scale speech corpus to facilitate research on speech representation disentanglement.
🖥 Github: https://github.com/alibaba-damo-academy/3D-Speaker
📕 Paper: https://arxiv.org/abs/2306.15354v1
🔗Dataset: https://3dspeaker.github.io/
https://t.iss.one/DataScienceT
What the Brain Sees
How a text-to-image model generates images from brain scans
https://www.deeplearning.ai/the-batch/how-a-text-to-image-model-generates-images-from-brain-scans/
https://t.iss.one/DataScienceT
The source code for DragGAN has been released! 🔥🔥🔥
We can finally play with that marvel!
🔗 GitHub repository: https://github.com/XingangPan/DragGAN
https://t.iss.one/DataScienceT
📕 Constrained-Text-Generation-Studio
An AI writing assistant for recreational linguists, poets, creative writers, and researchers to use while studying the abilities of large-scale language models.
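For a concrete taste of constrained generation, here is a minimal sketch using the plain Hugging Face transformers API (not CTGS itself): ban every GPT-2 vocabulary token containing the letter "e", forcing a lipogram like those in the linked dataset.

from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# bad_words_ids: token sequences generate() is never allowed to emit
banned = [[i] for t, i in tok.get_vocab().items() if "e" in t.lower()]

inputs = tok("A story about a dog:", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30, bad_words_ids=banned,
                     do_sample=True, top_p=0.9, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))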
🖥 Github: https://github.com/hellisotherpeople/constrained-text-generation-studio
📕 Paper: https://arxiv.org/abs/2306.15926v1
🔗Dataset: https://huggingface.co/datasets/Hellisotherpeople/Lipogram-e
https://t.iss.one/DataScienceT
CellViT: Vision Transformers for Precise Cell Segmentation and Classification
🖥 Github: https://github.com/tio-ikim/cellvit
⏩ Paper: https://arxiv.org/pdf/2306.15350v1.pdf
💨 Dataset: https://paperswithcode.com/dataset/pannuke
https://t.iss.one/DataScienceT
A special channel for downloading the most important books for learning programming and data science
t.iss.one/DataScienceM
Data Science Machine Learning Data Analysis
This channel is for Programmers, Coders, Software Engineers.
1- Data Science
2- Machine Learning
3- Data Visualization
4- Artificial Intelligence
5- Data Analysis
6- Statistics
7- Deep Learning
Cross promotion and ads: @hussein_sheikho
💬 GLIGEN: Open-Set Grounded Text-to-Image Generation
GLIGEN’s zero-shot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin. Code coming soon.
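The grounding input pairs the caption with phrases and normalized bounding boxes; a rough illustration (the field names here are assumptions, not GLIGEN's exact API):

grounded_input = {
    "caption": "a teddy bear sitting next to a red bird",
    "phrases": ["a teddy bear", "a red bird"],
    "boxes": [
        [0.10, 0.30, 0.45, 0.90],   # teddy bear, left side of the image
        [0.55, 0.40, 0.85, 0.75],   # red bird, right side
    ],
}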
⭐️ Project: https://gligen.github.io/
⭐️ Demo: https://aka.ms/gligen
✅️ Paper: https://arxiv.org/abs/2301.07093
🖥 Github: https://github.com/gligen/GLIGEN
https://t.iss.one/DataScienceT
🧍♂ BEDLAM: Bodies Exhibiting Detailed Lifelike Animated Motion
BEDLAM is useful for a variety of tasks; all images, ground-truth bodies, 3D clothing, support code, and more are available for research purposes.
🖥 Github: https://github.com/pixelite1201/BEDLAM
📕 Paper: https://bedlam.is.tuebingen.mpg.de/media/upload/BEDLAM_CVPR2023.pdf
🔗Render code: https://github.com/PerceivingSystems/bedlam_render
🎞 Video: https://youtu.be/OBttHFwdtfI
👑 Dataset: https://paperswithcode.com/dataset/bedlam
https://t.iss.one/DataScienceT
⭐️ ManimML: Communicating Machine Learning Architectures with Animation
An open-source Python library for easily generating animations of ML algorithms directly from code.
from manim import Scene
from manim_ml.neural_network import NeuralNetwork, Convolutional2DLayer, FeedForwardLayer

# The snippet uses self.add/self.play, so it must live inside a manim Scene:
class NeuralNetworkScene(Scene):
    def construct(self):
        # Make the neural network
        nn = NeuralNetwork([
                Convolutional2DLayer(1, 7, filter_spacing=0.32),
                Convolutional2DLayer(3, 5, 3, filter_spacing=0.32, activation_function="ReLU"),
                FeedForwardLayer(3, activation_function="Sigmoid"),
            ],
            layer_spacing=0.25,
        )
        self.add(nn)
        # Play the forward-pass animation
        forward_pass = nn.make_forward_pass_animation()
        self.play(forward_pass)
🖥 Github: https://github.com/helblazer811/manimml
📕 Paper: https://arxiv.org/abs/2306.17108v1
📌 Project: https://www.manim.community/
https://t.iss.one/DataScienceT
🧬NeuralFuse
🖥 Github: https://github.com/ibm/neuralfuse
⏩ Paper: https://arxiv.org/pdf/2306.16869v1.pdf
💨 Dataset: https://paperswithcode.com/dataset/imagenet
https://t.iss.one/DataScienceT
🖥 10 Advanced Python Scripts For Everyday Programming
1. SpeedTest with Python
2. Search on Google
3. Make Web Bot
4. Fetch Song Lyrics
5. Get Exif Data of Photos
6. OCR Text from Image
7. Convert a Photo into a Cartoon
8. Empty Recycle Bin
9. Python Image Enhancement
10. Get Windows Version
1. SpeedTest with Python
# Method 1
# pip install speedtest-cli
import speedtest

speedTest = speedtest.Speedtest()
print(speedTest.get_best_server())
# Check download speed (bits per second)
print(speedTest.download())
# Check upload speed (bits per second)
print(speedTest.upload())

# Method 2
# pip install pyspeedtest
import pyspeedtest

st = pyspeedtest.SpeedTest()
st.ping()
st.download()
st.upload()
2. Search on Google
# pip install google
from googlesearch import search

query = "Medium.com"
for url in search(query):
    print(url)
3. Make Web Bot
# pip install selenium
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

bot = webdriver.Chrome()  # assumes chromedriver is on your PATH
bot.get('https://www.google.com')
search = bot.find_element(By.NAME, 'q')
search.send_keys("@codedev101")
search.send_keys(Keys.RETURN)
time.sleep(5)
bot.quit()
4. Fetch Song Lyrics
# pip install lyricsgenius
import lyricsgenius
api_key = "xxxxxxxxxxxxxxxxxxxxx"
genius = lyricsgenius.Genius(api_key)
artist = genius.search_artist("Pop Smoke", max_songs=5,sort="title")
song = artist.song("100k On a Coupe")
print(song.lyrics)
5. Get Exif Data of Photos
# Get Exif of Photo
# Method 1
# pip install pillow
import PIL.Image
import PIL.ExifTags

img = PIL.Image.open("Img.jpg")
exif_data = {
    PIL.ExifTags.TAGS[i]: j
    for i, j in img._getexif().items()
    if i in PIL.ExifTags.TAGS
}
print(exif_data)
# Method 2
# pip install ExifRead
import exifread

with open("Img.jpg", 'rb') as f:
    tags = exifread.process_file(f)
print(tags)
6. OCR Text from Image
# pip install pytesseract
import pytesseract
from PIL import Image
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
t=Image.open("img.png")
text = pytesseract.image_to_string(t, config='')
print(text)
7. Convert a Photo into a Cartoon
# pip install opencv-python
import cv2

img = cv2.imread('img.jpg')
grayimg = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
grayimg = cv2.medianBlur(grayimg, 5)
# Detect edges and invert them to get a line-drawing mask
edges = cv2.Laplacian(grayimg, cv2.CV_8U, ksize=5)
r, mask = cv2.threshold(edges, 100, 255, cv2.THRESH_BINARY_INV)
# Keep the original colors only where the mask is set
img2 = cv2.bitwise_and(img, img, mask=mask)
img2 = cv2.medianBlur(img2, 5)
cv2.imwrite("cartooned.jpg", img2)
8. Empty Recycle Bin
# pip install winshell
import winshell

try:
    winshell.recycle_bin().empty(confirm=False, show_progress=False, sound=True)
    print("Recycle bin is emptied now")
except Exception:
    print("Recycle bin is already empty")
9. Python Image Enhancement
# pip install pillow
from PIL import Image
from PIL import ImageEnhance

im = Image.open('img.jpg')
# Choose your filter: comment out the enhancers you don't want below
# (each assignment replaces the previous one, so only the last takes effect)
en = ImageEnhance.Color(im)
en = ImageEnhance.Contrast(im)
en = ImageEnhance.Brightness(im)
en = ImageEnhance.Sharpness(im)
# result
en.enhance(1.5).show("enhanced")
10. Get Windows Version
# Windows version
# pip install wmi
import wmi

data = wmi.WMI()
for os_name in data.Win32_OperatingSystem():
    print(os_name.Caption)  # e.g. "Microsoft Windows 11 Home"
https://t.iss.one/DataScienceT
📶 Extract Saved WiFi Passwords in Python
import os
from collections import namedtuple
import configparser

def get_linux_saved_wifi_passwords(verbose=1):
    """Extracts saved WiFi profiles (SSID and PSK) from NetworkManager on Linux."""
    network_connections_path = "/etc/NetworkManager/system-connections/"
    fields = ["ssid", "auth-alg", "key-mgmt", "psk"]
    Profile = namedtuple("Profile", [f.replace("-", "_") for f in fields])
    profiles = []
    for file in os.listdir(network_connections_path):
        data = {k.replace("-", "_"): None for k in fields}
        config = configparser.ConfigParser()
        config.read(os.path.join(network_connections_path, file))
        for _, section in config.items():
            for k, v in section.items():
                if k in fields:
                    data[k.replace("-", "_")] = v
        profile = Profile(**data)
        if verbose >= 1:
            print_linux_profile(profile)
        profiles.append(profile)
    return profiles

def print_linux_profile(profile):
    """Prints a single extracted profile on Linux."""
    print(f"{str(profile.ssid):25}{str(profile.auth_alg):5}{str(profile.key_mgmt):10}{str(profile.psk):50}")

def print_linux_profiles(verbose):
    """Prints all extracted SSIDs along with the key (PSK) on Linux"""
    print("SSID                     AUTH KEY-MGMT  PSK")
    print("-" * 50)
    get_linux_saved_wifi_passwords(verbose)
https://t.iss.one/DataScienceT
🖥 5 useful Python automation scripts
1. Download Youtube videos
2. Automate WhatsApp messages
3. Google search with Python
4. Download Instagram posts
5. Extract audio from video files
1. Download Youtube videos
pip install pytube
from pytube import YouTube
# Specify the URL of the YouTube video
video_url = "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
# Create a YouTube object
yt = YouTube(video_url)
# Select the highest resolution stream
stream = yt.streams.get_highest_resolution()
# Define the output path for the downloaded video
output_path = "path/to/output/directory/"
# Download the video
stream.download(output_path)
print("Video downloaded successfully!")
2. Automate WhatsApp messages
pip install pywhatkit
import pywhatkit
# Set the target phone number (with country code) and the message
phone_number = "+1234567890"
message = "Hello, this is an automated WhatsApp message!"
# Schedule the message to be sent at a specific time (24-hour format)
hour = 13
minute = 30
# Send the scheduled message
pywhatkit.sendwhatmsg(phone_number, message, hour, minute)
3. Google search with Python
pip install googlesearch-python
from googlesearch import search
# Define the query you want to search
query = "Python programming"
# Specify the number of search results you want to retrieve
num_results = 5
# Perform the search and retrieve the results
search_results = search(query, num_results=num_results, lang='en')
# Print the search results
for result in search_results:
    print(result)
4. Download Instagram posts
pip install instaloader
import instaloader
# Create an instance of Instaloader
loader = instaloader.Instaloader()
# Define the target Instagram profile
target_profile = "instagram"
# Download posts from the profile
loader.download_profile(target_profile, profile_pic=False, fast_update=True)
print("Posts downloaded successfully!")
5. Extract audio from video files
pip install moviepy
from moviepy.editor import VideoFileClip
# Define the path to the video file
video_path = "path/to/video/file.mp4"
# Create a VideoFileClip object
video_clip = VideoFileClip(video_path)
# Extract the audio from the video
audio_clip = video_clip.audio
# Define the output audio file path
output_audio_path = "path/to/output/audio/file.mp3"
# Write the audio to the output file
audio_clip.write_audiofile(output_audio_path)
# Close the clips
video_clip.close()
audio_clip.close()
print("Audio extracted successfully!")
https://t.iss.one/DataScienceT
🚀 NAUTILUS: boosting Bayesian importance nested sampling with deep learning
A novel approach to boost the efficiency of the importance nested sampling (INS) technique for Bayesian posterior and evidence estimation using deep learning.
Install:
pip install nautilus-sampler
import corner
import numpy as np
from nautilus import Prior, Sampler
from scipy.stats import multivariate_normal

prior = Prior()
for key in 'abc':
    prior.add_parameter(key)

def likelihood(param_dict):
    x = [param_dict[key] for key in 'abc']
    return multivariate_normal.logpdf(x, mean=[0.4, 0.5, 0.6], cov=0.01)

sampler = Sampler(prior, likelihood)
sampler.run(verbose=True)
points, log_w, log_l = sampler.posterior()
corner.corner(points, weights=np.exp(log_w), labels='abc')
🖥 Github: https://github.com/johannesulf/nautilus
⭐️ Docs: https://nautilus-sampler.readthedocs.io/
📕 Paper: https://arxiv.org/abs/2306.16923v1
https://t.iss.one/DataScienceT
🏌️ GlOttal-flow LPC Filter (GOLF)
A DDSP-based neural vocoder.
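For intuition about the name: a glottal-flow LPC vocoder is a source-filter model, a glottal excitation signal shaped by an all-pole (LPC) vocal-tract filter. Below is a toy numpy/scipy sketch of that classical idea (not the paper's model, where neural networks predict the source and filter):

import numpy as np
from scipy.signal import lfilter

sr, f0 = 16000, 120.0                         # sample rate, pitch
n = sr // 2                                   # half a second of audio

# Crude glottal source: an impulse train at the pitch period, lightly smoothed
source = np.zeros(n)
source[::int(sr / f0)] = 1.0
source = lfilter([1.0], [1.0, -0.95], source)

# All-pole (LPC) vocal-tract filter with poles near two formant frequencies
poles = []
for freq, r in [(500.0, 0.97), (1500.0, 0.95)]:
    w = 2 * np.pi * freq / sr
    poles += [r * np.exp(1j * w), r * np.exp(-1j * w)]
a = np.poly(poles).real                       # denominator coefficients, a[0] == 1
speech = lfilter([1.0], a, source)            # ≈ a buzzy, static vowel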
🖥 Github: https://github.com/yoyololicon/golf
📕 Paper: https://arxiv.org/abs/2306.17252v1
🔗Demo: https://yoyololicon.github.io/golf-demo/
https://t.iss.one/DataScienceT
🔮 SAM-PT: Segment Anything + Tracking 🔮
⭐️ SAM-PT is the first method to utilize sparse point propagation for Video Object Segmentation (VOS).
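In pseudocode, the loop is roughly this structural sketch (the two helpers are placeholders standing in for a long-term point tracker such as PIPS and for SAM's point-prompted predictor; this is not the repo's API):

import numpy as np

def track_points(frames, pts):
    # Placeholder: a real system uses a long-term point tracker (e.g. PIPS)
    return np.repeat(pts[None], len(frames), axis=0)

def segment_with_points(frame, pts):
    # Placeholder: a real system prompts SAM with these point coordinates
    return np.zeros(frame.shape[:2], dtype=bool)

def sam_pt(frames, query_points):
    """Propagate first-frame query points through the video, then prompt a
    segmenter with the propagated points to get one mask per frame."""
    tracks = track_points(frames, np.asarray(query_points))  # (T, K, 2)
    return [segment_with_points(f, p) for f, p in zip(frames, tracks)]

video = [np.zeros((240, 320, 3)) for _ in range(8)]
masks = sam_pt(video, [(100.0, 120.0), (150.0, 130.0)])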
🌐 Review https://t.ly/QLMG
🌐 Paper arxiv.org/pdf/2307.01197.pdf
🌐 Project www.vis.xyz/pub/sam-pt/
🌐 Code github.com/SysCV/sam-pt
https://t.iss.one/DataScienceT