Data Science | Machine Learning with Python for Researchers
ads: @HusseinSheikho

The Data Science and Python channel is for researchers and advanced programmers.

Buy ads: https://telega.io/c/dataScienceT
Hawk: Learning to Understand Open-World Video Anomalies

27 May 2024 · Jiaqi Tang, Hao Lu, Ruizheng Wu, Xiaogang Xu, Ke Ma, Cheng Fang, Bin Guo, Jiangbo Lu, Qifeng Chen, Ying-Cong Chen

Video Anomaly Detection (#VAD) systems can autonomously monitor and identify disturbances, reducing the need for manual labor and associated costs. However, current VAD systems are often limited by their superficial semantic understanding of scenes and minimal user interaction. Additionally, the prevalent data scarcity in existing datasets restricts their applicability in open-world scenarios. In this paper, we introduce Hawk, a novel framework that leverages interactive large Visual Language Models (#VLM) to interpret video anomalies precisely. Recognizing the difference in motion information between abnormal and normal videos, Hawk explicitly integrates motion modality to enhance anomaly identification. To reinforce motion attention, we construct an auxiliary consistency loss within the motion and video space, guiding the video branch to focus on the motion modality. Moreover, to improve the interpretation of motion-to-language, we establish a clear supervisory relationship between motion and its linguistic representation. Furthermore, we have annotated over 8,000 anomaly videos with language descriptions, enabling effective training across diverse open-world scenarios, and also created 8,000 question-answering pairs for users' open-world questions. The final results demonstrate that #Hawk achieves SOTA performance, surpassing existing baselines in both video description generation and question-answering. Our codes/dataset/demo will be released at https://github.com/jqtangust/hawk.
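
A rough sketch of the consistency idea (pooled embeddings and a cosine objective are illustrative assumptions here, not the paper's exact loss): pull the video branch's embedding toward the motion branch's, so video features attend to the motion modality.

import torch
import torch.nn.functional as F

def motion_consistency_loss(video_feats, motion_feats):
    # video_feats, motion_feats: (batch, dim) pooled branch embeddings
    v = F.normalize(video_feats, dim=-1)
    m = F.normalize(motion_feats, dim=-1)
    # maximizing cosine similarity = minimizing (1 - cos)
    return (1.0 - (v * m).sum(dim=-1)).mean()

video = torch.randn(4, 256)   # toy video-branch embedding
motion = torch.randn(4, 256)  # toy motion-branch embedding
print(motion_consistency_loss(video, motion).item())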


Paper: https://arxiv.org/pdf/2405.16886v1.pdf

Code: https://github.com/jqtangust/hawk

Dataset: Hawk Annotation Dataset

#DataScience #ArtificialIntelligence #MachineLearning #PythonProgramming #DeepLearning #LLM #AIResearch #BigData #NeuralNetworks #DataAnalytics #NLP #AutoML #DataVisualization #ScikitLearn #Pandas #NumPy #TensorFlow #AIethics #PredictiveModeling #GPUComputing #OpenSourceAI #DeepSeek #RAG #Agents

https://t.iss.one/DataScienceT
πŸ‘4
PhotoDoodle: Learning Artistic Image Editing from Few-Shot Pairwise Data

20 Feb 2025 · Shijie Huang, Yiren Song, Yuxuan Zhang, Hailong Guo, Xueyin Wang, Mike Zheng Shou, Jiaming Liu

We introduce PhotoDoodle, a novel image editing framework designed to facilitate photo doodling by enabling artists to overlay decorative elements onto photographs. Photo doodling is challenging because the inserted elements must appear seamlessly integrated with the background, requiring realistic blending, perspective alignment, and contextual coherence. Additionally, the background must be preserved without distortion, and the artist's unique style must be captured efficiently from limited training data. These requirements are not addressed by previous methods that primarily focus on global style transfer or regional inpainting. The proposed method, PhotoDoodle, employs a two-stage training strategy. Initially, we train a general-purpose image editing model, OmniEditor, using large-scale data. Subsequently, we fine-tune this model with EditLoRA using a small, artist-curated dataset of before-and-after image pairs to capture distinct editing styles and techniques. To enhance consistency in the generated results, we introduce a positional encoding reuse mechanism. Additionally, we release a PhotoDoodle dataset featuring six high-quality styles. Extensive experiments demonstrate the advanced performance and robustness of our method in customized image editing, opening new possibilities for artistic creation.


Paper: https://arxiv.org/pdf/2502.14397v1.pdf

Code: https://github.com/showlab/PhotoDoodle

#DataScience #ArtificialIntelligence #MachineLearning #PythonProgramming #DeepLearning #LLM #AIResearch #BigData #NeuralNetworks #DataAnalytics #NLP #AutoML #DataVisualization #ScikitLearn #Pandas #NumPy #TensorFlow #AIethics #PredictiveModeling #GPUComputing #OpenSourceAI #DeepSeek #RAG #Agents

https://t.iss.one/DataScienceT
πŸ‘5
Distill Any Depth: Distillation Creates a Stronger Monocular Depth Estimator

26 Feb 2025 · Xiankang He, Dongyan Guo, Hongji Li, Ruibo Li, Ying Cui, Chi Zhang

Monocular depth estimation (#MDE) aims to predict scene depth from a single RGB image and plays a crucial role in 3D scene understanding. Recent advances in zero-shot MDE leverage normalized depth representations and distillation-based learning to improve generalization across diverse scenes. However, current depth normalization methods for distillation, relying on global normalization, can amplify noisy pseudo-labels, reducing distillation effectiveness. In this paper, we systematically analyze the impact of different depth normalization strategies on pseudo-label distillation. Based on our findings, we propose Cross-Context Distillation, which integrates global and local depth cues to enhance pseudo-label quality. Additionally, we introduce a multi-teacher distillation framework that leverages complementary strengths of different depth estimation models, leading to more robust and accurate depth predictions. Extensive experiments on benchmark datasets demonstrate that our approach significantly outperforms state-of-the-art methods, both quantitatively and qualitatively.
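
A minimal sketch of the contrast the paper builds on: global normalization rescales a depth map with one set of statistics, while local normalization uses per-window statistics (the 32-pixel window and median/mean-absolute scaling below are illustrative assumptions, not the paper's exact scheme):

import torch

def global_norm(depth):
    # one translation/scale for the whole map
    t = depth.median()
    s = (depth - t).abs().mean()
    return (depth - t) / (s + 1e-6)

def local_norm(depth, win=32):
    # normalize each non-overlapping window with its own statistics
    out = torch.zeros_like(depth)
    H, W = depth.shape
    for i in range(0, H, win):
        for j in range(0, W, win):
            patch = depth[i:i+win, j:j+win]
            t = patch.median()
            s = (patch - t).abs().mean()
            out[i:i+win, j:j+win] = (patch - t) / (s + 1e-6)
    return out

d = torch.rand(64, 64)  # toy depth map
print(global_norm(d).mean().item(), local_norm(d).mean().item())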

Paper: https://arxiv.org/pdf/2502.19204v1.pdf

Code: https://github.com/Westlake-AGI-Lab/Distill-Any-Depth

Datasets: ScanNet - NYUv2 - ETH3D

Note: Ranked #1 on Depth Estimation on ScanNetV2

#DataScience #ArtificialIntelligence #MachineLearning #PythonProgramming #DeepLearning #LLM #AIResearch #BigData #NeuralNetworks #DataAnalytics #NLP #AutoML #DataVisualization #ScikitLearn #Pandas #NumPy #TensorFlow #AIethics #PredictiveModeling #GPUComputing #OpenSourceAI #DeepSeek #RAG #Agents

https://t.iss.one/DataScienceT
πŸ‘4
Escaping The Big Data Paradigm in Self-Supervised Representation Learning

25 Feb 2025 · Carlos Vélez García, Miguel Cazorla, Jorge Pomares

The reliance on large-scale datasets and extensive computational resources has become a major barrier to advancing representation learning in vision, especially in data-scarce domains. In this paper, we address the critical question: Can we escape the big data paradigm in self-supervised representation learning from images? We introduce #SCOTT (Sparse Convolutional Tokenizer for Transformers), a shallow tokenization architecture that is compatible with Masked Image Modeling (MIM) tasks. SCOTT injects convolutional inductive biases into Vision Transformers (ViTs), enhancing their efficacy in small-scale data regimes. Alongside, we propose to train on a Joint-Embedding Predictive Architecture within a MIM framework (MIM-JEPA), operating in latent representation space to capture more semantic features. Our approach enables ViTs to be trained from scratch on datasets orders of magnitude smaller than traditionally required -- without relying on massive external datasets for pretraining. We validate our method on three small-size, standard-resolution, fine-grained datasets: Oxford Flowers-102, Oxford IIIT Pets-37, and ImageNet-100. Despite the challenges of limited data and high intra-class similarity, frozen SCOTT models pretrained with MIM-JEPA significantly outperform fully supervised methods and achieve competitive results with SOTA approaches that rely on large-scale pretraining, complex image augmentations and bigger model sizes. By demonstrating that robust off-the-shelf representations can be learned with limited data, compute, and model sizes, our work paves the way for computer vision applications in resource-constrained environments such as medical imaging or robotics. Our findings challenge the prevailing notion that vast amounts of data are indispensable for effective representation learning in vision, offering a new pathway toward more accessible and inclusive advancements in the field.
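
SCOTT itself relies on sparse convolutions so that masked patches stay truly masked during MIM; purely as a plain-convolution sketch of the general idea of a shallow convolutional tokenizer feeding a ViT (all sizes hypothetical):

import torch
import torch.nn as nn

class ConvTokenizer(nn.Module):
    # shallow conv stem that turns an image into a token sequence
    def __init__(self, dim=192):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, dim // 2, 3, stride=2, padding=1),
            nn.GELU(),
            nn.Conv2d(dim // 2, dim, 3, stride=2, padding=1),
        )

    def forward(self, x):
        feats = self.stem(x)                     # (B, dim, H/4, W/4)
        return feats.flatten(2).transpose(1, 2)  # (B, N, dim) tokens

tok = ConvTokenizer()
print(tok(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 256, 192])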


Paper: https://arxiv.org/pdf/2502.18056v1.pdf

Code: https://github.com/inescopresearch/scott

Datasets: Oxford 102 Flower - Oxford-IIIT Pets - Imagenet100

#DataScience #ArtificialIntelligence #MachineLearning #PythonProgramming #DeepLearning #LLM #AIResearch #BigData #NeuralNetworks #DataAnalytics #NLP #AutoML #DataVisualization #ScikitLearn #Pandas #NumPy #TensorFlow #AIethics #PredictiveModeling #GPUComputing #OpenSourceAI #DeepSeek #RAG #Agents #GPT4

https://t.iss.one/DataScienceT
πŸ‘3
A-MEM: Agentic Memory for LLM Agents

17 Feb 2025 · Wujiang Xu, Zujie Liang, Kai Mei, Hang Gao, Juntao Tan, Yongfeng Zhang

While large language model (LLM) agents can effectively use external tools for complex real-world tasks, they require memory systems to leverage historical experiences. Current memory systems enable basic storage and retrieval but lack sophisticated memory organization, despite recent attempts to incorporate graph databases. Moreover, these systems' fixed operations and structures limit their adaptability across diverse tasks. To address this limitation, this paper proposes a novel agentic memory system for LLM agents that can dynamically organize memories in an agentic way. Following the basic principles of the Zettelkasten method, we designed our memory system to create interconnected knowledge networks through dynamic indexing and linking. When a new memory is added, we generate a comprehensive note containing multiple structured attributes, including contextual descriptions, keywords, and tags. The system then analyzes historical memories to identify relevant connections, establishing links where meaningful similarities exist. Additionally, this process enables memory evolution - as new memories are integrated, they can trigger updates to the contextual representations and attributes of existing historical memories, allowing the memory network to continuously refine its understanding. Our approach combines the structured organization principles of Zettelkasten with the flexibility of agent-driven decision making, allowing for more adaptive and context-aware memory management. Empirical experiments on six foundation models show clear improvements over existing SOTA baselines. The source code for evaluating performance is available at https://github.com/WujiangXu/AgenticMemory, while the source code of the agentic memory system is available at https://github.com/agiresearch/A-mem.
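
A toy sketch of the Zettelkasten-style note structure (the attributes are simplified, and keyword overlap stands in for the paper's LLM-driven semantic linking):

from dataclasses import dataclass, field

@dataclass
class MemoryNote:
    content: str
    keywords: set
    links: list = field(default_factory=list)

def add_note(notes, new):
    # link the new note to every existing note sharing a keyword
    for i, old in enumerate(notes):
        if new.keywords & old.keywords:
            new.links.append(i)
            old.links.append(len(notes))  # index the new note will get
    notes.append(new)

notes = []
add_note(notes, MemoryNote("used pandas to parse the csv logs", {"pandas", "csv"}))
add_note(notes, MemoryNote("csv schema changed in v2", {"csv", "schema"}))
print(notes[1].links)  # [0]: linked to the first note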


Paper: https://arxiv.org/pdf/2502.12110v3.pdf

Code: https://github.com/wujiangxu/agenticmemory

#DataScience #ArtificialIntelligence #MachineLearning #PythonProgramming #DeepLearning #LLM #AIResearch #BigData #NeuralNetworks #DataAnalytics #NLP #DataVisualization #ScikitLearn #Pandas #NumPy #TensorFlow #AIethics #PredictiveModeling #GPUComputing #OpenSourceAI #RAG #Agents #GPT4

https://t.iss.one/DataScienceT
πŸ‘3
Know You First and Be You Better: Modeling Human-Like User Simulators via Implicit Profiles

26 Feb 2025 · Kuang Wang, Xianfei Li, Shenghao Yang, Li Zhou, Feng Jiang, Haizhou Li

User simulators are crucial for replicating human interactions with dialogue systems, supporting both collaborative training and automatic evaluation, especially for large language models (LLMs). However, existing simulators often rely solely on text utterances, missing implicit user traits such as personality, speaking style, and goals. In contrast, persona-based methods lack generalizability, as they depend on predefined profiles of famous individuals or archetypes. To address these challenges, we propose User Simulator with implicit Profiles (#USP), a framework that infers implicit user profiles from human-machine conversations and uses them to generate more personalized and realistic dialogues. We first develop an LLM-driven extractor with a comprehensive profile schema. Then, we refine the simulation through conditional supervised fine-tuning and reinforcement learning with cycle consistency, optimizing it at both the utterance and conversation levels. Finally, we adopt a diverse profile sampler to capture the distribution of real-world user profiles. Experimental results demonstrate that USP outperforms strong baselines in terms of authenticity and diversity while achieving comparable performance in consistency. Furthermore, dynamic multi-turn evaluations based on USP strongly align with mainstream benchmarks, demonstrating its effectiveness in real-world applications.

Paper: https://arxiv.org/pdf/2502.18968v1.pdf

Code: https://github.com/wangkevin02/USP

Dataset: LMSYS-USP

#DataScience #ArtificialIntelligence #MachineLearning #PythonProgramming #DeepLearning #LLM #AIResearch #BigData #NeuralNetworks #DataAnalytics #NLP #AutoML #DataVisualization #ScikitLearn #Pandas #NumPy #TensorFlow #AIethics #PredictiveModeling #GPUComputing #OpenSourceAI #DeepSeek #RAG #Agents #GPT4

https://t.iss.one/DataScienceT
πŸ‘3
Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with Single-Stream Decoupled Speech Tokens

3 Mar 2025 · Xinsheng Wang, Mingqi Jiang, Ziyang Ma, Ziyu Zhang, Songxiang Liu, Linqin Li, Zheng Liang, Qixi Zheng, Rui Wang, Xiaoqin Feng, Weizhen Bian, Zhen Ye, Sitong Cheng, Ruibin Yuan, Zhixian Zhao, Xinfa Zhu, Jiahao Pan, Liumeng Xue, Pengcheng Zhu, Yunlin Chen, Zhifei Li, Xie Chen, Lei Xie, Yike Guo, Wei Xue


Recent advancements in large language models (LLMs) have driven significant progress in zero-shot text-to-speech (TTS) synthesis. However, existing foundation models rely on multi-stage processing or complex architectures for predicting multiple codebooks, limiting efficiency and integration flexibility. To overcome these challenges, we introduce Spark-TTS, a novel system powered by BiCodec, a single-stream speech codec that decomposes speech into two complementary token types: low-bitrate semantic tokens for linguistic content and fixed-length global tokens for speaker attributes. This disentangled representation, combined with the Qwen2.5 LLM and a chain-of-thought (CoT) generation approach, enables both coarse-grained control (e.g., gender, speaking style) and fine-grained adjustments (e.g., precise pitch values, speaking rate). To facilitate research in controllable TTS, we introduce VoxBox, a meticulously curated 100,000-hour dataset with comprehensive attribute annotations. Extensive experiments demonstrate that Spark-TTS not only achieves state-of-the-art zero-shot voice cloning but also generates highly customizable voices that surpass the limitations of reference-based synthesis. Source code, pre-trained models, and audio samples are available at https://github.com/SparkAudio/Spark-TTS.


Paper: https://arxiv.org/pdf/2503.01710v1.pdf

Code: https://github.com/sparkaudio/spark-tts

#DataScience #ArtificialIntelligence #MachineLearning #PythonProgramming #DeepLearning #LLM #AIResearch #BigData #NeuralNetworks #DataAnalytics #NLP #AutoML #DataVisualization #ScikitLearn #Pandas #NumPy #TensorFlow #AIethics #PredictiveModeling #GPUComputing #OpenSourceAI #DeepSeek #RAG #Agents #GPT4

https://t.iss.one/DataScienceT
πŸ‘6
Open Deep Search: Democratizing Search with Open-source Reasoning Agents

26 Mar 2025 · Salaheddin Alzubi, Creston Brooks, Purva Chiniya, Edoardo Contente, Chiara von Gerlach, Lucas Irwin, Yihan Jiang, Arda Kaz, Windsor Nguyen, Sewoong Oh, Himanshu Tyagi, Pramod Viswanath

We introduce Open Deep Search (ODS) to close the increasing gap between the proprietary search AI solutions, such as Perplexity's Sonar Reasoning Pro and OpenAI's GPT-4o Search Preview, and their open-source counterparts. The main innovation introduced in ODS is to augment the reasoning capabilities of the latest open-source LLMs with reasoning agents that can judiciously use web search tools to answer queries. Concretely, ODS consists of two components that work with a base LLM chosen by the user: Open Search Tool and Open Reasoning Agent. Open Reasoning Agent interprets the given task and completes it by orchestrating a sequence of actions that includes calling tools, one of which is the Open Search Tool. Open Search Tool is a novel web search tool that outperforms proprietary counterparts. Together with powerful open-source reasoning LLMs, such as DeepSeek-R1, ODS nearly matches and sometimes surpasses the existing state-of-the-art baselines on two benchmarks: SimpleQA and FRAMES. For example, on the FRAMES evaluation benchmark, ODS improves the best existing baseline of the recently released GPT-4o Search Preview by 9.7% in accuracy. ODS is a general framework for seamlessly augmenting any LLMs -- for example, DeepSeek-R1 that achieves 82.4% on SimpleQA and 30.1% on FRAMES -- with search and reasoning capabilities to achieve state-of-the-art performance: 88.3% on SimpleQA and 75.3% on FRAMES.
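
A toy skeleton of the two-component design, an agent orchestrating a search tool around a user-chosen base LLM (the function names and the SEARCH protocol are invented for illustration, not the actual ODS API):

def open_search_tool(query):
    # stand-in retriever; the real tool queries and reranks web results
    return [f"(stub) top snippet for: {query}"]

def reasoning_agent(question, llm, max_steps=3):
    # the base LLM decides whether to search or answer at each step
    context = []
    for _ in range(max_steps):
        action = llm(question, context)
        if action.startswith("SEARCH:"):
            context.extend(open_search_tool(action[len("SEARCH:"):].strip()))
        else:
            return action
    return llm(question, context)

def dummy_llm(question, context):
    # placeholder policy: search once, then answer from context
    if not context:
        return f"SEARCH: {question}"
    return f"Answer based on {context[0]}"

print(reasoning_agent("What is Open Deep Search?", dummy_llm))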


Paper: https://arxiv.org/pdf/2503.20201v1.pdf

Code: https://github.com/sentient-agi/opendeepsearch

#DataScience #ArtificialIntelligence #MachineLearning #PythonProgramming #DeepLearning #LLM #AIResearch #BigData #NeuralNetworks #DataAnalytics #NLP #AutoML #DataVisualization #ScikitLearn #Pandas #NumPy #TensorFlow #AIethics #PredictiveModeling #GPUComputing #OpenSourceAI #DeepSeek #RAG #Agents #GPT4

https://t.iss.one/DataScienceT
πŸ‘4
TripoSG: High-Fidelity 3D Shape Synthesis using Large-Scale Rectified Flow Models

10 Feb 2025 · Yangguang Li, Zi-Xin Zou, Zexiang Liu, Dehu Wang, Yuan Liang, Zhipeng Yu, Xingchao Liu, Yuan-Chen Guo, Ding Liang, Wanli Ouyang, Yan-Pei Cao

Recent advancements in diffusion techniques have propelled image and video generation to unprecedented levels of quality, significantly accelerating the deployment and application of generative AI. However, 3D shape generation technology has so far lagged behind, constrained by limitations in 3D data scale, complexity of 3D data processing, and insufficient exploration of advanced techniques in the 3D domain. Current approaches to 3D shape generation face substantial challenges in terms of output quality, generalization capability, and alignment with input conditions. We present TripoSG, a new streamlined shape diffusion paradigm capable of generating high-fidelity 3D meshes with precise correspondence to input images. Specifically, we propose: 1) A large-scale rectified flow transformer for 3D shape generation, achieving state-of-the-art fidelity through training on extensive, high-quality data. 2) A hybrid supervised training strategy combining SDF, normal, and eikonal losses for 3D VAE, achieving high-quality 3D reconstruction performance. 3) A data processing pipeline to generate 2 million high-quality 3D samples, highlighting the crucial rules for data quality and quantity in training 3D generative models. Through comprehensive experiments, we have validated the effectiveness of each component in our new framework. The seamless integration of these parts has enabled TripoSG to achieve state-of-the-art performance in 3D shape generation. The resulting 3D shapes exhibit enhanced detail due to high-resolution capabilities and demonstrate exceptional fidelity to input images. Moreover, TripoSG demonstrates improved versatility in generating 3D models from diverse image styles and contents, showcasing strong generalization capabilities. To foster progress and innovation in the field of 3D generation, we will make our model publicly available.
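
The rectified-flow objective behind point 1) is compact enough to sketch: pair clean data with Gaussian noise and regress the straight-line velocity between them (the tiny MLP and 8-dim latents below are hypothetical stand-ins for TripoSG's large transformer and shape latents):

import torch
import torch.nn as nn

def rectified_flow_loss(model, x1):
    # x1: clean latents; x0: noise; train the model to predict
    # the constant velocity v = x1 - x0 along the straight path
    x0 = torch.randn_like(x1)
    t = torch.rand(x1.size(0), 1)
    xt = (1 - t) * x0 + t * x1
    v_pred = model(xt, t)
    return ((v_pred - (x1 - x0)) ** 2).mean()

net = nn.Sequential(nn.Linear(9, 64), nn.SiLU(), nn.Linear(64, 8))
model = lambda xt, t: net(torch.cat([xt, t], dim=1))

x1 = torch.randn(16, 8)  # toy batch of "shape latents"
print(rectified_flow_loss(model, x1).item())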


Paper: https://arxiv.org/pdf/2502.06608v3.pdf

Code:
https://github.com/VAST-AI-Research/TripoSG
https://github.com/tencent/flashvdm

Dataset: 100poisonMpts

#DataScience #ArtificialIntelligence #MachineLearning #PythonProgramming #DeepLearning #LLM #AIResearch #BigData #NeuralNetworks #DataAnalytics #NLP #AutoML #DataVisualization #ScikitLearn #Pandas #NumPy #TensorFlow #AIethics #PredictiveModeling #GPUComputing #OpenSourceAI #DeepSeek #RAG #Agents #GPT4

https://t.iss.one/DataScienceT
πŸ‘3
📚 Become a professional data scientist with these 17 resources!

1️⃣ Python libraries for machine learning

◀️ Introducing the best Python tools and packages for building ML models.

➖➖➖

2️⃣ Deep Learning Interactive Book

◀️ Learn deep learning concepts by combining text, math, code, and images.

➖➖➖

3️⃣ Anthology of Data Science Learning Resources

◀️ The best courses, books, and tools for learning data science.

➖➖➖

4️⃣ Implementing algorithms from scratch

◀️ Coding popular ML algorithms from scratch.

➖➖➖

5️⃣ Machine Learning Interview Guide

◀️ Get fully prepared for job interviews.

➖➖➖

6️⃣ Real-world machine learning projects

◀️ Learning how to build and deploy models.

➖➖➖

7️⃣ Designing machine learning systems

◀️ How to design a scalable and stable ML system.

➖➖➖

8️⃣ Machine Learning Mathematics

◀️ Basic mathematical concepts necessary to understand machine learning.

➖➖➖

9️⃣ Introduction to Statistical Learning

◀️ Learn algorithms with practical examples.

➖➖➖

🔟 Machine learning with a probabilistic approach

◀️ Better understand modeling and uncertainty from a statistical perspective.

➖➖➖

1️⃣1️⃣ UBC Machine Learning

◀️ A deep understanding of machine learning concepts, taught by one of the leading professors in the field of ML.

➖➖➖

1️⃣2️⃣ Deep Learning with Andrew Ng

◀️ A strong start in the world of neural networks, CNNs, and RNNs.

➖➖➖

1️⃣3️⃣ Linear Algebra with 3Blue1Brown

◀️ Intuitive and visual teaching of linear algebra concepts.

➖➖➖

1️⃣4️⃣ Machine Learning Course

◀️ A combination of theory and practical training to strengthen ML skills.

➖➖➖

1️⃣5️⃣ Mathematical Optimization with Python

◀️ You will learn the basic concepts of optimization with Python code.

➖➖➖

1️⃣6️⃣ Explainable models in machine learning

◀️ Making complex models understandable.

➖➖➖

1️⃣7️⃣ Data Analysis with Python

◀️ Data analysis skills using the Pandas and NumPy libraries.


#DataScience #MachineLearning #DeepLearning #Python #AI #MLProjects #DataAnalysis #ExplainableAI #100DaysOfCode #TechEducation #MLInterviewPrep #NeuralNetworks #MathForML #Statistics #Coding #AIForEveryone #PythonForDataScience



⚡️ BEST DATA SCIENCE CHANNELS ON TELEGRAM 🌟
🚀 Master the Transformer Architecture with PyTorch! 🧠

Dive deep into the world of Transformers with this comprehensive PyTorch implementation guide. Whether you're a seasoned ML engineer or just starting out, this resource breaks down the complexities of the Transformer model, inspired by the groundbreaking paper "Attention Is All You Need".

🔗 Check it out here:
https://www.k-a.in/pyt-transformer.html

This guide offers:

🌟 Detailed explanations of each component of the Transformer architecture.

🌟 Step-by-step code implementations in PyTorch.

🌟 Insights into the self-attention mechanism and positional encoding.

By following along, you'll gain a solid understanding of how Transformers work and how to implement them from scratch.
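
For a taste of the core computation the guide walks through, here is a minimal scaled dot-product attention in PyTorch (shapes are illustrative; the guide's own implementation may differ in detail):

import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, d_k)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(1, 2, 4, 8)
print(scaled_dot_product_attention(q, k, v).shape)  # (1, 2, 4, 8)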

#MachineLearning #DeepLearning #PyTorch #Transformer #AI #NLP #AttentionIsAllYouNeed #Coding #DataScience #NeuralNetworks

💯 BEST DATA SCIENCE CHANNELS ON TELEGRAM 🌟

🧠💻📊
🤖🧠 Master Machine Learning: Explore the Ultimate "Machine-Learning-Tutorials" Repository

🗓️ 23 Oct 2025
📚 AI News & Trends

In today's data-driven world, Machine Learning (ML) has become the cornerstone of modern technology, from intelligent chatbots to predictive analytics and recommendation systems. However, mastering ML isn't just about coding; it requires a structured understanding of algorithms, statistics, optimization techniques and real-world problem-solving. That's where Ujjwal Karn's Machine-Learning-Tutorials GitHub repository stands out. This open-source, topic-wise ...

#MachineLearning #MLTutorials #ArtificialIntelligence #DataScience #OpenSource #AIEducation
🤖🧠 PandasAI: Transforming Data Analysis with Conversational Artificial Intelligence

🗓️ 28 Oct 2025
📚 AI News & Trends

In a world dominated by data, the ability to analyze and interpret information efficiently has become a core competitive advantage. From business intelligence dashboards to large-scale machine learning models, data-driven decision-making fuels innovation across industries. Yet, for most people, data analysis remains a technical challenge requiring coding expertise, statistical knowledge and familiarity with libraries like ...

#PandasAI #ConversationalAI #DataAnalysis #ArtificialIntelligence #DataScience #MachineLearning
🤖🧠 Microsoft Data Formulator: Revolutionizing AI-Powered Data Visualization

🗓️ 28 Oct 2025
📚 AI News & Trends

In today's data-driven world, visualization is everything. Whether you're a business analyst, data scientist or researcher, the ability to convert raw data into meaningful visuals can define the success of your decisions. That's where Microsoft's Data Formulator steps in: a cutting-edge, open-source platform designed to empower analysts to create rich, AI-assisted visualizations effortlessly. Developed by ...

#Microsoft #DataVisualization #AI #DataScience #OpenSource #Analytics
🤖🧠 MLOps Basics: A Complete Guide to Building, Deploying and Monitoring Machine Learning Models

🗓️ 30 Oct 2025
📚 AI News & Trends

Machine Learning models are powerful, but building them is only half the story. The true challenge lies in deploying, scaling and maintaining these models in production environments – a process that requires collaboration between data scientists, developers and operations teams. This is where MLOps (Machine Learning Operations) comes in. MLOps combines the principles of DevOps ...

#MLOps #MachineLearning #DevOps #ModelDeployment #DataScience #ProductionAI
Top 100 Data Analyst Interview Questions & Answers

#DataAnalysis #InterviewQuestions #SQL #Python #Statistics #CaseStudy #DataScience

Part 1: SQL Questions (Q1-30)

#1. What is the difference between DELETE, TRUNCATE, and DROP?
A:
• DELETE is a DML command that removes rows from a table based on a WHERE clause. It is slower as it logs each row deletion and can be rolled back.
• TRUNCATE is a DDL command that quickly removes all rows from a table. It is faster, cannot be rolled back, and resets table identity.
• DROP is a DDL command that removes the entire table, including its structure, data, and indexes.
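
Example (illustrative, assuming an employees table):

-- DELETE: remove only matching rows (transaction-safe)
DELETE FROM employees WHERE department = 'Sales';

-- TRUNCATE: remove all rows, keep the table structure
TRUNCATE TABLE employees;

-- DROP: remove the table and its structure entirely
DROP TABLE employees;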

#2. Select all unique departments from the employees table.
A: Use the DISTINCT keyword.

SELECT DISTINCT department
FROM employees;


#3. Find the top 5 highest-paid employees.
A: Use ORDER BY and LIMIT.

SELECT name, salary
FROM employees
ORDER BY salary DESC
LIMIT 5;


#4. What is the difference between WHERE and HAVING?
A:
• WHERE is used to filter records before any groupings are made (i.e., it operates on individual rows).
• HAVING is used to filter groups after aggregations (GROUP BY) have been performed.

-- Find departments with more than 10 employees
SELECT department, COUNT(employee_id)
FROM employees
GROUP BY department
HAVING COUNT(employee_id) > 10;


#5. What are the different types of SQL joins?
A:
• (INNER) JOIN: Returns records that have matching values in both tables.
• LEFT (OUTER) JOIN: Returns all records from the left table, and the matched records from the right table.
• RIGHT (OUTER) JOIN: Returns all records from the right table, and the matched records from the left table.
• FULL (OUTER) JOIN: Returns all records when there is a match in either the left or right table.
• SELF JOIN: A regular join, but the table is joined with itself.
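
Example (assuming employees and departments tables; the LEFT JOIN keeps employees that have no department match):

SELECT e.name, d.department_name
FROM employees e
LEFT JOIN departments d
  ON e.department_id = d.id;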

#6. Write a query to find the second-highest salary.
A: Use OFFSET or a subquery.

-- Method 1: Using OFFSET
SELECT salary
FROM employees
ORDER BY salary DESC
LIMIT 1 OFFSET 1;

-- Method 2: Using a Subquery
SELECT MAX(salary)
FROM employees
WHERE salary < (SELECT MAX(salary) FROM employees);


#7. Find duplicate emails in a customers table.
A: Group by the email column and use HAVING to find groups with a count greater than 1.

SELECT email, COUNT(email)
FROM customers
GROUP BY email
HAVING COUNT(email) > 1;


#8. What is a primary key vs. a foreign key?
A:
• A Primary Key is a constraint that uniquely identifies each record in a table. It must contain unique values and cannot contain NULL values.
• A Foreign Key is a key used to link two tables together. It is a field (or collection of fields) in one table that refers to the Primary Key in another table.
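
Example (illustrative schema):

CREATE TABLE customers (
    id INT PRIMARY KEY,
    email VARCHAR(255) NOT NULL
);

CREATE TABLE orders (
    id INT PRIMARY KEY,
    customer_id INT,
    FOREIGN KEY (customer_id) REFERENCES customers(id)
);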

#9. Explain Window Functions. Give an example.
A: Window functions perform a calculation across a set of table rows that are somehow related to the current row. Unlike aggregate functions, they do not collapse rows.

-- Rank employees by salary within each department
SELECT
    name,
    department,
    salary,
    RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS dept_rank
FROM employees;


#10. What is a CTE (Common Table Expression)?
A: A CTE is a temporary, named result set that you can reference within a SELECT, INSERT, UPDATE, or DELETE statement. It helps improve readability and break down complex queries.
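
Example (assuming an employees table):

-- Departments whose average salary exceeds the company-wide average
WITH dept_avg AS (
    SELECT department, AVG(salary) AS avg_salary
    FROM employees
    GROUP BY department
)
SELECT department, avg_salary
FROM dept_avg
WHERE avg_salary > (SELECT AVG(salary) FROM employees);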
✨DeepAnalyze: Agentic Large Language Models for Autonomous Data Science

📝 Summary:
DeepAnalyze-8B is an agentic LLM that autonomously completes the entire data science pipeline, from raw data to research reports. It employs curriculum-based training and data-grounded trajectory synthesis, outperforming larger, workflow-based agents. This open-source model advances autonomous data science.

🔹 Publication Date: Published on Oct 19

🔹 Paper Links:
• Overview: https://arxivexplained.com/papers/deepanalyze-agentic-large-language-models-for-autonomous-data-science
• PDF: https://arxiv.org/pdf/2510.16872
• Project Page: https://ruc-deepanalyze.github.io/
• Github: https://github.com/ruc-datalab/DeepAnalyze

🔹 Models citing this paper:
• https://huggingface.co/RUC-DataLab/DeepAnalyze-8B

✨ Datasets citing this paper:
• https://huggingface.co/datasets/RUC-DataLab/DataScience-Instruct-500K
• https://huggingface.co/datasets/fantos/DataScience-Instruct-500K

==================================

For more data science resources:
✓ https://t.iss.one/DataScienceT

#LLM #DataScience #AgenticAI #AutonomousAI #AI
✨MinerU: An Open-Source Solution for Precise Document Content Extraction

📝 Summary:
MinerU is an open-source tool that provides high-precision document content extraction. It uses fine-tuned models and pre/postprocessing rules to consistently achieve high performance across diverse document types.

🔹 Publication Date: Published on Sep 27, 2024

🔹 Paper Links:
• PDF: https://arxiv.org/pdf/2409.18839
• Demo (Hugging Face Space): https://huggingface.co/spaces/Echo9k/PDF_reader
• Github: https://github.com/opendatalab/MinerU

✨ Spaces citing this paper:
• https://huggingface.co/spaces/opendatalab/MinerU
• https://huggingface.co/spaces/xiaoye-winters/MinerU-API
• https://huggingface.co/spaces/ApeAITW/MinerU_2.5_Test

==================================

For more data science resources:
✓ https://t.iss.one/DataScienceT

#DocumentExtraction #OpenSource #DataScience #NLP #AI
✨TabDSR: Decompose, Sanitize, and Reason for Complex Numerical Reasoning in Tabular Data

📝 Summary:
TabDSR improves LLM performance on complex tabular numerical reasoning by decomposing queries, sanitizing tables, and using program-of-thoughts reasoning. It achieves state-of-the-art accuracy, consistently outperforming existing methods.

🔹 Publication Date: Published on Nov 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.02219
• PDF: https://arxiv.org/pdf/2511.02219

==================================

For more data science resources:
✓ https://t.iss.one/DataScienceT

#LLM #TabularData #NumericalReasoning #DataScience #AI
✨TabTune: A Unified Library for Inference and Fine-Tuning Tabular Foundation Models

📝 Summary:
TabTune is a unified library that standardizes the workflow for tabular foundation models. It provides consistent access to state-of-the-art models, diverse adaptation strategies, and integrated evaluation for performance, calibration, and fairness.

🔹 Publication Date: Published on Nov 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.02802
• PDF: https://arxiv.org/pdf/2511.02802
• Github: https://github.com/Lexsi-Labs/TabTune

==================================

For more data science resources:
✓ https://t.iss.one/DataScienceT

#TabularData #FoundationModels #MachineLearning #DataScience #AIResearch