Data Engineers
Free Data Engineering Ebooks & Courses
20 recently asked Kafka interview questions:

- How do you create a topic in Kafka using the Confluent CLI?
- Explain the role of the Schema Registry in Kafka.
- How do you register a new schema in the Schema Registry?
- What is the importance of key-value messages in Kafka?
- Describe a scenario where using a random key for messages is beneficial.
- Provide an example where using a constant key for messages is necessary.
- Write a simple Kafka producer code that sends JSON messages to a topic.
- How do you serialize a custom object before sending it to a Kafka topic?
- Describe how you can handle serialization errors in Kafka producers.
- Write a Kafka consumer code that reads messages from a topic and deserializes them from JSON.
- How do you handle deserialization errors in Kafka consumers?
- Explain the process of deserializing messages into custom objects.
- What is a consumer group in Kafka, and why is it important?
- Describe a scenario where multiple consumer groups are used for a single topic.
- How does Kafka ensure load balancing among consumers in a group?
- How do you send JSON data to a Kafka topic and ensure it is properly serialized?
- Describe the process of consuming JSON data from a Kafka topic and converting it to a usable format.
- Explain how you can work with CSV data in Kafka, including serialization and deserialization.
- Write a Kafka producer code snippet that sends CSV data to a topic.
- Write a Kafka consumer code snippet that reads and processes CSV data from a topic.
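
To make the producer/consumer questions concrete, here is a minimal sketch assuming the kafka-python client, a broker on localhost:9092, and a made-up topic called "orders":

import json
from kafka import KafkaConsumer, KafkaProducer

# Producer: serialize dict values to JSON bytes before sending
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    key_serializer=lambda k: k.encode("utf-8") if k else None,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", key="customer-42", value={"order_id": 1, "amount": 99.5})
producer.flush()

# Consumer: deserialize JSON bytes back into dicts inside a consumer group
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="order-readers",
    auto_offset_reset="earliest",
    key_deserializer=lambda k: k.decode("utf-8") if k else None,
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)
for message in consumer:
    print(message.key, message.value)

Serialization and deserialization errors can be handled by wrapping json.dumps/json.loads in try/except and routing bad records to a dead-letter topic.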

Here, you can find Data Engineering Resources 👇
https://whatsapp.com/channel/0029Vaovs0ZKbYMKXvKRYi3C

All the best 👍👍
๐Ÿ‘2
ETL vs ELT
โค11๐Ÿ‘5
๐— ๐—ฎ๐˜€๐˜๐—ฒ๐—ฟ ๐—ฆ๐—ผ๐—ณ๐˜ ๐—ฆ๐—ธ๐—ถ๐—น๐—น๐˜€ ๐—ณ๐—ผ๐—ฟ ๐—–๐—ฎ๐—ฟ๐—ฒ๐—ฒ๐—ฟ ๐—ฆ๐˜‚๐—ฐ๐—ฐ๐—ฒ๐˜€๐˜€!๐Ÿ˜

Want to stand out in your career?

Soft skills are just as important as technical expertise! 🌟

Here are 3 FREE courses to help you communicate, negotiate, and present with confidence

๐‹๐ข๐ง๐ค๐Ÿ‘‡:-

https://pdlink.in/41V1Yqi

Tag someone who needs this boost! 🚀
๐Ÿ‘1
SQL Interview Questions & Answers 💥
๐Ÿ‘4
๐— ๐—ฎ๐˜€๐˜๐—ฒ๐—ฟ ๐—ฆ๐—ค๐—Ÿ ๐—ณ๐—ผ๐—ฟ ๐——๐—ฎ๐˜๐—ฎ ๐—”๐—ป๐—ฎ๐—น๐˜†๐˜๐—ถ๐—ฐ๐˜€ ๐—ถ๐—ป ๐—๐˜‚๐˜€๐˜ ๐Ÿญ๐Ÿฐ ๐——๐—ฎ๐˜†๐˜€!๐Ÿ˜

Want to become a SQL pro in just 2 weeks?

SQL is a must-have skill for data analysts! 🎯

This step-by-step roadmap will take you from beginner to advanced.

๐‹๐ข๐ง๐ค๐Ÿ‘‡:-

https://pdlink.in/3XOlgwf

📌 Follow this roadmap, practice daily, and take your SQL skills to the next level!
Python for Data Engineering role 👇

1. List Comprehensions and Dict Comprehensions
↳ Optimize iteration with one-liners
↳ Fast filtering and transformations
↳ Single O(n) pass with less overhead than an explicit loop
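
A quick sketch of the idea (the records list is made up):

# Filter and transform in a single pass with a list comprehension
records = [{"id": 1, "amount": 120.0}, {"id": 2, "amount": -5.0}, {"id": 3, "amount": 48.5}]
valid_amounts = [r["amount"] for r in records if r["amount"] > 0]

# Build a lookup table with a dict comprehension
amount_by_id = {r["id"]: r["amount"] for r in records}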

2. Lambda Functions
↳ Anonymous functions for concise operations
↳ Used in map(), filter(), and sort()
↳ Key for functional programming

3. Functional Programming (map, filter, reduce)
↳ Apply transformations efficiently
↳ Reduce dataset size dynamically
↳ Avoid unnecessary loops
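
A tiny example covering the two points above (sample amounts are invented):

from functools import reduce

amounts = [120.0, -5.0, 48.5, 200.0]

# Keep positives, apply a 10% uplift with a lambda, then total the result
uplifted = map(lambda a: a * 1.10, filter(lambda a: a > 0, amounts))
total = reduce(lambda acc, a: acc + a, uplifted, 0.0)
print(total)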

โž Iterators and Generators
โ†ณ Efficient memory handling with yield
โ†ณ Streaming large datasets
โ†ณ Lazy evaluation for performance
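
A minimal generator sketch (the file name is hypothetical):

# Stream a large file lazily; only one line is held in memory at a time
def read_lines(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield line.rstrip("\n")

# Nothing is read until the generator is iterated (lazy evaluation)
for row in read_lines("events.log"):
    print(row)  # placeholder for your own processing step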

5. Error Handling with Try-Except
↳ Graceful failure handling
↳ Preventing crashes in pipelines
↳ Custom exception classes
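
For example (the validation rule is made up):

# Custom exception plus graceful handling so one bad record doesn't kill the pipeline
class RecordValidationError(Exception):
    pass

def parse_amount(raw):
    try:
        return float(raw)
    except ValueError as exc:
        raise RecordValidationError(f"bad amount: {raw!r}") from exc

for raw in ["10.5", "n/a", "7"]:
    try:
        print(parse_amount(raw))
    except RecordValidationError as err:
        print(f"skipping record: {err}")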

โž Regex for Data Cleaning
โ†ณ Extract structured data from unstructured text
โ†ณ Pattern matching for text processing
โ†ณ Optimized with re.compile()
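
A small illustration (the pattern and sample strings are invented):

import re

# Compile the pattern once, then reuse it for every record
ORDER_ID = re.compile(r"order[-_ ](\d+)", re.IGNORECASE)

for text in ["Shipped Order-1042 today", "no id in this line"]:
    match = ORDER_ID.search(text)
    print(match.group(1) if match else None)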

โž File Handling (CSV, JSON, Parquet)
โ†ณ Read and write structured data efficiently
โ†ณ pandas.read_csv(), json.load(), pyarrow
โ†ณ Handling large files in chunks
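
For instance, a chunked-read sketch with pandas (the file name and column are placeholders):

import pandas as pd

# Process a large CSV in fixed-size chunks instead of loading it all at once
total = 0.0
for chunk in pd.read_csv("sales.csv", chunksize=100_000):
    total += chunk["amount"].sum()
print(total)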

8. Handling Missing Data
↳ .fillna(), .dropna(), .interpolate()
↳ Imputing missing values
↳ Reducing nulls for better analytics
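
A short pandas sketch of those calls (toy data):

import pandas as pd

df = pd.DataFrame({"price": [10.0, None, 12.5, None], "qty": [1, 2, None, 4]})

df["qty"] = df["qty"].fillna(0)          # impute a constant
df["price"] = df["price"].interpolate()  # fill numeric gaps from neighbouring values
df = df.dropna()                         # drop anything still missing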

9. Pandas Operations
↳ DataFrame filtering and aggregations
↳ .groupby(), .pivot_table(), .merge()
↳ Handling large structured datasets
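
For example (toy DataFrames):

import pandas as pd

orders = pd.DataFrame({"customer_id": [1, 1, 2], "amount": [50, 70, 20]})
customers = pd.DataFrame({"customer_id": [1, 2], "region": ["EU", "US"]})

# Join the two tables, then aggregate revenue per region
merged = orders.merge(customers, on="customer_id", how="left")
revenue = merged.groupby("region", as_index=False)["amount"].sum()
print(revenue)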

10. SQL Queries in Python
↳ Using sqlalchemy and pandas.read_sql()
↳ Writing optimized queries
↳ Connecting to databases
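
A minimal sketch (the connection string and table are placeholders):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/sales")

query = "SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id"
df = pd.read_sql(query, engine)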

11. Working with APIs
↳ Fetching data with requests and httpx
↳ Handling rate limits and retries
↳ Parsing JSON/XML responses
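
A simple retry/backoff sketch with requests (the endpoint URL is a placeholder):

import time
import requests

def fetch_json(url, retries=3):
    for attempt in range(retries):
        response = requests.get(url, timeout=10)
        if response.status_code == 429:   # rate limited: back off and try again
            time.sleep(2 ** attempt)
            continue
        response.raise_for_status()
        return response.json()
    raise RuntimeError(f"gave up after {retries} attempts: {url}")

data = fetch_json("https://api.example.com/v1/orders")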

12. Cloud Data Handling (AWS S3, Google Cloud, Azure)
↳ Upload/download data from cloud storage
↳ boto3, gcsfs, azure-storage
↳ Handling large-scale data ingestion
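
For example, with boto3 (bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")

# Upload a local file, then download it back from S3
s3.upload_file("local/sales.parquet", "my-data-bucket", "raw/sales.parquet")
s3.download_file("my-data-bucket", "raw/sales.parquet", "local/sales_copy.parquet")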

๐“๐ก๐ž ๐›๐ž๐ฌ๐ญ ๐ฐ๐š๐ฒ ๐ญ๐จ ๐ฅ๐ž๐š๐ซ๐ง ๐๐ฒ๐ญ๐ก๐จ๐ง ๐ข๐ฌ ๐ง๐จ๐ญ ๐ฃ๐ฎ๐ฌ๐ญ ๐›๐ฒ ๐ฌ๐ญ๐ฎ๐๐ฒ๐ข๐ง๐ , ๐›๐ฎ๐ญ ๐›๐ฒ ๐ข๐ฆ๐ฉ๐ฅ๐ž๐ฆ๐ž๐ง๐ญ๐ข๐ง๐  ๐ข๐ญ

Join for more data engineering resources: https://t.iss.one/sql_engineer
๐Ÿ‘3
7 FREE Microsoft Certification Courses

Master Data Analytics in 2025!

These 7 FREE courses will help you master Power BI, Excel, SQL, and Data Fundamentals!
 
๐‹๐ข๐ง๐ค ๐Ÿ‘‡:-

https://pdlink.in/4iMlJXZ

Enroll For FREE & Get Certified 🎓
5 Frequently Asked SQL Interview Questions with Answers in Data Engineering Interviews
Difficulty: Medium

⚫️ Determine the Top 5 Products with the Highest Revenue in Each Category.
Schema: Products (ProductID, Name, CategoryID), Sales (SaleID, ProductID, Amount)

WITH ProductRevenue AS (
    SELECT p.ProductID,
           p.Name,
           p.CategoryID,
           SUM(s.Amount) AS TotalRevenue,
           RANK() OVER (PARTITION BY p.CategoryID ORDER BY SUM(s.Amount) DESC) AS RevenueRank
    FROM Products p
    JOIN Sales s ON p.ProductID = s.ProductID
    GROUP BY p.ProductID, p.Name, p.CategoryID
)
SELECT ProductID, Name, CategoryID, TotalRevenue
FROM ProductRevenue
WHERE RevenueRank <= 5;

⚫️ Identify Employees with Increasing Sales for Four Consecutive Quarters.
Schema: Sales (EmployeeID, SaleDate, Amount)

WITH QuarterlySales AS (
    SELECT EmployeeID,
           DATE_TRUNC('quarter', SaleDate) AS Quarter,
           SUM(Amount) AS QuarterlyAmount
    FROM Sales
    GROUP BY EmployeeID, DATE_TRUNC('quarter', SaleDate)
),
SalesTrend AS (
    SELECT EmployeeID,
           Quarter,
           QuarterlyAmount,
           LAG(QuarterlyAmount, 1) OVER (PARTITION BY EmployeeID ORDER BY Quarter) AS PrevQuarter1,
           LAG(QuarterlyAmount, 2) OVER (PARTITION BY EmployeeID ORDER BY Quarter) AS PrevQuarter2,
           LAG(QuarterlyAmount, 3) OVER (PARTITION BY EmployeeID ORDER BY Quarter) AS PrevQuarter3
    FROM QuarterlySales
)
SELECT EmployeeID, Quarter, QuarterlyAmount
FROM SalesTrend
WHERE QuarterlyAmount > PrevQuarter1
  AND PrevQuarter1 > PrevQuarter2
  AND PrevQuarter2 > PrevQuarter3;

⚫️ List Customers Who Made Purchases in Each of the Last Three Years.
Schema: Orders (OrderID, CustomerID, OrderDate)

WITH YearlyOrders AS (
    SELECT CustomerID,
           EXTRACT(YEAR FROM OrderDate) AS OrderYear
    FROM Orders
    GROUP BY CustomerID, EXTRACT(YEAR FROM OrderDate)
),
RecentYears AS (
    SELECT DISTINCT EXTRACT(YEAR FROM OrderDate) AS OrderYear
    FROM Orders
    WHERE OrderDate >= CURRENT_DATE - INTERVAL '3 years'
),
CustomerYearlyOrders AS (
    SELECT CustomerID,
           COUNT(DISTINCT OrderYear) AS YearCount
    FROM YearlyOrders
    WHERE OrderYear IN (SELECT OrderYear FROM RecentYears)
    GROUP BY CustomerID
)
SELECT CustomerID
FROM CustomerYearlyOrders
WHERE YearCount = 3;


⚫️ Find the Third Lowest Price for Each Product Category.
Schema: Products (ProductID, Name, CategoryID, Price)

WITH RankedPrices AS (
    SELECT CategoryID,
           Price,
           DENSE_RANK() OVER (PARTITION BY CategoryID ORDER BY Price ASC) AS PriceRank
    FROM Products
)
SELECT CategoryID, Price
FROM RankedPrices
WHERE PriceRank = 3;

⚫️ Identify Products with Total Sales Exceeding a Specified Threshold Over the Last 30 Days.
Schema: Sales (SaleID, ProductID, SaleDate, Amount)

WITH RecentSales AS (
    SELECT ProductID,
           SUM(Amount) AS TotalSales
    FROM Sales
    WHERE SaleDate >= CURRENT_DATE - INTERVAL '30 days'
    GROUP BY ProductID
)
SELECT ProductID, TotalSales
FROM RecentSales
WHERE TotalSales > 200;  -- 200 is the example threshold

Here you can find essential SQL Interview Resources 👇
https://whatsapp.com/channel/0029VanC5rODzgT6TiTGoa1v

Like this post if you need more 👍❤️

Hope it helps :)
Prepare for GATE: The Right Time is NOW!

GeeksforGeeks brings you everything you need to crack GATE 2026: 900+ live hours, 300+ recorded sessions, and expert mentorship to keep you on track.

What's inside?

✔ Live & recorded classes with India's top educators
✔ 200+ mock tests to track your progress
✔ Study materials - PYQs, workbooks, formula book & more
✔ 1:1 mentorship & AI doubt resolution for instant support
✔ Interview prep for IITs & PSUs to help you land opportunities

Learn from Experts Like:

Satish Kumar Yadav – Trained 20K+ students
Dr. Khaleel – Ph.D. in CS, 29+ years of experience
Chandan Jha – Ex-ISRO, AIR 23 in GATE
Vijay Kumar Agarwal – M.Tech (NIT), 13+ years of experience
Sakshi Singhal – IIT Roorkee, AIR 56 CSIR-NET
Shailendra Singh – GATE 99.24 percentile
Devasane Mallesham – IIT Bombay, 13+ years of experience

Use code UPSKILL30 to get an extra 30% OFF (Limited time only)

📌 Enroll for a free counseling session now:
https://gfgcdn.com/tu/UI2/
Important Data Engineering Concepts for Interviews

1. ETL Processes: Understand the ETL (Extract, Transform, Load) process, including how to design and implement efficient pipelines to move data from various sources to a data warehouse or data lake. Familiarize yourself with tools like Apache NiFi, Talend, and AWS Glue.

2. Data Warehousing: Know the fundamentals of data warehousing, including the star schema, snowflake schema, and how to design a data warehouse that supports efficient querying and reporting. Learn about popular data warehousing solutions like Amazon Redshift, Google BigQuery, and Snowflake.

3. Data Modeling: Master data modeling concepts, including normalization and denormalization, to design databases that are optimized for both read and write operations. Understand entity-relationship (ER) diagrams and how to use them to model data relationships.

4. Big Data Technologies: Gain expertise in big data frameworks like Apache Hadoop and Apache Spark for processing large datasets. Understand the roles of HDFS, MapReduce, Hive, and Pig in the Hadoop ecosystem, and how Spark's in-memory processing can accelerate data processing.

5. Data Lakes: Learn about data lakes as a storage solution for raw, unstructured, and semi-structured data. Understand the key differences between data lakes and data warehouses, and how to use tools like Apache Hudi and Delta Lake to manage data lakes efficiently.

6. SQL and NoSQL Databases: Be proficient in SQL for querying and managing relational databases like MySQL, PostgreSQL, and Oracle. Also, understand when and how to use NoSQL databases like MongoDB, Cassandra, and DynamoDB for storing and querying unstructured or semi-structured data.

7. Data Pipelines: Learn how to design, build, and manage data pipelines that automate the flow of data from source systems to target destinations. Familiarize yourself with orchestration tools like Apache Airflow, Luigi, and Prefect for managing complex workflows.
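
As a small illustration of orchestration, here is an Airflow 2.x-style sketch; the DAG id, schedule, and task functions are invented:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")

def load():
    print("write data to the warehouse")

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load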

8. APIs and Data Integration: Understand how to integrate data from various APIs and third-party services into your data pipelines. Learn about RESTful APIs, GraphQL, and how to handle data ingestion from external sources securely and efficiently.

9. Data Streaming: Gain knowledge of real-time data processing using streaming technologies like Apache Kafka, Apache Flink, and Amazon Kinesis. Learn how to build systems that can process and analyze data in real time as it flows through the system.

10. Cloud Platforms: Get familiar with cloud-based data engineering services offered by AWS, Azure, and Google Cloud. Understand how to use services like AWS S3, Azure Data Lake, Google Cloud Storage, AWS Redshift, and BigQuery for data storage, processing, and analysis.

11. Data Governance and Security: Learn best practices for data governance, including how to implement data quality checks, lineage tracking, and metadata management. Understand data security concepts like encryption, access control, and GDPR compliance to protect sensitive data.

12. Automation and Scripting: Be proficient in scripting languages like Python, Bash, or PowerShell to automate repetitive tasks, manage data pipelines, and perform ad-hoc data processing.

13. Data Versioning and Lineage: Understand the importance of data versioning and lineage for tracking changes to data over time. Learn how to use tools like Apache Atlas or DataHub for managing metadata and ensuring traceability in your data pipelines.

14. Containerization and Orchestration: Learn how to deploy and manage data engineering workloads using containerization tools like Docker and orchestration platforms like Kubernetes. Understand the benefits of using containers for scaling and maintaining consistency across environments.

15. Monitoring and Logging: Implement monitoring and logging for your data pipelines so failures and slowdowns are caught early. Familiarize yourself with tools like Prometheus and Grafana for real-time monitoring, alerting, and troubleshooting.
๐Ÿ‘3โค1
PySpark Interview Questions!


Interviewer: "How would you remove duplicates from a large dataset in PySpark?"

Candidate: "To remove duplicates from a large dataset in PySpark, I would follow these steps:

Step 1: Load the dataset into a DataFrame
df = spark.read.csv("path/to/data.csv", header=True, inferSchema=True)

Step 2: Check for duplicates
duplicate_count = df.count() - df.dropDuplicates().count()
print(f"Number of duplicates: {duplicate_count}")

Step 3: Partition the data to optimize performance
df_repartitioned = df.repartition(100)

Step 4: Remove duplicates using the dropDuplicates() method
df_no_duplicates = df_repartitioned.dropDuplicates()

Step 5: Cache the resulting DataFrame to avoid recomputing
df_no_duplicates.cache()

Step 6: Save the cleaned dataset
df_no_duplicates.write.csv("path/to/cleaned/data.csv", header=True)"

Interviewer: "That's correct! Can you explain why you partitioned the data in Step 3?"

Candidate: "Yes, partitioning the data helps to distribute the computation across multiple nodes, making the process more efficient and scalable."

Interviewer: "Great answer! Can you also explain why you cached the resulting DataFrame in Step 5?"

Candidate: "Caching the DataFrame avoids recomputing the entire dataset when saving the cleaned data, which can significantly improve performance."

Interviewer: "Excellent! You have demonstrated a clear understanding of optimizing duplicate removal in PySpark."
๐Ÿ‘2