Data Engineers
Free Data Engineering Ebooks & Courses
Planning for a Data Science or Data Engineering interview?

Focus on SQL & Python first. Here are some important questions you should know.

𝐈𝐦𝐩𝐨𝐫𝐭𝐚𝐧𝐭 𝐒𝐐𝐋 𝐪𝐮𝐞𝐬𝐭𝐢𝐨𝐧𝐬

1- Find the nth highest order amount/salary in a table (sketched below).
2- Given Table 1 and Table 2, work out the number of records each join type returns.
3- YoY and MoM growth questions.
4- Employee/manager hierarchy questions (self-joins), e.g. employees who earn more than their managers.
5- RANK and DENSE_RANK questions.
6- Medium-to-complex row-scanning questions using a CTE or recursive CTE, e.g. finding a missing number/item in a list.
7- Number of matches played by each team, or source-to-destination flight combinations, using CROSS JOIN.
8- Use window functions for advanced analytical tasks, such as calculating moving averages or detecting outliers.
9- Implement logic for hierarchical data, such as finding all descendants of a given node in a tree structure.
10- Identify and remove duplicate records from a table (also sketched below).
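
A minimal sqlite3 sketch of questions 1 and 10, assuming a made-up employees table and a reasonably recent SQLite (3.25+ for window functions); DENSE_RANK lets salary ties share a rank, and ROW_NUMBER flags the extra copies of a duplicate row:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER, name TEXT, salary INTEGER);
    INSERT INTO employees VALUES
        (1, 'Asha', 90000), (2, 'Ravi', 75000), (3, 'Meena', 90000),
        (4, 'Kiran', 60000), (5, 'Ravi', 75000);  -- row 5 duplicates row 2
""")

# Q1: nth highest salary (n = 2 here).
print(conn.execute("""
    SELECT DISTINCT salary FROM (
        SELECT salary, DENSE_RANK() OVER (ORDER BY salary DESC) AS rnk
        FROM employees)
    WHERE rnk = 2
""").fetchone())  # (75000,)

# Q10: rows with rn > 1 are the duplicates to delete.
print(conn.execute("""
    SELECT id FROM (
        SELECT id, ROW_NUMBER() OVER (PARTITION BY name, salary ORDER BY id) AS rn
        FROM employees)
    WHERE rn > 1
""").fetchall())  # [(5,)]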

𝐈𝐦𝐩𝐨𝐫𝐭𝐚𝐧𝐭 𝐏𝐲𝐭𝐡𝐨𝐧 𝐪𝐮𝐞𝐬𝐭𝐢𝐨𝐧𝐬

1- Reverse a string using extended slicing (sketched below).
2- Count the vowels in a given word (sketched below).
3- Find the most frequent words in a string and sort them by count.
4- Remove duplicates from a list.
5- Sort a list without using the built-in sort.
6- Find the pairs of numbers in a list whose sum is n (sketched below).
7- Find the max and min of a list without using built-in functions.
8- Compute the intersection of two lists without using built-in functions.
9- Make requests to a public API (e.g., a weather API) and process the JSON response.
10- Write a function that fetches data from a database table, transforms it, and writes the result back.
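
Minimal sketches for questions 1, 2, and 6 (the function names are mine):

def reverse_string(s):        # Q1: extended slice with step -1
    return s[::-1]

def count_vowels(word):       # Q2
    return sum(1 for ch in word.lower() if ch in "aeiou")

def pairs_with_sum(nums, n):  # Q6: one pass with a hash set
    seen, pairs = set(), []
    for x in nums:
        if n - x in seen:
            pairs.append((n - x, x))
        seen.add(x)
    return pairs

print(reverse_string("data"))              # atad
print(count_vowels("engineer"))            # 4
print(pairs_with_sum([1, 4, 5, 7, 2], 6))  # [(1, 5), (4, 2)]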

Python Interview Resources: https://whatsapp.com/channel/0029VaiM08SDuMRaGKd9Wv0L

Join for more: https://t.iss.one/datasciencefun

ENJOY LEARNING 👍👍
𝗝𝗣 𝗠𝗼𝗿𝗴𝗮𝗻 𝗙𝗥𝗘𝗘 𝗩𝗶𝗿𝘁𝘂𝗮𝗹 𝗜𝗻𝘁𝗲𝗿𝗻𝘀𝗵𝗶𝗽 𝗣𝗿𝗼𝗴𝗿𝗮𝗺𝘀😍

JPMorgan offers free virtual internships to help you develop industry-specific tech, finance, and research skills. 

- Software Engineering Internship
- Investment Banking Program
- Quantitative Research Internship
 
𝐋𝐢𝐧𝐤 👇:- 

https://pdlink.in/4gHGofl

Enroll For FREE & Get Certified 🎓
10 Data Engineering technologies whose architectures come up in interviews:

1. Hadoop
2. Hive
3. HBase
4. Kafka
5. Spark
6. Airflow
7. BigQuery
8. Snowflake
9. Databricks
10. MongoDB

Data Engineering Interview Preparation Resources: https://whatsapp.com/channel/0029VaiM08SDuMRaGKd9Wv0L

All the best 👍👍
𝗧𝗼𝗽 𝗠𝗡𝗖𝘀 𝗛𝗶𝗿𝗶𝗻𝗴 𝗗𝗮𝘁𝗮 𝗔𝗻𝗮𝗹𝘆𝘀𝘁𝘀 😍

Mercedes :- https://pdlink.in/3RPLXNM

TechM :- https://pdlink.in/4cws0oN

SE :- https://pdlink.in/42feu5D

Siemens :- https://pdlink.in/4jxhzDR

DXC :- https://pdlink.in/4ctIeis

EY :- https://pdlink.in/4lwMQZo

Apply before the link expires 💫
Complete topics & subtopics of #SQL for a Data Engineer role:

𝟭. 𝗕𝗮𝘀𝗶𝗰 𝗦𝗤𝗟 𝗦𝘆𝗻𝘁𝗮𝘅:
SQL keywords
Data types
Operators
SQL statements (SELECT, INSERT, UPDATE, DELETE)

𝟮. 𝗗𝗮𝘁𝗮 𝗗𝗲𝗳𝗶𝗻𝗶𝘁𝗶𝗼𝗻 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 (𝗗𝗗𝗟):
CREATE TABLE
ALTER TABLE
DROP TABLE
TRUNCATE TABLE

𝟯. 𝗗𝗮𝘁𝗮 𝗠𝗮𝗻𝗶𝗽𝘂𝗹𝗮𝘁𝗶𝗼𝗻 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 (𝗗𝗠𝗟):
SELECT statement (SELECT, FROM, WHERE, ORDER BY, GROUP BY, HAVING, JOINs)
INSERT statement
UPDATE statement
DELETE statement

𝟰. 𝗔𝗴𝗴𝗿𝗲𝗴𝗮𝘁𝗲 𝗙𝘂𝗻𝗰𝘁𝗶𝗼𝗻𝘀:
SUM, AVG, COUNT, MIN, MAX
GROUP BY clause
HAVING clause

𝟱. 𝗗𝗮𝘁𝗮 𝗖𝗼𝗻𝘀𝘁𝗿𝗮𝗶𝗻𝘁𝘀:
Primary Key
Foreign Key
Unique
NOT NULL
CHECK

𝟲. 𝗝𝗼𝗶𝗻𝘀:
INNER JOIN
LEFT JOIN
RIGHT JOIN
FULL OUTER JOIN
Self Join
Cross Join
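
A quick sqlite3 sketch showing how the join type changes the output row count (this is also SQL question 2 from the earlier post); the tables are toy data:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INTEGER);
    CREATE TABLE t2 (id INTEGER);
    INSERT INTO t1 VALUES (1), (1), (2), (3);
    INSERT INTO t2 VALUES (1), (1), (2), (4);
""")

for join in ("INNER JOIN", "LEFT JOIN", "CROSS JOIN"):
    on = "" if join == "CROSS JOIN" else "ON t1.id = t2.id"
    n = conn.execute(f"SELECT COUNT(*) FROM t1 {join} t2 {on}").fetchone()[0]
    print(join, "->", n)
# INNER -> 5 (2x2 matches on 1, plus one on 2), LEFT -> 6 (adds unmatched 3),
# CROSS -> 16 (4 x 4 cartesian product)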

𝟳. 𝗦𝘂𝗯𝗾𝘂𝗲𝗿𝗶𝗲𝘀:
Types of subqueries (scalar, column, row, table)
Nested subqueries
Correlated subqueries
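
A correlated subquery re-evaluates the inner query for every outer row. A small sketch with assumed data, finding employees paid above their own department's average:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (dept TEXT, name TEXT, salary INTEGER);
    INSERT INTO employees VALUES
        ('eng', 'Asha', 90000), ('eng', 'Ravi', 70000),
        ('ops', 'Meena', 55000), ('ops', 'Kiran', 45000);
""")

# The inner query references e.dept from the outer row - that reference is
# what makes the subquery correlated rather than a plain nested one.
print(conn.execute("""
    SELECT name, salary FROM employees e
    WHERE salary > (SELECT AVG(salary) FROM employees WHERE dept = e.dept)
""").fetchall())  # [('Asha', 90000), ('Meena', 55000)]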

𝟴. 𝗔𝗱𝘃𝗮𝗻𝗰𝗲𝗱 𝗦𝗤𝗟 𝗙𝘂𝗻𝗰𝘁𝗶𝗼𝗻𝘀:
String functions (CONCAT, LENGTH, SUBSTRING, REPLACE, UPPER, LOWER)
Date and time functions (DATE, TIME, TIMESTAMP, DATEPART, DATEADD)
Numeric functions (ROUND, CEILING, FLOOR, ABS, MOD)
Conditional functions (CASE, COALESCE, NULLIF)

𝟵. 𝗩𝗶𝗲𝘄𝘀:
Creating views
Modifying views
Dropping views

𝟭𝟬. 𝗜𝗻𝗱𝗲𝘅𝗲𝘀:
Creating indexes
Using indexes for query optimization

𝟭𝟭. 𝗧𝗿𝗮𝗻𝘀𝗮𝗰𝘁𝗶𝗼𝗻𝘀:
ACID properties
Transaction management (BEGIN, COMMIT, ROLLBACK, SAVEPOINT)
Transaction isolation levels
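
A small sketch of atomicity using sqlite3's connection-as-context-manager, which issues BEGIN/COMMIT (or ROLLBACK on error) for you:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
conn.commit()

try:
    with conn:  # transaction: both updates commit together, or neither does
        conn.execute("UPDATE accounts SET balance = balance - 40 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 40 WHERE id = 2")
except sqlite3.Error:
    print("rolled back, balances untouched")

print(conn.execute("SELECT * FROM accounts ORDER BY id").fetchall())
# [(1, 60), (2, 40)]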

𝟭𝟮. 𝗗𝗮𝘁𝗮 𝗜𝗻𝘁𝗲𝗴𝗿𝗶𝘁𝘆 𝗮𝗻𝗱 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆:
Data integrity constraints (referential integrity, entity integrity)
GRANT and REVOKE statements (granting and revoking permissions)
Database security best practices

𝟭𝟯. 𝗦𝘁𝗼𝗿𝗲𝗱 𝗣𝗿𝗼𝗰𝗲𝗱𝘂𝗿𝗲𝘀 𝗮𝗻𝗱 𝗙𝘂𝗻𝗰𝘁𝗶𝗼𝗻𝘀:
Creating stored procedures
Executing stored procedures
Creating functions
Using functions in queries

𝟭𝟰. 𝗣𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲 𝗢𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻:
Query optimization techniques (using indexes, optimizing joins, reducing subqueries)
Performance tuning best practices

𝟭𝟱. 𝗔𝗱𝘃𝗮𝗻𝗰𝗲𝗱 𝗦𝗤𝗟 𝗖𝗼𝗻𝗰𝗲𝗽𝘁𝘀:
Recursive queries
Pivot and unpivot operations
Window functions (ROW_NUMBER, RANK, DENSE_RANK, LEAD & LAG)
CTEs (Common Table Expressions)
Dynamic SQL
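
A recursive CTE sketch: generate the full 1..MAX range, then anti-join to expose the gaps (the "missing number" pattern from the interview questions post). Toy data again:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (n INTEGER);
    INSERT INTO items VALUES (1), (2), (4), (5), (8);
""")

print(conn.execute("""
    WITH RECURSIVE seq(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM seq WHERE n < (SELECT MAX(n) FROM items)
    )
    SELECT seq.n FROM seq
    LEFT JOIN items ON items.n = seq.n
    WHERE items.n IS NULL
""").fetchall())  # [(3,), (6,), (7,)]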

Here you can find quick SQL revision notes 👇
https://whatsapp.com/channel/0029Vaovs0ZKbYMKXvKRYi3C

Like for more

Hope it helps :)
20 𝐫𝐞𝐚𝐥-𝐭𝐢𝐦𝐞 𝐬𝐜𝐞𝐧𝐚𝐫𝐢𝐨-𝐛𝐚𝐬𝐞𝐝 𝐢𝐧𝐭𝐞𝐫𝐯𝐢𝐞𝐰 𝐪𝐮𝐞𝐬𝐭𝐢𝐨𝐧𝐬

Here are a few questions often asked in PySpark interviews to gauge whether candidates have hands-on experience.

𝐋𝐞𝐭'𝐬 𝐝𝐢𝐯𝐢𝐝𝐞 𝐭𝐡𝐞 𝐪𝐮𝐞𝐬𝐭𝐢𝐨𝐧𝐬 𝐢𝐧𝐭𝐨 4 𝐩𝐚𝐫𝐭𝐬

1. Data Processing and Transformation
2. Performance Tuning and Optimization
3. Data Pipeline Development
4. Debugging and Error Handling

𝐃𝐚𝐭𝐚 𝐏𝐫𝐨𝐜𝐞𝐬𝐬𝐢𝐧𝐠 𝐚𝐧𝐝 𝐓𝐫𝐚𝐧𝐬𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧:

1. Explain how you would handle large datasets in PySpark. How do you optimize a PySpark job for performance?
2. How would you join two large datasets (say 100GB each) in PySpark efficiently?
3. Given a dataset with millions of records, how would you identify and remove duplicate rows using PySpark?
4. You are given a DataFrame with nested JSON. How would you flatten the JSON structure in PySpark?
5. How do you handle missing or null values in a DataFrame? What strategies would you use in different scenarios?
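
A hedged sketch for questions 3 and 5, with made-up column names:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cleaning-sketch").getOrCreate()
df = spark.createDataFrame(
    [(1, "a", 10.0), (1, "a", 10.0), (2, "b", None)],
    ["id", "key", "amount"],
)

# Q3: dropDuplicates removes exact repeats on the listed columns; for
# "keep the latest" semantics you'd use a row_number() window instead.
deduped = df.dropDuplicates(["id", "key"])

# Q5: fill, drop, or impute nulls depending on what the column means.
deduped.fillna({"amount": 0.0}).show()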

𝐏𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐧𝐜𝐞 𝐓𝐮𝐧𝐢𝐧𝐠 𝐚𝐧𝐝 𝐎𝐩𝐭𝐢𝐦𝐢𝐳𝐚𝐭𝐢𝐨𝐧:

6. How do you debug and optimize PySpark jobs that are taking too long to complete?
7. Explain what a shuffle operation is in PySpark and how you can minimize its impact on performance.
8. Describe a situation where you had to handle data skew in PySpark. What steps did you take?
9. How do you handle and optimize PySpark jobs in a YARN cluster environment?
10. Explain the difference between repartition() and coalesce() in PySpark. When would you use each?
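
For question 10, a minimal sketch of the difference:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partitions-sketch").getOrCreate()
df = spark.range(1_000_000)

# repartition(n) does a full shuffle and can grow or shrink the partition
# count - useful to rebalance skew before a wide operation.
balanced = df.repartition(200)

# coalesce(n) only merges existing partitions (no shuffle), so it's the
# cheaper way to shrink, e.g. before writing a handful of output files.
compact = balanced.coalesce(8)

print(balanced.rdd.getNumPartitions(), compact.rdd.getNumPartitions())  # 200 8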

𝐃𝐚𝐭𝐚 𝐏𝐢𝐩𝐞𝐥𝐢𝐧𝐞 𝐃𝐞𝐯𝐞𝐥𝐨𝐩𝐦𝐞𝐧𝐭:

11. Describe how you would implement an ETL pipeline in PySpark for processing streaming data.
12. How do you ensure data consistency and fault tolerance in a PySpark job?
13. You need to aggregate data from multiple sources and save it as a partitioned Parquet file. How would you do this in PySpark?
14. How would you orchestrate and manage a complex PySpark job with multiple stages?
15. Explain how you would handle schema evolution in PySpark while reading and writing data.
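
A sketch for question 13; the paths and the order_date column are assumptions:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-sketch").getOrCreate()

orders = spark.read.parquet("s3://bucket/orders/")    # illustrative paths
refunds = spark.read.parquet("s3://bucket/refunds/")

# unionByName aligns columns by name; allowMissingColumns (Spark 3.1+)
# null-fills columns that exist in only one source.
combined = orders.unionByName(refunds, allowMissingColumns=True)

(combined.write
    .mode("overwrite")
    .partitionBy("order_date")   # assumed partition column
    .parquet("s3://bucket/curated/transactions/"))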

𝐃𝐞𝐛𝐮𝐠𝐠𝐢𝐧𝐠 𝐚𝐧𝐝 𝐄𝐫𝐫𝐨𝐫 𝐇𝐚𝐧𝐝𝐥𝐢𝐧𝐠:

16. Have you encountered out-of-memory errors in PySpark? How did you resolve them?
17. What steps would you take if a PySpark job fails midway through execution? How do you recover from it?
18. You encounter a Spark task that fails repeatedly due to data corruption in one of the partitions. How would you handle this?
19. Explain a situation where you used custom UDFs (User Defined Functions) in PySpark. What challenges did you face, and how did you overcome them?
20. Have you had to debug a PySpark job that was producing incorrect results?
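
For question 19, a minimal UDF sketch; the usual challenge is that Python UDFs serialize every row out of the JVM, so a built-in (here F.initcap) beats a UDF whenever one exists:

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf-sketch").getOrCreate()
df = spark.createDataFrame([("alice smith",), ("BOB JONES",)], ["name"])

@F.udf(returnType=StringType())
def title_case(s):  # toy UDF; F.initcap does the same without the overhead
    return s.title() if s is not None else None

df.withColumn("clean", title_case("name")).show()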

Here, you can find Data Engineering Resources 👇
https://whatsapp.com/channel/0029Vaovs0ZKbYMKXvKRYi3C

All the best 👍👍
Want to build your first AI agent?

Join a live hands-on session by GeeksforGeeks & Salesforce for working professionals

- Build with Agent Builder

- Assign real actions

- Get a free certificate of participation

Registration link 👇
https://gfgcdn.com/tu/V4t/
SQL Interview Questions & Answers 💥
Tips to become a Data Engineer 👇👇

1. Data Engineering Basics: At its core, it's about efficiently moving and reshaping data from one place/format to another.
2. Be Curious: The field is vast. Dive deep, ask questions, and always be in the mode of learning and experimenting.
3. Master Data: Understand the intricacies of data types, where they originate, and how they're structured.
4. Programming: Grasping a language is crucial. If you're unsure, start with Python – it's versatile and widely used in the industry.
5. SQL: A timeless tool for querying databases. Mastering SQL will empower you to work with data across various platforms.
6. Command Line: Familiarizing yourself with command line operations can save a lot of time, especially for quick and repetitive tasks.
7. Know Computers: A basic understanding of how computers communicate and process information can guide better data engineering decisions.
8. Personal Projects: Practical experience is invaluable. Start projects, learn from them, and showcase your work on platforms like GitHub.
9. APIs and JSON: Many modern data sources are API-based. Understanding how to extract and manipulate JSON data will be a daily task (see the sketch after this list).
10. Tools Mastery: Get proficient with your primary tools, but stay updated with emerging technologies and platforms.
11. Data Storage Basics: Know the difference and use-cases for Databases, Data Lakes, and Data Warehouses. Understand the distinction between OLTP (online transaction processing) and OLAP (online analytical processing).
12. Cloud Platforms: The cloud is the future. AWS, Azure, and GCP offer free tiers to start experimenting.
13. Business Acumen: A data engineer who understands business metrics and their implications can offer more value.
14. Data Grain: Dive deep into datasets to understand their finest level of detail. It aids in more precise querying and analytics.
15. Data Formats: Recognizing the main data formats (like JSON, XML, CSV, and database files such as SQLite) will help you navigate different datasets with ease.
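
A sketch for tip 9, using Open-Meteo, a keyless public weather API (endpoint and parameters per its docs; adapt to whatever API you pick):

import requests

resp = requests.get(
    "https://api.open-meteo.com/v1/forecast",
    params={"latitude": 28.6, "longitude": 77.2, "hourly": "temperature_2m"},
    timeout=10,
)
resp.raise_for_status()
payload = resp.json()

# The response holds parallel time/value arrays; zip them into records.
hourly = payload["hourly"]
records = list(zip(hourly["time"], hourly["temperature_2m"]))
print(records[:3])
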
Kavitha's Journey to become a Data Engineer 👇👇

1. Startup to Dream Job Journey:
- Started at a startup in India, transitioned to Infosys, then took an opportunity in the UK.
- Shifted from legacy Mainframe to AWS Cloud, pursued a Master's at Illinois State University, and secured her dream job at State Farm.
2. Learn Fundamentals:
- Assess skills, understand role.
- Gain proficiency in Python, SQL.
- Learn data technologies.
3. Database and Modeling Skills:
- Understand databases, gain proficiency.
- Learn data modeling principles.
4. Master ETL, Warehousing, and Visualization:
- Understand ETL, data warehousing.
- Gain experience in building warehouses.
- Familiarize with visualization tools.
- Got Certified as AWS Solutions Architect.
5. Utilize LinkedIn for Job Search:
- Network and connect with professionals.
- Showcase skills and achievements.
- Utilize the job search feature, leading to her dream job at State Farm.

Data Engineering Interview Preparation Resources: https://whatsapp.com/channel/0029Vaovs0ZKbYMKXvKRYi3C
Here's what the average data engineering interview looks like in 2025:

- 1 hour algorithms in Python
Here you will be asked irrelevant questions about dynamic programming, linked lists, and inverting trees

- 1 hour SQL
Here you will be asked niche questions about recursive CTEs that you've used once in your ten-year career

- 1 hour data architecture
Here you will be asked about CAP theorem, lambda vs kappa, and a bunch of other things that ChatGPT probably could answer in a heartbeat

- 1 hour behavioral
Here you will be asked about how to play nicely with your coworkers. This is the most relevant interview in my opinion

- 1 hour project deep dive
Here you will be asked to make up a story about something you did or did not do in the past that was a technical marvel

- 4 hour take home assignment
Here you will be asked to build their entire data engineering stack from scratch over a weekend, because why hire data engineers when you can subject them to tests?