Python Data Science Jobs & Interviews
Your go-to hub for Python and Data Science—featuring questions, answers, quizzes, and interview tips to sharpen your skills and boost your career in the data-driven world.

Admin: @Hussein_Sheikho
⁉️ Interview question
How does Python handle memory when processing large datasets using generators versus list comprehensions, and what are the implications for performance and garbage collection?

Simpson:
When you use a **list comprehension**, Python evaluates the entire expression immediately and stores all items in memory, which can lead to high memory usage and longer garbage collection cycles if the dataset is very large. In contrast, a **generator** produces values on the fly using lazy evaluation, meaning only one item is kept in memory at a time. This significantly reduces the memory footprint, but a generator can be consumed only once: iterating over the same data again means recreating the generator or recomputing the values. Additionally, because generators don't hold references to intermediate results, they allow earlier garbage collection of unused objects, improving overall memory efficiency. However, if you convert a generator to a list (e.g., via `list(generator)`), you lose the memory advantage. The key trade-off lies in **memory vs. speed**: lists offer fast repeated access, while generators favor memory conservation.
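
A minimal sketch of the difference (exact byte counts vary by Python version and platform):

import sys

squares_list = [n * n for n in range(1_000_000)]   # all one million ints live in memory now
squares_gen  = (n * n for n in range(1_000_000))   # nothing computed yet; values made on demand

print(sys.getsizeof(squares_list))  # on the order of megabytes
print(sys.getsizeof(squares_gen))   # roughly 100-200 bytes: just the generator object

print(sum(squares_gen))  # consumes the generator; a second pass would require recreating it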

#️⃣ tags: #Python #AdvancedPython #DataProcessing #MemoryManagement #Generators #ListComprehension #Performance #GarbageCollection #InterviewQuestion

By: t.iss.one/DataScienceQ 🚀
⁉️ Interview question
In Python, what happens when a class inherits from multiple classes that have a method with the same name, and how does the Method Resolution Order (MRO) determine which method gets called?

Simpson:
When a class inherits from multiple parent classes that define a method with the same name, Python uses the **Method Resolution Order (MRO)** to decide which method is invoked. The MRO follows the **C3 linearization algorithm**, which produces a consistent, deterministic order from the inheritance hierarchy. When you call the method, Python searches the classes in the sequence defined by the MRO, starting from the child class and proceeding left to right through the parents; unlike a plain depth-first search, C3 guarantees that every class appears before its own base classes, so a shared base in a diamond pattern is visited only once, after all of its subclasses. The first class in that sequence that defines the method wins, even if other parents also define it. The MRO can be inspected using `ClassName.mro()` or `help(ClassName)`. If the hierarchy cannot be linearized consistently, Python raises a `TypeError` at class-definition time rather than guessing, which makes understanding MRO crucial for complex inheritance scenarios.
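
A quick diamond-inheritance sketch illustrating the order:

class A:
    def greet(self): return "A"

class B(A):
    def greet(self): return "B"

class C(A):
    def greet(self): return "C"

class D(B, C):
    pass

print(D().greet())                        # "B": B precedes C in D's MRO
print([cls.__name__ for cls in D.mro()])  # ['D', 'B', 'C', 'A', 'object']

Note that A appears only once, after both B and C, even though both inherit from it.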

#️⃣ tags: #Python #AdvancedPython #Inheritance #MethodResolutionOrder #MRO #OOP #ObjectOrientedProgramming #InterviewQuestion

By: t.iss.one/DataScienceQ 🚀
⁉️ Interview question 
What happens when you perform arithmetic operations between a NumPy array and a scalar value, and how does NumPy handle the broadcasting mechanism in such cases?

The operation is applied element-wise, and the scalar is broadcasted to match the shape of the array, enabling efficient computation without explicit loops.

#️⃣ tags: #numpy #python #arrayoperations #broadcasting #interviewquestion

By: t.iss.one/DataScienceQ 🚀
⁉️ Interview question 
Given the following NumPy code snippet, what will be the output and why?

import numpy as np

arr = np.array([[1, 2], [3, 4]])
result = arr + 5
print(result)

The output will be a 2x2 array where each element is incremented by 5: [[6, 7], [8, 9]]. This happens because NumPy automatically broadcasts the scalar value 5 to match the shape of the array, performing element-wise addition.

#️⃣ tags: #numpy #python #arrayaddition #broadcasting #interviewquestion #programming

By: t.iss.one/DataScienceQ 🚀
⁉️ Interview question
What will be the output of the following NumPy code snippet?

import numpy as np

arr = np.array([1, 2, 3, 4, 5])
result = arr[1:4:2] + arr[::2]
print(result)


Answer: ValueError. `arr[1:4:2]` is `[2 4]` (shape `(2,)`) while `arr[::2]` is `[1 3 5]` (shape `(3,)`); NumPy cannot broadcast shapes `(2,)` and `(3,)` together, so the addition raises "operands could not be broadcast together".

#️⃣ tags: #numpy #python #interviewquestion #arrayoperations #slicing #broadcasting

By: @DataScienceQ 🚀
⁉️ Interview question
What does the following NumPy code return?

import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.array([[1, 2, 3], [4, 5, 6]])
result = np.dot(a, b.T)
print(result)


Answer:
[[ 8 17]
 [26 62]]
Here `a` is [[0 1 2], [3 4 5]], and `np.dot(a, b.T)` takes the dot product of each row of `a` with each row of `b`: e.g. 0·1 + 1·2 + 2·3 = 8 and 3·4 + 4·5 + 5·6 = 62.

#️⃣ tags: #numpy #python #interviewquestion #arrayoperations #matrixmultiplication #dotproduct

By: @DataScienceQ 🚀
⁉️ Interview question
What happens when you call `plt.plot()` without specifying a figure or axes, and then immediately call `plt.show()`?

The function `plt.plot()` automatically creates a new figure and axes if none exist, and `plt.show()` displays the current figure. However, if multiple plots are created without clearing the figure, they are drawn onto the same current axes and overlay one another, which can look like overlapping or out-of-order output because of matplotlib's stateful tracking of the "current" figure and axes. This behavior often causes confusion when working with loops or subplots.
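
A minimal illustration of that implicit state (window behavior depends on your backend):

import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [1, 4, 9])  # no figure exists yet, so pyplot creates one
plt.plot([1, 2, 3], [2, 3, 4])  # reuses the same current axes, so the lines overlay
plt.show()                      # displays the current figure
# calling plt.figure() between the two plot calls would have put them on separate figures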

#️⃣ tags: #matplotlib #python #datavisualization #plotting #beginner #codingchallenge

By: @DataScienceQ 🚀
⁉️ Interview question
How does `plt.subplot()` differ from `plt.subplots()` when creating a grid of plots?

`plt.subplot()` creates a single subplot in a grid by specifying row and column indices, requiring separate calls for each subplot. In contrast, `plt.subplots()` creates the entire grid at once, returning both the figure and an array of axes objects, making it more efficient for managing multiple subplots. However, using `plt.subplot()` can lead to overlapping or misaligned plots if not carefully managed, especially when adding elements like titles or labels.
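
A side-by-side sketch of the two styles:

import matplotlib.pyplot as plt

# Style 1: plt.subplot() activates one cell at a time
plt.subplot(1, 2, 1)            # 1 row, 2 columns, select cell 1 as current axes
plt.plot([1, 2, 3])
plt.subplot(1, 2, 2)            # select cell 2
plt.plot([3, 2, 1])
plt.show()

# Style 2: plt.subplots() builds the whole grid up front
fig, axes = plt.subplots(1, 2)  # returns the Figure plus an array of Axes
axes[0].plot([1, 2, 3])
axes[1].plot([3, 2, 1])
fig.tight_layout()
plt.show()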

#️⃣ tags: #matplotlib #python #plotting #subplots #datavisualization #beginner #codingchallenge

By: @DataScienceQ 🚀
⁉️ Interview question
What is the purpose of `scipy.integrate.quad()` and how does it handle functions with singularities?

`scipy.integrate.quad()` computes definite integrals using adaptive quadrature, which recursively subdivides intervals to improve accuracy. When the integrand has singularities (e.g., discontinuities or points where it blows up), it may fail or return inaccurate results unless the integration limits are adjusted or the singularity is isolated. In such cases, splitting the integral at the singularity, or passing the singular locations to `quad` via its `points` parameter, usually achieves better convergence; improper handling tends to produce integration warnings or unreliable outputs.
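
A small sketch of the `points` parameter in action (the integrand below is my own choice, picked for its integrable singularity at x = 0.5):

import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0 / np.sqrt(abs(x - 0.5))     # blows up at x = 0.5 but is still integrable

value, abs_err = quad(f, 0, 1, points=[0.5])  # tell quad where the difficulty lies
print(value, abs_err)                         # analytic value is 2*sqrt(2), about 2.8284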

#️⃣ tags: #scipy #python #numericalintegration #scientificcomputing #mathematics #codingchallenge #beginner

By: @DataScienceQ 🚀
⁉️ Interview question
How does `scipy.optimize.minimize()` choose between different optimization algorithms, and what happens if the initial guess is far from the minimum?

`scipy.optimize.minimize()` selects an algorithm based on the `method` parameter (e.g., 'BFGS', 'Nelder-Mead', 'COBYLA'), each suited for specific problem types. If the initial guess is far from the true minimum, some methods may converge slowly or get stuck in local minima, especially for non-convex functions. The function also allows passing bounds and constraints to guide the search, but poor initialization can lead to suboptimal results or failure to converge, particularly when using gradient-based methods without proper scaling or preprocessing of input data.
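
A sketch of how the starting point changes the outcome on a non-convex function (the function and starting points are illustrative, not from the original post):

from scipy.optimize import minimize

def f(x):
    # two basins: global minimum near x = -1.30, local minimum near x = 1.13
    return x[0]**4 - 3 * x[0]**2 + x[0]

near = minimize(f, x0=[-2.0], method='BFGS')  # starts in the global basin
far = minimize(f, x0=[2.0], method='BFGS')    # starts in the other basin
print(near.x, far.x)                          # roughly [-1.30] vs [1.13]: same code, different answers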

#️⃣ tags: #scipy #python #optimization #scientificcomputing #numericalanalysis #machinelearning #codingchallenge #beginner

By: @DataScienceQ 🚀
#️⃣ CNN Basics Quiz

What is the primary purpose of a Convolutional Neural Network (CNN)?
A CNN is designed to process data with a grid-like topology, such as images, by using convolutional layers to automatically and adaptively learn spatial hierarchies of features.

What does the term "convolution" refer to in CNNs?
It refers to the mathematical operation where a filter (or kernel) slides over the input image to produce a feature map that highlights specific patterns like edges or textures.

Which layer in a CNN is responsible for reducing the spatial dimensions of the feature maps?
The **pooling layer**, especially **max pooling**, reduces dimensionality while retaining important information.

What is the role of the ReLU activation function in CNNs?
It introduces non-linearity by outputting the input directly if it's positive, otherwise zero, helping the network learn complex patterns.

Why are stride and padding important in convolutional layers?
Stride controls how far the filter moves at each step, while padding adds a border (typically zeros) so the output can keep the same spatial size as the input; both are demonstrated in the code sketch after this quiz.

What is feature extraction in the context of CNNs?
It’s the process by which CNNs identify and isolate relevant patterns (like shapes or textures) from raw input data through successive convolutional layers.

How does dropout help in CNN training?
It randomly deactivates neurons during training to prevent overfitting and improve generalization.

What is backpropagation used for in CNNs?
It computes gradients of the loss function with respect to each weight, enabling the network to update parameters and minimize error.

What is the main advantage of weight sharing in CNNs?
It reduces the number of parameters by allowing the same filter to be used across different regions of the image, improving efficiency.

What is a kernel in the context of CNNs?
A small matrix that slides over the input image to detect specific features, such as corners or lines.

Which layer typically follows the convolutional layers in a CNN architecture?
The **fully connected layer**, which combines all features into a final prediction.

What is overfitting in neural networks?
It occurs when a model learns the training data too well, including noise, leading to poor performance on new data.

What is data augmentation and why is it useful in CNNs?
It involves applying transformations like rotation or flipping to training images to increase dataset diversity and improve model robustness.

What is the purpose of batch normalization in CNNs?
It normalizes the inputs of each layer to stabilize and accelerate training by reducing internal covariate shift.

What is transfer learning in the context of CNNs?
It involves using a pre-trained CNN model and fine-tuning it for a new task, saving time and computational resources.

Which activation function is commonly used in the final layer of a classification CNN?
The **softmax function**, which converts raw scores into probabilities summing to one.

What is zero-padding in convolutional layers?
Adding zeros around the borders of the input image to maintain the spatial dimensions after convolution.

What is the difference between local receptive fields and global receptive fields?
Local receptive fields cover only a small region of the input, while global receptive fields capture broader patterns across the entire image.

What is dilation in convolutional layers?
It increases the spacing between kernel elements without increasing the number of parameters, allowing the network to capture larger contexts.

What is the significance of filter size in CNNs?
It determines the spatial extent of the pattern the filter can detect; smaller filters capture fine details, larger ones detect broader structures.
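
A minimal NumPy sketch tying several of these ideas together (kernel, stride, zero-padding, local receptive fields); like most deep learning libraries, it actually computes cross-correlation, which is what CNNs conventionally call "convolution":

import numpy as np

def conv2d(image, kernel, stride=1, padding=0):
    if padding:
        image = np.pad(image, padding)            # zero-pad the borders
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):                        # slide the kernel over the image
        for j in range(out_w):
            region = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(region * kernel)   # one value per local receptive field
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # edge-detecting kernel
print(conv2d(image, sobel_x, padding=1).shape)  # (4, 4): padding=1 preserves the input size
print(conv2d(image, sobel_x).shape)             # (2, 2): no padding shrinks the feature map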

#️⃣ #CNN #DeepLearning #NeuralNetworks #ComputerVision #MachineLearning #ArtificialIntelligence #ImageRecognition #AI

By: @DataScienceQ 🚀
#numpy #python #programming #question #array #basic

Write a Python code snippet using NumPy to create a 2D array of shape (3, 4) filled with zeros. Then, modify the element at position (1, 2) to be 5. Print the resulting array.

import numpy as np

# Create a 2D array of zeros with shape (3, 4)
arr = np.zeros((3, 4))

# Modify the element at position (1, 2) to be 5
arr[1, 2] = 5

# Print the resulting array
print(arr)

Output:
[[0. 0. 0. 0.]
 [0. 0. 5. 0.]
 [0. 0. 0. 0.]]

By: @DataScienceQ 🚀
#numpy #python #programming #question #array #intermediate

Write a Python program using NumPy to perform the following tasks:

1. Create a 1D array of integers from 1 to 10.
2. Reshape it into a 2D array of shape (2, 5).
3. Compute the sum of each row and store it in a new array.
4. Find the indices of elements greater than 7 in the original 1D array.
5. Print the resulting 2D array, the row sums, and the indices.

import numpy as np

# 1. Create a 1D array from 1 to 10
arr_1d = np.arange(1, 11)

# 2. Reshape into a 2D array of shape (2, 5)
arr_2d = arr_1d.reshape(2, 5)

# 3. Compute the sum of each row
row_sums = np.sum(arr_2d, axis=1)

# 4. Find indices of elements greater than 7 in the original 1D array
indices_greater_than_7 = np.where(arr_1d > 7)[0]

# 5. Print results
print("2D Array:\n", arr_2d)
print("Row sums:", row_sums)
print("Indices of elements > 7:", indices_greater_than_7)

Output:
2D Array:
 [[ 1  2  3  4  5]
 [ 6  7  8  9 10]]
Row sums: [15 40]
Indices of elements > 7: [7 8 9]

By: @DataScienceQ 🚀
#pandas #python #programming #question #dataframe #intermediate

Write a Python program using pandas to perform the following tasks:

1. Create a DataFrame from a dictionary with columns: 'Product', 'Category', 'Price', and 'Quantity' containing:
- Product: ['Laptop', 'Mouse', 'Keyboard', 'Monitor', 'Headphones']
- Category: ['Electronics', 'Accessories', 'Accessories', 'Electronics', 'Accessories']
- Price: [1200, 25, 80, 300, 100]
- Quantity: [10, 50, 30, 20, 40]

2. Add a new column 'Total_Value' that is the product of 'Price' and 'Quantity'.

3. Calculate the total value for each category and print it.

4. Find the product with the highest total value and print its details.

5. Filter the DataFrame to show only products in the 'Electronics' category with a price greater than 200.

import pandas as pd

# 1. Create the DataFrame
data = {
'Product': ['Laptop', 'Mouse', 'Keyboard', 'Monitor', 'Headphones'],
'Category': ['Electronics', 'Accessories', 'Accessories', 'Electronics', 'Accessories'],
'Price': [1200, 25, 80, 300, 100],
'Quantity': [10, 50, 30, 20, 40]
}
df = pd.DataFrame(data)

# 2. Add Total_Value column
df['Total_Value'] = df['Price'] * df['Quantity']

# 3. Calculate total value by category
total_by_category = df.groupby('Category')['Total_Value'].sum()

# 4. Find product with highest total value
highest_value_product = df.loc[df['Total_Value'].idxmax()]

# 5. Filter electronics with price > 200
electronics_high_price = df[(df['Category'] == 'Electronics') & (df['Price'] > 200)]

# Print results
print("Original DataFrame:")
print(df)
print("\nTotal Value by Category:")
print(total_by_category)
print("\nProduct with Highest Total Value:")
print(highest_value_product)
print("\nElectronics Products with Price > 200:")
print(electronics_high_price)

Output:
Original DataFrame:
      Product     Category  Price  Quantity  Total_Value
0      Laptop  Electronics   1200        10        12000
1       Mouse  Accessories     25        50         1250
2    Keyboard  Accessories     80        30         2400
3     Monitor  Electronics    300        20         6000
4  Headphones  Accessories    100        40         4000

Total Value by Category:
Category
Accessories     7650
Electronics    18000
Name: Total_Value, dtype: int64

Product with Highest Total Value:
Product             Laptop
Category       Electronics
Price                 1200
Quantity                10
Total_Value          12000
Name: 0, dtype: object

Electronics Products with Price > 200:
  Product     Category  Price  Quantity  Total_Value
0  Laptop  Electronics   1200        10        12000

By: @DataScienceQ 🚀
#opencv #python #programming #question #imageprocessing #intermediate

Write a Python program using OpenCV to perform the following tasks:

1. Load an image from a file named 'image.jpg' in grayscale mode.
2. Apply Gaussian blur with a kernel size of (5, 5).
3. Detect edges using Canny edge detection with thresholds of 100 and 200.
4. Find contours in the edge-detected image.
5. Draw all detected contours on the original blurred image in red color with thickness 2.
6. Save the resulting image as 'output_image.jpg'.

import cv2
import numpy as np

# 1. Load image in grayscale
img = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)
if img is None:
    raise FileNotFoundError("Image file not found")

# 2. Apply Gaussian blur
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# 3. Apply Canny edge detection
edges = cv2.Canny(blurred, 100, 200)

# 4. Find contours
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# 5. Create a copy of the blurred image to draw contours
result_img = cv2.cvtColor(blurred, cv2.COLOR_GRAY2BGR) # Convert to BGR for color drawing
cv2.drawContours(result_img, contours, -1, (0, 0, 255), 2) # Draw contours in red

# 6. Save the output image
cv2.imwrite('output_image.jpg', result_img)

print("Processing complete. Output saved as 'output_image.jpg'")

Note: This code assumes that 'image.jpg' exists in the working directory. The output will be a colored image with red contours drawn over the blurred grayscale image.

By: @DataScienceQ 🚀
#imageprocessing #python #programming #question #dataset #intermediate

Write a Python program to process a dataset of images stored in a folder named 'images'. Perform the following tasks:

1. Load all images from the 'images' folder and convert them to grayscale.
2. Resize each image to 100x100 pixels.
3. Calculate the average pixel value for each image.
4. Store the average values in a list.
5. Find the image with the highest average pixel value and print its filename.
6. Save the processed grayscale images to a new folder named 'processed_images'.

import os
import cv2
import numpy as np

# 1. Define paths
input_folder = 'images'
output_folder = 'processed_images'

# Create output folder if it doesn't exist
os.makedirs(output_folder, exist_ok=True)

# List to store average pixel values
avg_values = []

# 2. Process each image in the input folder
for filename in os.listdir(input_folder):
    if filename.lower().endswith(('.png', '.jpg', '.jpeg')):
        img_path = os.path.join(input_folder, filename)

        # Load image in grayscale (skip files OpenCV cannot read)
        img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
        if img is None:
            continue

        # Resize to 100x100 pixels
        resized_img = cv2.resize(img, (100, 100))

        # Calculate average pixel value
        avg_value = np.mean(resized_img)
        avg_values.append((filename, avg_value))

        # Save processed image
        output_path = os.path.join(output_folder, filename)
        cv2.imwrite(output_path, resized_img)

# 3. Find image with highest average pixel value
max_avg_image = max(avg_values, key=lambda x: x[1])
print(f"Image with highest average pixel value: {max_avg_image[0]}")
print(f"Average value: {max_avg_image[1]:.2f}")

print("All images processed and saved to 'processed_images' folder.")

Note: This code assumes that the 'images' folder exists and contains valid image files. It processes all PNG, JPG, and JPEG files in the folder, resizes them, calculates their average pixel intensity, and saves the processed images to a new folder.

By: @DataScienceQ 🚀
#matplotlib #python #programming #question #visualization #intermediate

Write a Python program using matplotlib to perform the following tasks:

1. Generate two arrays: x from 0 to 10 with 100 points, and y = sin(x) + 0.5 * cos(2x).
2. Create a figure with two subplots arranged vertically.
3. In the first subplot, plot y vs x as a line graph with red color and marker 'o'.
4. In the second subplot, create a histogram of the y values with 20 bins.
5. Add titles, labels, and grid to both subplots.
6. Adjust the layout and save the figure as 'output_plot.png'.

import numpy as np
import matplotlib.pyplot as plt

# 1. Generate data
x = np.linspace(0, 10, 100)
y = np.sin(x) + 0.5 * np.cos(2 * x)

# 2. Create figure with two subplots
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 10))

# 3. First subplot - line plot
ax1.plot(x, y, color='red', marker='o', linestyle='-', linewidth=2)
ax1.set_title('sin(x) + 0.5*cos(2x)')
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.grid(True)

# 4. Second subplot - histogram
ax2.hist(y, bins=20, color='blue', alpha=0.7)
ax2.set_title('Histogram of y values')
ax2.set_xlabel('y')
ax2.set_ylabel('Frequency')
ax2.grid(True)

# 5. Adjust layout
plt.tight_layout()

# 6. Save the figure
plt.savefig('output_plot.png')

print("Plot saved as 'output_plot.png'")

Note: This code generates a sine wave with an added cosine component, creates a line plot and histogram of the data in separate subplots, adds appropriate labels and grids, and saves the resulting visualization.

By: @DataScienceQ 🚀
#scipy #python #programming #question #scientificcomputing #intermediate

Write a Python program using SciPy to perform the following tasks:

1. Generate a random dataset of 1000 samples from a normal distribution with mean=5 and standard deviation=2.
2. Use SciPy's stats module to calculate the mean, median, standard deviation, and skewness of the dataset.
3. Perform a one-sample t-test to test if the sample mean is significantly different from 5 (null hypothesis).
4. Use SciPy's optimize module to find the minimum of the function f(x) = x^2 + 3x + 2.
5. Print all results including the test statistic, p-value, and the minimum point.

import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

# 1. Generate random dataset
np.random.seed(42)
data = np.random.normal(loc=5, scale=2, size=1000)

# 2. Calculate descriptive statistics
mean = np.mean(data)
median = np.median(data)
std_dev = np.std(data)
skewness = stats.skew(data)

# 3. Perform one-sample t-test
t_stat, p_value = stats.ttest_1samp(data, popmean=5)

# 4. Find minimum of function f(x) = x^2 + 3x + 2
def objective_function(x):
    return x**2 + 3*x + 2

result = minimize_scalar(objective_function)

# 5. Print all results
print("Descriptive Statistics:")
print(f"Mean: {mean:.4f}")
print(f"Median: {median:.4f}")
print(f"Standard Deviation: {std_dev:.4f}")
print(f"Skewness: {skewness:.4f}")
print("\nOne-Sample T-Test:")
print(f"T-statistic: {t_stat:.4f}")
print(f"P-value: {p_value:.4f}")
print("\nOptimization Result:")
print(f"Minimum occurs at x = {result.x:.4f}")
print(f"Minimum value = {result.fun:.4f}")

Note: This code generates a normally distributed dataset, computes various statistical measures, performs a hypothesis test, and finds the minimum of a quadratic function using SciPy's optimization tools.

By: @DataScienceQ 🚀
#python #programming #question #fibonacci #intermediate #algorithm

Write a Python program that implements three different methods to generate the Fibonacci sequence up to the nth term:

1. Use an iterative approach with a loop.
2. Use recursion with memoization.
3. Use dynamic programming with a list.

For each method, calculate the 20th Fibonacci number and measure the execution time. Print the results for each method along with their respective times.

import time

def fibonacci_iterative(n):
    if n <= 1:
        return n
    a, b = 0, 1
    for i in range(2, n + 1):
        a, b = b, a + b
    return b

def fibonacci_recursive_memo(n, memo={}):
    # the mutable default dict persists across calls, acting as a shared cache
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fibonacci_recursive_memo(n - 1, memo) + fibonacci_recursive_memo(n - 2, memo)
    return memo[n]

def fibonacci_dp(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

# Test all three methods for the 20th Fibonacci number
n = 20

# Method 1: Iterative
start_time = time.time()
result_iter = fibonacci_iterative(n)
iter_time = time.time() - start_time

# Method 2: Recursive with memoization
start_time = time.time()
result_rec = fibonacci_recursive_memo(n)
rec_time = time.time() - start_time

# Method 3: Dynamic Programming
start_time = time.time()
result_dp = fibonacci_dp(n)
dp_time = time.time() - start_time

print(f"20th Fibonacci number using iterative method: {result_iter} (Time: {iter_time:.6f} seconds)")
print(f"20th Fibonacci number using recursive method: {result_rec} (Time: {rec_time:.6f} seconds)")
print(f"20th Fibonacci number using DP method: {result_dp} (Time: {dp_time:.6f} seconds)")

By: @DataScienceQ 🚀
#python #programming #question #simulation #intermediate #matryoshka

Write a Python program to simulate a Matryoshka doll game with the following requirements:

1. Create a class Matryoshka that represents a nested doll with attributes: size (int), color (string), and contents (list of smaller Matryoshka objects).
2. Implement methods to:
- Add a smaller Matryoshka inside the current one
- Remove the smallest Matryoshka from the set
- Display all dolls in the nesting hierarchy
3. Create a main function that:
- Builds a nesting of 4 Matryoshka dolls (largest to smallest)
- Displays the complete nesting
- Removes the smallest doll
- Displays the updated nesting

class Matryoshka:
    def __init__(self, size, color):
        self.size = size
        self.color = color
        self.contents = []

    def add_doll(self, doll):
        if doll.size < self.size:
            self.contents.append(doll)
        else:
            print(f"Cannot add doll of size {doll.size} into size {self.size} doll")

    def remove_smallest(self):
        if not self.contents:
            print("No dolls to remove")
            return None

        # Find the smallest doll recursively
        smallest = self._find_smallest()
        if smallest:
            self._remove_doll(smallest)
            return smallest
        return None

    def _find_smallest(self):
        if not self.contents:
            return self
        smallest = self
        for doll in self.contents:
            result = doll._find_smallest()
            if result.size < smallest.size:
                smallest = result
        return smallest

    def _remove_doll(self, target):
        # return True as soon as the target is popped so callers can stop searching
        for i, doll in enumerate(self.contents):
            if doll == target:
                self.contents.pop(i)
                return True
            if doll._remove_doll(target):
                return True
        return False

    def display(self, level=0):
        indent = "  " * level
        print(f"{indent}{self.color} ({self.size})")
        for doll in self.contents:
            doll.display(level + 1)

def main():
    # Create nesting of 4 Matryoshka dolls (largest to smallest)
    large = Matryoshka(4, "Red")
    medium = Matryoshka(3, "Blue")
    small = Matryoshka(2, "Green")
    tiny = Matryoshka(1, "Yellow")

    # Build the nesting
    large.add_doll(medium)
    medium.add_doll(small)
    small.add_doll(tiny)

    # Display initial nesting
    print("Initial nesting:")
    large.display()
    print()

    # Remove the smallest doll
    removed = large.remove_smallest()
    if removed:
        print(f"Removed: {removed.color} ({removed.size})")

    # Display updated nesting
    print("\nUpdated nesting:")
    large.display()

if __name__ == "__main__":
    main()


Output:
Initial nesting:
Red (4)
  Blue (3)
    Green (2)
      Yellow (1)

Removed: Yellow (1)

Updated nesting:
Red (4)
  Blue (3)
    Green (2)


By: @DataScienceQ 🚀