Python Data Science Jobs & Interviews
Your go-to hub for Python and Data Science—featuring questions, answers, quizzes, and interview tips to sharpen your skills and boost your career in the data-driven world.

Admin: @Hussein_Sheikho
Question 25 (Advanced - CNN Implementation in Keras):
When building a CNN for image classification in Keras, what is the purpose of Global Average Pooling 2D as the final layer before classification?

A) Reduces spatial dimensions to 1x1 while preserving channel depth
B) Increases receptive field for better feature extraction
C) Performs pixel-wise normalization
D) Adds non-linearity before dense layers

#Python #Keras #CNN #DeepLearning

By: https://t.iss.one/DataScienceQ
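
A minimal sketch illustrating the layer's behavior (the shapes shown follow standard Keras output-shape semantics):

import tensorflow as tf
from tensorflow.keras import layers

# A batch containing one 7x7 feature map with 64 channels
x = tf.random.normal((1, 7, 7, 64))
y = layers.GlobalAveragePooling2D()(x)
print(y.shape)  # (1, 64): spatial dimensions averaged away, channel depth preserved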
#keras #python #programming #question #intermediate #machinelearning

Write a Keras program to perform the following tasks:

1. Load the MNIST dataset and preprocess it (normalize pixel values and convert labels to categorical).
2. Create a sequential model with two dense layers (128 units with ReLU activation, 10 units with softmax activation).
3. Compile the model using the Adam optimizer and categorical crossentropy loss (matching the one-hot labels).
4. Train the model for 5 epochs with a validation split of 0.2.
5. Evaluate the model on the test set and print the test accuracy.
6. Save the trained model to a file named 'mnist_model.h5'.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

# 1. Load and preprocess data
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Normalize pixel values to range [0, 1]
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0

# Flatten each 28x28 image into a 784-element vector
x_train = x_train.reshape(-1, 28*28)
x_test = x_test.reshape(-1, 28*28)

# Convert labels to categorical (one-hot encoding)
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# 2. Create sequential model
model = models.Sequential()
model.add(layers.Dense(128, activation='relu', input_shape=(784,)))
model.add(layers.Dense(10, activation='softmax'))

# 3. Compile model (categorical crossentropy matches the one-hot labels above;
# sparse_categorical_crossentropy would require integer labels instead)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# 4. Train model
history = model.fit(x_train, y_train,
                    epochs=5,
                    validation_split=0.2,
                    verbose=1)

# 5. Evaluate model
test_loss, test_accuracy = model.evaluate(x_test, y_test, verbose=0)
print(f"Test Accuracy: {test_accuracy:.4f}")

# 6. Save model
model.save('mnist_model.h5')
print("Model saved as 'mnist_model.h5'")


By: @DataScienceQ 🚀
#DeepLearning #NeuralNetworks #Python #TensorFlow #Keras #MachineLearning #AdvancedNeuralNetworks #Programming #Tutorial #ExampleCode

Question: How can you implement a deep neural network with multiple hidden layers using Keras in Python, and what are the key considerations for optimizing its performance?

Answer:

To implement a deep neural network (DNN) with multiple hidden layers in Keras, follow this step-by-step example. We'll use the tf.keras API to build a model for classifying images from the MNIST dataset.

### Step 1: Import Libraries
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

### Step 2: Load and Preprocess Data
# Load MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Normalize pixel values to range [0, 1]
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0

# Reshape data to flatten each image into a vector
x_train = x_train.reshape(-1, 784)
x_test = x_test.reshape(-1, 784)

# Convert labels to categorical (one-hot encoding)
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

### Step 3: Build Deep Neural Network
model = keras.Sequential([
    layers.Dense(256, activation='relu', input_shape=(784,)),  # First hidden layer
    layers.Dropout(0.3),  # Regularization to prevent overfitting
    layers.Dense(128, activation='relu'),  # Second hidden layer
    layers.Dropout(0.3),
    layers.Dense(64, activation='relu'),  # Third hidden layer
    layers.Dropout(0.3),
    layers.Dense(10, activation='softmax')  # Output layer (10 classes)
])

### Step 4: Compile the Model
model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy']
)

### Step 5: Train the Model
history = model.fit(
    x_train, y_train,
    epochs=20,
    batch_size=128,
    validation_split=0.2
)

### Step 6: Evaluate the Model
test_loss, test_accuracy = model.evaluate(x_test, y_test)
print(f"Test Accuracy: {test_accuracy:.4f}")

---

### Key Considerations for Optimization:

1. Layer Size and Depth:
- Start with smaller networks and increase depth gradually.
- A common heuristic is to taper hidden layer widths (e.g., 256 → 128 → 64).

2. Activation Functions:
- Use ReLU for hidden layers (computationally efficient and mitigates vanishing gradients).
- Use softmax for multi-class classification output.

3. Regularization:
- Apply Dropout (e.g., 0.3) to reduce overfitting.
- Optionally use L2 regularization via kernel_regularizer.

4. Optimizers:
- Adam is usually a good default choice due to adaptive learning rates.

5. Batch Size and Epochs:
- Larger batch sizes speed up training per epoch but may generalize worse.
- Use early stopping or reduce the learning rate on plateau (see the callback sketch after this list).

6. Data Preprocessing:
- Normalize inputs (e.g., scale pixels to [0,1]).
- Use one-hot encoding for categorical labels.
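
To make point 5 concrete, a minimal callback sketch (the patience and factor values below are illustrative assumptions, not recommendations from the original post):

from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

# Stop training when validation loss stops improving; keep the best weights seen
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
# Halve the learning rate when validation loss plateaus
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2)

history = model.fit(
    x_train, y_train,
    epochs=20,
    batch_size=128,
    validation_split=0.2,
    callbacks=[early_stop, reduce_lr]
)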

---

### Example of Adding L2 Regularization:
from tensorflow.keras.regularizers import l2

model = keras.Sequential([
    layers.Dense(256, activation='relu', input_shape=(784,), kernel_regularizer=l2(0.001)),
    layers.Dropout(0.3),
    layers.Dense(128, activation='relu', kernel_regularizer=l2(0.001)),
    layers.Dropout(0.3),
    layers.Dense(10, activation='softmax')
])

This implementation provides a solid foundation for advanced neural networks. You can extend it by adding more layers, experimenting with different architectures (e.g., CNNs for images, as sketched below), or tuning hyperparameters.
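
For instance, a minimal convolutional sketch for MNIST (an illustration rather than a tuned architecture; it assumes the images are reshaped back to 28x28 with a single channel instead of flattened):

# Conv2D expects (28, 28, 1) inputs, so undo the flattening first
x_train_img = x_train.reshape(-1, 28, 28, 1)
x_test_img = x_test.reshape(-1, 28, 28, 1)

cnn = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.GlobalAveragePooling2D(),  # averages each feature map to a single value
    layers.Dense(10, activation='softmax')
])
cnn.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])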

By: @DataScienceQ 🚀