Python | Algorithms | Data Structures | Cyber Security | Networks
This channel is for Programmers, Coders, Software Engineers.

1) Python
2) Django
3) Python frameworks
4) Data Structures
5) Algorithms
6) DSA

Admin: @Hussein_Sheikho

# Django ORM Comparison - Know both frameworks
# Django model (contrast with SQLAlchemy)
from django.db import models

class Department(models.Model):
    name = models.CharField(max_length=50)

class Employee(models.Model):
    name = models.CharField(max_length=100)
    email = models.EmailField(unique=True)
    department = models.ForeignKey(Department, on_delete=models.CASCADE)

# Django query: filter across the relation and eager-load it in one JOIN
Employee.objects.filter(department__name="HR").select_related('department')
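
For the contrast itself, a minimal SQLAlchemy sketch of the same query. It assumes Department and Employee are SQLAlchemy mapped classes mirroring the Django models above, and that a session already exists:

from sqlalchemy import select
from sqlalchemy.orm import contains_eager

# Join for filtering, then reuse the same join to populate the
# relationship (Department, Employee, and session are assumed)
stmt = (
    select(Employee)
    .join(Employee.department)
    .where(Department.name == "HR")
    .options(contains_eager(Employee.department))
)
employees = session.scalars(stmt).all()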


# Async ORM - Modern Python requirement
# Requires SQLAlchemy 1.4+ and asyncpg
import asyncio

from sqlalchemy import select
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession

async_engine = create_async_engine(
    "postgresql+asyncpg://user:pass@localhost/db",
    echo=True,
)

# Employee is assumed to be a SQLAlchemy mapped class here
async def fetch_alice():
    async with AsyncSession(async_engine) as session:
        async with session.begin():
            result = await session.execute(
                select(Employee).where(Employee.name == "Alice")
            )
            return result.scalar_one()

employee = asyncio.run(fetch_alice())
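
In larger applications the usual pattern is a session factory rather than constructing sessions by hand; a short sketch, assuming SQLAlchemy 2.0's async_sessionmaker and the async_engine defined above:

from sqlalchemy.ext.asyncio import async_sessionmaker

# Factory bound to the engine; each call produces a fresh AsyncSession
SessionFactory = async_sessionmaker(async_engine, expire_on_commit=False)

async def get_employee(name):
    async with SessionFactory() as session:
        result = await session.execute(
            select(Employee).where(Employee.name == name)
        )
        return result.scalar_one_or_none()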


# Testing Strategies - Interview differentiator
from unittest import mock

# Mock database for unit tests
# (patch the name where it is looked up: if your module does
# `from sqlalchemy import create_engine`, patch 'your_module.create_engine')
with mock.patch('sqlalchemy.create_engine') as mock_engine:
    mock_conn = mock.MagicMock()
    mock_engine.return_value.connect.return_value = mock_conn

    # Test your ORM-dependent code
    create_employee("Test", "[email protected]")
    mock_conn.execute.assert_called()


# Production Monitoring - Track slow queries
import time

from sqlalchemy import event

@event.listens_for(engine, "before_cursor_execute")
def before_cursor(conn, cursor, statement, params, context, executemany):
    conn.info.setdefault('query_start_time', []).append(time.time())

@event.listens_for(engine, "after_cursor_execute")
def after_cursor(conn, cursor, statement, params, context, executemany):
    total = time.time() - conn.info['query_start_time'].pop(-1)
    if total > 0.1:  # Log queries slower than 100 ms
        print(f"SLOW QUERY ({total:.2f}s): {statement}")


# Interview Power Move: Implement caching layer
from functools import lru_cache

class CachedEmployeeRepository(EmployeeRepository):
    @lru_cache(maxsize=100)
    def get_by_id(self, employee_id):
        return super().get_by_id(employee_id)

    def invalidate_cache(self):
        # lru_cache has no per-key eviction, so this clears everything;
        # note it also caches on self, which keeps instances alive
        self.get_by_id.cache_clear()

# Can sharply reduce database hits in read-heavy applications
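
Because lru_cache cannot evict a single key, a hand-rolled dict cache is a common alternative when per-employee invalidation matters; a minimal sketch, with EmployeeRepository as above:

class DictCachedEmployeeRepository(EmployeeRepository):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._cache = {}

    def get_by_id(self, employee_id):
        # Fill the cache on first access, then serve from memory
        if employee_id not in self._cache:
            self._cache[employee_id] = super().get_by_id(employee_id)
        return self._cache[employee_id]

    def invalidate(self, employee_id):
        # Per-key eviction, which lru_cache cannot do
        self._cache.pop(employee_id, None)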


# Pro Tip: Schema versioning in CI/CD pipelines
# Sample .gitlab-ci.yml snippet
deploy_db:
  stage: deploy
  script:
    - alembic upgrade head
    - pytest tests/db_tests.py  # Verify schema compatibility
  only:
    - main
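
For context, a hypothetical Alembic migration of the kind that alembic upgrade head would apply (revision ids and table layout are placeholders):

# versions/20xx_add_employees.py - hypothetical migration file
from alembic import op
import sqlalchemy as sa

revision = "a1b2c3d4e5f6"   # placeholder revision id
down_revision = None

def upgrade():
    op.create_table(
        "employees",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("name", sa.String(100), nullable=False),
        sa.Column("email", sa.String(255), unique=True),
    )

def downgrade():
    op.drop_table("employees")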


# Real-World Case Study: E-commerce inventory system
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Product(Base):
    __tablename__ = 'products'
    id = Column(Integer, primary_key=True)
    sku = Column(String(20), unique=True)
    stock = Column(Integer, default=0)

    # Atomic stock update: a single guarded UPDATE prevents race conditions
    def decrement_stock(self, quantity, session):
        result = session.query(Product).filter(
            Product.id == self.id,
            Product.stock >= quantity
        ).update({"stock": Product.stock - quantity}, synchronize_session=False)
        if not result:
            raise ValueError("Insufficient stock")

# Usage during checkout
product.decrement_stock(2, session)


By: @DATASCIENCE4 🔒

#Python #ORM #SQLAlchemy #Django #Database #BackendDevelopment #CodingInterview #WebDevelopment #TechJobs #SystemDesign #SoftwareEngineering #DataEngineering #CareerGrowth #APIs #Microservices #DatabaseDesign #TechTips #DeveloperTools #Programming #CareerTips
# Interview Power Move: Parallel Merging
import math
import os
from concurrent.futures import ThreadPoolExecutor
from PyPDF2 import PdfMerger  # PyPDF2 has since been renamed pypdf

def parallel_merge(pdf_list, output, max_workers=4):
    # Contiguous chunks preserve page order in the final document
    size = math.ceil(len(pdf_list) / max_workers)
    chunks = [pdf_list[i:i + size] for i in range(0, len(pdf_list), size)]

    def merge_chunk(chunk, idx):
        temp = f"temp_{idx}.pdf"
        merger = PdfMerger()
        for pdf in chunk:
            merger.append(pdf)
        merger.write(temp)
        merger.close()
        return temp

    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        temp_files = list(executor.map(merge_chunk, chunks, range(len(chunks))))

    # Final merge of chunks, then remove the temporaries
    final_merger = PdfMerger()
    for temp in temp_files:
        final_merger.append(temp)
    final_merger.write(output)
    final_merger.close()
    for temp in temp_files:
        os.remove(temp)

parallel_merge(["doc1.pdf", "doc2.pdf", ...], "parallel_merge.pdf")


# Pro Tip: Validate PDFs before merging
from PyPDF2 import PdfReader

def is_valid_pdf(path):
    try:
        with open(path, "rb") as f:
            reader = PdfReader(f)
            return len(reader.pages) > 0
    except Exception:
        return False

valid_pdfs = [f for f in pdf_files if is_valid_pdf(f)]
for pdf in valid_pdfs:
    merger.append(pdf)  # Only merge valid files (append takes one file at a time)


# Real-World Case Study: Invoice Processing Pipeline
import glob
from datetime import datetime

from PyPDF2 import PdfMerger, PdfReader, PdfWriter

def process_monthly_invoices():
    # 1. Download invoices from SFTP (download_invoices is an assumed helper)
    download_invoices("sftp://vendor.com/invoices/*.pdf")

    # 2. Validate and sort (extract_invoice_date is an assumed helper)
    invoices = sorted(
        [f for f in glob.glob("invoices/*.pdf") if is_valid_pdf(f)],
        key=extract_invoice_date,
    )

    # 3. Merge with cover page, one outline entry per invoice
    #    (get_client_name is an assumed helper)
    merger = PdfMerger()
    merger.append("cover_template.pdf")
    for inv in invoices:
        merger.append(inv, outline_item=get_client_name(inv))

    # 4. Add metadata and write; encryption needs a second pass,
    #    since PdfMerger has no encrypt method but PdfWriter does
    merger.add_metadata({"/InvoiceCount": str(len(invoices))})
    output = f"Q3_Invoices_{datetime.now().strftime('%Y%m')}.pdf"
    merger.write(output)
    merger.close()

    # PyPDF2 3.x API; this simple page copy does not carry over the outline
    writer = PdfWriter()
    writer.append_pages_from_reader(PdfReader(output))
    writer.encrypt(user_password="", owner_password="finance_team_2023")
    with open(output, "wb") as f:
        writer.write(f)

    # 5. Upload to secure storage (upload_to_s3 is an assumed helper)
    upload_to_s3("secure-bucket/processed/", output)

process_monthly_invoices()


By: https://t.iss.one/DataScience4

#Python #PDFProcessing #DocumentAutomation #PyPDF2 #CodingInterview #BackendDevelopment #FileHandling #DataEngineering #TechJobs #Programming #SystemDesign #DeveloperTips #CareerGrowth #CloudComputing #Docker #Microservices #Productivity #TechTips #Python3 #SoftwareEngineering
By combining all the code from the steps above into database.py and main.py, you have a robust, database-driven desktop application.

Results:
Data Persistence: Your inventory and invoice data is saved in warehouse.db and will be there when you restart the application.
Integrated Workflow: Adding a purchase directly increases stock. A sale checks for and decreases stock. Production consumes raw materials and creates finished goods, all reflected in the central inventory table.
Separation of Concerns: The UI logic in main.py is cleanly separated from the data logic in database.py, making the code easier to maintain and extend.
Reporting: You can easily export a snapshot of your current inventory to a CSV file for analysis in other programs like Excel or Google Sheets (a minimal sketch follows this list).
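
A minimal sketch of that CSV export, assuming warehouse.db contains an inventory table (table and column names are illustrative):

import csv
import sqlite3

def export_inventory_csv(db_path="warehouse.db", out_path="inventory.csv"):
    # Dump the current inventory table to a CSV snapshot
    conn = sqlite3.connect(db_path)
    try:
        cursor = conn.execute("SELECT * FROM inventory")
        headers = [col[0] for col in cursor.description]
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(headers)
            writer.writerows(cursor.fetchall())
    finally:
        conn.close()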

Discussion and Next Steps:
Scalability: While SQLite is excellent for small-to-medium applications, a large-scale, multi-user system would benefit from a client-server database like PostgreSQL or MySQL.
Invoice Complexity: The current invoice system is simplified. A real system would allow multiple items per invoice and store historical invoice data for viewing and printing.
User Interface (UI/UX): The UI is functional but could be greatly improved with better layouts, icons, search/filter functionality in tables, and more intuitive workflows.
Error Handling: The error handling is basic. A production-grade app would have more comprehensive checks for user input and database operations.
Advanced Features: Future additions could include user authentication, supplier and customer management, barcode scanning, and more detailed financial reporting.

This project forms a powerful template for building custom internal business tools with Python.

#ProjectComplete #SoftwareEngineering #ERP #PythonGUI #BusinessApp

━━━━━━━━━━━━━━━
By: @DataScience4
Interview Question

What is the potential pitfall of using a mutable object (like a list or dictionary) as a default argument in a Python function?

Answer: A common pitfall is that the default argument is evaluated only once, when the function is defined, not each time it is called. If that default object is mutable, any modifications made to it in one call will persist and be visible in subsequent calls.

This can lead to unexpected and buggy behavior.

Incorrect Example (The Pitfall):

def add_to_list(item, my_list=[]):
    my_list.append(item)
    return my_list

# First call seems to work fine
print(add_to_list(1))  # Output: [1]

# Second call has unexpected behavior
print(add_to_list(2))  # Output: [1, 2] -- the list from the first call was reused!

# Third call continues the trend
print(add_to_list(3))  # Output: [1, 2, 3]


The Correct, Idiomatic Solution:

The standard practice is to use None as the default and create a new mutable object inside the function if one isn't provided.

def add_to_list_safe(item, my_list=None):
    if my_list is None:
        my_list = []  # Create a new list for each call
    my_list.append(item)
    return my_list

# Each call now works independently
print(add_to_list_safe(1))  # Output: [1]
print(add_to_list_safe(2))  # Output: [2]
print(add_to_list_safe(3))  # Output: [3]


tags: #Python #Interview #CodingInterview #PythonTips #Developer #SoftwareEngineering #TechInterview

━━━━━━━━━━━━━━━
By: @DataScience4