Academy and Foundation unixmens | Your skills, Your future
@unixmens_support
@yashar_esm
[email protected]
A science and technology channel
Open source philosophy - GNU/Linux - security - the digital economy
Technology-driven businesses
Enterprise open source
Provider of solutions for organizational, individual, and team growth
Red Hat Summit 2025 in Boston concluded in May, where several critical open source technology innovations were showcased with a focus on virtualization, automation, cloud technologies, AI/ML and enterprise security, demonstrating the value of Red Hat for business leaders, IT teams and organizations. A journey of collaboration - Red Hat and Emirates NBD: one of the featured events at Red Hat Summit 2025 was the keynote, which included a discussion between Stefanie Chiras, SVP of Partner Ecosystem at Red Hat, and Nicholas Grimm, Head of Cloud Compute at Emirates NBD, regarding the companies' lo…

via Red Hat Blog https://ift.tt/5TEsuka
PacketFence is an open-source Network Access Control (NAC) solution that provides a comprehensive set of features for managing network access. It supports various functionalities such as captive portals for user registration, centralized management for wired and wireless networks, and robust BYOD (Bring Your Own Device) capabilities. The latest version, PacketFence 14.1, was released on February 18, 2025, and includes numerous improvements and new features aimed at enhancing network security and management.

Recent Updates and Features

Version 14.1: This release includes enhancements to existing features and introduces new functionalities that improve user experience and security management.
Zero Effort NAC (ZEN): A preconfigured version of PacketFence designed for rapid deployment, making it easier for organizations to implement NAC solutions without extensive setup.
Integration Capabilities: PacketFence continues to evolve by integrating with various network devices and security solutions, allowing for better management of diverse network environments.

Network Access Control (NAC) Trends for 2025

Market Growth and Trends
The Network Access Control market is expected to experience significant growth, with projections indicating a compound annual growth rate (CAGR) of approximately 45.9% from 2025 to 2029. This growth is driven by several factors:

Increased Cybersecurity Threats: As cyberattacks become more sophisticated, organizations are prioritizing NAC solutions to secure their networks.
Adoption of BYOD Policies: The rise of personal devices in the workplace necessitates robust NAC solutions to manage and secure these devices effectively.
Integration with IoT and Cloud Services: The growing number of IoT devices and the shift towards cloud-based services are pushing the demand for advanced NAC solutions that can handle diverse and dynamic network environments.

Key Features and Innovations

AI and Machine Learning: The integration of AI and machine learning in NAC solutions is enhancing threat detection and response capabilities, allowing for more proactive security measures.
Regulatory Compliance: Stricter data privacy regulations, such as GDPR, are driving the need for secure network access control solutions, influencing product development and market strategies.
Market Segmentation: The NAC market is segmented by components (hardware, software, services), deployment models (on-premises, cloud), and enterprise sizes, with a notable demand from small and medium enterprises (SMEs) due to increasing cyber threats.


#security #net #network #linux #nac #packetfence

https://t.iss.one/unixmens

When you subscribe to Red Hat Enterprise Linux (RHEL), you get security fixes for Common Vulnerabilities and Exposures (CVE). As defined in the RHEL Life Cycle Policy, we classify any issue rated with a Common Vulnerability Scoring System score of 7.0 or higher as Critical, Important or Moderate. Our enhanced support plans (RHEL Extended Life Cycle Support, Extended Update Support, and Enhanced Extended Update Support) include similar coverage. But compliance in finance, healthcare, telecommunications, the public sector and other highly regulated industries may demand fixes and patches outsid

via Red Hat Blog https://ift.tt/1Xpxtof
At Red Hat OpenShift Commons Gatherings, we shine a spotlight on the people behind the screens who are building, scaling, and evolving with Red Hat OpenShift. These events are powered by real-world experiences, and we're inviting you to take the mic. The next OpenShift Commons Gathering is coming to Atlanta, Georgia, on November 10, alongside KubeCon + CloudNativeCon North America, and the call for proposals is officially open. Whether you're enabling AI at scale, migrating from legacy virtualization or accelerating application development, your experience can help guide others on the same path

via Red Hat Blog https://ift.tt/Di0Zv4I
🔘 An artificial intelligence infrastructure fund is being set up with 10 trillion tomans of initial capital

The Vice President for Science and Technology:

We are pursuing the creation of infrastructure for artificial intelligence: all of the equipment being imported, together with the GPUs the Vice Presidency has put in place, will form the fund's initial capital. Other players in this field can invest in the fund, either in the form of GPU equipment or financially.

The Vice Presidency is prepared to allocate to private-sector companies, in the form of shares, up to twice the amount they invest, so that the AI infrastructure fund is placed in the hands of the private ecosystem; the Vice Presidency will, however, remain alongside it to ensure balanced development of AI infrastructure.

The Vice President put the Vice Presidency's share of the fund's initial capital at roughly 10 trillion tomans.
Philosophy has swallowed artificial intelligence

In 2011, the entrepreneur and startup investor Marc Andreessen wrote in a Wall Street Journal essay: "Software is eating the world." Less than six years later, Jensen Huang, CEO and founder of NVIDIA, offered an updated version of that line: "Software is eating the world… but AI is eating software."

Today, however, we face a new and unexpected proposition: "Philosophy is eating AI."

When we look at the intersection of philosophy and AI, minds usually jump straight to ethics, but that is only a small slice of philosophy's broader landscape and its application to AI. Notably, the intellectual roots of AI are deeply intertwined with philosophy. Alan Turing (1912–1954), the father of computer science, arrived at the Turing machine, itself a philosophical thought experiment, by posing philosophical questions about computability and intelligence. After him, Ludwig Wittgenstein, with his analysis of language games, and Gottlob Frege laid the logical foundations of programming languages. Geoffrey Hinton, winner of major scientific prizes in 2024, began his research on neural networks from philosophical questions.

These days, the central debates in AI have shifted from language models toward the development of AI agents. Autonomous, decision-making agents (agentic AI) that carry out tasks on their own are no longer designed merely for efficiency; they need meaning, purpose, and genuine agency. That requirement takes shape not only in engineering and programming, but also within the framework of philosophy and the work of its thinkers.

Based on these needs, researchers have proposed four philosophical frameworks for training AI agents:

1. Epistemology: the agent should know what it knows and what it does not know, and be able to gauge how much it can trust its own knowledge.

2. Ontology: the agent should be aware of its own capabilities and limitations and understand how the parts of its environment fit together.

3. Teleology: the agent should be able to set goals and work toward them; for example, paying attention to customer lifetime value rather than focusing purely on selling more.

4. Ethics: the agent should be capable of ethical decision-making, able to weigh consequences and explain the reasons for its decisions transparently.

These points show that more data, or even higher-quality data, is not by itself enough to build more powerful and accurate AI agents. Many large companies have brought philosophical approaches into the development of their AI products: Anthropic applies a set of ethical principles it calls a "constitution" during model training, and Google published formal principles for the ethical development of AI in 2018. Even Palantir, the well-known startup focused on intelligence collaboration with security and government agencies, adheres to such a philosophical framework, as its CEO Alex Karp emphasizes.

It seems that even after reaching the technical stage of injecting human intelligence into the mind of the machine, humans will still need to redefine their philosophy of life and arrive at a new understanding of the world. In that space, the question is not whether philosophy should be involved in training AI, but rather: which philosophy, and how?

The future of AI is unexpectedly bound to the future of philosophical thought. This convergence can not only advance the technology, but also lead us toward a deeper understanding of ourselves and the world. It is time, then, to pay more attention to the role of philosophy in shaping the future of AI and to put it to work.
A revolution in robotics: a humanoid robot priced under $3,000 is on the way

Hugging Face has unveiled its open-source humanoid robot, HopeJR: a robot with 66 degrees of freedom that can walk and move its limbs. Its price has been announced at just $3,000, more than $10,000 cheaper than comparable models on the market.

Hugging Face's CEO said the robot was designed to democratize the technology and reduce monopolies in the robotics industry. Limited shipments of HopeJR will begin at the end of this year.
⭕️ The Dunning-Kruger curve for a new technology
Dotin: the cyberattack on two of the country's banks was a hardware-level attack; within the very first minutes it destroyed the banks' storage hardware in three separate data centers, leaving it unusable.
Import_Wizard.mp4
31.7 MB
Proxmox VE Import Wizard: How to import VMs from VMware ESXi


This video will show how to use the Proxmox VE Import Wizard to migrate VMware ESXi VMs to Proxmox Virtual Environment. Version 8.2 provides an integrated VM importer using the storage plugin system for native integration into the API and web-based user interface. You can use this to import a VMware ESXi VM as a whole. The video demonstrates the following steps:

Mounting the host as a new Proxmox storage
Launching the Import Wizard for the Windows 2022 Server
Resulting configuration and import
Import progress
First boot of the imported VM
Enabling VirtIO SCSI boot
Device Manager – final checks
and much more....

Read the step-by-step guide:
https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Automatic_Import_of_Full_VM
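If the integrated wizard is not an option, a manual import with the qm command-line tool is a common fallback. The sketch below is only a rough outline under assumed names (VM ID 120, an exported disk image win2022.vmdk, a storage pool called local-lvm); adapt it to your environment:

# create an empty VM shell with VirtIO SCSI and a VirtIO NIC (values are illustrative)
qm create 120 --name win2022-imported --memory 8192 --cores 4 \
  --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci

# import the exported VMware disk into the target storage
qm importdisk 120 win2022.vmdk local-lvm

# attach the imported disk and boot from it
qm set 120 --scsi0 local-lvm:vm-120-disk-0 --boot order=scsi0
qm start 120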
#vmware #proxmox #migrate #kvm #virtualization


https://t.iss.one/unixmens
The Machine Config Operator (MCO) in Red Hat OpenShift has been able to perform disruptionless updates on select changes since version 4.7. These select changes were hardcoded in the MCO. To make this process more user-friendly and customizable, the MCO team is introducing node disruption policies. This blog post will offer context behind node disruption policies, how the MCO uses node disruption policies during a MachineConfig Operator update, and important points to be aware of while using them. Why hand over node disruption control to administrators? Disruptions can be very expensive for customer

via Red Hat Blog https://ift.tt/MPRgjN5
People are asking AI for answers. Is your infrastructure ready to deliver? I recently came across a case study showing that traffic from ChatGPT was converting at over 15%, nearly 10x higher than traditional organic search. That kind of stat is hard to ignore, and it points to a broader shift that’s already underway: people aren’t just Googling anymore. They’re turning to large language models (LLMs) to ask for advice, recommendations and product suggestions in natural language. Because these tools feel so intuitive, users expect them to deliver facts. In reality, some models are trained t

via Red Hat Blog https://ift.tt/lX8oUbV
Red Hat Enterprise Linux (RHEL) remains the trusted backbone of enterprise IT, constantly evolving to meet modern demands. At Red Hat Summit, we unveiled a powerful series of innovations that redefine RHEL's capabilities across the hybrid cloud, security and management. This roundup explores key blog posts published during Summit 2025 and covers everything from deep cloud integrations and cutting-edge security features like post-quantum cryptography, AI-powered assistance and operating system (OS) management with image mode. Explore how RHEL is building a resilient and secure foundation for yo

via Red Hat Blog https://ift.tt/SxJh7HD
KubeVirt is an innovative tool designed to manage the lifecycle and scheduling of Virtual Machines (VMs) within Kubernetes clusters. It aims to bridge the gap between traditional virtualization and modern container orchestration, allowing for a hybrid environment where both VMs and containers can coexist. Here’s a detailed overview of KubeVirt, its comparisons with other projects, and its use cases.
Overview of KubeVirt
KubeVirt extends Kubernetes by enabling it to manage VMs alongside containerized applications. This integration allows organizations to leverage Kubernetes' orchestration capabilities for both types of workloads, providing a unified platform for managing resources in a datacenter or cloud environment.
KubeVirt vs. Other Projects
Kubernetes:
Kubernetes is primarily focused on automating the deployment and management of containerized applications.
KubeVirt acts as an add-on to Kubernetes, enabling it to manage VMs, thus enhancing Kubernetes' capabilities.
OpenStack:
OpenStack is a comprehensive IaaS platform that includes various components for compute, networking, and storage.
KubeVirt is a single component that specializes in VM scheduling and lifecycle management, relying on other systems for networking and storage.
Nova:
Nova is the VM scheduling component of OpenStack, supporting multiple virtualization technologies.
KubeVirt focuses specifically on KVM managed by Libvirt, allowing for a more streamlined and efficient management of VMs.
oVirt:
oVirt is a virtualization management platform that emphasizes high availability and infrastructure-level guarantees.
KubeVirt aims to provide similar consistency guarantees while also offering the scalability needed for cloud environments.
Libvirt:
Libvirt is a toolkit for managing VMs on a local node, providing lifecycle management and network/storage interface management.
KubeVirt utilizes Libvirt for managing KVM VMs, leveraging its existing capabilities rather than reinventing the wheel.
AWS EC2 and Google GCE:
Both EC2 and GCE are proprietary cloud services that lock users into specific pricing models and infrastructures.
KubeVirt is an open-source project that focuses solely on VM scheduling, providing flexibility and independence from specific cloud providers.
Use Cases
KubeVirt is designed to address several key use cases:
Cloud Virtualization:
It provides a feature set for managing VM scale-out, similar to the abstractions offered by cloud IaaS APIs.
Datacenter Virtualization:
KubeVirt aims to deliver strong infrastructure consistency guarantees, making it suitable for managing large numbers of VMs.
Kubernetes Trusted Workloads:
It allows for the execution of virtualized workloads that require the security guarantees provided by a hypervisor.
Combining Container and Virtualized Workloads:
KubeVirt enables the scheduling of both containerized and virtualized workloads on the same Kubernetes cluster, facilitating a more integrated approach to resource management.
Conclusion
KubeVirt is positioned as a powerful tool for organizations looking to manage VMs within a Kubernetes environment. By focusing on KVM and leveraging existing technologies like Libvirt, KubeVirt aims to provide a robust solution for both cloud and datacenter virtualization, while also supporting the coexistence of containerized applications. Its open-source nature and flexibility make it an attractive option for IaaS providers and enterprises alike.
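To make the workflow concrete, here is a minimal, hedged sketch of defining and starting a VM with KubeVirt. It assumes the KubeVirt operator and CRDs are already installed in the cluster and uses an illustrative name (demo-vm) and a public Fedora containerdisk image; treat it as a starting point rather than a reference manifest:

# define a small VM backed by a containerdisk and let KubeVirt start it
cat <<'EOF' | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
EOF

# watch the VM instance come up and attach a serial console (requires the virtctl client)
kubectl get vmi demo-vm
virtctl console demo-vm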


#ovirt #kubevirt #linux #k8s #kubernetes #lcm #virtualization


https://t.iss.one/unixmens
In 2013, Redmonk analyst Steven O’Grady positioned application developers as the new kingmakers. It was a role that enterprise IT had served since the rise of business-driven computing. First, systems administrators held the keys to the kingdom by having the (then) esoteric knowledge of the operating system - but as Linux took hold in the late 90s/early 00s, the applications, not the OS, took center stage. This made developers the unlikely “voice behind the throne” in a CxO monarchy. But we’re looking at another shift in royalty fabrication with the continued velocity of generative AI

via Red Hat Blog https://ift.tt/ZAmW492
ZFS (Zettabyte File System) offers several RAID-like configurations, including ZRAID and DRAID, which provide different advantages for data storage and redundancy.
ZRAID
ZRAID is a term often used to describe the traditional RAID configurations available in ZFS, such as RAID-Z1, RAID-Z2, and RAID-Z3. These configurations allow for:
Data Redundancy: Protects against data loss due to disk failures. RAID-Z1 can tolerate one disk failure, RAID-Z2 can tolerate two, and RAID-Z3 can tolerate three.
Efficient Storage: Unlike traditional RAID, ZFS uses variable block sizes and can efficiently utilize disk space.
Self-Healing: ZFS checksums all data and can automatically repair corrupted data using redundant copies.
DRAID
DRAID (Distributed RAID) is a newer feature in ZFS that enhances the traditional RAID configurations by distributing parity and data across all disks in a pool. Key benefits include:
Improved Performance: DRAID can offer better performance during rebuilds and normal operations by distributing the workload across all disks.
Scalability: It allows for easier expansion of storage pools by adding new disks without significant performance degradation.
Reduced Rebuild Times: Since data and parity are distributed, the time taken to rebuild a failed disk is generally shorter compared to traditional RAID configurations.

ZRAID (RAID-Z)

ZRAID encompasses the various RAID-Z configurations in ZFS, which include:

RAID-Z1:
Configuration: Similar to RAID 5, it uses one parity block.
Fault Tolerance: Can withstand one disk failure.
Use Case: Suitable for environments where data redundancy is important but cost needs to be managed.

RAID-Z2:
Configuration: Similar to RAID 6, it uses two parity blocks.
Fault Tolerance: Can withstand two disk failures.
Use Case: Ideal for critical data storage where higher redundancy is required.

RAID-Z3:
Configuration: Uses three parity blocks.
Fault Tolerance: Can withstand three disk failures.
Use Case: Best for environments with very high data availability requirements.

Advantages of ZRAID:

Data Integrity: ZFS checksums all data, ensuring that any corruption can be detected and repaired.
Snapshots and Clones: ZFS allows for efficient snapshots and clones, which can be useful for backups and testing.
Compression: ZFS supports data compression, which can save space and improve performance.

Considerations for ZRAID:

Rebuild Times: In traditional RAID configurations, rebuilding a failed disk can take a significant amount of time, during which the system may be vulnerable to additional failures.
Performance: Write performance can be impacted due to the overhead of calculating parity.

DRAID (Distributed RAID)

DRAID is a more recent addition to ZFS, designed to address some of the limitations of traditional RAID configurations.
Key Features of DRAID:

Distributed Parity: Unlike ZRAID, where parity is concentrated, DRAID distributes parity across all disks, which can lead to improved performance.
Dynamic Resiliency: DRAID can adapt to changes in the storage pool, such as adding or removing disks, without significant performance penalties.
Faster Rebuilds: The distributed nature of DRAID allows for faster rebuild times since the workload is shared across multiple disks.

Advantages of DRAID:

Performance: DRAID can provide better read and write performance, especially in environments with high I/O demands.
Scalability: It is easier to scale storage by adding disks, as the system can dynamically adjust to the new configuration.



Conclusion


Both ZRAID and DRAID provide robust solutions for data storage, with ZRAID being more traditional and widely used, while DRAID offers modern enhancements for performance and scalability. The choice between them depends on specific use cases, performance requirements, and the desired level of redundancy.
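As a rough illustration of the difference in practice, the hedged sketch below creates one RAID-Z2 pool and one dRAID2 pool; the device and pool names are placeholders, and the plain draid2 vdev spec relies on OpenZFS defaults (a sufficiently recent OpenZFS release is assumed):

# RAID-Z2: survives any two disk failures among the six members
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# dRAID2 with default layout across eight members
zpool create bigtank draid2 /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo

# typical follow-up: enable compression, snapshot, and verify integrity with a scrub
zfs set compression=lz4 tank
zfs snapshot tank@baseline
zpool scrub tank
zpool status tank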



#zfs #raid #linux #storage #kernel #data

https://t.iss.one/unixmens
Introduction: Red Hat understands that customer feedback plays a crucial role in guiding technology purchasing decisions. Consequently, peer review vendors such as TrustRadius and G2 have become essential tools for businesses and buyers alike. Buyers benefit from reading authentic customer experiences, ratings and accolades on these trusted peer review sites in order to make the best buying decision for their business. At the same time, the feedback collected from our customers through peer review sites, alongside other channels, contributes significantly to Red Hat’s ongoing effort to enhance

via Red Hat Blog https://ift.tt/vhELCqN
Why agents are the new kingmakers: For more than a decade, developers have been looked to as the kingmakers when it comes to enterprise IT and innovation... but are AI agents supplanting them? Learn more
SiliconANGLE - Red Hat offers free and simple self-serve access to RHEL for application developers: Red Hat is cutting through some of the complexity of today’s intricate hybrid cloud and on-premises computing environments with a new version of its flagship operating system that’s more accessible for developer teams who design and test new applications. Learn more
TheCUBE - Matt Hicks, Red Hat Pre

via Red Hat Blog https://ift.tt/tHxnJhy
The stream module in Nginx is one of its more powerful yet lesser-known modules, used for proxying layer-4 (TCP/UDP) traffic. Unlike the http module, which is designed for layer-7 services, the stream module targets the transport layer (layer 4 in the OSI model).
🎯 Main use cases of the stream module

Load balancing for databases (e.g. MySQL, PostgreSQL)

TCP-level reverse proxying (e.g. for SSH, Redis, MQTT)

Proxying UDP traffic (e.g. DNS, VoIP)

Building a TLS passthrough proxy (as opposed to termination in http)

Serving as the entry point for SSL offloading

🧩 Enabling the stream module

The stream module is not enabled by default in the official pre-built Nginx packages. To use it, you should either:

use Nginx Plus,
or

compile from source with --with-stream

To check whether it is enabled:


nginx -V 2>&1 | grep -- --with-stream
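If you are building from source, a configure invocation along the following lines is typical; the exact flags depend on which stream sub-modules you need (ssl_preread, for example, is not built by default), so treat this as a sketch:

./configure --with-stream --with-stream_ssl_preread_module
make && make install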



📜 Sample configurations
A simple TCP proxy


stream {
    upstream backend {
        server 192.168.1.10:3306;
        server 192.168.1.11:3306;
    }

    server {
        listen 3306;
        proxy_pass backend;
    }
}



A UDP proxy


stream {
    server {
        listen 53 udp;
        proxy_pass 8.8.8.8:53;
    }
}
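To confirm that the UDP listener is forwarding DNS queries, a quick test from a client could look like this (assuming the dig utility is installed and the proxy is listening on localhost):

dig @127.0.0.1 -p 53 example.com +short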



SSL passthrough for a mail server


stream {
    map $ssl_preread_server_name $backend {
        mail.example.com   mail_backend:993;
        default            default_backend:993;
    }

    server {
        listen 993;
        proxy_pass $backend;
        ssl_preread on;
    }
}


Note: the ssl_preread capability in stream works much like SNI sniffing in TLS.
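One way to sanity-check the SNI-based routing without decrypting anything is to open a TLS connection with an explicit server name from a client (the proxy hostname below is illustrative; any SNI value mapped in the config should land on the corresponding backend):

openssl s_client -connect proxy.example.com:993 -servername mail.example.com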

⚙️ Key stream module directives


proxy_pass              defines the destination
upstream                defines the backend servers
listen                  defines the listening port/protocol
ssl_preread             enables reading the SNI without decryption
proxy_timeout           timeout between Nginx and the destination server
proxy_connect_timeout   timeout for the initial connection
proxy_protocol          enables the PROXY protocol (to pass on the client IP)



📌 Limitations


Content cannot be manipulated (this is layer 4)

rewrite, headers, gzip, and cache cannot be used

Logging is limited to the connection level, not the request level

🧠 Summary

The stream module in Nginx lets you work at the transport layer (TCP/UDP) instead of the application layer (HTTP), providing services such as load balancing, SSL passthrough, and reverse proxying for non-web services. This capability is essential for designing professional infrastructure, especially in environments such as:

Kubernetes ingress for TCP/UDP

forwarding database traffic

implementing a high-security reverse proxy

🎯 Features of the stream module in NGINX

TCP and UDP reverse proxying

Load balancing for TCP/UDP with the following algorithms:

round-robin (the default)

least_conn

hash-based routing

Health checks (active checks in NGINX Plus)

SSL passthrough via ssl_preread

SNI-based routing without decrypting TLS

PROXY protocol support (to pass on the client IP)

Access control with allow / deny

Custom logging for TCP/UDP connections

Configurable timeouts:

proxy_timeout

proxy_connect_timeout

Basic per-connection metrics

upstream blocks for defining backend servers

🧠 Use cases for the stream module

Proxying and load balancing for databases (PostgreSQL, MySQL, Redis)

SSL passthrough for HTTPS with SNI routing

Proxying VoIP and other UDP-based services (such as RTP, SIP)

Reverse proxy for DNS (TCP and UDP)

Proxying and routing mail server traffic (IMAP, POP3, SMTP)

A bastion SSH host that routes SSH users to different servers

Proxying specialized TCP protocols such as MQTT, FIX, or custom protocols

Use in a Kubernetes Ingress Controller for TCP/UDP services




#nginx #stream #linux #network

https://t.iss.one/unixmens