Code & Life
Here I write about coding, the things I'm learning and studying, and my day-to-day life.

https://erfuuan.github.io/
The I/O loop (also called the event loop) is the core part of the libuv library, which handles all input and output (I/O) operations. It runs on a single thread, meaning all tasks are executed in one sequence without splitting into multiple threads. However, you can run more than one event loop if each one runs on a different thread.

The event loop works by managing I/O operations asynchronously. This means tasks like reading and writing to network sockets don’t block or stop the program from continuing to do other things at the same time.

Different operating systems have their own ways of handling these tasks:

On Linux, the event loop uses epoll.
On macOS and BSD systems, it uses kqueue.
On SunOS, it uses event ports.
On Windows, it uses IOCP.

The loop waits for I/O events to happen (like a socket being ready to read or write), and when an event occurs, it triggers a callback function to handle it (such as reading data from the network or sending it). This way, the program can efficiently handle many tasks without pausing or waiting unnecessarily.

In short, the event loop is a single-threaded, non-blocking system that handles I/O tasks efficiently, allowing other parts of the program to keep running while I/O tasks are being processed.
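A minimal sketch of this behavior, using setTimeout to stand in for I/O readiness events: two "operations" are in flight at once on a single thread, and the loop dispatches whichever event fires first, regardless of start order.

```javascript
// Sketch only: setTimeout simulates an I/O event becoming ready.
// Both "operations" are in flight at once on a single thread; the
// event loop runs each callback as its event fires.
const completed = [];

setTimeout(() => completed.push('slow op'), 50); // started first, finishes last
setTimeout(() => completed.push('fast op'), 10); // started second, finishes first
// completed ends up as ['fast op', 'slow op']
```

Neither timer blocks the other: the single thread is free the whole time both are pending.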
Synchronous Programming:

In synchronous programming, tasks are executed one after another. Each task must complete before the next one can begin. This means that if a task takes a long time to execute, it blocks the execution of subsequent tasks, causing the entire program to pause until it finishes. This blocking behavior can reduce performance and responsiveness, especially when tasks involve I/O operations or time-consuming computations.

Consider a simple example of synchronous programming where a web server needs to fetch data from a database before responding to a client’s request. In a synchronous approach, the server must wait for the database operation to complete before it can continue processing other requests. As a result, the server’s ability to handle concurrent requests is limited, and the overall response time for clients can be negatively impacted.
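The blocking behavior can be sketched with a synchronous slow task (here a busy loop standing in for a slow database call): while it runs, nothing else in the program makes progress.

```javascript
// A synchronous "slow task": a busy loop simulating a long operation.
// While it runs, the single thread can do nothing else.
function slowTask() {
  const end = Date.now() + 100;
  while (Date.now() < end) {} // spin for ~100 ms
  return 'done';
}

const start = Date.now();
const result = slowTask();          // execution is stuck here until slowTask returns
const elapsed = Date.now() - start; // at least 100 ms: no other work happened meanwhile
```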
Asynchronous Programming:

In contrast, asynchronous programming allows tasks to be executed concurrently and independently of each other. In this model, a task initiates an operation and then continues its execution without waiting for the operation to complete. The program doesn’t block, and other tasks can continue their execution simultaneously. When the asynchronous operation finishes, a callback function or a promise is used to handle the result or trigger further actions.

Asynchronous programming is particularly advantageous when dealing with time-consuming operations, such as network requests, file system operations, or database queries. By not blocking the program’s execution while waiting for these operations to complete, asynchronous programming enables better resource utilization and responsiveness. It allows the program to handle multiple tasks concurrently, improving overall performance and user experience.

Taking our previous example of the web server, an asynchronous approach would enable the server to initiate the database operation and continue serving other requests while waiting for the result. This concurrency allows the server to handle more concurrent clients efficiently, improving scalability and responsiveness.
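A sketch of this pattern, where queryDatabase is a hypothetical stand-in (setTimeout simulates network latency): the "server" keeps working while the query is in flight, and the callback handles the result when it arrives.

```javascript
// Hypothetical async "database query": setTimeout simulates latency.
const log = [];

function queryDatabase(callback) {
  setTimeout(() => callback(null, { rows: 3 }), 50); // result arrives later
}

queryDatabase((err, data) => {
  if (err) throw err;
  log.push('db result: ' + data.rows + ' rows');
});

log.push('handling another request'); // runs immediately, before the query finishes
```

The log ends up with 'handling another request' first, then the database result, even though the query was started first.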
Event-Driven Architecture:

At the core of Node.js’ non-blocking I/O model is its event-driven architecture. Instead of waiting for I/O operations to complete, Node.js registers event handlers for various I/O events, such as data being available to read from a file or a network request receiving a response. The corresponding event handler is triggered when an event occurs, and the program can respond accordingly. This event-driven approach allows Node.js to process multiple I/O operations simultaneously and efficiently manage their completion.
Callbacks and Event Loop:

Callbacks play a crucial role in Node.js’ non-blocking I/O model. When an I/O operation is initiated, a callback function is provided to handle the completion of that operation. The event loop, responsible for managing I/O events and executing callbacks, continuously checks for completed operations. When an operation finishes, its corresponding callback is queued for execution, allowing the program to continue processing other tasks. This asynchronous callback mechanism ensures that the application remains responsive and can handle concurrent operations efficiently.
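The queueing behavior is easy to observe: even a 0 ms timer callback only runs after the current synchronous code has finished, because the event loop dispatches queued callbacks between turns, never in the middle of one.

```javascript
const order = [];

order.push('A');                       // synchronous
setTimeout(() => order.push('C'), 0);  // callback is queued, not run immediately
order.push('B');                       // synchronous, runs before the callback

// Even with a 0 ms delay, 'C' is only pushed after the current
// synchronous code has run to completion: order ends up A, B, C.
```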
Asynchronous Patterns and Techniques:
Blocking vs. Non-blocking:
The terms blocking I/O and non-blocking I/O describe how input/output operations (e.g., reading from a file, network communication) are handled in a system, particularly whether other work is paused while waiting for the I/O to complete.
Blocking I/O

In blocking I/O, when an I/O operation (such as reading from a file or waiting for a network response) is initiated, the process or thread making the request is blocked until the operation is completed. During this waiting period, the process cannot do anything else.

Example: Let's say you want to read a file from disk. If you use a blocking call, the program stops and waits until the entire file is read. Only after the file is fully read will the program continue executing the next instructions.

Analogy: Imagine standing in line at a coffee shop. You place your order, but you have to wait until your coffee is ready before you can move on to other tasks. You’re "blocked" while waiting.

Benefits:
Simpler code logic since you don’t have to handle multiple tasks concurrently.

Drawbacks:
Inefficient in systems that deal with a lot of I/O since the CPU remains idle while waiting.
Doesn’t scale well when multiple I/O operations are required.

Non-blocking I/O

In non-blocking I/O, the process can initiate an I/O operation and continue executing other tasks while waiting for the I/O to complete. Instead of being blocked, the process receives a notification (e.g., a callback or an event) when the I/O operation finishes.

Example: If you want to read from a network socket in a non-blocking manner, you would initiate the read, and the program would immediately continue to do other things. Later, when the data is available, the system notifies the program so it can handle the data.

Analogy: Imagine ordering coffee at a shop and then going to do something else (like reading your emails). When your coffee is ready, you receive a notification (your name gets called), and you can pick it up without having to wait idly in line.

Benefits:
Efficient use of resources since the CPU can do other work while waiting for I/O.
Scales better in applications with many I/O operations (like web servers) since it allows multiple tasks to proceed without waiting for I/O.

Drawbacks:
More complex code since you have to handle the coordination of tasks and responses (e.g., callbacks, promises in JavaScript).

Blocking vs. Non-blocking in Node.js

Node.js is a perfect example of a platform that uses non-blocking I/O. Functions like fs.readFile() are asynchronous by default in Node.js, meaning the program can continue running other tasks while waiting for the file read operation to complete. When it’s done, a callback (or promise) is triggered.

Blocking I/O in Node.js can also be achieved with synchronous functions like fs.readFileSync(), where the code execution pauses until the file is completely read.

In Summary:

Blocking I/O: The process is paused while waiting for the I/O operation to complete.
Non-blocking I/O: The process can continue working on other things while waiting for the I/O operation to complete, and gets notified once the operation is done.
Important Notes:
JIT essentially plays the role of both an interpreter and a compiler simultaneously.
Just-In-Time Compiler (JIT):

JIT comprises three primary phases: Profiler, Baseline Compiler, and Optimizing Compiler.
Profiler: Also referred to as a monitor, the profiler tracks the portions of code running most frequently while the JavaScript (JS) code passes through the JS engine. It identifies frequently used code, designating it as “WARM.” If the same code is used even more frequently, it becomes “HOT CODE.” The profiler extracts the most commonly executed code from our source code.
Baseline Compiler: The WARM or HOT code is then translated into bytecode within the Baseline Compiler.
Optimizing Compiler: The HOT parts identified by the profiler are passed to the optimizing compiler. Its main task is to transform these hot parts into an optimized version that runs even faster. The JavaScript engine utilizes the concept of “shape” to achieve this optimization. Objects created using the same constructor function have the same shape because their properties are identical. Shape creation facilitates inline caching and other optimizations. In the baseline compiler, we discussed bytecode. However, bytecode is not as fast as machine code. By directly converting frequently executed code into machine code, the program’s performance improves significantly. The optimizing compiler handles this task.
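Shapes (hidden classes) are internal to the engine and not observable from JavaScript, so the sketch below only illustrates the coding pattern that benefits from them: objects built by the same constructor share a layout, so a property access that always sees that one layout stays monomorphic and can be inline-cached.

```javascript
// Objects created from the same constructor share a hidden "shape"
// inside the engine; this is the pattern that enables inline caching.
function Point(x, y) {
  this.x = x;
  this.y = y;
}

// Every Point has the same property layout, so this call site always
// sees the same shape: a monomorphic access the engine can cache.
function getX(p) {
  return p.x;
}

const points = [new Point(1, 2), new Point(3, 4)];
const sum = points.reduce((acc, p) => acc + getX(p), 0); // 1 + 3 = 4
```

Mixing objects with different layouts at the same call site (say, some with x only, some with extra properties added in different orders) would make it polymorphic and defeat this optimization.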

The process involves transforming JavaScript source code into bytecode, executed by an interpreter. Meanwhile, the monitor or profiler forwards warm and hot code parts to the optimizing compiler, which converts them into optimized machine code. Both the interpreter and the compiler contribute to the program's performance. This is the essence of just-in-time compilation. Note, however, that different browsers ship their own JIT implementations, though the core task remains the same.
When a JavaScript program reaches the parser, it generates nodes from the provided tokens after the tokenization process. These nodes are used to create the Abstract Syntax Tree (AST), which is later converted into bytecode by the interpreter. The interpreter in the V8 engine is known as Ignition. Within the interpreter, bytecode is executed using registers as memory. The V8 engine optimizes performance by creating shapes for each object and describing their structure. This optimization enables inline caching and other optimizations.