Code & Life
Here I write about coding, the things I learn and study, and my day-to-day life.

https://erfuuan.github.io/
Callbacks and Event Loop:

Callbacks play a crucial role in Node.js’ non-blocking I/O model. When an I/O operation is initiated, a callback function is provided to handle the completion of that operation. The event loop, responsible for managing I/O events and executing callbacks, continuously checks for completed operations. When an operation finishes, its corresponding callback is queued for execution, allowing the program to continue processing other tasks. This asynchronous callback mechanism ensures that the application remains responsive and can handle concurrent operations efficiently.
Asynchronous Patterns and Techniques:
Blocking vs. Non-blocking
The terms blocking I/O and non-blocking I/O describe how input/output operations (e.g., reading from a file, network communication) are handled in a system, particularly whether other work is paused while waiting for the I/O to complete.
Blocking I/O

In blocking I/O, when an I/O operation (such as reading from a file or waiting for a network response) is initiated, the process or thread making the request is blocked until the operation is completed. During this waiting period, the process cannot do anything else.

Example: Let's say you want to read a file from disk. If you use a blocking call, the program stops and waits until the entire file is read. Only after the file is fully read will the program continue executing the next instructions.

Analogy: Imagine standing in line at a coffee shop. You place your order, but you have to wait until your coffee is ready before you can move on to other tasks. You’re "blocked" while waiting.

Benefits:
Simpler code logic since you don’t have to handle multiple tasks concurrently.

Drawbacks:
Inefficient in systems that deal with a lot of I/O since the CPU remains idle while waiting.
Doesn’t scale well when multiple I/O operations are required.

Non-blocking I/O

In non-blocking I/O, the process can initiate an I/O operation and continue executing other tasks while waiting for the I/O to complete. Instead of being blocked, the process receives a notification (e.g., a callback or an event) when the I/O operation finishes.

Example: If you want to read from a network socket in a non-blocking manner, you would initiate the read, and the program would immediately continue to do other things. Later, when the data is available, the system notifies the program so it can handle the data.

Analogy: Imagine ordering coffee at a shop and then going to do something else (like reading your emails). When your coffee is ready, you receive a notification (your name gets called), and you can pick it up without having to wait idly in line.

Benefits:
Efficient use of resources since the CPU can do other work while waiting for I/O.
Scales better in applications with many I/O operations (like web servers) since it allows multiple tasks to proceed without waiting for I/O.

Drawbacks:
More complex code since you have to handle the coordination of tasks and responses (e.g., callbacks, promises in JavaScript).

Blocking vs. Non-blocking in Node.js

Node.js is a perfect example of a platform that uses non-blocking I/O. Functions like fs.readFile() are asynchronous by default in Node.js, meaning the program can continue running other tasks while waiting for the file read operation to complete. When it’s done, a callback (or promise) is triggered.

Blocking I/O in Node.js can also be achieved with synchronous functions like fs.readFileSync(), where the code execution pauses until the file is completely read.

In Summary:

Blocking I/O: The process is paused while waiting for the I/O operation to complete.
Non-blocking I/O: The process can continue working on other things while waiting for the I/O operation to complete, and gets notified once the operation is done.
Important notes:
JIT essentially plays the role of both an interpreter and a compiler simultaneously.
Just-In-Time Compiler (JIT):

JIT comprises three primary phases: Profiler, Baseline Compiler, and Optimizing Compiler.
Profiler: Also referred to as a monitor, the profiler tracks the portions of code running most frequently while the JavaScript (JS) code passes through the JS engine. It identifies frequently used code, designating it as “WARM.” If the same code is used even more frequently, it becomes “HOT CODE.” The profiler extracts the most commonly executed code from our source code.
Baseline Compiler: The WARM or HOT code is then translated into bytecode within the Baseline Compiler.
Optimizing Compiler: The HOT parts identified by the profiler are passed to the optimizing compiler. Its main task is to transform these hot parts into an optimized version that runs even faster. The JavaScript engine utilizes the concept of “shape” to achieve this optimization. Objects created using the same constructor function have the same shape because their properties are identical. Shape creation facilitates inline caching and other optimizations. In the baseline compiler, we discussed bytecode. However, bytecode is not as fast as machine code. By directly converting frequently executed code into machine code, the program’s performance improves significantly. The optimizing compiler handles this task.
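The shape idea can be sketched in plain JavaScript; `Point` and `getX` below are illustrative names, and the inline caching itself happens invisibly inside the engine:

```javascript
// Two objects created by the same constructor share the same hidden
// "shape" (hidden class), because their properties are identical.
function Point(x, y) {
  this.x = x;
  this.y = y;
}

const a = new Point(1, 2);
const b = new Point(3, 4);

// Monomorphic call site: every object passed in has the same shape,
// so the engine can inline-cache the memory offset of `x`.
function getX(p) {
  return p.x;
}

let sum = 0;
for (let i = 0; i < 100000; i++) {
  sum += getX(i % 2 === 0 ? a : b); // the kind of hot loop a profiler flags
}
console.log(sum); // 200000
```

If the loop instead passed objects of many different shapes, the call site would become polymorphic and the cached lookup would no longer apply.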

The process involves transforming JavaScript source code into bytecode, executed by an interpreter. Meanwhile, the monitor or profiler forwards warm and hot code parts to the optimizing compiler, which converts them into optimized machine code. Both the interpreter and the compiler contribute to the program’s performance. This is the essence of just-in-time compilation. Note, however, that different browsers have their own JIT implementations, though the main task remains the same.
When a JavaScript program reaches the parser, it generates nodes from the provided tokens after the tokenization process. These nodes are used to create the Abstract Syntax Tree (AST), which is later converted into bytecode by the interpreter. The interpreter in the V8 engine is known as Ignition. Within the interpreter, bytecode is executed using registers as memory. The V8 engine optimizes performance by creating shapes for each object and describing their structure. This optimization enables inline caching and other optimizations.
“The interpreter, called Ignition in the V8 engine”
I liked this explanation 😂😂
Although the description above doesn’t explicitly mention the profiler, it works behind the scenes, identifying the “hot” code parts that are then passed to the Turbofan compiler. Turbofan is the optimizing compiler within the V8 engine’s JIT pipeline, responsible for these “hot code” sections: it transforms them into machine code tailored to the specific architecture, ensuring optimal performance.

In summary, the profiler receives bytecode from the interpreter and passes the identified “hot” sections to the Turbofan compiler for transformation into machine code. This machine code is designed to deliver efficient performance on the target architecture.

@erfuuan_dev