Brendan Eich, who worked for Netscape, invented JavaScript in 1995. At the time, it was a programming language that could only run inside a browser.
In 2008, Google announced a new web browser called Chrome. When it was released, it revolutionized internet browsing: it was an optimized browser that executed JavaScript fast and noticeably improved the user experience on the web.
The reason Google Chrome could execute JavaScript code so fast was that a JavaScript engine called V8 ran inside Chrome. That engine was responsible for accepting JavaScript code, optimizing the code, then executing it on the computer.
The engine proved to be an excellent solution for client-side JavaScript, and Google Chrome became the leading web browser.
In 2009, a software engineer named Ryan Dahl criticized the popular way back-end servers were run at the time. The most popular software for building web servers was the Apache HTTP Server, and Dahl argued that it was limited: it could not handle a large number of simultaneous real-time connections (10,000+) effectively.
This was one of the main reasons Dahl developed Node.js. Node.js used Google's V8 engine to understand and execute JavaScript code outside the browser, and its original purpose was to run web servers.
Node.js was a great alternative to the traditional Apache HTTP server and slowly gained acceptance among the developer community.
What is a runtime environment?
The Runtime Environment of a programming language is any environment where a user can execute code written in that language. That environment provides all the tools and resources necessary for running the code. Node.js is a JavaScript runtime environment.
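As a minimal illustration (the file name hello.js is just an example), the same kind of JavaScript a browser runs can be saved to a file and executed directly with the Node.js runtime:

// hello.js - plain JavaScript, no browser required
const os = require('os'); // a built-in Node.js module that browsers do not provide
console.log('Hello from Node.js!');
console.log(`Running on ${os.platform()} with ${os.cpus().length} CPU cores`);

Running node hello.js in a terminal executes the file with the V8 engine that ships inside Node.js.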
The V8 engine contains a memory heap and a call stack. These are its core building blocks, and together they manage the execution of JavaScript code.
The memory heap is the data store of the V8 engine. Whenever we create a variable that holds an object or function in JavaScript, the engine saves that value in the memory heap. To keep things simple, it is similar to a backpack that stores supplies for a hiker.
Whenever the engine is executing code and comes across any of those variables, it looks up the actual value from the memory heap – just like whenever a hiker is feeling cold and wants to start a fire, they can look into their backpack for a lighter.
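As a small sketch (the variable names are purely illustrative), objects and functions like these are the kinds of values the engine stores in the memory heap:

// The object and the function below are allocated on the memory heap;
// the variables only hold references to those heap values.
const hiker = { name: 'Sam', supplies: ['lighter', 'water', 'map'] };

function canStartFire() {
  // The engine looks the hiker object up in the heap to read its supplies
  return hiker.supplies.includes('lighter');
}

console.log(canStartFire()); // true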
The call stack is another building block in the V8 engine. It is a data structure that manages the order of functions to be executed. Whenever the program invokes a function, the function is placed on the call stack and can only leave the stack when the engine has handled that function.
JavaScript is a single-threaded language, which means it can only execute one instruction at a time. Since the call stack holds the order of instructions to be executed, the JavaScript engine has just one call stack, and therefore one order of execution.
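A short sketch of the call stack at work: each invocation is pushed onto the stack and popped off only when it returns.

function second() {
  console.log('second running'); // stack at this moment: second -> first -> (global)
}

function first() {
  second();                      // first() stays on the stack while second() runs
  console.log('first finished'); // runs after second() has been popped off
}

first();
// Output:
// second running
// first finished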
The callback queue is where callbacks from completed asynchronous operations wait until the call stack is empty. It works with the First In, First Out (FIFO) approach: the first instruction (callback) to enter the queue is the first to be invoked.
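A minimal sketch of this FIFO behavior using setTimeout with a zero-millisecond delay: both callbacks pass through the callback queue, and the one queued first is invoked first, but only after the synchronous code has cleared the call stack.

setTimeout(() => console.log('callback A'), 0); // enters the queue first
setTimeout(() => console.log('callback B'), 0); // enters the queue second
console.log('synchronous code');

// Output:
// synchronous code
// callback A
// callback B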
The I/O loop (also called the event loop) is the core part of the libuv library, which handles all input and output (I/O) operations. It runs on a single thread, meaning all tasks are executed in one sequence without splitting into multiple threads. However, you can run more than one event loop if each one runs on a different thread.
The event loop works by managing I/O operations asynchronously. This means tasks like reading and writing to network sockets don’t block or stop the program from continuing to do other things at the same time.
Different operating systems have their own ways of handling these tasks:
On Linux, the event loop uses epoll.
On macOS and BSD systems, it uses kqueue.
On SunOS, it uses event ports.
On Windows, it uses IOCP (I/O Completion Ports).
The loop waits for I/O events to happen (like a socket being ready to read or write), and when an event occurs, it triggers a callback function to handle it (such as reading data from the network or sending it). This way, the program can efficiently handle many tasks without pausing or waiting unnecessarily.
In short, the event loop is a single-threaded, non-blocking system that handles I/O tasks efficiently, allowing other parts of the program to keep running while I/O tasks are being processed.
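A small sketch of this non-blocking, event-driven handling of sockets, using Node's built-in net module (the port number 4000 is an arbitrary choice):

const net = require('net');

// The event loop watches the listening socket; the connection callback only runs
// when a client is actually ready, and nothing blocks in between.
const server = net.createServer((socket) => {
  socket.on('data', (chunk) => {      // fired when data is ready to be read
    socket.write(`echo: ${chunk}`);   // writes are queued without blocking the loop
  });
});

server.listen(4000, () => console.log('listening on port 4000'));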
Synchronous Programming:
In synchronous programming, tasks are sequentially executed one after another. Each task must be completed before the next one can begin. This means that if a task takes a long time to execute, it blocks the execution of subsequent tasks, causing the entire program to pause until it finishes. This blocking behavior can decrease performance and responsiveness, especially when tasks involve I/O operations or time-consuming computations.
Consider a simple example of synchronous programming where a web server needs to fetch data from a database before responding to a client’s request. In a synchronous approach, the server must wait for the database operation to complete before it can continue processing other requests. As a result, the server’s ability to handle concurrent requests is limited, and the overall response time for clients can be negatively impacted.
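A minimal sketch of synchronous execution, using a deliberately slow, CPU-bound function as a stand-in for a long-running task such as a database query:

function slowTask(label) {
  // Busy-wait for roughly two seconds to simulate a long-running synchronous task
  const end = Date.now() + 2000;
  while (Date.now() < end) { /* blocking */ }
  console.log(`${label} done`);
}

slowTask('task 1'); // nothing else can run until this returns
slowTask('task 2'); // starts only after task 1 has completely finished
console.log('all tasks finished');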
Asynchronous Programming:
In contrast, asynchronous programming allows tasks to be executed concurrently and independently of each other. In this model, a task initiates an operation and then continues its execution without waiting for the operation to complete. The program doesn’t block, and other tasks can continue their execution simultaneously. When the asynchronous operation finishes, a callback function or a promise is used to handle the result or trigger further actions.
Asynchronous programming is particularly advantageous when dealing with time-consuming operations, such as network requests, file system operations, or database queries. By not blocking the program’s execution while waiting for these operations to complete, asynchronous programming enables better resource utilization and responsiveness. It allows the program to handle multiple tasks concurrently, improving overall performance and user experience.
Taking our previous example of the web server, an asynchronous approach would enable the server to initiate the database operation and continue serving other requests while waiting for the result. This concurrency allows the server to handle more concurrent clients efficiently, improving scalability and responsiveness.
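A minimal sketch of the asynchronous model; fetchUser here is a made-up stand-in for a database query, simulated with a timer:

// A hypothetical async operation: resolves with a fake record after one second
function fetchUser(id) {
  return new Promise((resolve) => {
    setTimeout(() => resolve({ id, name: 'Ada' }), 1000);
  });
}

console.log('request received');
fetchUser(42).then((user) => console.log('got user:', user.name)); // runs about 1s later
console.log('free to handle other work'); // runs immediately, nothing is blocked

// Output order:
// request received
// free to handle other work
// got user: Ada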
Event-Driven Architecture:
At the core of Node.js’ non-blocking I/O model is its event-driven architecture. Instead of waiting for I/O operations to complete, Node.js registers event handlers for various I/O events, such as data being available to read from a file or a network request receiving a response. The corresponding event handler is triggered when an event occurs, and the program can respond accordingly. This event-driven approach allows Node.js to process multiple I/O operations simultaneously and efficiently manage their completion.
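A small sketch of this event-driven style using Node's built-in EventEmitter; the event name data-ready and its payload are purely illustrative:

const EventEmitter = require('events');

const bus = new EventEmitter();

// Register the handler up front; it only runs when the event actually occurs
bus.on('data-ready', (payload) => {
  console.log('handling:', payload);
});

// Later, when the result of some I/O is available, the event is emitted
setTimeout(() => bus.emit('data-ready', { rows: 3 }), 500);

console.log('handler registered, program keeps running');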
Callbacks and Event Loop:
Callbacks play a crucial role in Node.js’ non-blocking I/O model. When an I/O operation is initiated, a callback function is provided to handle the completion of that operation. The event loop, responsible for managing I/O events and executing callbacks, continuously checks for completed operations. When an operation finishes, its corresponding callback is queued for execution, allowing the program to continue processing other tasks. This asynchronous callback mechanism ensures that the application remains responsive and can handle concurrent operations efficiently.
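A short sketch of this callback flow; delayedDouble is a made-up asynchronous operation that follows Node's error-first callback convention:

// A hypothetical async operation: doubles a number later and reports via callback
function delayedDouble(n, callback) {
  setTimeout(() => {
    if (typeof n !== 'number') {
      callback(new Error('not a number')); // error-first convention: the error comes first
      return;
    }
    callback(null, n * 2); // a null error means success
  }, 100);
}

delayedDouble(21, (err, result) => {
  if (err) return console.error(err.message);
  console.log('result:', result); // runs once the event loop executes the queued callback
});

console.log('other tasks keep running in the meantime');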
Blocking vs. Non-blocking
The terms blocking I/O and non-blocking I/O describe how input/output operations (e.g., reading from a file, network communication) are handled in a system, particularly regarding whether other operations are paused while waiting for I/O to complete.
Blocking I/O
In blocking I/O, when an I/O operation (such as reading from a file or waiting for a network response) is initiated, the process or thread making the request is blocked until the operation is completed. During this waiting period, the process cannot do anything else.
Example: Let's say you want to read a file from disk. If you use a blocking call, the program stops and waits until the entire file is read. Only after the file is fully read will the program continue executing the next instructions.
Analogy: Imagine standing in line at a coffee shop. You place your order, but you have to wait until your coffee is ready before you can move on to other tasks. You’re "blocked" while waiting.
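A minimal sketch of a blocking read in Node.js (the file name report.txt is hypothetical):

const fs = require('fs');

console.log('about to read the file');

// Execution stops on the next line until the entire file has been read from disk
const report = fs.readFileSync('report.txt', 'utf8');

console.log('file read, length:', report.length);
console.log('only now can anything else run');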
Benefits:
Simpler code logic since you don’t have to handle multiple tasks concurrently.
Drawbacks:
Inefficient in systems that deal with a lot of I/O since the CPU remains idle while waiting.
Doesn’t scale well when multiple I/O operations are required.
Non-blocking I/O
In non-blocking I/O, the process can initiate an I/O operation and continue executing other tasks while waiting for the I/O to complete. Instead of being blocked, the process receives a notification (e.g., a callback or an event) when the I/O operation finishes.
Example: If you want to read from a network socket in a non-blocking manner, you would initiate the read, and the program would immediately continue to do other things. Later, when the data is available, the system notifies the program so it can handle the data.
Analogy: Imagine ordering coffee at a shop and then going to do something else (like reading your emails). When your coffee is ready, you receive a notification (your name gets called), and you can pick it up without having to wait idly in line.
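A minimal sketch of the same non-blocking idea in Node.js, shown with a promise-based file read for brevity (the file name notes.txt is hypothetical); the same pattern applies to socket reads:

const fs = require('fs/promises');

console.log('order placed'); // like handing over your coffee order

fs.readFile('notes.txt', 'utf8')
  .then((text) => console.log('notification: data is ready,', text.length, 'characters'))
  .catch((err) => console.error('read failed:', err.message));

console.log('reading emails while waiting'); // runs immediately, not blocked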
Benefits:
Efficient use of resources since the CPU can do other work while waiting for I/O.
Scales better in applications with many I/O operations (like web servers) since it allows multiple tasks to proceed without waiting for I/O.
Drawbacks:
More complex code since you have to handle the coordination of tasks and responses (e.g., callbacks, promises in JavaScript).
Blocking vs. Non-blocking in Node.js
Node.js is a perfect example of a platform that uses non-blocking I/O. Functions like fs.readFile() are asynchronous by default in Node.js, meaning the program can continue running other tasks while waiting for the file read operation to complete. When it’s done, a callback (or promise) is triggered.
Blocking I/O in Node.js can also be achieved with synchronous functions like fs.readFileSync(), where the code execution pauses until the file is completely read.
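A short sketch contrasting the two, assuming a file named config.json exists in the working directory; note the order of the log output:

const fs = require('fs');

// Non-blocking: the callback runs later, once the event loop sees the read complete
fs.readFile('config.json', 'utf8', (err, data) => {
  if (err) throw err;
  console.log('async read finished');
});
console.log('this line does not wait for the async read');

// Blocking: execution pauses right here until the whole file has been read
const text = fs.readFileSync('config.json', 'utf8');
console.log('sync read finished, length:', text.length);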
In Summary:
Blocking I/O: The process is paused while waiting for the I/O operation to complete.
Non-blocking I/O: The process can continue working on other things while waiting for the I/O operation to complete, and gets notified once the operation is done.
https://dev.to/robiulhr/is-javascript-compiled-or-interpreted-language-l20?signin=true
This article is really good; definitely give it a read.