
PUBLISHED APRIL 06, 2024

Teghen Donald Ticha
Last Updated: 3 months ago
Reading time 17 mins

Asynchronous Flow Control

In this series part, we'll explore various aspects of asynchronous flow control in Node.js, from basic concepts to advanced techniques.

Prerequisite

Before diving into this guide, it's recommended to have a basic understanding of JavaScript and Node.js fundamentals.

Familiarity with concepts such as functions, callbacks, and event-driven programming will be beneficial, although not required.

Asynchronous programming is a fundamental aspect of Node.js, allowing non-blocking execution of code to handle I/O operations efficiently. In this section, we'll delve into the basics of asynchronous programming and explore how it works in Node.js.

1. Event Loop and Non-Blocking I/O

As we saw in the previous chapter, Node.js operates on a single-threaded event loop, which enables it to handle multiple concurrent operations without blocking the execution of other tasks.

When an asynchronous operation, such as reading from a file or making an HTTP request, is initiated, Node.js continues executing other tasks while waiting for the operation to complete.

Once the operation is finished, a callback function is triggered to handle the result.

Let's illustrate with some examples:

// Reading a File
// NB: to try this out, create a file with some random content in the same directory as the JS file, and name it file.txt
const fs = require('fs');

// Asynchronously read contents from a file
fs.readFile('./file.txt', 'utf8', (err, data) => { // assuming file.txt is in the same directory.
    if (err) {
        console.error('File read error:', err);
        return;
    }
    console.log('File contents:', data);
});
console.log('Reading file...');

Breakdown: we use fs.readFile to read the contents of a file asynchronously. The callback function handles the result or error once the file is read.

Notice how the main thread continues executing the next statement (console.log) without waiting for the file read operation to complete.


Let's see another example:

// Making an http get request

const https = require('https');

// Asynchronously make an HTTP GET request
https.get('https://reqbin.com/echo/get/json', (res) => {
    console.log(`Response status code: ${res.statusCode}`);
    res.on('data', (data) => {
        process.stdout.write(data);
    });
})
.on('error', (err) => {
    console.error('Error making HTTP request:', err);
});

console.log('Initiating HTTP request...');

Breakdown: Similarly, we use https.get to make an HTTP GET request asynchronously. The callback function handles the response once it's received. Meanwhile, the main thread continues executing other statements.

Try the above💻 and see the result for more clarity.

Now that the groundwork is done, let's move on to the fun stuff!

Understanding Callbacks

Callbacks are functions passed as arguments to other functions and are invoked once the asynchronous operation is completed.

They allow us to handle the result of an asynchronous operation or propagate errors. In Node.js, callbacks are a fundamental mechanism for working with asynchronous code.

Let's look at this asynchronous file operation with callback example.

// Intro to discussion
const fs = require('fs');

// Asynchronously read contents from a file
fs.readFile('example.txt', 'utf8', (err, data) => {
    if (err) {
        console.error('Error reading file:', err);
        return;
    }
    console.log('File contents:', data);
});
console.log('Initiating file read...');


Callback Hell Problem

Callback hell refers to the situation where nested callbacks lead to deeply nested and unreadable code.

This can happen when multiple asynchronous operations are chained together, resulting in a pyramid of callbacks. Callback hell makes code difficult to understand, maintain, and debug.

Following the file operation example above, let's see how things can get out of hand very fast.

// Fake Operations
function asyncOperation1(callback) {
    setTimeout(() => {
        const result1 = 'Result from asyncOperation1';
        callback(null, result1);
    }, 1000);
}

function asyncOperation2(callback) {
    setTimeout(() => {
        const result2 = 'Result from asyncOperation2';
        callback(null, result2);
    }, 1500);
}

function asyncOperation3(callback) {
    setTimeout(() => {
        const result3 = 'Result from asyncOperation3';
        callback(null, result3);
    }, 2000);
}


// Callback hell scenario
asyncOperation1((err, result1) => {
    if (err) {
        console.error('Error:', err);
        return;
    }
    console.log(result1);
    asyncOperation2((err, result2) => {
        if (err) {
            console.error('Error:', err);
            return;
        }
        console.log(result2);
        asyncOperation3((err, result3) => {
            if (err) {
                console.error('Error:', err);
                return;
            }
            console.log(result3);
            // Nested callback continues...
        });
    });
});


As you can see, the code structure becomes increasingly convoluted and difficult to follow as each asynchronous operation is nested within the callback of the previous one.

This nesting can quickly lead to what's known as `callback hell` where the code becomes deeply nested and hard to understand.

As more operations are added, the pyramid of callbacks grows taller, making the code even more unwieldy and prone to errors.

This illustrates the pressing need for solutions to refactor and organize such code effectively as we will see below.


Mitigating Callback Hell

To mitigate callback hell, various strategies can be employed, such as modularization, named functions, and control flow libraries like `async.js` or `Promises`.

These techniques help organize and structure asynchronous code, making it more readable and maintainable.


Let's refactor the above code using some simple strategies:

a. Named functions

// Using named functions to refactor callback hell

// Fake Operations
function asyncOperation1(callback) {
    setTimeout(() => {
        const result1 = 'Result from asyncOperation1';
        callback(null, result1);
    }, 1000);
}

function asyncOperation2(callback) {
    setTimeout(() => {
        const result2 = 'Result from asyncOperation2';
        callback(null, result2);
    }, 1500);
}

function asyncOperation3(callback) {
    setTimeout(() => {
        const result3 = 'Result from asyncOperation3';
        callback(null, result3);
    }, 2000);
}


// Named functions for handling results
function handleResult1(err, result1) {
    if (err) {
        console.error('Error:', err);
        return;
    }
    console.log(result1);
    asyncOperation2(handleResult2);
}

function handleResult2(err, result2) {
    if (err) {
        console.error('Error:', err);
        return;
    }
    console.log(result2);
    asyncOperation3(handleResult3);
}

function handleResult3(err, result3) {
    if (err) {
        console.error('Error:', err);
        return;
    }
    console.log(result3);
    // Continue with additional operations if needed
}

asyncOperation1(handleResult1);


b. Promises

// Fake Operations

// Asynchronous operation 1: Simulate fetching data from an API
function asyncOperation1() {
    return new Promise((resolve) => {
        setTimeout(() => {
            resolve('Data from API');
        }, 1000); // Simulate 1-second delay
    });
}

// Asynchronous operation 2: Simulate processing data
function asyncOperation2(data) {
    return new Promise((resolve) => {
        setTimeout(() => {
            resolve(`Processed ${data}`);
        }, 1500); // Simulate 1.5-second delay
    });
}

// Asynchronous operation 3: Simulate saving data to a database
function asyncOperation3(data) {
    return new Promise((resolve) => {
        setTimeout(() => {
            resolve(`Saved ${data} to database`);
        }, 2000); // Simulate 2-second delay
    });
}


// Using Promises to chain our operations
asyncOperation1()
    .then((result1) => {
        console.log(result1);
        return asyncOperation2(result1);
    })
    .then((result2) => {
        console.log(result2);
        return asyncOperation3(result2);
    })
    .then((result3) => {
        console.log(result3);
        // Continue with additional operations if needed
    })
    .catch((error) => {
        console.error('Error:', error);
    });




c. Async / Await

// Fake Operations

// Asynchronous operation 1: Simulate fetching data from an API
function asyncOperation1() {
    return new Promise((resolve) => {
        setTimeout(() => {
            resolve('Data from API');
        }, 1000); // Simulate 1-second delay
    });
}

// Asynchronous operation 2: Simulate processing data
function asyncOperation2(data) {
    return new Promise((resolve) => {
        setTimeout(() => {
            resolve(`Processed ${data}`);
        }, 1500); // Simulate 1.5-second delay
    });
}

// Asynchronous operation 3: Simulate saving data to a database
function asyncOperation3(data) {
    return new Promise((resolve) => {
        setTimeout(() => {
            resolve(`Saved ${data} to database`);
        }, 2000); // Simulate 2-second delay
    });
}

// Using Async / Await to call our operations
async function main() {
    try {
        const result1 = await asyncOperation1();
        console.log(result1);

        const result2 = await asyncOperation2(result1);
        console.log(result2);

        const result3 = await asyncOperation3(result2);
        console.log(result3);

        // Continue with additional operations if needed
    } catch (error) {
        console.error('Error:', error);
    }
}

main();


By refactoring the code with named functions, Promises, or async/await, we avoid deep nesting of callbacks and make the code more readable and maintainable.

Asynchronous programming in Node.js poses unique challenges for error handling due to its non-blocking nature.

Errors can occur asynchronously, propagate through callback chains or Promise chains, and impact the reliability and stability of Node.js applications. In this section, we'll explore various techniques and best practices for effective error handling in asynchronous JavaScript code, specifically tailored for Node.js development.

1. Error-First Callbacks

Error-first callbacks, also known as `Node-style callbacks`, are a common convention in Node.js for handling errors in asynchronous code.

In this approach, asynchronous functions pass an error object as the first argument to the callback function, allowing developers to handle errors explicitly.

Let's start with a simple example

// Example of an error-first callback
const fs = require('fs');

// Asynchronous file reading with error-first callback
fs.readFile('example.txt', 'utf8', (err, data) => {
    if (err) {
        console.error('Error reading file:', err);
        return;
    }
    console.log('File content:', data);
});

In this example, the fs.readFile function asynchronously reads a file and passes an error object as the first argument to the callback function. If an error occurs during the file reading operation, the `err` parameter will contain the error object.

Advantages of Error-First Callbacks in NodeJS
  • Explicit error handling with clear indication of errors.
  • Widely adopted convention in the Node.js ecosystem.
Limitations of Error-First Callbacks in NodeJS
  • Callback hell: Nested and complex callback structures can lead to unreadable code.
  • Error propagation: Errors must be handled at each level of the callback chain.

2. Promises

Promises provide a more elegant and composable approach to error handling in asynchronous JavaScript code.

With Promises, errors are propagated through the Promise chain until they are handled using the .catch() method.


Let's see what that looks like (note that we must use the promise-based version of the `fs` module).

// Example of error handling with Promises
const fs = require('fs').promises;

// Asynchronous file reading with Promises
fs.readFile('example.txt', 'utf8')
    .then(data => {
        console.log('File content:', data);
    })
    .catch(err => {
        console.error('Error reading file:', err);
    });

In this example, the fs.promises.readFile function returns a Promise that resolves with the file content or rejects with an error. Errors can be handled using the .catch() method, allowing for cleaner and more readable error handling code.
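To make the propagation concrete, here is a minimal sketch (the error message is invented for illustration): an error thrown in the middle of a chain skips all remaining `.then` handlers and lands in the nearest `.catch`.

```javascript
const chain = Promise.resolve(1)
    .then(() => {
        throw new Error('boom at step 2'); // rejects the chain
    })
    .then(() => {
        console.log('never runs'); // skipped: the chain is already rejected
        return 'step 3';
    })
    .catch((err) => err.message); // handles the error, resolving the chain

chain.then((message) => console.log('Caught:', message));
```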


Advantages of Promises in Node.js:
  • Improved readability and maintainability compared to error-first callbacks.
  • Built-in error handling with the .catch() method.

3. Async/Await

Async functions and the `await` keyword provide a synchronous-like syntax for writing asynchronous code in Node.js, making error handling even more intuitive. Errors can be handled using traditional try-catch blocks.

// Example of error handling with async/await
const fs = require('fs').promises;

async function readFileAsync() {
    try {
        const data = await fs.readFile('example.txt', 'utf8');
        console.log('File content:', data);
    } catch (err) {
        console.error('Error reading file:', err);
    }
}

readFileAsync();

In this nodeJS example, the fs.promises.readFile function is awaited within an `async` function, allowing for the use of try-catch blocks to handle errors.

This syntax provides a synchronous-like experience for handling asynchronous operations in nodeJS.


Advantages of Async/Await in NodeJS:
  • Synchronous-like syntax for writing asynchronous code.
  • Exception handling with try-catch blocks for intuitive error handling.

Best Practices for Error Handling in NodeJS
  • Handle errors immediately to prevent them from propagating further.
  • Use consistent error handling patterns throughout the Node.js application.
  • Provide descriptive error messages to aid in debugging and troubleshooting.
  • Gracefully recover from errors whenever possible.
  • Test error handling scenarios to ensure robustness and reliability.


In the next section, we'll discuss different execution strategies in asynchronous Node.js code, including sequential, semi-parallel, and full-parallel execution, and their implications for error handling.

In asynchronous programming with Node.js, managing the execution of multiple tasks can be challenging, especially when dealing with callback-based asynchronous functions.

Different execution strategies can be employed to handle these tasks effectively.

In this section, we'll explore three main execution strategies: sequential, semi-parallel, and full-parallel execution, focusing on callback-based approaches.


Before starting, let's create a file named `fakeAsync.js` and add the following to it.

// fakeAsync.js
// Each task simulates one second of asynchronous work, then calls back
// in error-first style with a completion message.
function makeTask(name) {
    return function (callback) {
        setTimeout(() => {
            callback(null, `${name} completed`);
        }, 1000);
    };
}

// Export an array of ten tasks (CommonJS), so the examples below can
// require('./fakeAsync') and use .length, indexing, and iteration on it.
module.exports = Array.from({ length: 10 }, (_, i) => makeTask(`asyncTask${i + 1}`));


1. Sequential Execution

Sequential execution involves executing asynchronous tasks one after the other, ensuring that each task completes before starting the next one.

This approach is straightforward and suitable for scenarios where tasks have dependencies or require a specific order of execution.


Let's have an illustration:

// Import the fake async operations
const my_tasks = require('./fakeAsync');

function sequentialExecution(callback) {
    let index = 0;
    const no_tasks = my_tasks.length;

    function runner() {
        my_tasks[index]((err, result) => {
            if (err) {
                callback(err);
                return;
            }
            console.log(result);
            index++;
            if (index < no_tasks) {
                runner();
            } else {
                callback(null, result);
            }
        });
    }

    runner();
}

// Usage
sequentialExecution((err, result) => {
    if (err) {
        console.error('Error during sequential execution:', err);
    } else {
        console.log('Sequential execution completed:', result);
    }
});

Breakdown: the `sequentialExecution` function runs the tasks one at a time, starting each task only after the previous one's callback has fired; the final callback receives the last task's result.

Advantages of Sequential Execution
  • Simple and easy to understand.
  • Clear control flow with explicit error handling.
  • Ensures task dependencies are respected.
Limitations of Sequential Execution
  • May result in longer overall execution time if tasks are independent and could run concurrently.

2. Semi-Parallel Execution

Semi-parallel execution involves executing certain tasks concurrently while maintaining a specific order for dependent tasks. This approach strikes a balance between performance and control.

To implement dynamic concurrency control in the semi-parallel execution strategy, we can use a semaphore-like mechanism to limit the number of concurrent tasks running at any given time.


Let's see what that looks like:

// Import the fake async operations
const my_tasks = require('./fakeAsync');

function semiParallelExecution(concurrencyLimit, mytasks, callback) {
    let runningTasks = 0;
    const results = [];
    const tasks = mytasks.slice(); // copy, so the caller's array is not mutated

    function runTask() {
        // Start tasks until we hit the concurrency limit or run out of tasks
        while (runningTasks < concurrencyLimit && tasks.length > 0) {
            const task = tasks.shift();
            runningTasks++;
            task((err, result) => {
                if (!err) {
                    results.push(result);
                }
                runningTasks--;
                if (tasks.length > 0) {
                    runTask(); // a slot freed up: schedule the next task
                } else if (runningTasks === 0) {
                    callback(null, results); // every task has finished
                }
            });
        }
    }

    runTask();
}

// Usage


const concurrencyLimit = 2; // Dynamic concurrency limit

function final(err, results) {
    if (err) {
        console.error('Error during semi-parallel execution:', err);
    } else {
        console.log('Semi-parallel execution completed:', results);
    }
}

semiParallelExecution(concurrencyLimit, my_tasks, final);

Breakdown: The `semiParallelExecution` function starts up to `concurrencyLimit` tasks concurrently; whenever one of the running tasks completes, the next queued task is started in its place.

The `concurrencyLimit` determines the maximum number of concurrent tasks allowed to run simultaneously. The limit can be adjusted dynamically based on available system resources or other factors.

The `runTask` function recursively schedules tasks while respecting the concurrency limit. Once a task completes, it triggers the execution of the next task within the limit.

Advantages of Semi-Parallel Execution
  • Improved performance by executing independent tasks concurrently.
  • Maintains control over task execution concurrency, thereby increasing execution speed while limiting resource consumption.

Limitations of Semi-Parallel Execution
  • Increased complexity compared to sequential execution.
  • Dependency management can become challenging as the number of tasks grows.
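For comparison, the same semaphore idea can be sketched with Promises and async/await instead of raw callbacks (the `runLimited` helper and the `delay`-based tasks are illustrative, not a standard API): start up to `limit` workers, each repeatedly pulling the next task off a shared queue until it is empty.

```javascript
async function runLimited(tasks, limit) {
    const results = [];
    let next = 0;

    async function worker() {
        while (next < tasks.length) {
            const i = next++; // claim the next task index synchronously
            results[i] = await tasks[i]();
        }
    }

    // Launch up to `limit` workers and wait for them to drain the queue
    const workers = Array.from({ length: Math.min(limit, tasks.length) }, worker);
    await Promise.all(workers);
    return results;
}

// Usage with made-up promise-returning tasks
const delay = (ms, value) => new Promise((resolve) => setTimeout(() => resolve(value), ms));
const tasks = [1, 2, 3, 4].map((n) => () => delay(100, `task ${n} done`));

runLimited(tasks, 2).then((results) => console.log(results));
```

Because JavaScript is single-threaded, the synchronous `next++` claim is race-free: no two workers can grab the same task index.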

3. Full-Parallel Execution

Full-parallel execution involves executing all tasks concurrently without imposing any specific order or dependencies. This approach maximizes performance but requires careful consideration of potential concurrency issues.

Let's see how to quickly set that up.

// Import the fake async operations
const my_tasks = require('./fakeAsync');

const results = [];
const taskLength = my_tasks.length;

function fullParallelExecution() {
    my_tasks.forEach((task) => {
        task((err, result) => {
            if (!err) {
                results.push(result);
            }
            // Once every task has reported back, invoke the final callback
            if (results.length >= taskLength) {
                final(null, results);
            }
        });
    });
}

// Usage
function final(err, results) {
    if (err) {
        console.error('Error during full-parallel execution:', err);
    } else {
        console.log('Full-parallel execution completed:', results);
    }
}

fullParallelExecution();

The above is self-explanatory as this is the default strategy used by many developers even without being aware of it😉.

Advantages of Full-Parallel Execution
  • Shortest overall completion time, since no task waits for another.
  • Particularly effective for scenarios with many independent, I/O-bound tasks.
Limitations of Full-Parallel Execution
  • Lack of control over task execution order may lead to unpredictable results.
  • Care must be taken to avoid resource contention and bottlenecks.
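One more aside: when tasks return Promises rather than taking callbacks, full-parallel execution comes built in via `Promise.all` (which rejects as soon as any task fails) or `Promise.allSettled` (which always waits for every task). A minimal sketch with made-up tasks:

```javascript
const wait = (ms, value) => new Promise((resolve) => setTimeout(() => resolve(value), ms));

const parallelTasks = [
    () => wait(300, 'task A done'),
    () => wait(100, 'task B done'),
    () => wait(200, 'task C done'),
];

// All three timers run concurrently; results keep the input order
Promise.all(parallelTasks.map((task) => task()))
    .then((results) => console.log('All done:', results))
    .catch((err) => console.error('One task failed:', err));
```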

Choosing the Right Execution Strategy

The choice of execution strategy depends on factors such as task dependencies, performance requirements, and resource constraints.

Developers should carefully evaluate the trade-offs and choose the most appropriate strategy for their specific use case.

Conclusion

In this series part, we explored asynchronous flow control in nodeJS, covering execution strategies, error handling, dynamic concurrency control, and best practices.

By understanding these concepts, you can build efficient, scalable, and reliable applications. Asynchronous programming is essential for modern Node.js development, enabling responsiveness and scalability.

In the next part, we will look at blocking and non-blocking IO.

All Chapter Parts for NodeJs In Theory, An absolute Beginner’s Overview
  1. Chapter 1 , Part 1 : Introduction to NodeJS

    In this series part, I introduce nodeJS and some technical concepts associated with it. I also show how easy it is to setup and start a simple nodeJS web server.

  2. Chapter 1 , Part 2 : How to Install and Setup NodeJS

    In this series part, I run you through the various ways to install nodeJS. I also discuss how to install nvm and use it to switch between different node versions.

  3. Chapter 1 , Part 3 : How much JavaScript do you need to learn NodeJS

    In this series part, we explore the nuanced relationship between JavaScript and NodeJS, highlighting some subtle distinctions between the two environments.

  4. Chapter 1 , Part 4 : The v8 Engine and the difference Between NodeJS and the browser

    In this series part, we explore the V8 engine and how it interacts with nodeJS. We also discuss node’s event loop and uncover the mystery behinds node’s ability to handle concurrent operations.

  5. Chapter 1 , Part 5 : NPM, the NodeJS package manager

    Discover the essentials of npm, the powerful package manager for Node.js. Learn installation, management, publishing, and best practices

  6. Chapter 1 , Part 6 : NodeJS in Development Vs Production

    Explore how Node.js behaves differently in development and production environments. Learn key considerations for deploying Node.js applications effectively.

  7. Chapter 2 , Part 1 : Asynchronous Flow Control

    In this series part, we'll explore various aspects of asynchronous flow control in Node.js, from basic concepts to advanced techniques.

  8. Chapter 2 , Part 2 : Blocking vs Non-blocking I/O

    Explore the differences between blocking and non-blocking I/O in Node.js, and learn how to optimize performance and scalability.

  9. Chapter 2 , Part 3 : Understanding NodeJS Event loop

    Exploring the Node.js event loop by understanding its phases, kernel integration, and processes enabling seamless handling of asynchronous operations in your applications.

  10. Chapter 2 , Part 4 : The NodeJS EventEmitter

    Explore the power of Node.js EventEmitter: an essential tool for building scalable and event-driven applications. Learn how to utilize it effectively!

  11. Chapter 3 , Part 1 : Working with files in NodeJS

    Gain comprehensive insights into file management in Node.js, covering file stats, paths, and descriptors, to streamline and enhance file operations in your applications.

  12. Chapter 3 , Part 2 : Reading and Writing Files in NodeJS

    Uncover the fundamentals of reading and writing files in nodeJS with comprehensive examples and use cases for some widely used methods.

  13. Chapter 3 , Part 3 : Working with Folders in NodeJS

    Unlock the secrets of folder manipulation in Node.js! Explore essential techniques and methods for working with directories efficiently

  14. Chapter 4 , Part 1 : Running NodeJS Scripts

    Master the command line interface for executing nodeJS scripts efficiently. Learn common options and best practices for seamless script execution

  15. Chapter 4 , Part 2 : Reading Environment Variables in NodeJS

    Learn how to efficiently manage environment variables in nodeJS applications. Explore various methods and best practices for security and portability

  16. Chapter 4 , Part 3 : Writing Outputs to the Command Line in NodeJS

    Learn essential techniques for writing outputs in nodeJS CLI. From basic logging to formatting and understanding stdout/stderr.

  17. Chapter 4 , Part 4 : Reading Inputs from the Command Line in NodeJS

    Learn the various ways and strategies to efficiently read command line inputs in nodeJS, making your program more interactive and flexible.

  18. Chapter 4 , Part 5 : The NodeJS Read, Evaluate, Print, and Loop (REPL)

    Explore the power of nodeJS's Read, Evaluate, Print, and Loop (REPL). Learn how to use this interactive environment for rapid prototyping, debugging, and experimentation.

  19. Chapter 5 , Part 1 : Introduction to Testing in NodeJS

    Discover the fundamentals of testing in nodeJS! Learn about testing types, frameworks, and best practices for building reliable applications.

  20. Chapter 5 , Part 2 : Debugging Tools and Techniques in NodeJS

    Explore essential debugging tools and techniques in Node.js development. From built-in options to advanced strategies, and best practices for effective debugging.

  21. Chapter 6 , Part 1 : Project Planning and Setup

    Discuss the planning and design process for building our interactive file explorer in Node.js, focusing on core features, UI/UX design, and implementation approach and initial setup.

  22. Chapter 6 , Part 2 : Implementing Basic functionalities

    In this guide, we'll implement the basic functionalities of our app which will cover initial welcome and action prompts.

  23. Chapter 6 , Part 3 : Implementing Core Features and Conclusion

    In this guide, we'll complete the rest of the more advanced functionalities of our app including, create, search, sort, delete, rename and navigate file directories.