Geospatial data is becoming increasingly important in a variety of applications, from mapping and navigation to location-based services and geospatial analytics. MongoDB, with its support for geospatial indexes and queries, provides an excellent platform for storing and querying geographical data. In this article, we’ll dive into MongoDB’s geospatial features and show how you can leverage them with Node.js.

Setting Up the Data

Before we explore the queries, we need to populate our MongoDB collection with geospatial data. I have uploaded the necessary JSON files, sl-places.json and sl-areas.json, to my GitHub repository. These files contain geographical data for various places and areas in Sri Lanka, respectively.
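
For reference, both files store their geometries in GeoJSON format. A place document might look like the following (a hypothetical example; the field names are inferred from the queries used later in this article):

{
  "name": "Kirinda Beach",
  "location": { "type": "Point", "coordinates": [81.257, 6.2155] }
}

Area documents follow the same pattern but store a Polygon geometry in an area field rather than a Point in location.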

First, let’s write a seeder script to load this data into MongoDB. You can find the full code in my repository:

const fs = require("fs");
const { getDB, closeDB } = require("./db");

(async () => {
  const initialSLPlaces = JSON.parse(
    fs.readFileSync("sl-places.json", "utf-8")
  );
  const initialSLAreas = JSON.parse(fs.readFileSync("sl-areas.json", "utf-8"));
  
  const db = await getDB();
  
  // Delete existing records to avoid duplication
  await db.collection("sl-places").deleteMany();
  
  // Insert new data into the collection
  await db
    .collection("sl-places")
    .insertMany([...initialSLPlaces, ...initialSLAreas]);
  
  // Create a 2dsphere index to enable efficient geospatial queries
  await db.collection("sl-places").createIndex({ location: "2dsphere" });
  
  // Close the database connection
  closeDB();
})();

Explanation:

  • getDB: Connects to the MongoDB database (a minimal sketch of a possible db.js follows this list).
  • deleteMany: Clears the collection before inserting new data to prevent duplicates.
  • insertMany: Inserts the data from sl-places.json and sl-areas.json into the sl-places collection.
  • createIndex({ location: "2dsphere" }): Creates a 2dsphere index on the location field to enable efficient geospatial queries.
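
The db.js helper itself is not shown in this article. As a minimal sketch, it might look something like this with the official mongodb driver (the connection string and default database name are assumptions):

// db.js - a minimal sketch, assuming a local MongoDB instance
const { MongoClient } = require("mongodb");

const client = new MongoClient("mongodb://localhost:27017");
let connected = false;

// Connect once, then hand out database handles
async function getDB(dbName = "playground") {
  if (!connected) {
    await client.connect();
    connected = true;
  }
  return client.db(dbName);
}

// Close the underlying connection when the script is done
function closeDB() {
  return client.close();
}

module.exports = { getDB, closeDB };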

Why Is Creating an Index Important?

Indexes are critical for improving the performance of database queries, especially when dealing with large datasets. In the context of geospatial data, MongoDB uses 2dsphere indexes to support queries that deal with spherical geometry, such as finding points within a certain distance from a location or finding points within a polygon.

Without a 2dsphere index on the location field, geospatial queries would be much slower, as MongoDB would need to scan every document in the collection to perform the query. By creating a 2dsphere index, MongoDB can efficiently query geospatial data by using the index to quickly find relevant documents based on their geographic location.

1. Searching Places Near a Location

One of the most common geospatial queries is finding places that are near a specific location. In MongoDB, we can use the $near operator to perform this search. The following Node.js code demonstrates how to search for places within 500 meters of Kirinda Beach (coordinates: [81.2570, 6.2155]).

const { getDB, closeDB } = require("./db");

(async () => {
  const db = await getDB();

  // Search near Kirinda Beach within 500m
  const places = await db
    .collection("sl-places")
    .find({
      location: {
        $near: {
          $geometry: {
            type: "Point",
            coordinates: [81.2570, 6.2155], // Coordinates for Kirinda Beach
          },
          $maxDistance: 500, // Max distance in meters
        },
      },
    })
    .toArray();

  console.log(places); // Display the found places
  closeDB(); // Close the database connection
})();

Explanation:

  • $near: The $near operator finds documents within a specified distance from a point. In this case, we are searching for places within 500 meters of the given coordinates.
  • $geometry: Defines the point around which we are searching.
  • $maxDistance: Limits the search to a maximum distance, in this case, 500 meters.

Use Case:

This query is ideal for applications that need to find nearby places, such as a location-based service for tourists, or a navigation app that shows nearby points of interest.

2. Searching for Places Within a Polygon

In addition to searching near a location, MongoDB allows you to search for places within a specific geographic area defined by a polygon. This can be useful for querying places that lie within a predefined region or boundary.

The following example demonstrates how to search for places within a rectangular polygon defined by four corners:

const { getDB, closeDB } = require("./db");

(async () => {
  const db = await getDB();

  const places = await db
    .collection("sl-places")
    .find({
      location: {
        $geoWithin: {
          $geometry: {
            type: "Polygon",
            coordinates: [
              [
                [81.2, 6.2], // Bottom-left corner
                [81.6, 6.2], // Bottom-right corner
                [81.6, 6.5], // Top-right corner
                [81.2, 6.5], // Top-left corner
                [81.2, 6.2], // Closing the polygon
              ],
            ],
          },
        },
      },
    })
    .toArray();

  console.log(places); // Display places within the polygon
  closeDB(); // Close the database connection
})();

Explanation:

  • $geoWithin: The $geoWithin operator is used to find documents within a specified geometry, which can be a polygon, circle, or other shapes.
  • $geometry: Defines the geometry (in this case, a polygon) to use for the search. The coordinates are provided in an array of longitude and latitude points that define the boundaries of the polygon.

Use Case:

This query is particularly useful for applications that need to filter results within a geographical region, such as finding all points of interest within a park, city, or administrative region.

3. Searching for Areas that Intersect with a Point

Another geospatial operator in MongoDB is $geoIntersects, which allows you to search for areas that intersect with a specific point. This is useful for cases where you need to find out which regions or zones contain a given location.

Below is an example of how to search for areas that intersect with the point [81.7302, 7.2801]:

const { getDB, closeDB } = require("./db");

(async () => {
  const db = await getDB("playground");

  const area = await db
    .collection("sl-places")
    .find({
      area: {
        $geoIntersects: {
          $geometry: { type: "Point", coordinates: [81.7302, 7.2801] },
        },
      },
    })
    .toArray();

  console.log(area); // Display areas that intersect with the point
  closeDB(); // Close the database connection
})();

Explanation:

  • $geoIntersects: The $geoIntersects operator finds documents whose geometry intersects with the specified geometry.
  • $geometry: Defines the geometry for the intersection query. In this case, we are searching for areas that intersect with a specific point.

Use Case:

This query can be used to determine which administrative regions or zoning areas intersect with a given point, such as determining which district a particular location belongs to.

Conclusion

MongoDB’s geospatial capabilities make it an excellent choice for applications that deal with location-based data. By using the 2dsphere index and geospatial operators like $near, $geoWithin, and $geoIntersects, developers can build powerful, location-aware applications.

In this article, we’ve demonstrated how to perform basic geospatial queries with MongoDB and Node.js. We also covered the importance of creating a 2dsphere index, which is crucial for ensuring fast and efficient geospatial queries on large datasets.

By integrating MongoDB’s geospatial queries into your application, you can create more intelligent, context-aware experiences for your users.

Check out the complete code on GitHub, and don’t forget to give it a star :)

ASMohamedFaheemAnver/BasicGeoSearchInMongoD
As developers, we often find ourselves juggling between writing code, testing it, and finally pushing it live for users to experience. Managing these steps manually can be stressful and time-consuming. Luckily, tools like AWS and GitHub Actions take much of this burden off our shoulders, making the process smoother and more efficient. Let’s dive into how AWS streamlines developer workflows, from continuous integration (CI) to continuous delivery (CD), making our lives easier.
Continuous Integration (CI) — Catching Bugs Early
Imagine you’re working on a feature with your team, and everyone pushes their code to a shared repository. Now, how do we make sure that new code doesn’t break the app? This is where Continuous Integration (CI) shines. In simple terms, CI is a practice where developers frequently integrate their code into a central repository. Each time someone pushes code, automated tests and builds run to ensure nothing’s broken.
For example, let’s say you’re using GitHub Actions as your CI tool. When you push your code, GitHub Actions kicks off a series of tests and checks to make sure your new code integrates well with the existing codebase. Within minutes, you receive feedback on whether the tests have passed or failed.
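As an illustration, a minimal workflow file for a Node.js project might look like the sketch below (the file path and commands are assumptions, not taken from a specific repository):
# .github/workflows/ci.yml - a minimal CI sketch for a Node.js project
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4      # Fetch the repository
      - uses: actions/setup-node@v4    # Install Node.js
        with:
          node-version: 20
      - run: npm ci                    # Install dependencies
      - run: npm test                  # Run the test suite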
Why is CI Important?
  • Catch bugs early: Developers get immediate feedback on their changes.
  • Speedy delivery: Code is automatically tested, enabling faster rollouts.
  • Happier teams: No one is blocked waiting for feedback or fixing broken builds at the last minute.
With CI tools like GitHub Actions or AWS CodeBuild, the code is tested almost instantly after each change, so everyone on the team can focus on writing more code rather than firefighting bugs.
Continuous Delivery (CD) — Moving from Tested to Ready
Once your code has passed all the tests in the CI phase, the next step is to make sure it’s ready for deployment. Continuous Delivery (CD) helps developers automate this process so that the code is always in a deployable state.
AWS offers AWS CodePipeline, which automates the entire process — from pulling in code from a source repository, running it through build and test stages, and finally deploying it to production. Whether you’re making a small fix or rolling out a big feature, the pipeline ensures the process is smooth, fast, and reliable.
Stages in AWS CodePipeline
Here’s how AWS CodePipeline works:
  • Source: The pipeline starts when you push new code to a repository like GitHub, CodeCommit, or S3.
  • Build: Code is compiled, and tests are run. This is where tools like AWS CodeBuild come in to do the heavy lifting of building the code and running tests.
  • Deploy: Once the code is tested and verified, it’s deployed to your desired environment, be it a staging or production environment.
Each stage in the pipeline communicates seamlessly, ensuring that the tested and built code moves swiftly between stages, thanks to AWS’s powerful automation.
Artifacts: Building Blocks of Your Pipeline
During the build and deployment process, various artifacts (like compiled code, test results, or configuration files) are created. These artifacts are essential outputs at different stages of your CI/CD pipeline. AWS stores them in S3 buckets, allowing the next stage in the pipeline to use them.
Imagine this as passing a baton in a relay race; each stage builds upon the previous one, passing critical information (artifacts) to the next stage for a flawless transition.
Troubleshooting with AWS CodePipeline
If anything goes wrong, AWS provides excellent tools for troubleshooting. The pipeline interface visually shows where issues arise. Logs are captured in CloudWatch, and you can even set up notifications via email or Slack. If you face permission issues, double-checking the IAM roles can help resolve most hiccups.
AWS CodeBuild — Your Build Engine
Let’s dive a bit deeper into AWS CodeBuild. Think of it as your automated builder. It compiles your source code, runs tests, and produces deployable packages. What’s cool is that it’s fully managed — you just tell it what to do via a simple buildspec.yml file, and it takes care of the rest.
What’s in a BuildSpec?
A buildspec.yml file is essentially a script that defines what AWS CodeBuild should do at each phase. For example, you can tell it to:
  • install: Install any dependencies.
  • pre_build: Prepare the environment before the actual build.
  • build: Compile the code and run tests.
  • post_build: Final steps like zipping the output or creating packages.
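As a sketch, a buildspec.yml for a Node.js project might look like this (the specific commands are assumptions for illustration):
# buildspec.yml - a minimal sketch for a Node.js project
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 20
    commands:
      - npm ci            # Install dependencies
  pre_build:
    commands:
      - npm run lint      # Prepare and verify the environment
  build:
    commands:
      - npm test          # Run tests
      - npm run build     # Compile the code
  post_build:
    commands:
      - echo "Build completed"
artifacts:
  files:
    - '**/*'
  base-directory: build   # Output handed to the next pipeline stage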
Wrapping It Up: From Code to Cloud
In the world of modern software development, speed, reliability, and automation are key. By integrating tools like AWS CodePipeline, AWS CodeBuild, and GitHub Actions, developers can easily take their code from the first commit all the way to a production-ready deployment.
These AWS services take the repetitive, manual work out of our hands, allowing us to focus on what we do best — building great applications. So next time you push that code to GitHub, sit back, relax, and let AWS do the heavy lifting!
What is Machine Learning?
“In just the last five or ten years, machine learning has become a critical way, arguably the most important way, most parts of AI are done,” said MIT Sloan professor Thomas W. Malone, the founding director of the MIT Center for Collective Intelligence.
Machine learning is not just image processing, chatbots, or social media suggestions. It is a powerful field that transforms industries and impacts nearly every part of our daily lives. From diagnosing diseases in healthcare to automating tasks in finance and even predicting environmental changes, machine learning is applied in diverse areas well beyond common uses like recommendation engines or digital assistants. By enabling computers to learn from data, ML opens possibilities for innovation and efficiency in areas like personalized education, smart cities, supply chain optimization, and even climate modeling.
Machine Learning (ML) is a branch of artificial intelligence that enables computers to learn from data, recognize patterns, and make decisions without being explicitly programmed with step-by-step instructions. Instead of following fixed rules, machine learning models improve over time through exposure to new information and experiences, making them adaptable to new scenarios.
A Layman’s Definition of Machine Learning
Imagine teaching a computer not by giving detailed instructions but by letting it “learn” from examples. In machine learning, computers analyze a large set of data, find patterns, and use those insights to make predictions or decisions. This means that rather than being told precisely what to do, the computer becomes capable of figuring things out on its own by learning from experience.
Example: Email Spam Filters
Consider how email spam filters work. Traditionally, you’d need to tell the program exactly what spam looks like. With machine learning, however, the model analyzes thousands of email samples — some labeled as spam and others as regular mail. By studying common patterns and keywords in the spam emails, the machine learns to differentiate spam from legitimate messages. Over time, as it processes more examples, the filter improves, becoming more effective at catching spam emails, even as spammers change their tactics.
Types of Machine Learning
Now, let’s look at machine learning in more detail. Machine learning can be broadly categorized into three main types based on how models learn from data. Let me explain them one by one.
1. Supervised Learning
In supervised learning, the machine is trained on a labeled dataset, meaning the input data is paired with the correct output. The model learns by finding relationships between the input and the output values, which helps it make accurate predictions on new data.
  • Spam Detection: Classifying emails as spam or not based on labeled data.
  • Medical Diagnosis: Predicting diseases based on past patient data and diagnoses.
  • Stock Market Prediction: Estimating future stock prices using historical data trends.
2. Unsupervised Learning
Unlike supervised learning, unsupervised learning deals with unlabeled data. The model tries to find hidden patterns, relationships, or structures within the data without any predefined label.
  • Customer Segmentation: Grouping customers based on purchasing behavior.
  • Anomaly Detection: Identifying fraudulent transactions in banking systems.
  • Market Basket Analysis: Finding product purchase patterns in retail stores.
3. Reinforcement Learning
Reinforcement learning is a type of machine learning where an agent learns by interacting with its environment and receiving feedback in the form of rewards or penalties. The goal is to maximize long-term rewards by making a sequence of decisions.
  • Self-Driving Cars: Learning to navigate roads by trial and error.
  • Game Playing AI: AI systems like AlphaGo learn to play board games by competing with themselves.
  • Robotics: Training robots to optimize movement and complete tasks efficiently.
How do Machine Learning Models Work?
Machine learning models learn from data through an iterative process that involves understanding patterns, improving accuracy, and making predictions or decisions based on experience. The general workflow of machine learning, from data collection to deployment, mirrors the way humans learn by observing, analyzing, and adapting based on feedback. Let’s break down each step and explain how ML models learn at every stage.
1. Data Collection
Machine learning models rely on large amounts of data to learn effectively. The more diverse and high-quality the data, the better the model can generalize to new situations.
How the model learns from data:
  • Models analyze raw data from sources like databases, sensors, or web scraping.
  • Through preprocessing (removing inconsistencies, missing values, and duplicates), models ensure they are learning from clean and meaningful data.
  • The model observes input patterns to identify correlations and trends that might not be obvious to humans.
Example: In fraud detection, data from user transactions (e.g. timestamps, locations, amounts) is collected. The model learns the normal spending behavior of users and identifies anomalies as potential fraud.
2. Model Selection and Training
Training is where the actual learning happens. The model is exposed to historical data and adjusts its internal parameters (weights and biases) to recognize patterns and relationships.
How the model learns from data:
  • Models learn by adjusting their predictions based on labeled data (in supervised learning) or by discovering patterns on their own (in unsupervised learning).
  • Algorithms like neural networks use iterative optimization techniques (e.g. gradient descent) to minimize prediction error (see the toy sketch after the example below).
  • Training involves feeding the model with input-output pairs repeatedly until it learns to make accurate predictions.
Example: In spam detection, the model is trained on a thousand labeled emails to understand what features (e.g. specific keywords, sender addresses) correlate with spam messages.
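To make the idea of iterative optimization concrete, here is a toy gradient-descent loop in JavaScript that fits a single weight by minimizing squared error (a simplified illustration, not production ML code):
// Toy gradient descent: fit y = w * x by minimizing mean squared error
const data = [{ x: 1, y: 2 }, { x: 2, y: 4 }, { x: 3, y: 6 }]; // true w is 2
let w = 0; // initial guess for the weight
const learningRate = 0.01;

for (let step = 0; step < 1000; step++) {
  // Gradient of the mean squared error with respect to w
  let grad = 0;
  for (const { x, y } of data) {
    grad += 2 * (w * x - y) * x;
  }
  grad /= data.length;
  w -= learningRate * grad; // step against the gradient
}

console.log(w); // approaches 2, the weight that best fits the data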
3. Evaluation
Once the model is trained, it needs to be evaluated to ensure it generalizes well to new, unseen data. Evaluation helps detect issues like overfitting (memorizing training data) or underfitting (failing to learn).
How the model learns from data:
  • The model is tested on unseen data (validation/test sets) to measure accuracy, precision, recall, and other metrics.
  • Fine-tuning is done by adjusting hyperparameters (e.g. learning rate, number of layers) to enhance performance.
  • Evaluation provides feedback to improve learning by highlighting gaps in knowledge.
Example: A self-driving car model is tested in different environments to check how well it adapts to various road conditions.
4. Deployment and Monitoring
Once the model is deployed in real-world scenarios, it continues to learn and improve over time by processing new incoming data. Continuous monitoring ensures the model remains accurate and relevant.
How the model learns from data:
  • Models update themselves using real-world data, detecting changes in patterns over time (concept drift).
  • Feedback loops allow users to correct errors, helping the model learn from mistakes.
  • Retraining is performed periodically with fresh data to avoid degradation in performance.
Example: In a recommendation system like Netflix, the model adapts to changing user preferences by analyzing new viewing habits and interactions.
Conclusion
Machine learning has revolutionized the way we interact with technology and solve complex problems across industries. From healthcare and finance to entertainment and beyond, ML enables systems to learn from data, adapt over time, and make intelligent decisions with minimal human interaction.
As we’ve explored, machine learning models follow a structured learning process — starting from data collection and feature engineering to model training, evaluation, and deployment. Each step plays a crucial role in ensuring that the model is accurate, efficient, and continuously improving.
Understanding how machine learning works is essential for anyone looking to harness its potential, whether you’re a business professional, developer, or enthusiast. With advancements in AI and data science, the possibilities of ML are expanding rapidly, opening new doors for automation, personalization, and innovation.
Thank you for reading.
I hope it helped.
Working in the tech industry is like solving a series of intriguing puzzles. As engineers, we’re tasked with addressing complex problems and crafting efficient solutions. It’s a job many find fascinating, yet it demands going the extra mile to overcome challenges. While solving these problems can be immensely rewarding, it’s no secret that the journey often comes with its fair share of frustration.
Recently, I had the privilege of being part of our company’s End-of-Year (EOY) performance evaluation cycle. This process, which involves recognizing team members’ efforts through promotions and salary adjustments, gave me a unique perspective as a manager. It was both an exciting and humbling experience to share the outcomes with my team. The majority of my colleagues were pleased with the results, while a few took the opportunity to share constructive feedback.
This experience taught me something profound: how important it is for people to feel that their work is recognized and appreciated. Recognition isn’t just a reward — it’s a powerful motivator that inspires individuals to excel and contribute to their fullest potential.
Why Recognition Matters
In the fast-paced world of tech, where challenges are constant and burnout is a looming risk, appreciation becomes a critical currency. A simple acknowledgment of someone’s effort can have an extraordinary impact. It boosts morale, fosters a positive work environment, and motivates individuals to push through tough situations.
This is especially true in remote work settings, where isolation can make it harder for team members to feel seen or valued. Surrounding yourself with colleagues who appreciate your contributions and offer support during tough times can make all the difference.
Appreciation Beyond the Workplace
As I reflected on the importance of recognition in the workplace, I began to wonder: If appreciation plays such a vital role in our professional lives, how does it translate to our personal lives?
Coming from an Asian background, I grew up in a culture deeply rooted in family values. Our lives often revolve around our loved ones — parents, siblings, partners, and friends. But how often do we pause to express gratitude to the people who have sacrificed so much for us?
Take our parents, for example. They devote their entire lives to ensuring we have everything we need. Mothers cook meals for us daily, fathers provide stability, and their sacrifices often blend into the background of our lives. When something happens regularly, it becomes normalized — so much so that we stop seeing it as extraordinary. It’s only when these actions are no longer there that we realize their true value.
Equally important is the relationship we have with our spouse or partner. A wife or girlfriend often plays a pivotal role in our lives, offering emotional support, encouragement, and unconditional love through life’s highs and lows. Their acts of care — whether it’s listening to us vent after a tough day, celebrating our achievements, or quietly supporting our dreams — can become so seamlessly woven into our everyday existence that we may forget to acknowledge them. Yet their contributions are invaluable, and showing them appreciation isn’t just about saying “thank you” but about creating moments that make them feel seen, loved, and cherished.
A home-cooked meal, a supportive call from a friend, a helping hand from a colleague, or the unwavering love of a wife or girlfriend may feel like part of life’s routine. These acts of care often become so seamlessly woven into our daily lives that we stop noticing their significance. But when these small acts disappear, the void they leave makes us wish we had expressed our gratitude while we could. This realization reminds us: even the most “normal” gestures of care are worth recognizing and appreciating in the moment.
A Simple Thank You Goes a Long Way
Whether it’s in the workplace or at home, appreciation is a universal need. It’s a small gesture that can have a lasting impact. A heartfelt “thank you,” a warm hug, or even a simple acknowledgment of someone’s effort can strengthen relationships and foster a sense of belonging.
As we navigate our busy lives, let’s make it a point to appreciate the people who matter most — be it our team members, parents, spouses, or friends. Let’s recognize their contributions, both big and small, and make sure they know how much they are valued.
Final Thought:
Appreciation is a bridge that connects us, whether in the workplace or at home. Let’s cross it more often.
Once upon a time, in the land of React, there lived a busy worker named Render. Render’s job was simple: to build and update a magical world known as the User Interface (UI), which people all over the world relied on for their daily tasks.
The First Call: The Initial Render
It all began when a brand-new React Component arrived in town. When the component was first born, Render was called to bring its layout to life. This is what the villagers called the Initial Render.
Render would carefully read the JSX spellbook that the component provided and translate it into real DOM elements — structures that made up the web page. Once Render was done, a beautiful piece of the UI was ready for users to see.
But React’s journey didn’t stop there…
React’s Superpower: The State
As time passed, the magical User Interface needed to stay fresh and responsive. The villagers — who interacted with the UI — loved this. But it wasn’t always easy for Render to keep the interface updated.
You see, the UI was driven by State — a hidden force that represented the component’s inner feelings. Whenever the state changed, the component’s appearance had to change, too. This called Render back into action.
But there was a problem…
Too Many Calls: Render’s Dilemma
Imagine if Render was called every single time the state changed, no matter how small the change. It would be exhausting! Imagine this:
  • The state changes when someone types in a text box.
  • The state changes again when they click a button.
  • And again, and again, with each tiny interaction.
Without a clever solution, Render would get overwhelmed, and the UI would start lagging, frustrating the users. The town was at risk of becoming slow and unresponsive.
Batching to the Rescue
Fortunately, React had a wise mentor known as Batching. Batching had the wisdom to delay calling Render until it had gathered multiple state changes together. Instead of making Render re-render the UI every time a single thing changed, Batching grouped all the changes and triggered only one render.
This saved Render a lot of work, allowing him to refresh the UI just once, even when many changes happened behind the scenes.
Let’s say someone clicked a button, which changed multiple pieces of state (like updating a counter and changing some text). Instead of calling Render twice, Batching would hold off and call Render only once when all the changes were ready.
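As a minimal sketch (the component and state names here are hypothetical), one click below updates two pieces of state but triggers only a single re-render:
import { useState } from "react";

function OrderPanel() {
  const [count, setCount] = useState(0);
  const [message, setMessage] = useState("Waiting...");

  const handleClick = () => {
    // Two state updates inside one event handler...
    setCount((c) => c + 1);
    setMessage("Order placed!");
    // ...but React batches them into a single re-render
  };

  console.log("render"); // logs once per click, not twice
  return <button onClick={handleClick}>{message} ({count})</button>;
}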
The Second Call: Re-rendering
Now that Batching had Render’s back, life became much easier. Every time the state changed, React checked in with Render to re-run the component’s logic and refresh the UI.
But React wasn’t wasteful — if the state didn’t change, Render wouldn’t get involved at all! React was efficient, only calling Render when something important needed updating.
This is what the villagers called a Re-render. And the best part? Thanks to Batching, the UI stayed fast and snappy.
Reconciliation: The Secret Dance of the Virtual DOM
Here’s the most magical part of the story. Every time Render was called to re-render the component, React didn’t blindly tear everything down and start over. That would be wasteful!
Instead, React had a secret weapon — the Virtual DOM.
The Virtual DOM was like a blueprint of the UI. Every time Render re-ran, React used the Virtual DOM to compare the old version of the UI to the new one. This comparison process was called Reconciliation.
During reconciliation, React looked for differences between the two versions of the UI. Maybe a button’s color changed, or a paragraph of text was updated. Whatever the changes were, React would make just those updates to the real DOM, leaving the rest of the UI untouched.
By using reconciliation, React could make the UI updates surgical and efficient, keeping everything running smoothly for the users.
The Happy Ending: A Smooth, Responsive UI
And so, thanks to the teamwork between Batching, Render, and Reconciliation, React’s UI stayed fast, responsive, and delightful for the villagers.
The magical system allowed users to interact with the app, change its state, and see updates instantly — without any lag or delay. Render didn’t have to overwork, and React’s Virtual DOM kept everything running smoothly behind the scenes.
The land of React flourished, and the users were happy.
The end… or rather, the beginning of your own React journey!
Takeaways from the Story
  • Initial Render: The first time React creates the UI from scratch based on the component’s JSX.
  • Batching: Groups multiple state updates into one render, preventing performance slowdowns.
  • Re-render: Happens when state or props change; React calls the component again to refresh the UI.
  • Reconciliation: React’s diffing algorithm that compares the old and new virtual DOM to update only what’s changed in the real DOM.
This is the magic behind React’s performance, making sure your apps feel fast and responsive. So next time you use React, remember that Batching, Render, and Reconciliation are working hard behind the scenes to give you a great experience!
Conclusion
And there you have it! React’s core concepts, told through the story of a tireless worker, some smart batching, and an efficient reconciliation system. Hopefully, this gives you a better grasp of what happens behind the scenes in React, and how it ensures your applications are optimized for performance.
React Native is celebrated for combining the strengths of native development with the flexibility of JavaScript. A standout feature of React Native is its Bridge, which enables smooth communication between JavaScript and native code. In this blog, I will demonstrate how I implemented the concepts of the React Native Bridge and native modules, using my repository, TheBridge, as an example.
What is the React Native Bridge?
The React Native Bridge acts as a translator that facilitates communication between the JavaScript thread and the native threads (iOS/Android). React Native operates in three main threads:
  1. JavaScript Thread: Where your React Native JavaScript code runs.
  2. Native Modules Thread: Where the native code executes.
  3. UI Thread: Responsible for rendering the user interface.
When JavaScript needs to interact with native APIs (e.g., device sensors, file system access), it uses the bridge to send messages to the native code.
Why Use Native Modules?
While React Native offers many built-in components and APIs, certain features may require access to platform-specific functionalities that are not exposed by default. For instance, accessing low-level hardware features or third-party native libraries necessitates creating custom native modules.
How I Implemented TheBridge Repository
TheBridge repository demonstrates how to build a React Native bridge and native modules. Here’s a detailed breakdown of how I implemented it.
Step 1: Setting Up the Project
To begin, I created a new React Native project using the community CLI:
npx @react-native-community/cli@latest init exampleProject
This generated a boilerplate React Native project. I then set up the necessary files for implementing the native module.
Step 2: Implementing the Android Native Module
i. Creating a Native Module in Kotlin
In the android directory of the project, I navigated to the java/com/exampleproject folder and created a new Kotlin class, CounterModule.kt. Below is the implementation:
package com.exampleproject

import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.bridge.ReactContextBaseJavaModule
import com.facebook.react.bridge.ReactMethod
import com.facebook.react.bridge.Callback
import com.facebook.react.bridge.Promise

class CounterModule(reactContext: ReactApplicationContext) :
    ReactContextBaseJavaModule(reactContext) {
    private var counter = 0

    override fun getName(): String {
        // The name this module is exposed under in JavaScript (NativeModules.Counter)
        return "Counter"
    }

    // Expose increment method
    @ReactMethod
    fun increment(callback: Callback) {
        counter++
        // Call the callback with the updated counter value
        callback.invoke(counter)
    }

    // Expose decrement method as a Promise
    @ReactMethod
    fun decrement(promise: Promise) {
        if (counter > 0) {
            counter--
            promise.resolve(counter) // Resolve the promise with the updated counter value
        } else {
            promise.reject("COUNTER_ERROR", "Counter value cannot be less than 0")
        }
    }
}
In this implementation, I created a CounterModule class that includes two methods:
  • increment: Increases the counter value and returns it via a callback.
  • decrement: Decreases the counter value if it is greater than 0 and resolves it as a promise. If the counter is already at 0, it rejects the promise with an error.
ii. Registering the Module
Next, I registered the native module by creating a Package class in Kotlin:
package com.exampleproject

import android.view.View
import com.facebook.react.ReactPackage
import com.facebook.react.bridge.NativeModule
// Module register import
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.uimanager.ReactShadowNode
import com.facebook.react.uimanager.ViewManager

// Extend ReactPackage to register module
class CounterPackage : ReactPackage {
    override fun createNativeModules(
        reactContext: ReactApplicationContext
    ): MutableList<NativeModule> = listOf(CounterModule(reactContext)).toMutableList()

    override fun createViewManagers(
        reactContext: ReactApplicationContext
    ): MutableList<ViewManager<View, ReactShadowNode<*>>> = mutableListOf()
}
Finally, I linked the package in MainApplication.kt:
override fun getPackages(): List<ReactPackage> =
    PackageList(this).packages.apply {
        // Packages that cannot be autolinked yet can be added manually here, for example:
        add(CounterPackage())
    }
Step 3: Implementing the iOS Native Module
In addition to the Android implementation, let’s walk through how to implement the Counter native module for iOS using Swift. This will allow your app to access native functionality on both platforms seamlessly.
i. Creating a Native Module in Swift
For iOS, we need to create a CounterModule.swift file that contains the native code. Here's how I implemented it:
// Counter.swift
// exampleProject

import Foundation

// @objc(Counter) exposes this class to the Objective-C runtime under the name "Counter"
@objc(Counter)
class CounterModule: NSObject {
  private var count = 0

  // The underscore omits the external label for the first parameter; callback receives the result
  @objc
  func increment(_ callback: RCTResponseSenderBlock) {
    count += 1
    //    print(count);
    callback([count])
  }

  @objc
  func decrement(_ resolve: RCTPromiseResolveBlock, reject: RCTPromiseRejectBlock) {
    if count == 0 {
      let error = NSError(domain: "Counter", code: 200, userInfo: nil)
      reject("ERROR_COUNT", "count cannot be negative", error)
    } else {
      count -= 1
      resolve(count)
    }
  }

  // Returning true asks React Native to initialize this module on the main queue before the JS thread starts executing
  // Returning false means it is okay to initialize the module on a background thread
  @objc
  static func requiresMainQueueSetup() -> Bool {
    return true
  }
}
This Swift code defines a simple CounterModule with two functions:
  • increment: Increases the counter and returns the new value to JavaScript via a callback.
  • decrement: Decreases the counter, returning a promise to resolve the new value or reject it with an error if the counter is already 0.
ii. Creating the Objective-C Bridge (Counter.m)
In order to expose the Swift class to React Native, we need to declare its interface to the bridge in Objective-C. This is done by adding a Counter.m file in the iOS project:
// Counter.m
// exampleProject

#import <Foundation/Foundation.h>

// This will help us export the function to React Native
#import "React/RCTBridgeModule.h"

// Expose the Counter object
@interface RCT_EXTERN_MODULE(Counter, NSObject)

// Expose increment method
RCT_EXTERN_METHOD(increment : (RCTResponseSenderBlock)callback)

// Expose decrement promise
RCT_EXTERN_METHOD(decrement : (RCTPromiseResolveBlock)resolve 
                  reject : (RCTPromiseRejectBlock)reject)

@end
The RCT_EXTERN_MODULE and RCT_EXTERN_METHOD macros are used to expose the Swift methods to React Native. These methods are now available for use in the JavaScript code.
Step 4: Accessing the Native Module in JavaScript
Once the native module was registered, I accessed it in JavaScript as follows:
import { NativeModules } from 'react-native';

const { Counter } = NativeModules;

// Increment the counter
Counter.increment((newCounterValue) => {
  console.log(`Counter incremented: ${newCounterValue}`);
});

// Decrement the counter
Counter.decrement()
  .then((newCounterValue) => {
    console.log(`Counter decremented: ${newCounterValue}`);
  })
  .catch((error) => {
    console.error(`Error: ${error.message}`);
  });
This code imports the Counter module from NativeModules and calls the increment and decrement methods to manage the counter state.
Step 5: Testing the Integration
  • The application was run on an Android emulator and an iOS simulator.
  • The increment and decrement methods were invoked via button presses in the React Native app.
  • The counter values were confirmed by checking both the console logs and the display in the app.
Conclusion
Creating custom native modules using the React Native bridge allows developers to unlock the full potential of platform-specific features. By following the steps I outlined above, you can implement your own native modules for your app’s requirements.
Check out the complete code on GitHub, and don’t forget to give it a star :)
https://github.com/ASMohamedFaheemAnver/-
Imagine you’re in a coffee shop. You order a coffee, and instead of standing at the counter waiting until it’s done, you sit down and continue working or chatting. When the coffee is ready, the barista calls your name, and you collect your coffee. In the meantime, you haven’t wasted any time waiting — you’ve been doing other things.
This is how asynchronous operations work in JavaScript. They allow you to start a task (like ordering coffee), continue doing other things (like working on something else), and when that task finishes, a callback is triggered to handle the result (like the barista calling your name).
Why Asynchronous Operations Matter
In the world of JavaScript, asynchronous operations are crucial for building fast and responsive applications. Here’s why:
  1. Non-blocking: JavaScript runs on a single thread, meaning it can only handle one task at a time. Without asynchronous operations, long-running tasks (like waiting for your coffee) would block everything else. Async operations let JavaScript handle other tasks while waiting for things like network requests or timers.
  2. Improved performance: Instead of waiting for tasks like fetching data or loading resources, the app can continue responding to user actions, making it feel faster and smoother.
  3. Concurrency: Async tasks can run in parallel. For example, while you’re waiting for coffee, you can read a book, chat with friends, or check your email.
  4. Efficient I/O tasks: Web development often involves tasks like fetching data from a server. Asynchronous operations allow this data fetching to happen without freezing the rest of your application.
The Role of Callbacks
In the early days of JavaScript, we used callbacks for handling asynchronous tasks. A callback is just a function that gets passed into another function, which gets called once the task is complete.
Let’s go back to our coffee shop example:
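Since the snippet itself isn’t shown here, below is a minimal sketch (orderCoffee is an assumed name; handleOrder and the 2-second delay come from the walkthrough that follows):
// Simulate ordering coffee; the "barista" calls back when it's ready
function orderCoffee(callback) {
  console.log("Order placed");
  setTimeout(() => {
    callback("Your coffee is ready!"); // brewing takes 2 seconds
  }, 2000);
}

function handleOrder(message) {
  console.log(message);
}

orderCoffee(handleOrder);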
  1. Non-blocking: JavaScript runs on a single thread, meaning it can only handle one task at a time. Without asynchronous operations, long-running tasks (like waiting for your coffee) would block everything else. Async operations let JavaScript handle other tasks while waiting for things like network requests or timers.
  2. Improved performance: Instead of waiting for tasks like fetching data or loading resources, the app can continue responding to user actions, making it feel faster and smoother.
  3. Concurrency: Async tasks can run in parallel. For example, while you’re waiting for coffee, you can read a book, chat with friends, or check your email.
  4. Efficient I/O tasks: Web development often involves tasks like fetching data from a server. Asynchronous operations allow this data fetching to happen without freezing the rest of your application.
The Role of Callbacks
In the early days of JavaScript, we used callbacks for handling asynchronous tasks. A callback is just a function that gets passed into another function, which gets called once the task is complete.
Let’s go back to our coffee shop example:
Here’s how this plays out:
  • You place the order (“Order placed”).
  • While the coffee is being brewed, you continue with other tasks.
  • After 2 seconds, the callback function (`handleOrder`) runs, letting you know your coffee is ready.
But there was a problem…
The Problem with Callbacks: “Callback Hell”
While callbacks work, they can get messy quickly. As the complexity grows, you might end up with deeply nested functions that are hard to read and maintain, commonly called callback hell:
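A sketch of how that nesting builds up (step is a made-up helper that simulates one async stage):
function step(name, input, callback) {
  // Each stage finishes after 500 ms and passes its result on
  setTimeout(() => callback(input + " -> " + name), 500);
}

step("brew", "order", (a) => {
  step("add milk", a, (b) => {
    step("add sugar", b, (c) => {
      step("serve", c, (d) => {
        console.log("Done:", d); // four levels deep already
      });
    });
  });
});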
This quickly becomes a nightmare to manage. Thankfully, JavaScript has evolved, introducing Promises and later async/await, which allow us to write cleaner and more readable code.
Promises: A Better Way
A Promise in JavaScript is like saying, “I promise to do something when this task finishes.” You can attach .then() to a promise to handle the result and .catch() to deal with any errors.
Here’s the previous coffee shop example using Promises:
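Again the original snippet isn’t shown here; a minimal Promise-based sketch of the same flow might look like this:
function orderCoffee() {
  return new Promise((resolve) => {
    console.log("Order placed");
    // Calling reject(...) instead would signal a failed order
    setTimeout(() => resolve("Your coffee is ready!"), 2000);
  });
}

orderCoffee()
  .then((message) => console.log(message))
  .catch((error) => console.error("Order failed:", error));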
With promises, the code becomes more linear and easier to read. You can chain .then() to handle sequential tasks.
Async/Await: The Cleanest Approach
Finally, the introduction of async/await made asynchronous code even simpler to work with. It allows us to write asynchronous code that looks like synchronous code — more natural and readable.
Here’s how you’d rewrite the coffee shop example using async/await:
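A sketch of the same example with async/await, reusing the Promise-based orderCoffee from above:
function orderCoffee() {
  return new Promise((resolve) => {
    console.log("Order placed");
    setTimeout(() => resolve("Your coffee is ready!"), 2000);
  });
}

async function getCoffee() {
  try {
    const message = await orderCoffee(); // pauses here without blocking the main thread
    console.log(message);
  } catch (error) {
    console.error("Order failed:", error);
  }
}

getCoffee();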
With async/await, the code looks cleaner, as if the asynchronous operations are happening one after the other, even though they’re still non-blocking in nature.
The Engine Behind Asynchronous JavaScript: The Event Loop
Behind the scenes, JavaScript uses something called the event loop to handle asynchronous tasks. It’s like a smart traffic controller, making sure all tasks are executed in the right order without blocking each other.
Here’s how it works in a nutshell (a tiny demo follows these steps):
  1. When you run a piece of code, tasks go into the call stack, which is where synchronous tasks are handled.
  2. If an asynchronous task (like a network request or a timer) is encountered, it gets pushed to the event queue after it completes.
  3. The event loop continuously checks the call stack and event queue. When the call stack is empty, it takes tasks from the event queue and processes them.
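As mentioned above, here is a tiny demo of that ordering (the zero-delay timer still waits for the call stack to clear):
console.log("1: synchronous work");
setTimeout(() => console.log("3: pulled from the event queue"), 0);
console.log("2: still synchronous");
// Prints 1, 2, then 3; the timer callback runs only once the call stack is empty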
Wrapping Up
Asynchronous programming is at the heart of modern JavaScript, allowing developers to build fast, responsive applications that can handle multiple tasks at once. While callbacks were the original tool, they often led to messy code. Promises and async/await have since provided cleaner, more readable ways to handle asynchronous tasks.
Whether you’re fetching data, handling timers, or making a coffee shop order, understanding async in JavaScript helps you write code that’s efficient and maintainable.
React Native is celebrated for combining the strengths of native development with the flexibility of JavaScript. A standout feature of React Native is its Bridge, which enables smooth communication between JavaScript and native code. In this blog, I will demonstrate how I implemented the concepts of the React Native Bridge and native modules, using my repository, TheBridge, as an example.
What is the React Native Bridge?
The React Native Bridge acts as a translator that facilitates communication between the JavaScript thread and the native threads (iOS/Android). React Native operates in three main threads:
  1. JavaScript Thread: Where your React Native JavaScript code runs.
  2. Native Modules Thread: Where the native code executes.
  3. UI Thread: Responsible for rendering the user interface.
When JavaScript needs to interact with native APIs (e.g., device sensors, file system access), it uses the bridge to send messages to the native code.
Why Use Native Modules?
While React Native offers many built-in components and APIs, certain features may require access to platform-specific functionalities that are not exposed by default. For instance, accessing low-level hardware features or third-party native libraries necessitates creating custom native modules.
How I Implemented TheBridge Repository
The Bridge repository demonstrates how to build a React Native bridge and native modules. Here’s a detailed breakdown of how I implemented it.
Step 1: Setting Up the React Native Project
npx @react-native-community/cli@latest init exampleProject
This generated a boilerplate React Native project. I then set up the necessary files for implementing the native module.
Step 2: Implementing the Android Native Module
i. Creating a Native Module in Kotlin
In the android directory of the project, I navigated to the java/com/exampleproject folder and created a new Kotlin class, CounterModule.kt. Below is the implementation:
package com.exampleproject

import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.bridge.ReactContextBaseJavaModule
import com.facebook.react.bridge.ReactMethod
import com.facebook.react.bridge.Callback
import com.facebook.react.bridge.Promise

class CounterModule(reactContext: ReactApplicationContext) :
    ReactContextBaseJavaModule(reactContext) {
    private var counter = 0

    override fun getName(): String {
        // The name we can access inside native modules
        return "Counter"
    }

    // Expose increment method
    @ReactMethod
    fun increment(callback: Callback) {
        counter++
        // Call the callback with the updated counter value
        callback.invoke(counter)
    }

    // Expose decrement method as a Promise
    @ReactMethod
    fun decrement(promise: Promise) {
        if (counter > 0) {
            counter--
            promise.resolve(counter) // Resolve the promise with the updated counter value
        } else {
            promise.reject("COUNTER_ERROR", "Counter value cannot be less than 0")
        }
    }
}
In this implementation, I created a CounterModule class that includes two methods (a JavaScript usage sketch follows the list):
  • increment: Increases the counter value and returns it via a callback.
  • decrement: Decreases the counter value if it is greater than 0 and resolves it as a promise. If the counter is already at 0, it rejects the promise with an error.
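Here is a minimal sketch of how these methods can be called from JavaScript once the module is registered (the registration step follows):
import { NativeModules } from 'react-native';

const { Counter } = NativeModules; // "Counter" matches getName() above

// increment reports the new value through a callback
Counter.increment((value) => console.log('Counter is now', value));

// decrement returns a Promise, so .then/.catch (or async/await) works
Counter.decrement()
  .then((value) => console.log('Counter is now', value))
  .catch((error) => console.warn(error.message));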
ii. Registering the Module
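The registration code itself isn’t reproduced in this post, so below is a minimal sketch of the standard approach: a ReactPackage (CounterPackage is an assumed name) that exposes CounterModule. The package also has to be added to the list returned by getPackages() in MainApplication.
package com.exampleproject

import com.facebook.react.ReactPackage
import com.facebook.react.bridge.NativeModule
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.uimanager.ViewManager

class CounterPackage : ReactPackage {
    // Expose CounterModule over the bridge
    override fun createNativeModules(reactContext: ReactApplicationContext): List<NativeModule> {
        return listOf(CounterModule(reactContext))
    }

    // This package ships no custom views
    override fun createViewManagers(reactContext: ReactApplicationContext): List<ViewManager<*, *>> {
        return emptyList()
    }
}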
Building an enterprise web application is a complex undertaking, and the challenges that come with it can be daunting, even for experienced developers.
In 2022 we built a capital allocation app called Echelon. Today we will look back on our experience with designing and building Echelon.
When I look back at my notes, the primary challenge that stands out when building the product is creating a custom Gantt chart from scratch. We tried several free libraries, but they did not fulfill the requirements. After several attempts, our technical leader finally decided to start from scratch, which required extensive research and testing.
Gantt chart
When we started building this feature, we realized that the top three challenges we had to overcome were complexity, performance, and scalability. We had to ensure that the feature fulfilled all of its requirements and met our quality standards.
1. Complexity
In terms of complexity, we had to integrate features like drag and drop and Gantt charts from scratch, which required extensive research and testing. When developing these features, we had to research different libraries that could be used to fulfill the requirements. Based on the pros and cons of each library, we had to make quick and smart decisions without wasting valuable time. We also had to implement various validations to ensure users could not drop items in invalid positions or overlap them. Additionally, rendering objectives that spanned multiple months required significant effort to ensure they were correctly displayed and updated across different devices and screen sizes.
2. Performance
Regarding performance, we faced significant challenges when implementing drag-and-drop functionality and loading Gantt charts with many objectives. To address these issues, we used reusable components to streamline our code base. We extracted all the objectives into the parent component when the page loaded, reducing load times and improving the user experience.
3. Scalability
Finally, scalability became a concern as our application grew in size and complexity. We had to ensure that our application was flexible enough to accommodate changes in requirements and capable of handling increasing traffic and user demand.
Apart from these top challenges, one of our key concerns was getting the application ready for enterprise-level users, which required heavy rounds of testing and bug fixing. We had to test the application across many situations and scenarios to ensure users get the optimal experience throughout the application.
Building an enterprise web application is a complex and challenging task that requires careful planning, testing, and collaboration. However, leveraging the right tools and approaches makes it possible to overcome the challenges and deliver a successful application that meets the needs of the business and its users.
In the fast-paced world of software development, test automation has become a game-changer. It ensures faster delivery and higher-quality products, yet many teams struggle to implement it effectively. The good news? By mastering key strategies and leveraging the right tools, you can revolutionize your QA process.
Imagine cutting testing time in half while maintaining flawless accuracy. Sounds impossible? It isn’t. This guide reveals test automation secrets with practical examples that remain relevant as technologies evolve.
Why Test Automation Is a Must-Have in QA
The Importance of Automation in Modern Development
Manual testing is labor-intensive and prone to human error.
Test automation solves these challenges by:
  1. Enhancing speed and efficiency.
  2. Reducing repetitive tasks.
  3. Increasing test coverage.
Stat Alert: Companies using test automation can cut testing time by up to 50%.
Real-World Impact
Picture this: A global e-commerce platform reduced production bugs by 40% after automating its regression suite. The result? Happier users and skyrocketing sales.
10 Secrets to Mastering Test Automation
  1. Choose the Right Tool for Your Needs
    The success of automation hinges on selecting the right tool.
    Consider factors like:
    • Programming language compatibility.
    • Cross-platform support.
    • Integration with CI/CD pipelines.
    Top Tools to Explore:
    • Selenium (open-source, versatile)
    • Cypress (ideal for modern web apps)
    • Playwright (fast and reliable cross-browser testing)
    Example:
    from selenium import webdriver
    
    # Launch a browser, load the page, and verify its title
    driver = webdriver.Chrome()
    driver.get("https://example.com")
    assert "Example Domain" in driver.title
    driver.quit()
  2. Start Small and Scale Gradually
    Focus on automating high-priority, repetitive test cases first, such as:
    • Login functionality.
    • Form validations.
    • Regression tests.
  3. Integrate Automation with CI/CD
    Combine test automation with your CI/CD pipeline to:
    • Detect issues early.
    • Enable seamless deployments.
    Example with Jenkins:
    pipeline {
        stages {
            stage('Test') {
                steps {
                    sh 'pytest tests/'
                }
            }
        }
    }
  4. Design Tests for Reusability
    Reusable test scripts save time and effort.
    Implement:
    • Parameterized tests.
    • Modular frameworks.
    Example with Pytest:
    import pytest
    
    # Assumes a login() helper from the application under test
    @pytest.mark.parametrize("username,password", [("user1", "pass1"), ("user2", "pass2")])
    def test_login(username, password):
        assert login(username, password) == "Success"
  5. Emphasize Maintenance
    Automated tests can break with changing requirements.
    Ensure maintainability by:
    • Regularly reviewing test scripts.
    • Using version control.
  6. Balance Automation and Manual Testing
    Not all tests are suited for automation.
    Use manual testing for:
    • Exploratory testing.
    • UX evaluations.
  7. Leverage Reporting and Analytics
    Use tools like Allure or ExtentReports for:
    • Clear test results.
    • Actionable insights.
  8. Optimize Test Execution Time
    Speed up execution with:
    • Parallel testing.
    • Selective test execution.
  9. Continuously Learn and Adapt
    Stay updated with:
    • New tools and frameworks.
    • Best practices in test automation.
  10. Celebrate Small Wins
    Recognize the impact of automation on your projects. Celebrating success boosts team morale and reinforces best practices.
Common Pitfalls to Avoid
Over-Automation
  • Trying to automate every test case can lead to:
    • Increased complexity.
    • Diminished returns.
Neglecting Test Data Management
  • Unreliable test data can derail your efforts. Use tools like Faker for dynamic data generation:
    Example:
    from faker import Faker
    
    fake = Faker()
    print(fake.email())
Ignoring Team Collaboration
Foster open communication between QA and development teams for smoother implementations.
The Secret Sauce to Explosive Business Growth with Webflow
Imagine this: You’ve built an incredible business, but your website isn’t attracting the right customers. You’ve tried ads, social media, and even word-of-mouth marketing, yet your outreach remains stagnant. Sound familiar? You’re not alone.
The truth is, your website is either your most powerful marketing tool or your biggest bottleneck. And if you’re an SME in the U.S. looking to expand your reach, Webflow could be your secret weapon.
In this blog, we’ll uncover 10 Webflow marketing secrets that will transform your website into a high-converting, lead-generating machine. These strategies are timeless, actionable, and designed to work across industries. Let’s dive in!
1. Optimize for Speed and SEO (Google Loves It!)
Did you know 53% of visitors leave a website if it takes more than 3 seconds to load?
With Webflow’s clean code and lightning-fast hosting, your website can outperform competitors with ease. But speed alone isn’t enough — you need SEO magic:
  • Use high-ranking, low-competition keywords in your headers, titles, and meta descriptions. (Example: “best marketing strategy for small business” instead of “marketing tips.”)
  • Compress images and use Webflow’s built-in lazy loading.
  • Structure your content with H1, H2, H3 tags for better readability and ranking.
Pro Tip: Use Webflow’s auto-generated sitemaps and Open Graph settings for seamless search engine visibility.
2. Leverage Interactive and Animated Elements (But Keep It Subtle!)
Nobody wants to browse a static, outdated site. Webflow allows you to add subtle animations, interactive buttons, and engaging hover effects that make your site feel dynamic and modern.
What works best?
  • Scroll-triggered animations (for storytelling!)
  • Hover effects on CTAs (boosts clicks!)
  • Smooth page transitions (improves user experience!)
3. Build Landing Pages That Convert Like Crazy
Your homepage isn’t always enough. Webflow makes it easy to design targeted landing pages that speak directly to your audience’s needs.
A high-converting landing page should have:
  • A clear, bold headline that grabs attention.
  • A short, compelling subheadline that explains the offer.
  • Trust signals (testimonials, logos, social proof).
  • A single, strong CTA (e.g., “Get a Free Consultation!”).
Example: Instead of a generic “Contact Us” page, create a landing page like “Boost Your Sales with Our Custom Marketing Strategies, Book a Free Call Today!”
4. Use Webflow’s CMS for Content Marketing Power
Content marketing is one of the most effective outreach strategies, and Webflow’s CMS (Content Management System) makes blogging effortless.
Why it works:
  • Consistently publishing blogs increases organic traffic by 126%.
  • Long-form, value-driven content positions you as an authority.
  • Blogs rank for multiple keywords, attracting diverse audiences.
Example Blog Topics:
  • “How to Automate Lead Generation for Your Small Business”
  • “5 Webflow Features That Help You Scale Without a Developer”
  • “Case Study: How [Your Company] Grew Revenue by 200% Using Webflow”
Pro Tip: Integrate social sharing buttons to amplify your content reach!
5. Master Internal & External Linking
Google loves structured, well-linked websites. Use Webflow to:
  • Link relevant internal pages to improve navigation.
  • Build high-authority external backlinks for credibility.
  • Optimize anchor text to make links feel natural.
6. Add Video Content (It’s a Game-Changer!)
Websites with video increase conversions by 86%!
Where to use video?
  • Homepage (a 60-second brand intro!)
  • Service pages (explainer videos work wonders!)
  • Testimonials (real customer stories = trust boost!)
7. Mobile Optimization = More Leads
Over 60% of web traffic comes from mobile. Webflow’s responsive design ensures your site looks flawless on any device.
Quick Mobile Optimization Wins:
  • Use readable font sizes.
  • Compress images & optimize layout.
  • Test buttons to ensure they’re thumb-friendly.
8. Create Lead Magnets That Capture Emails
Email marketing isn’t dead, it’s one of the highest ROI strategies. Webflow’s forms and integrations make it easy to capture leads with lead magnets like:
  • Free PDFs, eBooks, or templates.
  • Exclusive industry reports.
  • Free webinars or consultations.
9. Add Live Chat & Social Proof
Engagement skyrockets when visitors can chat in real-time or see credible testimonials.
Best tools to integrate with Webflow:
  • Live chat apps like Tidio or Drift.
  • Customer reviews & case studies.
  • Trust badges (SSL, guarantees, awards).
10. Track, Test & Optimize with Webflow Analytics
Marketing without tracking is shooting in the dark. Use Webflow’s integrations with Google Analytics and Hotjar to:
  • Identify high-exit pages & optimize them.
  • Track CTA clicks and form submissions.
  • Improve weak-performing landing pages.
Take Action Now!
You now have 10 powerful Webflow marketing strategies to elevate your SME’s outreach and turn your website into a lead-generating powerhouse. But knowledge alone won’t cut it — execution is everything!
What’s your next step?
Need help optimizing your Webflow site for growth? Let’s chat! Book a free consultation today.
Join the conversation!
What’s your biggest marketing challenge right now? Let’s chat!
Share this with a fellow entrepreneur who needs to see this!
The Digital Race Is On; Are You Keeping Up?
Imagine launching a website that doesn’t just sit pretty but actively works to increase your sales, rank higher on Google, and make customers fall in love with your brand.
Sounds too good to be true? Well, Webflow makes it a reality.
Whether you’re an entrepreneur, marketer, or business owner, your website is the backbone of your online presence. And if you’re still stuck with slow-loading pages, frustrating design limitations, or clunky integrations, you’re leaving money on the table.
Let’s dive into why Webflow is the most powerful, scalable, and efficient way to supercharge your sales and online presence no matter your niche!
1. Instant Load Speeds = More Sales
Did you know a 1-second delay in page load time can slash conversions by 7%? Webflow’s ultra-fast hosting ensures that your website loads at lightning speed, reducing bounce rates and keeping visitors engaged.
2. Total Design Freedom | No Coding Needed
Say goodbye to cookie-cutter templates! Unlike WordPress, Wix, or Shopify, Webflow allows you to create pixel-perfect, custom websites without touching a single line of code.
  • Drag-and-drop builder for effortless customization.
  • Fully responsive designs without extra hassle.
  • No plugins required; everything just works.
3. SEO That Puts You on Page #1
Webflow is built with SEO-first principles, meaning your site is structured for Google’s algorithm from the start.
SEO Superpowers:
  • Clean, semantic code (Google loves this!)
  • Custom meta tags, alt texts, and schema markup
  • Lightning-fast mobile performance
Pro Tip: Use Webflow’s built-in SEO tools to preview your page in search results before publishing.
4. Effortless E-Commerce to Skyrocket Sales
If you run an online store, Webflow’s e-commerce functionality is a game-changer. Unlike Shopify, where you’re stuck with rigid templates, Webflow lets you build a fully customized, high-converting shopping experience.
Why Webflow Wins for E-Commerce:
  • Fully customizable product pages
  • Seamless checkout experiences
  • No forced transaction fees
5. Built-in Security (No Plugins Required!)
Cyberattacks and website breaches are on the rise, costing businesses millions annually. Webflow eliminates these risks with enterprise-level security — no extra plugins or updates needed.
Your Site is Safe With:
  • Free SSL certificate
  • Automated backups
  • No third-party vulnerabilities (like outdated plugins!)
6. Mobile-First Designs That Convert
Over 60% of all web traffic comes from mobile devices. If your website isn’t optimized for smartphones and tablets, you’re losing customers.
Pro Tip: Webflow automatically adjusts your site for mobile, but you can tweak layouts and interactions for a flawless experience on any device.
7. Effortless Integrations for Ultimate Efficiency
From CRM tools to payment processors, Webflow integrates seamlessly with the tools you already use.
Popular Integrations Include:
  • HubSpot & Salesforce
  • Zapier automation
  • Stripe & PayPal
8. No Maintenance Headaches
Tired of constantly updating plugins and dealing with technical issues? Webflow is fully managed, meaning no more maintenance stress.
  • Automatic updates
  • No need to hire developers for fixes
  • Better performance with zero downtime
9. Interactive Animations That Wow Your Visitors
Want a website that feels alive? Webflow’s animation and interaction features let you create stunning, eye-catching effects that keep visitors engaged.
Ideas to Try:
  • Smooth scrolling effects
  • Hover animations for buttons
  • Engaging micro-interactions
10. Future-Proof Scalability
Webflow is built for growth. Whether you’re a small business today or a global brand tomorrow, your site can scale without limitations.
Perfect for:
  • Startups looking to grow fast
  • Enterprises needing custom solutions
  • Agencies wanting control & flexibility
Time to Take Action | Supercharge Your Website Today!
Your website isn’t just a digital business card, it’s your 24/7 sales machine. If you’re serious about boosting sales, growing your online presence, and future-proofing your brand, Webflow is the tool you need.
Ready to transform your website? Let’s build something extraordinary.
  • Contact Us to get started
  • Share this article with someone struggling with their website
Let’s make your website work FOR you; not against you!
Introduction
In modern applications, managing database credentials securely and dynamically is crucial, especially in environments where secrets rotate frequently. This post addresses a common issue faced when using Prisma ORM: handling dynamic database credential rotation without breaking existing connections or affecting application stability.
We’ll walk through the problem, the challenges, and a robust solution leveraging Prisma’s @prisma/adapter-pg and AWS Secrets Manager in a NestJS project. Additionally, we’ll explore deployment considerations, including handling Prisma migrations and seed operations during the build and deploy stages, ensuring that your application runs smoothly in production.
The Problem
Prisma is a powerful ORM, but it does not natively support dynamic fetching of database credentials.
This becomes a challenge when:
  1. Database Secrets Rotate: In cloud environments like AWS, secrets often rotate for enhanced security. A new password might invalidate existing connections.
  2. Application Downtime: When credentials change, existing database connections fail, leading to potential downtime.
  3. Manual Reinitialization: Reinitializing the Prisma client every time credentials rotate can be cumbersome and error-prone.
Key Challenges:
  1. Avoiding downtime during credential rotation.
  2. Ensuring existing connections finish gracefully.
  3. Seamlessly using new credentials for new connections.
The Solution
We solve this problem by combining:
  1. Prisma’s @prisma/adapter-pg: Allows integration with the pg library, giving fine-grained control over connection pooling.
  2. AWS Secrets Manager: Securely stores and rotates database credentials.
  3. Dynamic Pool Initialization: Using pg's connection pooling capabilities to dynamically fetch credentials and manage connections.
Implementation Steps
  1. Install Dependencies: Ensure you have the necessary libraries. You also need to upgrade Prisma and @prisma/client to the latest versions for compatibility with the @prisma/adapter-pg adapter and dynamic connection management.
    npm install @prisma/adapter-pg @aws-sdk/client-secrets-manager pg
  2. Update Prisma Schema Enable driverAdapters preview feature in schema.prisma:
    generator client {
      provider        = "prisma-client-js"
      previewFeatures = ["driverAdapters"]
    }
    Generate the Prisma client:
    npx prisma generate
    
  3. SecretsService for Fetching Credentials A service to retrieve credentials from AWS Secrets Manager:
    import { Injectable } from '@nestjs/common';
    import { SecretsManagerClient, GetSecretValueCommand } from '@aws-sdk/client-secrets-manager';
    
    @Injectable()
    export class SecretsService {
      private secretsManagerClient: SecretsManagerClient;
    
      constructor() {
        this.secretsManagerClient = new SecretsManagerClient({
          region: 'YOUR_AWS_REGION',
        });
      }
    
      // Fetch the current database password from AWS Secrets Manager
      async getDatabaseUrl(): Promise<string> {
        try {
          const secretId = 'YOUR_AWS_SECRET_ID';
          const command = new GetSecretValueCommand({ SecretId: secretId });
          const secret = await this.secretsManagerClient.send(command);
          if (!secret.SecretString) {
            throw new Error('SecretString is empty');
          }
          const credentials = JSON.parse(secret.SecretString);
          return credentials.password;
    
        } catch (error) {
          throw error;
        }
      }
    }
  4. DatabaseService with Dynamic Connection Management This service integrates Prisma with dynamic credential fetching:
    import { Injectable, OnModuleInit, OnModuleDestroy } from '@nestjs/common';
    import { PrismaClient } from '@prisma/client';
    import { PrismaPg } from '@prisma/adapter-pg';
    import { SecretsService } from '../secrets/secrets.service';
    import { Pool } from 'pg';
    
    @Injectable()
    export class DatabaseService extends PrismaClient implements OnModuleInit, OnModuleDestroy {
      private pool: Pool;
    
      constructor(
        private readonly secretsService: SecretsService,
      ) {
        // Dynamically fetch the password and set up the pg Pool
        const pool = new Pool({
          host: process.env.DB_HOST,
          user: process.env.DB_USER,
          database: process.env.DB_NAME,
          port: 5432,
          password: async () => {
            return await this.secretsService.getDatabaseUrl();
          },
          ssl: { rejectUnauthorized: false },
        });
        const adapter = new PrismaPg(pool);
        super({ adapter });
        this.pool = pool; // keep a reference so onModuleDestroy can close this pool
      }
    
      // Connect to the database
      async onModuleInit() {
        try {
          await this.$connect();
        } catch (error) {
          throw error;
        }
      }
    
      // Close the database connection pool when the application shuts down
      async onModuleDestroy() {
        try {
          await this.pool.end();
        } catch (error) {
          throw error;
        }
      }
    }
Why Use ssl: { rejectUnauthorized: false }?
When connecting to a database over SSL, the rejectUnauthorized option determines whether the client verifies the database server's SSL certificate. By setting rejectUnauthorized: false, the client skips this validation, which can be helpful during local development or testing when certificates might not be properly configured. However, in production, it's better to ensure secure communication by using properly configured SSL certificates and enabling rejectUnauthorized: true. This prevents man-in-the-middle attacks and ensures the authenticity of the database server.
If both your EC2 instance (or Elastic Beanstalk application) and RDS instance are in the same Virtual Private Cloud (VPC), the communication happens over a secure, private network. In this case, the risk of man-in-the-middle (MITM) attacks is significantly reduced, even if rejectUnauthorized is set to false.
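For a production setup, a stricter pg configuration might look like the following sketch (the CA bundle path is an assumption; for RDS you would download the official bundle):
import { Pool } from 'pg';
import { readFileSync } from 'fs';

const pool = new Pool({
  host: process.env.DB_HOST,
  ssl: {
    rejectUnauthorized: true, // verify the database server's certificate
    ca: readFileSync('rds-ca-bundle.pem', 'utf8'), // assumed path to the CA bundle
  },
});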
Why This Works
  1. Dynamic Credential Fetching: The SecretsService ensures the latest credentials are always fetched.
  2. Connection Pool Management: Using pg's Pool ensures connections finish gracefully before new credentials are used.
  3. No Downtime: Existing queries complete, while new ones use the updated credentials.
Key Benefits
  1. Security: Leverages AWS Secrets Manager for secure credential storage and rotation.
  2. Scalability: Handles multiple connections efficiently with pg's pooling.
  3. Reliability: Ensures smooth transitions during credential rotations.
Prisma Schema URL Limitation
It’s important to note that the changes for dynamically updating the database credentials at runtime do not affect the schema URL used during the initial setup of Prisma. This means that operations like migrations and seeding, which rely on the schema URL, will still use the static database URL defined during the initial Prisma configuration.
How to Handle This in Deployment:
  1. Build-Time Schema URL: During the build stage of your deployment pipeline (e.g., using AWS CodePipeline), you can generate the database connection URL dynamically. For example:
    Example for encoding (using Node.js to URL-encode the password):
    DATABASE_URL="postgresql://user:$(node -p "encodeURIComponent('password')")@host:5432/dbname"
    
  2. Migrations and Seeding in the Deploy Stage: In the deploy stage, use the generated schema URL to perform necessary database migrations and seed operations. These operations ensure the database schema is up-to-date and properly seeded before the application starts.
  3. Runtime Dynamic Updates: Once the deployment and migration processes are complete, the application can leverage the runtime dynamic updates to handle credential rotation and other dynamic database connection updates seamlessly. This runtime functionality is particularly beneficial for handling API requests and ensuring the application remains functional even when database credentials are rotated.
Conclusion
By integrating @prisma/adapter-pg with AWS Secrets Manager and pg, we solved the challenge of dynamic database credential rotation in Prisma. This approach ensures high availability, security, and scalability for modern applications.
In this series of blog posts, I want to share my journey of learning React with you. This is not an article about how I learned React in a few hours or days, it took several months to reach the point where I am right now. I remember how much I struggled in the beginning because of the series of mistakes I made during that time. In these articles, I want to share how to avoid those mistakes if you are new to React. I’ll share my experience and some tips you can follow when you start learning React for the first time.
As the first step of my journey with React, I followed YouTube videos as everyone does and searched for React crash courses suitable for beginners. There were so many React tutorials on YouTube that it was difficult to find a suitable one for me as a beginner. I had no idea about the difference between the two types of components (class and functional), which made everything more and more complex. If you are new to React, I highly recommend avoiding video tutorials made before 2020, because they may contain outdated concepts and lead you to learn class components only. I highly encourage you to start by learning functional components and hooks, as they are the way to go nowadays.
On YouTube, some of the videos I found were more than 3 hours long, which was quite a long time for me as a developer with some hands-on experience in HTML and vanilla JavaScript. I also found several tutorials that mixed class components and functional components, which made it harder to understand the concepts correctly. In my opinion, when you start learning something new, you should not spend too much time on complex things at first. Try to find a short crash course that teaches only the fundamentals of React, like setting up the environment, components, JSX syntax, props, etc. There are several React starting guides available on the internet, but you should visit the official React documentation and follow the Intro to React tutorial first. Before going deeper, try to build a simple web page with the knowledge you have gathered. It will make you feel comfortable with React.
By looking into several video tutorials, I learned some React concepts like state, hooks, the virtual DOM, JSX, and the lifecycle of a component. At first, I was not able to understand these concepts profoundly or how they are used in React. Still, I absorbed some knowledge and built some test applications using React.
Only then did I figure out that to use React well, I needed a better understanding of HTML, CSS, and JavaScript. During my university period, I primarily used Java and PHP for academic submissions and projects, so at that time I had only basic knowledge of JavaScript, HTML, and CSS. When it came to learning new technologies, I was just like the kid in this picture, trying to leap to the top stair without stepping on the first, which led me to lots of struggles later.
To begin with React, you should be hands-on with JavaScript. That doesn’t mean you need complete knowledge of every JS concept from legacy to modern; React syntax is based on modern JavaScript, mostly introduced in ECMAScript 6 (ES6). When I started learning React, I was not very comfortable with modern JavaScript concepts like promises and arrow functions, so whenever I used them I was in doubt about whether a feature belonged to JavaScript or to React. That made learning and understanding even more difficult.
These are a few important ES6 concepts you need to know before learning React (a short sketch follows the list). If you are good with these things, it will make learning React easy and painless.
  • Map Function — Iterates over an array and returns a new array with each element transformed.
  • Find Function — Returns the first element in the array that satisfies the given condition.
  • Callback Function — A function passed into another function, to be called when that function finishes.
  • Promises — Used to handle asynchronous operations.
  • Async / Await — Makes it easier to write and read promise-based code.
  • De-structuring — Extracts data from objects and arrays into new variables.
  • Rest Operator — Lets a function accept an indefinite number of arguments as an array.
  • Spread Operator — Copies the elements of an existing array into a new array (works the same for objects).
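As mentioned above, here is a quick sketch touching each of these concepts (all values are illustrative):
const nums = [1, 2, 3];

// Map: returns a new array with each element transformed
const doubled = nums.map((n) => n * 2); // [2, 4, 6]

// Find: the first element matching the condition
const firstEven = nums.find((n) => n % 2 === 0); // 2

// De-structuring and the rest operator
const { name, ...rest } = { name: "Asha", age: 30 };

// Spread: copy an array into a new one
const extended = [...nums, 4]; // [1, 2, 3, 4]

// Promises and async/await (the arrows passed to map/find above are also callbacks)
const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function run() {
  await wait(100); // pause without blocking
  console.log(doubled, firstEven, name, rest, extended);
}

run();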
I highly recommend you allocate some time to understand the theoretical parts of React before starting to code a real-world project. It will save you a lot of time later when you are implementing complex concepts. I know it’s kind of boring to study the core of React, but it is important to know how React behaves, why you would use React instead of vanilla JavaScript or another front-end framework, and what benefits you get from it. It took me several months to properly understand what I was doing with React because I did not put much effort into learning these things early on.
If you feel comfortable with modern JavaScript and the basic concepts of React, you are good to go with learning more complex React concepts. I’ll share a road map for learning React hooks and other advanced concepts in my next blog post.
In React, we use state to render and update dynamic values within single or multiple components, and state management is built into React itself. When state changes, the components that use that state re-render. If a state is shared by multiple components, we call it global state, and unstated-next is a simple library we can use to manage these global states.
Global state management
Techniques such as prop drilling are used to pass states and state update functions to child components, but it is harder when we have to pass the state through a deeply nested component tree.

Passing state through deeply nested components can be challenging.

Moreover it’s going to be a mess when we have to share state between non-connected components.

Sharing state between non-connected components gets messier.

We can solve this problem by lifting user state up to the App component. But this can make our code messy when handling many states throughout the app.

For global state management we can use the React context API or libraries such as Redux or Recoil.

In this post we won’t go deeper into those options; instead, we’ll see how we can do this using unstated-next.

Unstated-next: a simple, lightweight library built on top of React’s context API. We can use its small set of API methods to manage global state throughout the app.

Step by step guide

I’ll demonstrate its capabilities through a basic React app that has a few components. Before proceeding, go ahead and create or clone a simple app.

Folder structure of the sample project

Here in the app, Products.js and UserProfile.js represent separate pages, and the SideBar.js and TopBar.js components are used on both pages.
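The guide’s remaining code isn’t shown here, but a minimal sketch of the unstated-next pattern for this app might look like the following (useUser and UserContainer are assumed names):
import React, { useState } from "react";
import { createContainer } from "unstated-next";

// A plain custom hook that holds the shared user state
function useUser(initialUser = { name: "Guest" }) {
  const [user, setUser] = useState(initialUser);
  return { user, setUser };
}

// createContainer turns the hook into a shareable container
const UserContainer = createContainer(useUser);

function TopBar() {
  // Any component under the Provider can read and update the shared state
  const { user } = UserContainer.useContainer();
  return <span>Signed in as {user.name}</span>;
}

function App() {
  return (
    <UserContainer.Provider>
      <TopBar />
    </UserContainer.Provider>
  );
}

export default App;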

1. Data Collection
Machine learning models rely on large amounts of data to learn effectively. The more diverse and high-quality the data, the better the model can generalize to new situations.
How the model learns from data:
  • Models analyze raw data from sources like databases, sensors, or web scraping.
  • Through preprocessing (removing inconsistencies, missing values, and duplicates), models ensure they are learning from clean and meaningful data.
  • The model observes input patterns to identify correlations and trends that might not be obvious to humans.
Example: In fraud detection, data from user transactions (e.g. timestamps, locations, amounts) is collected. The model learns the normal spending behavior of users and identifies anomalies as potential fraud.
3. Model Selection and Training
Training is where the actual learning happens. The model is exposed to historical data and adjusts its internal parameters (weights and biases) to recognize patterns and relationships.
How the model learns from data:
  • Models learn by adjusting their predictions based on labeled data (in supervised learning) or by discovering patterns on their own (in unsupervised learning).
  • Algorithms like neural networks use iterative optimization techniques (e.g. gradient descent) to minimize prediction error; see the sketch below.
  • Training involves feeding the model with input-output pairs repeatedly until it learns to make accurate predictions.
Example: In spam detection, the model is trained on a thousand labeled emails to understand what features (e.g. specific keywords, sender addresses) correlate with spam messages.
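To make the gradient descent idea concrete, here is a toy sketch (not tied to any ML library) that fits a single weight w so predictions w * x minimize mean squared error on made-up data:
// Toy data where the true relationship is y = 2x
const data = [
  { x: 1, y: 2 },
  { x: 2, y: 4 },
  { x: 3, y: 6 },
];

let w = 0; // initial weight
const learningRate = 0.05;

for (let i = 0; i < 200; i++) {
  // Gradient of mean squared error with respect to w
  let gradient = 0;
  for (const { x, y } of data) {
    gradient += 2 * (w * x - y) * x;
  }
  gradient /= data.length;
  w -= learningRate * gradient; // step against the gradient
}

console.log(w.toFixed(3)); // approaches 2.000 as the error shrinks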
4. Evaluation
Once the model is trained, it needs to be evaluated to ensure it generalizes well to new, unseen data. Evaluation helps detect issues like overfitting (memorizing training data) or underfitting (failing to learn).
How the model learns from data:
  • The model is tested on unseen data (validation/test sets) to measure accuracy, precision, recall, and other metrics.
  • Fine-tuning is done by adjusting hyperparameters (e.g. learning rate, number of layers) to enhance performance.
  • Evaluation provides feedback to improve learning by highlighting gaps in knowledge.
Example: A self-driving car model is tested in different environments to check how well it adapts to various road conditions.
5. Deployment and Monitoring
Once the model is deployed in real-world scenarios, it continues to learn and improve over time by processing new incoming data. Continuous monitoring ensures the model remains accurate and relevant.
How the model learns from data:
  • Models update themselves using real-world data, detecting changes in patterns over time (concept drift).
  • Feedback loops allow users to correct errors, helping the model learn from mistakes.
  • Retraining is performed periodically with fresh data to avoid degradation in performance.
Example: In a recommendation system like Netflix, the model adapts to changes in user preferences by analyzing new viewing habits and interactions.
Conclusion
Machine learning has revolutionized the way we interact with technology and solve complex problems across industries. From healthcare and finance to entertainment and beyond, ML enables systems to learn from data, adapt over time, and make intelligent decisions with minimal human interaction.
As we’ve explored, machine learning models follow a structured learning process — starting from data collection and feature engineering to model training, evaluation, and deployment. Each step plays a crucial role in ensuring that the model is accurate, efficient, and continuously improving.
Understanding how machine learning works is essential for anyone looking to harness its potential, whether you’re a business professional, developer, or enthusiast. With advancements in AI and data science, the possibilities of ML are expanding rapidly, opening new doors for automation, personalization, and innovation.
Thank you for reading.
I hope it helped.
In today’s globalized world, the significance of diversity, equity, and inclusion (DEI) in the workplace cannot be overstated. DEI initiatives are not just buzzwords but essential elements that drive innovation, creativity, and overall organizational success. At Nimi, we firmly believe in and practice DEI principles to foster a thriving work environment.
One of the key aspects of DEI at Nimi is our commitment to maintaining a gender balance with a 1:1 ratio. This approach ensures that both men and women have equal opportunities and representation within our company. Gender diversity brings diverse perspectives to the table, leading to better problem-solving and decision-making. It also helps in creating a more inclusive environment where everyone feels valued and respected.
At Nimi, we understand that career growth is not solely about the work employees do within the office. We actively promote professional development through various initiatives, such as organizing “Ask Me Anything” (AMA) sessions. These webinars provide a platform for employees to engage with experts, ask questions, and gain insights into different aspects of their careers. Furthermore, we encourage our employees to attend industry events for learning and development. By exposing them to new ideas and trends, we ensure that our workforce remains competitive and innovative. This commitment to continuous learning helps employees grow both personally and professionally.
Recognizing the importance of work-life balance, Nimi offers a hybrid work model that provides employees with the flexibility to work from home. This approach not only helps in accommodating personal commitments but also enhances productivity and job satisfaction. By trusting our employees to manage their time and work environment, we create a culture of responsibility and autonomy.
Quarterly meet-ups are another cornerstone of our DEI strategy. These gatherings are designed to promote employee engagement and strengthen the sense of community within Nimi. By bringing everyone together regularly, we ensure that employees feel connected to the company’s goals and values. These events also provide an opportunity for employees to share their experiences, celebrate achievements, and build relationships.
Our commitment to work-life balance, employee engagement, and professional development through DEI initiatives sets Nimi apart as a forward-thinking organization. By fostering an inclusive and equitable workplace, we not only enhance our employees’ well-being but also drive the overall success and growth of our company.
