Why Node.js Still Reigns Supreme
Let’s face it — in the chaotic jungle of web technologies, Node.js is that rare beast that does exactly what it promises (most of the time). At Kanhasoft, we’ve tried it all: Ruby that refused to stay polished, Java that brewed endless boilerplate, and PHP… well, let’s not talk about the time we accidentally broke our own billing system with a `foreach`. But Node.js? It just clicked — like LEGO bricks (the satisfying kind, not the ones you step on at 2 AM).
So, why is Node.js still a top pick for building web apps? Three words: Speed, Scalability, JavaScript. It runs on Chrome’s V8 engine, which makes it absurdly fast — like “blink and your API is ready” fast. It uses non-blocking I/O, meaning your app doesn’t freeze like your uncle’s Windows XP laptop. And yes, it’s all JavaScript — which means front-end and back-end developers can finally speak the same language (no more awkward silences at team lunches).
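That non-blocking claim is easy to see in a toy sketch, where `setTimeout` stands in for a slow database query or file read:

```javascript
// Toy sketch of non-blocking I/O: the setTimeout simulates slow I/O.
const order = [];

setTimeout(() => order.push('io finished'), 10); // simulated slow I/O

// This runs immediately: Node registered the callback and moved on.
order.push('kept working');

setTimeout(() => console.log(order), 50); // ['kept working', 'io finished']
```

While the simulated I/O waits, Node keeps doing other work. That's the whole trick behind handling thousands of concurrent connections on a single thread.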
We’ve seen Node.js scale beautifully in production. Whether you’re building a real-time chat app or a sprawling REST API, Node’s event-driven architecture and massive ecosystem mean you’ll spend less time reinventing wheels — and more time building apps users love.
Welcome to the club. The coffee’s hot, the logs are colorful, and the runtime? Blazing.
Setting Up Node.js (the Right Way)
Before you build your app of dreams (or nightmares — we’ve seen some Git histories…), you need to set up Node.js properly. And no, dragging a Node.js `.exe` into your system and hoping for the best is not “setup” — that’s a developer horror story in the making.
At Kanhasoft, we recommend installing Node.js using Node Version Manager (nvm). Why? Because one day you’ll wake up and need Node 14 for a legacy project, Node 20 for the new one, and Node 18 for that open-source thing you forgot you signed up for. With `nvm`, switching versions is as easy as `nvm use 18`.
To install `nvm`, follow the official GitHub guide. Then run:
nvm install --lts
nvm use --lts
This gives you the latest LTS (Long-Term Support) version — stable, secure, and supported by most frameworks.
Once Node is ready, verify with `node -v` and `npm -v`. Pro tip: `npx` ships with npm (5.2 and up), and it lets you run Node tools without installing them globally (because who needs another globally installed clutter monster?). Setup might sound boring, but trust us — do it right now, and save hours later.
Now that we’re prepped, let’s tool up with npm and dive into the good stuff.
Understanding npm & npx
If Node.js is the engine, npm is the fuel pump. It’s how you get everything from Express.js to that oddly specific library that parses phone numbers in Antarctica. But — and we’ve all done it — installing packages globally with `npm install -g` can quickly turn your environment into dependency spaghetti.
Here’s how we like to roll. Use `npm` to install dependencies locally, project by project. Keep your `package.json` tidy. Add meaningful scripts, so `nodemon index.js` lives under a `dev` script instead of buried in terminal history.
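For instance, a scripts section (the script names here are just our convention) might look like:

```json
"scripts": {
  "dev": "nodemon index.js",
  "start": "node index.js",
  "lint": "eslint src/",
  "test": "jest"
}
```

Now `npm run dev` means the same thing on every machine, with no tribal knowledge required.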
And then there’s npx — the MVP you didn’t know you needed. It lets you run binaries from your `node_modules` folder, or even straight from the registry, without installing them. Running `npx create-react-app` or `npx prisma migrate dev` is clean, efficient, and avoids global clutter.
We’ve had devs spend half a day debugging a globally installed CLI tool — only to find it clashed with another version elsewhere. Lesson learned: trust `npx` like you trust your favorite coffee mug — it won’t let you down.
And remember — always check `package-lock.json` into version control. It’s not just a lockfile; it’s your time-travel ticket to a known-good build.
Express.js: The Swiss Army Knife
Let’s get this out of the way: Express.js is the Beyoncé of Node.js frameworks. We’ve flirted with others—Koa, Fastify, even that one experimental framework we won’t name—but we keep coming back to Express. It’s minimal, flexible, and just lets you get stuff done.
Need to spin up a REST API? `app.get('/api/data', handler)` — done. Want to build middleware to filter requests from your QA team’s IP (because yes, they broke staging again)? Easy peasy. Express gives you just enough structure to be useful, without boxing you into a corner of magical abstractions and unreadable stack traces.
At Kanhasoft, we treat Express like the sturdy scaffolding that holds up everything else. Routing? Check. Middleware? Love it. Integration with templating engines or databases? You bet. It’s also battle-tested — powering everything from MVPs to enterprise apps. And with its huge community, you’ll find plugins and help faster than you can say “unexpected token.”
One tip though: keep it modular. We split routes, controllers, and middleware into separate files — a tiny effort upfront that saves hours of code archaeology later. Also, always use async handlers with try-catch. Trust us, debugging an unhandled rejection at 2 AM is not a vibe.
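The try-catch advice is easy to centralize with a tiny wrapper. This is a common community pattern, not an Express built-in; the commented-out route and `db.getUsers` below are hypothetical:

```javascript
// Forward any rejected promise to next(err), so your error-handling
// middleware catches it instead of it becoming an unhandled rejection.
const asyncHandler = (fn) => (req, res, next) =>
  Promise.resolve(fn(req, res, next)).catch(next);

// Hypothetical usage:
// app.get('/api/users', asyncHandler(async (req, res) => {
//   const users = await db.getUsers(); // any throw lands in next(err)
//   res.json(users);
// }));
```

Write the try-catch once, reuse it everywhere.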
Express doesn’t just help you build apps — it helps you build apps you’ll still understand next month.
dotenv: Because Secrets Should Stay Secret
If your API keys are hardcoded, please stop reading and go delete them from your repo right now. (Seriously. We’ll wait.)
Okay, now let’s talk about dotenv — the package that keeps your secrets, well, secret. It lets you store sensitive configuration in a `.env` file instead of hardcoding it into your app: database URIs, third-party API keys, or any token you’d rather not expose to the world. dotenv reads that file and loads the variables into `process.env` at runtime.
At Kanhasoft, we treat `.env` like a vault — never committed to Git, always listed in `.gitignore`. Every environment (local, staging, production) gets its own tailored `.env` file. That way, your dev database doesn’t accidentally nuke production data (yes, that’s a thing… yes, it happened once).
Here’s what your setup might look like:
PORT=3000
DATABASE_URL=mongodb://localhost/myapp
JWT_SECRET=superdupersecret
And then in code:
require('dotenv').config();
const port = process.env.PORT || 3000;
It’s clean, secure, and keeps your app flexible. If you ever find yourself SSH-ing into production just to change a port, dotenv is your new best friend. Pro move: use libraries like `dotenv-safe` to enforce required variables, because missing secrets shouldn’t lead to silent crashes.
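If you'd rather skip an extra dependency, the core idea behind `dotenv-safe` fits in a few lines of plain Node (the variable names are the ones from the example above):

```javascript
// Fail fast at startup if a required variable is missing: the same
// guarantee dotenv-safe gives you, in miniature.
const required = ['PORT', 'DATABASE_URL', 'JWT_SECRET'];

function checkEnv(env = process.env) {
  const missing = required.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(', ')}`);
  }
}

// Call once at startup, right after require('dotenv').config():
// checkEnv();
```

A loud crash at boot beats a quiet one at 2 AM.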
chalk: Make the Terminal Pretty Again
Let’s be honest — debugging Node apps in a plain white terminal is like watching paint dry in grayscale. Enter chalk, the color-splashing hero your logs never knew they needed.
chalk is a tiny library that lets you colorize console output. Sounds trivial? Maybe. But trust us, when you’re squinting through 300 lines of logs at 2 AM, a splash of red for errors or green for successes is chef’s kiss. It turns your terminal from chaos into a readable story.
Here’s how we use chalk at Kanhasoft:
const chalk = require('chalk');
console.log(chalk.green('✓ Server started on port 3000'));
console.log(chalk.yellow('⚠ Warning: API rate limit approaching'));
console.log(chalk.red('✗ Failed to connect to database'));
You can even chain styles: `chalk.bold.bgMagenta.white('Whoa there')`. Want rainbow text? Yes, someone made that too. (Please use responsibly.)
It’s especially helpful during development and testing. For production logging, we pair it with Winston (which we’ll cover later), but for quick-and-dirty debugging, chalk is unbeatable.
Pro tip: don’t overdo it. Color is like seasoning — sprinkle it in the right places and everything tastes better. Dump a whole palette in your logs and your console turns into a circus.
cors: Your Friendly Neighborhood Gatekeeper
Ah yes, CORS — the polite bouncer standing between your frontend and backend, making sure nobody sneaks into the party without an invite. Or, put less dramatically, it’s what allows your browser-based app to talk to your API across different domains.
Node apps using Express need the `cors` package to handle Cross-Origin Resource Sharing requests. Without it, browsers will throw a fit whenever your frontend at `localhost:3000` tries to access your backend at `localhost:5000`. (Yes, this happens every single time you forget it.)
At Kanhasoft, we use the cors middleware like this:
const cors = require('cors');
app.use(cors()); // Open to all origins (dev-only)
Or, for tighter control:
app.use(cors({
  origin: ['https://your-frontend.com'],
  methods: ['GET', 'POST'],
  credentials: true
}));
It’s simple, powerful, and — when misconfigured — painful. We’ve seen apps fail silently because someone misspelled “Authorization” in allowed headers. (Looking at you, Tim.)
cors helps you secure your API while enabling communication across domains. Just don’t leave it wide open in production unless you want your backend hugged by strangers.
nodemon: Save, Refresh, Repeat
If you’re still restarting your Node.js server manually every time you update your code — are you okay? Blink twice if you need help.
Enter nodemon — the unsung hero that watches your files like a hawk and automatically restarts your app when you make changes. No more `Ctrl + C`, up arrow, enter. Just save and boom, your app refreshes itself like a good intern.
Here’s the setup we use at Kanhasoft:
npm install --save-dev nodemon
And then in your `package.json`:
"scripts": {
  "dev": "nodemon index.js"
}
Run it with `npm run dev`, make changes, and enjoy the serenity of a self-restarting development environment.
We like to configure it a bit further with a `nodemon.json` file:
{
  "ext": "js,json",
  "ignore": ["node_modules"],
  "exec": "node index.js"
}
Trust us, nodemon becomes addictive. The only thing faster is coffee — and even that won’t restart your app when you add a new route. Bonus: pair it with concurrently if you’re running both frontend and backend side by side. nodemon will gladly play along — no complaints, just performance.
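If you do run both sides together, the scripts might look like this, assuming `concurrently` is installed as a dev dependency and your frontend lives in a `client/` folder (adjust paths to your repo):

```json
"scripts": {
  "server": "nodemon index.js",
  "client": "npm start --prefix client",
  "dev": "concurrently \"npm run server\" \"npm run client\""
}
```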
ESLint + Prettier: Linting is Caring
Let’s be honest: developers have opinions. Strong ones. About semicolons. Tabs vs spaces. Quotes. Brackets. And yes, it gets… heated.
To prevent World War JavaScript, we at Kanhasoft use ESLint and Prettier. ESLint catches actual problems — unused variables, weird scoping, `==` when you meant `===`. Prettier, on the other hand, formats your code to a consistent style automatically. They work like Batman and Alfred — stylish and strict.
Here’s our usual setup:
npm install --save-dev eslint prettier eslint-config-prettier eslint-plugin-prettier
Then create an `.eslintrc` file:
{
  "extends": ["eslint:recommended", "plugin:prettier/recommended"],
  "env": {
    "node": true,
    "es2021": true
  }
}
And a `.prettierrc`:
{
  "singleQuote": true,
  "semi": false
}
Boom. Now every dev on the team writes code that looks the same, even if they all secretly hate each other’s preferences. Use VS Code extensions for both tools and enable format-on-save. Suddenly, your repo is cleaner, your merges are easier, and your pull requests don’t turn into code style debates.
Linting isn’t about rules — it’s about peace.
nvm: One Node to Rule Them All
Have you ever had a project scream “Node v16 only!” while another demands v18 like it’s the latest fashion trend? Yeah, us too. That’s why we swear by nvm — Node Version Manager — the Gandalf of runtime versions for your projects.
With `nvm`, you can install and switch between Node.js versions like it’s nothing:
nvm install 18
nvm use 18
nvm install 16
nvm use 16
Boom — now you can jump between projects like a time-traveling developer. At Kanhasoft, we even include an `.nvmrc` file in each repo with the required Node version. That way, all you do is:
nvm use
And just like that, you’re in the right version — no guesswork, no “why isn’t this package working”, no tears.
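An `.nvmrc` is nothing fancy, just the version on one line:

```
18
```

Commit it, and `nvm use` (plus `nvm install` with no argument) picks it up automatically.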
Why do we love it? Because package compatibility is no joke. Some tools (we’re looking at you, `node-gyp`) are extremely picky about their Node version. Use the wrong one, and you’ll be Googling errors for hours.
nvm keeps environments clean, consistent, and predictable. And when things go wrong, you can always blame the other version (developers’ rule #7).
Standard Folder Structure: Avoid Future You’s Wrath
There’s nothing worse than opening a project after two months and whispering, “What fresh hell is this?” That’s why we’re big fans of clean, consistent folder structures — not just for you, but for your future teammates, too.
Here’s our go-to layout at Kanhasoft for Node.js apps:
/src
  /controllers
  /routes
  /models
  /middleware
  /utils
index.js
.env
Each folder has its purpose:
- `controllers/` for logic,
- `routes/` for API endpoints,
- `models/` for schemas,
- `middleware/` for functions like auth or logging,
- `utils/` for helpers that don’t fit anywhere else.
Keep your root folder clean. Don’t dump files there like it’s a digital junk drawer. Use a `config.js` file if needed for environment-specific variables.
We also name files consistently: `userController.js`, `authMiddleware.js`, `dbConfig.js`. That way, you know what each file does before you even open it.
Consistency is kindness. It helps onboard new devs, simplifies debugging, and avoids spaghetti nightmares. Remember: your project isn’t just for today — it’s for tomorrow-you, who won’t remember why `utils3.js` even exists.
Jest: Like the Name, but Less Funny
You know what’s not funny? A production bug caused by an untested edge case at 3:00 AM on a Sunday. That’s why we take testing seriously — and Jest is our weapon of choice.
Jest is fast, modern, and built by the same folks who maintain React — but it works beautifully with any Node.js app. We use it to write unit tests that validate our logic, cover edge cases, and save our reputation (and sleep schedule). At Kanhasoft, we’ve written Jest tests for everything from tiny utility functions to full API workflows.
Here’s a taste:
test('adds 2 + 2 to equal 4', () => {
  expect(2 + 2).toBe(4)
})
With `jest.mock()`, you can even fake external services like email or Stripe, which means faster and safer tests. And the coverage report? It’s like a report card — if you flunk, you’ll know exactly where.
Want to level up? Combine Jest with Supertest to test routes, or `ts-jest` if you’re TypeScript-ing your way through 2025.
We keep tests in a `__tests__/` folder or alongside the modules they cover. Run them with `npm test`, and bask in that sweet, green checkmark. Testing might not be glamorous — but when your CI passes with zero bugs, you’ll feel like a superhero.
Supertest: For When APIs Need a Trial
So you’ve built your shiny REST API. It returns JSON, handles errors, and even uses fancy status codes. But here’s the thing: unless you test it, it’s just theory — and your frontend devs will be the first to prove it wrong.
That’s where Supertest comes in. It lets you write integration tests for Express routes like a boss. It spins up your app in memory, fires real HTTP requests, and checks the responses. No browser. No Postman. No mercy.
Here’s a real-world example from Kanhasoft:
const request = require('supertest')
const app = require('../index')
describe('GET /api/health', () => {
  it('should return status 200 and message OK', async () => {
    const res = await request(app).get('/api/health')
    expect(res.statusCode).toBe(200)
    expect(res.body.message).toBe('OK')
  })
})
With Supertest, you’re not just testing logic — you’re testing the experience. Does the endpoint respond fast? Return what it should? Handle edge cases?
When paired with Jest or Mocha, Supertest turns your test suite into a fortress. And believe us — that’s exactly what you’ll want when someone makes a typo in the frontend fetch call. Again.
Mongoose for MongoDB: Fluent but Fierce
Working with MongoDB directly is fine — until it isn’t. Raw queries can get messy fast. That’s why we reach for Mongoose, the ODM that gives MongoDB some much-needed structure, like a nice blazer over a hoodie.
Mongoose lets you define schemas, enforce validation, and perform queries with fluent, chainable methods. At Kanhasoft, we’ve built everything from to-do apps to full CRMs using Mongoose — and the patterns always pay off.
Here’s a quick example:
const mongoose = require('mongoose')
const userSchema = new mongoose.Schema({
  name: String,
  email: { type: String, required: true },
  createdAt: { type: Date, default: Date.now }
})
module.exports = mongoose.model('User', userSchema)
With `User.find()`, `User.updateOne()`, and middleware hooks like `pre('save')`, you can build robust data layers without writing repetitive query logic.
Pro tip: watch out for schema bloat. Keep schemas lean. Use `.lean()` in queries for performance. And always — always — handle connection errors properly (Mongoose will silently fail and not even leave a breakup note).
Also, Mongoose 6+ has some great features like native promises, stricter defaults, and improved performance. Upgrade when you can — future you will say thanks.
Helmet: Keep Hackers at Arm’s Length
Your API might be fast. It might be beautiful. But if it’s not secure, you’re basically hosting an open bar for hackers. That’s where Helmet comes in.
Helmet is a middleware that helps set HTTP headers for security. And while that might sound boring, it’s the digital equivalent of locking your doors and windows before going to bed.
At Kanhasoft, it’s usually one of the first things we install in any Express project:
const helmet = require('helmet')
app.use(helmet())
That one line disables things like the `X-Powered-By` header (because no one needs to know your stack), adds Content Security Policies, and helps prevent clickjacking, XSS, and more.
You can customize it too:
app.use(
  helmet({
    contentSecurityPolicy: false // Customize as needed
  })
)
Helmet won’t stop a DDoS or brute-force attack, but it will reduce your surface area dramatically. Pair it with tools like rate limiting, input sanitization, and HTTPS — and now you’re running a fortress, not a food truck.
Security isn’t optional. It’s the difference between “Oops, data leak” and “We’re still open for business.” Helmet helps you stay in the latter group.
Rate Limiting with express-rate-limit
Here’s a sobering thought: if your API doesn’t have rate limiting, congratulations — you’ve just built the internet’s new favorite playground for bots, scrapers, and aspiring hackers. Not quite the party you intended, huh?
Enter express-rate-limit — the digital bouncer who checks IDs and caps how many times someone can knock on your API’s door. It helps prevent abuse, brute-force attacks, and accidental overloads. We use it across almost every production Express app at Kanhasoft.
Getting started is deliciously simple:
const rateLimit = require('express-rate-limit')
const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // limit each IP to 100 requests per window
  message: 'Too many requests, please try again later.'
})
app.use('/api/', limiter)
Want finer control? Create different rate limits for public vs. authenticated routes. Log excessive hits. Block abusive IPs automatically.
Here’s the thing — rate limiting is not just for massive platforms. Even in smaller apps, it buys time when something goes wrong. And when bots attack (they will), your logs will thank you for not turning into a 1,000-line panic novel.
We’ve been on both sides: with and without rate limits. One results in uptime. The other… well, let’s just say it was a learning moment.
PM2: The Production Guardian Angel
When you deploy a Node.js app, it doesn’t magically keep running forever (unless you believe in fairy devmothers). Node processes crash — memory leaks, unhandled errors, or that “undefined is not a function” bug that slipped into prod.
That’s why we use PM2 — a process manager that keeps your app running, restarts it when it crashes, and even monitors performance.
Basic usage?
npm install pm2 -g
pm2 start index.js --name my-app
You can also save and reload your entire process list:
pm2 save
pm2 startup
Need logs? `pm2 logs`. Want cluster mode? One flag. Monitoring dashboard? Built-in.
At Kanhasoft, PM2 is part of every deployment script. It takes care of environment variables, crash recovery, and even metrics reporting. Heck, it even has an ecosystem.config.js file where you can define your entire app’s behavior.
module.exports = {
  apps: [{
    name: 'api-server',
    script: 'index.js',
    instances: 'max',
    env: {
      NODE_ENV: 'production'
    }
  }]
}
Deploying without PM2 is like skydiving without a parachute — brave, but not wise.
Debugging Tools: No More Guess-and-Check
If you’ve ever spent four hours debugging only to realize the issue was a missing comma… welcome to the club. The Debug module and VS Code debugger are here to rescue your sanity.
debug module
This handy library lets you log scoped messages that you can toggle on and off using environment variables.
const debug = require('debug')('app:startup')
debug('Server is starting...')
Then run:
DEBUG=app:* node index.js
It’s clean, efficient, and miles better than scattering `console.log` across your app like confetti at a wedding.
VS Code Debugger
VS Code’s built-in debugger makes inspecting values, stepping through code, and tracing bugs as easy as clicking a breakpoint. Just add a `launch.json` file and hit `F5`.
{
  "type": "node",
  "request": "launch",
  "name": "Debug App",
  "program": "${workspaceFolder}/index.js"
}
Combined with `nodemon`, you’ve got hot reload and real-time inspection. You’ll wonder how you ever lived without it.
Debug smart, not hard.
Git Hooks with Husky
Let’s be honest — developers forget things. Like adding tests. Or formatting their code. Or pushing code without even linting it. (Not naming names, but we know who you are.)
Husky is the watchdog that saves your repo from chaos by running scripts at key Git lifecycle events. Want to lint before every commit? Run tests before a push? Husky’s your guy.
Setup is simple:
npm install husky --save-dev
npx husky install
Add a pre-commit hook:
npx husky add .husky/pre-commit "npm run lint"
Now every commit runs your linting rules. Want tests before push?
npx husky add .husky/pre-push "npm test"
We use it at Kanhasoft to enforce standards across all projects — so no one can sneak messy code into the main branch (looking at you again, Tim).
Pro tip: combine Husky with lint-staged to lint only staged files — fast and efficient.
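A minimal `lint-staged` block in `package.json` might look like this (the glob and commands are illustrative; tune them to your stack):

```json
"lint-staged": {
  "*.js": ["eslint --fix", "prettier --write"]
}
```

Then point the pre-commit hook at `npx lint-staged` instead of linting the whole repo on every commit.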
With Git hooks, your team ships cleaner code, breaks fewer builds, and sleeps a little better at night.
Docker: For Containers You Actually Want
Remember the “but it works on my machine” excuse? Docker exists so you’ll never have to say (or hear) that again. It wraps your entire app — code, dependencies, configs — into a neat, isolated container that runs the same everywhere. No more “dependency hell,” no more OS-specific quirks.
At Kanhasoft, Docker is our go-to for consistent environments, especially across dev, staging, and production. Here’s a super basic `Dockerfile` for a Node.js app:
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
Then run:
docker build -t my-node-app .
docker run -p 3000:3000 my-node-app
Boom — containerized app, ready to rock.
Pro tip: Don’t forget to add a `.dockerignore` file to avoid bloating your image with stuff like `node_modules` or `.git`.
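A starter `.dockerignore` can be as small as:

```
node_modules
npm-debug.log
.git
.env
```

Bonus: keeping `.env` out of the image also keeps your secrets out of it.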
Docker takes a little getting used to, but once it clicks, you’ll wonder how you ever shipped without it. Bonus: combine with Docker Compose (next section) for multi-container workflows — like databases, queues, or frontend apps running together in harmony.
Docker Compose for Local Dev Bliss
If Docker is the toolbox, Docker Compose is the magic wand that ties all your tools together. It lets you define multiple containers (like your app + database + Redis) in one tidy YAML file — then spin them all up with a single command.
Here’s a simple `docker-compose.yml` we use for Node + Mongo:
version: '3.8'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
    depends_on:
      - mongo
  mongo:
    image: mongo
    ports:
      - "27017:27017"
Start it up:
docker-compose up
Everything works together. No config chaos. No “how do I start Mongo again?” Slack messages from the intern.
Docker Compose is a blessing for local dev and staging. You can also use it with volumes for persistent data, or networks for more complex microservices. At Kanhasoft, we spin up entire stacks with Compose — APIs, queues, workers, DBs — and everything just clicks.
Trust us: once you taste the power of `docker-compose up`, there’s no going back.
Deploying with Heroku (a Love-Hate Saga)
Ah, Heroku — the cloud platform that makes deployment so easy, you’ll swear it’s magic. Until you hit the free tier limits, get rate-limited, and suddenly start hearing the Jaws theme when your app scales.
That said, for MVPs, staging environments, and demo projects? Heroku is chef’s kiss.
Here’s how we deploy Node.js apps in under 5 minutes:
1. Initialize Git and push to GitHub.
2. Create a Heroku app: `heroku create my-node-app`
3. Deploy: `git push heroku main`
Heroku auto-detects Node, installs dependencies, and starts your server. Just make sure your `package.json` includes:
"scripts": {
  "start": "node index.js"
}
Want environment variables? Use:
heroku config:set JWT_SECRET=secret123
Is it perfect? Nope. But when you need to demo an app or test it in the wild, Heroku lets you go live in minutes.
We’ve had some love-hate moments with Heroku, but it’s still one of the most beginner-friendly platforms out there. Just don’t expect it to handle your entire SaaS business. Use it to validate ideas — then scale elsewhere.
Vercel for Frontend & APIs
If Heroku is the friendly all-rounder, Vercel is the cool, modern kid built specifically for frontend frameworks — but with a twist: it handles serverless API routes too. Yes, your backend logic can live alongside your frontend code. Cue dramatic music.
We use Vercel mostly for frontend deployments (Next.js, React, etc.), but its ability to deploy Node.js functions as API endpoints is 🔥.
Structure your app like this:
/api
  hello.js
/pages
  index.js
And Vercel will auto-magically create `GET /api/hello` for you. It’s perfect for small functions, forms, or integrations. No servers to manage. No deployments to configure.
To deploy:
1. Connect your GitHub repo to Vercel.
2. Push your code.
3. Vercel builds and deploys automatically. Boom.
It handles custom domains, HTTPS, edge caching, and rollback with zero config. It’s like Heroku’s cooler cousin who wears shades at night.
Just don’t try to run a full Express app on it — that’s not what Vercel’s made for. Use it for frontend hosting + light API logic, and pair it with something like Supabase, Firebase, or a real backend.
In short: for modern apps with a frontend-first stack, Vercel is pure joy.
GitHub Actions: Your New Dev Intern
Want to impress your team and scare your bugs into submission? Meet GitHub Actions, your free, tireless CI/CD assistant that lives right inside your repo.
At Kanhasoft, we use GitHub Actions to automate testing, linting, deployment, and even sending Slack updates — all triggered by a simple `git push`. Think of it as a robot intern that never sleeps, never forgets, and doesn’t drink all the office coffee.
Here’s an example workflow (`.github/workflows/node.yml`):
name: Node CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm install
      - run: npm test
This one runs tests on every push — you’ll know instantly if someone broke the app (again, probably Tim).
You can add deployment steps, run linters, publish to Docker Hub — basically anything you can script, GitHub Actions can automate.
Start small. Test your app. Then automate like a boss. And if something goes wrong, just scroll through the logs — no more “it worked locally” excuses. GitHub Actions sees all.
Swagger UI: Self-Documenting APIs
Every API should come with a map — not a cryptic Slack message from six months ago. Enter Swagger UI (powered by OpenAPI) — the interactive documentation your APIs deserve.
Swagger generates live, beautiful docs from your API specs. Your endpoints show up in a neat browser UI, complete with parameters, responses, and a “Try it out” button that makes frontend devs weep tears of joy.
Here’s how we add it at Kanhasoft:
1. Install Swagger tools: `npm install swagger-ui-express yamljs`
2. Create a `swagger.yaml` file (or use JSON).
3. Hook it up in Express:
const swaggerUi = require('swagger-ui-express')
const YAML = require('yamljs')
const swaggerDocument = YAML.load('./swagger.yaml')
app.use('/api-docs', swaggerUi.serve, swaggerUi.setup(swaggerDocument))
Visit `/api-docs`, and boom — instant API portal.
Clients love it. Frontend teams worship it. PMs use it to sound technical. Everyone wins.
Swagger makes your APIs discoverable, testable, and maintainable. Use it from day one — your future self will send you a thank-you cookie.
REST Client in VS Code
Look, Postman is cool. Thunder Client? Also decent. But sometimes, all you want is to hit an endpoint without ever leaving your editor. That’s where the REST Client extension for VS Code shines.
Here’s how it works:
1. Install the REST Client extension.
2. Create a `.http` file in your project.
3. Write:
### Get Health
GET http://localhost:3000/api/health
### Post Login
POST http://localhost:3000/api/login
Content-Type: application/json
{
  "email": "test@example.com",
  "password": "123456"
}
Hover over the line, click “Send Request” — and boom, response in your sidebar.
We use it at Kanhasoft to document endpoints, test auth flows, and debug headers — all without opening another app. It’s fast, lightweight, and version-controllable (unlike Postman collections).
You can even add environment variables, use cookies, and chain requests.
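File-level variables keep those requests DRY: REST Client reads `@name = value` lines and substitutes `{{name}}` (the base URL here is an example):

```
@baseUrl = http://localhost:3000

### Get Health
GET {{baseUrl}}/api/health
```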
It’s like having Postman in your codebase — minus the tab clutter and login prompts. Try it once and you’ll wonder why you didn’t use it sooner.
Code Quality and Coverage: Istanbul & SonarQube
You wrote tests — great. But are you sure they’re covering everything? That’s where Istanbul (via `nyc`) and SonarQube step in to keep you honest.
Istanbul/nyc
Istanbul measures how much of your code is covered by tests — line by line, function by function. (Jest users get Istanbul coverage built in via `jest --coverage`; the `nyc` CLI shown here pairs with Mocha and friends.)
npm install --save-dev nyc
Then in `package.json`:
"scripts": {
  "test": "nyc mocha"
}
Run it:
npm test
Get a report in `/coverage/`. You’ll quickly see what you’re missing — and it’s often those sneaky edge cases that break things.
SonarQube
This one’s a bit more enterprise-y, but wow is it powerful. SonarQube scans your codebase for bugs, vulnerabilities, and code smells. Integrate it with GitHub Actions, and you’ve got CI feedback that actually teaches you something.
We run it for large client projects where quality and security really matter. It’s like a linter, security scanner, and reviewer all rolled into one.
And trust us — shipping with 85% test coverage feels way better than debugging in prod.
Common Pitfalls and How to Avoid Them
Ah, the classic developer rite of passage — falling face-first into a Node.js pitfall, then pretending you meant to do that. At Kanhasoft, we’ve stepped on our fair share of landmines — and we’re here to save you the limp.
Callback Hell
Yes, Node started with callbacks. And yes, they get messy faster than spaghetti in a blender. Nesting them too deep leads to unreadable chaos. We’ve seen code that looked more like an archaeological dig than an API.
Fix: Embrace Promises or, better yet, async/await. Cleaner, more readable, and much easier to debug.
Unhandled Promise Rejections
We once lost an afternoon (and a bit of our dignity) chasing a silent failure because someone forgot `.catch()` on a promise. Modern Node (v15+) crashes on unhandled rejections instead of just warning, but still — be vigilant.
Fix: Always handle errors with try/catch around `await`, or attach `.catch()` to every promise.
Memory Leaks
They’re silent, sneaky, and absolutely soul-crushing. Maybe it’s a forgotten event listener. Maybe it’s a huge object stored in global scope.
Fix: Use tools like Chrome DevTools, Node Inspector, or `heapdump` to monitor memory usage.
Every pitfall teaches something — and every fix makes your toolbelt stronger. Learn, adjust, and keep shipping better code.
What We Learned the Hard Way at Kanhasoft
Let’s be honest: we didn’t build our best practices overnight. Some were forged in the fires of bad deploys, surprise bugs, and “why is production down” moments that still haunt us.
Like that time a junior dev (no names, but they know) hardcoded the staging database URI into the production `.env` file. Yep — wiped all staging data. Fun day.
Or when we pushed code with `console.log(password)` in a debug loop. Thankfully, it was internal — but still, it made it into logs. Ouch.
Then there was that moment when we deployed a feature with missing validation, and users started submitting emojis in name fields. Our backend choked. Support had a field day.
What did we learn?
- Automate your tests.
- Lint before you commit.
- Never trust user input. Ever.
- Backups exist for a reason.
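“Never trust user input” in practice looks like the emoji incident: a name field with no rules. A minimal validation sketch (the rules here are illustrative; in client work we’d usually reach for a schema validator like Joi or zod):

```javascript
// Validate a "name" field before it gets anywhere near the database.
// Rules are illustrative: non-empty, max 100 chars, letters/spaces/
// hyphens/apostrophes only (so no emojis, no control characters).
function validateName(input) {
  if (typeof input !== 'string') {
    return { ok: false, error: 'name must be a string' };
  }
  const name = input.trim();
  if (name.length < 1 || name.length > 100) {
    return { ok: false, error: 'name must be 1-100 characters' };
  }
  if (!/^[\p{L}\p{M}' -]+$/u.test(name)) {
    return { ok: false, error: 'name contains invalid characters' };
  }
  return { ok: true, value: name };
}
```

Validating at the edge means the rest of the stack only ever sees data in a shape it agreed to.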
Now, every bug becomes a documented lesson. Every crash makes the next build better. The goal isn’t perfection — it’s progress, discipline, and fewer angry Slack messages at 1 AM.
And that’s how we sharpen our Node.js sword — one mistake at a time.
The “It Works on My Machine” Chronicles
Picture this: staging is broken. The team’s stressed. But one dev calmly mutters, “It works on my machine.” Congratulations — you’ve entered the Twilight Zone of software development.
At Kanhasoft, this used to happen often. One dev had different Node versions. Another forgot to run migrations. Someone had Docker, someone didn’t. It was chaos wrapped in confusion.
Here’s how we fixed it:
- `.nvmrc` files: Now everyone uses the same Node version.
- Docker: Same environment for all. No more “why does it crash on Windows?”
- Scripts for everything: Want to seed the DB? Run `npm run seed`. Want to start locally? `npm run dev`.
- Pre-commit hooks: No one sneaks in broken code.
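The “scripts for everything” habit is just a few lines of `package.json`. A sketch (`nodemon` and the file paths are assumptions, not requirements):

```json
{
  "scripts": {
    "dev": "nodemon src/server.js",
    "seed": "node scripts/seed.js",
    "test": "nyc mocha"
  }
}
```

Pair that with a one-line `.nvmrc` containing the version (e.g. `18`), and `nvm use` in the project root picks it up automatically.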
And the kicker? A README so good, even new interns don’t have to ask us how to get started.
The “it works on my machine” problem isn’t about the code — it’s about the environment. Solve that, and suddenly, your whole team is in sync (and much less grumpy).
Future-Proofing Your Stack
Technology moves fast. Yesterday’s bleeding edge is today’s tech debt. At Kanhasoft, we don’t chase shiny objects — but we do believe in future-proofing like our sanity depends on it (because it does).
Here’s how we do it:
Stay LTS
We always build on the LTS (Long-Term Support) version of Node. No need to ride the chaos of the latest release unless there’s a compelling reason.
Upgrade Dependencies Proactively
Outdated packages are a security risk. We use `npm audit`, `npm-check-updates`, and Renovate bots to stay ahead — without breaking everything.
Separate Business Logic
Decouple routes, controllers, and services. You’ll thank yourself when switching databases, frameworks, or scaling to microservices.
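A hypothetical sketch of that separation: the service knows business rules, the handler knows HTTP, and swapping the database later only touches the layer below the service. All names here are illustrative.

```javascript
// services/users.js -- pure business logic, no req/res in sight
const userService = {
  async create({ email }) {
    if (!email || !email.includes('@')) {
      throw new Error('invalid email');
    }
    // A real implementation would call a repository/DB layer here.
    return { id: 1, email };
  },
};

// controllers/users.js -- translates HTTP to service calls and back
async function createUserHandler(req, res) {
  try {
    const user = await userService.create(req.body);
    res.status(201).json(user);
  } catch (err) {
    res.status(400).json({ error: err.message });
  }
}
```

The payoff: `userService.create` can be unit-tested with no HTTP server running, and the handler stays a thin adapter.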
Use TypeScript (Eventually)
We won’t lie — TypeScript has a learning curve. But it helps eliminate whole classes of runtime bugs. For complex apps, it’s worth the upfront cost.
Document Everything
From Swagger docs to good READMEs, future-you needs clues. Don’t leave them hanging.
The future’s coming — whether you’re ready or not. Build today like someone else will take over tomorrow. Because eventually… someone will.
Conclusion: The Only Toolbelt That Won’t Let You Down
Look, building web apps is hard. There’s syntax, state, servers, CI/CD pipelines, and that mysterious `node_modules` folder that somehow weighs more than your laptop. But with the right tools, it’s not just bearable — it’s actually fun.
At Kanhasoft, our Node.js toolbelt wasn’t crafted in a weekend. It was forged in the fire of real client projects, last-minute deployments, and many (many) “why is this working in dev but not in prod?” debugging sessions. We’ve banged our heads so you don’t have to.
The beauty of Node.js is in its flexibility — but that can also lead to chaos. The trick is to be intentional. Choose tools that solve problems, automate the boring stuff, and help your team move fast without breaking things.
We hope this guide arms you with everything you need to confidently start (and finish) your next Node.js web app — whether it’s a microservice, monolith, or MVP. And remember: keep things clean, test like a maniac, and always, always use dotenv.
See you on the terminal side.
FAQs
What is the best way to structure a Node.js app for scalability?
Start with clear folder separation: routes, controllers, models, middleware. Follow the MVC pattern, decouple business logic, and use environment-specific configurations. Modularize early — your future self will thank you.
Is Node.js still good for web development in 2025?
Absolutely. With its performance, massive ecosystem, and active community, Node.js remains a top pick for scalable web apps — especially when you want JavaScript across the stack.
Which is better: Docker or PM2 for deployment?
Use both! Docker handles environment consistency, while PM2 manages runtime processes. Together, they make a robust, scalable deployment strategy.
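As a concrete sketch, here’s a minimal PM2 `ecosystem.config.js` (names and paths are illustrative) that you’d run inside the container with `pm2-runtime start ecosystem.config.js`:

```javascript
// ecosystem.config.js -- PM2 process definition (values are illustrative)
const config = {
  apps: [
    {
      name: 'my-api',             // hypothetical app name
      script: './src/server.js',  // hypothetical entry point
      instances: 'max',           // cluster mode: one worker per CPU core
      exec_mode: 'cluster',
      env: { NODE_ENV: 'production' },
    },
  ],
};

module.exports = config;
```

Docker pins the OS and Node version; PM2 handles clustering and restarts inside that fixed environment.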
What tools help prevent bugs in production Node.js apps?
Linting with ESLint, formatting with Prettier, testing with Jest/Supertest, and monitoring with PM2 or New Relic are key. CI/CD with GitHub Actions also catches issues before they hit production.
Do I need to learn TypeScript with Node.js?
You don’t have to, but we recommend it for larger apps. TypeScript adds static typing that can prevent runtime errors and improve code readability.
How can I improve API documentation in my Node.js projects?
Use Swagger (OpenAPI) to generate interactive docs. Combine it with Postman collections for detailed testing. Bonus: include a `README.md` with examples and setup instructions.