Build Scalable Bulk SMS Broadcasting with Fastify, Sinch API, and BullMQ
Complete guide to building a production-ready bulk SMS system using Node.js, Fastify v5, Sinch API, BullMQ v5.x queue, Prisma ORM, and PostgreSQL. Includes error handling, retries, rate limiting, database tracking, and scalability patterns for sending thousands of SMS messages reliably.
Sending bulk SMS messages – whether for marketing campaigns, notifications, or alerts – requires more than just a simple loop calling an API. A production-ready system needs to handle potential failures, manage rate limits, provide status tracking, and scale reliably.
This comprehensive tutorial shows you how to send bulk SMS using Node.js with the high-performance Fastify framework, leveraging the Sinch SMS API for message delivery and Redis with BullMQ for robust background job queuing. Learn how to build a scalable SMS broadcasting system that handles thousands of messages with automatic retries, error handling, and real-time status tracking.
What you'll build:
- A Fastify API endpoint to accept bulk SMS requests (list of recipients and a message body).
- A background job queue (using BullMQ and Redis) to process each SMS message individually, ensuring reliability and resilience.
- A worker process that picks up jobs from the queue and sends SMS messages via the Sinch API.
- Database persistence (using Prisma and PostgreSQL) to track the status of each message and the overall batch.
- An API endpoint to check the status of a submitted bulk messaging batch.
- Robust error handling, retries, logging, and basic security measures.
Core Technologies:
- Node.js: The JavaScript runtime environment. Required: Node.js v20 LTS "Iron" (Maintenance LTS through April 2026) or v22 LTS "Jod" (Active LTS through October 2025, Maintenance LTS through April 2027). Node.js v18 reached End of Life on April 30, 2025 and no longer receives security updates.
- Fastify: A high-performance, low-overhead web framework for Node.js. Fastify v5 (requires Node.js v20+) offers 5–10% performance improvements over v4 and includes security fixes, including one for CVE-2025-32442. Chosen for its speed, extensibility, and focus on developer experience. Fastify v4 is supported until June 30, 2025.
- Sinch SMS API: The third-party service used to send SMS messages. Sinch provides direct connections to 600+ mobile operators worldwide with 95% US population coverage. You'll interact with its REST API. API regions include US (us.sms.api.sinch.com) and EU, chosen based on data protection requirements.
- Redis: An in-memory data structure store, used here as the backend for your message queue.
- BullMQ: A robust, fast, and reliable Redis-based queue system for Node.js. BullMQ v5.x (latest v5.60.0 as of 2025) requires explicit connection objects and no longer supports integer job IDs. Essential for handling background jobs like sending individual SMS messages.
- Prisma: A modern database toolkit for Node.js and TypeScript, simplifying database access, migrations, and type safety. Prisma ORM v6.x (latest v6.16.0) includes Rust-free preview for PostgreSQL and requires PostgreSQL v9.6 minimum (v12+ recommended). Used here with PostgreSQL.
- PostgreSQL: A powerful, open-source object-relational database system.
- Axios: A promise-based HTTP client for making requests to the Sinch API.
System Architecture:
+-----------+ +-----------------+ +--------------+ +----------------+ +-----------+
| Client |----->| Fastify API |----->| Redis Queue |----->| Node.js Worker|----->| Sinch API |
| (Browser/ | | (Receives Bulk | | (BullMQ) | | (Processes Job)| | (Sends SMS)|
| App) | | Request, Queues | +--------------+ +----------------+ +-----------+
+-----------+ | Individual | | |
| Jobs) | | |
+-----------------+ | |
| ^ | |
| | Check Status | Update Status |
v | v v
+-----------------------------------------------------------+
| PostgreSQL Database (Prisma) |
| (Stores Batch Info, Individual Message Status, Errors) |
+-----------------------------------------------------------+
Prerequisites:
- Node.js v20 LTS or v22 LTS and npm/yarn. Download Node.js. Important: Node.js v18 reached End of Life on April 30, 2025 and no longer receives security updates.
- Access to a terminal or command prompt.
- A Sinch account with SMS API credentials (Service Plan ID, API Token) and a configured Sinch phone number in E.164 format (e.g., +12025550100).
- Docker installed and running (for easily running Redis and PostgreSQL locally) or access to separate Redis v7+ and PostgreSQL v12+ instances.
- Basic understanding of Node.js, asynchronous programming, REST APIs, and databases.
- curl or a tool like Postman for testing the API.
Final Outcome:
By the end of this guide, you'll have a scalable and reliable Node.js application capable of accepting requests to send thousands of SMS messages, processing them reliably in the background, handling errors gracefully, and allowing clients to check the status of their bulk sends.
1. Setting up the project
Initialize your Node.js project and install the necessary dependencies.
1. Initialize the project:
Open your terminal, create a project directory, and navigate into it.
mkdir fastify-sinch-bulk-sms
cd fastify-sinch-bulk-sms
npm init -y
This creates a basic package.json file.
2. Install dependencies:
Install Fastify for the web server, Axios for HTTP requests, BullMQ for the queue, ioredis as the Redis client for BullMQ, dotenv for environment variables, Prisma for database interaction, and pino-pretty for human-readable logs during development.
npm install fastify axios bullmq ioredis dotenv @prisma/client pino-pretty @fastify/rate-limit @fastify/helmet
npm install --save-dev prisma nodemon
- fastify: The web framework.
- axios: To make requests to the Sinch API.
- bullmq: The job queue library.
- ioredis: Redis client required by BullMQ and potentially rate limiting.
- dotenv: To load environment variables from a .env file.
- @prisma/client: Prisma's database client.
- pino-pretty: Makes Fastify's logs readable during development.
- @fastify/rate-limit: Plugin for API rate limiting.
- @fastify/helmet: Plugin for setting security headers.
- prisma (dev): The Prisma CLI for migrations and generation.
- nodemon (dev): Automatically restarts the server during development when files change.
3. Configure Development Scripts:
Open your package.json and modify the scripts section:
// package.json
{
// … other configurations
"scripts": {
"start": "node src/server.js",
"dev": "nodemon --watch src --exec 'node -r pino-pretty src/server.js'",
"worker": "node src/worker.js",
"dev:worker": "nodemon --watch src src/worker.js",
"prisma:migrate": "prisma migrate dev",
"prisma:generate": "prisma generate",
"test": "echo \"Error: no test specified\" && exit 1"
},
// … other configurations
}
- type: "module" enables the ES module import/export syntax used by all files in this guide.
- start: Runs the main API server for production.
- dev: Runs the API server in development mode using nodemon; readable log output comes from the pino-pretty transport that src/server.js enables outside production.
- worker: Runs the background job worker for production.
- dev:worker: Runs the worker in development mode using nodemon.
- prisma:migrate: Applies database schema changes.
- prisma:generate: Generates the Prisma Client based on your schema.
4. Set up Project Structure:
Create the following directory structure within your project root:
fastify-sinch-bulk-sms/
├── prisma/
├── src/
│ ├── config/
│ ├── lib/
│ ├── routes/
│ ├── workers/
│ ├── server.js
│ └── worker.js
├── .env
├── .gitignore
├── docker-compose.yml
└── package.json
- prisma/: Will contain your database schema and migrations.
- src/: Contains all the application source code.
- src/config/: Configuration files (e.g., queue setup).
- src/lib/: Shared libraries/utilities (e.g., Prisma client, Sinch client).
- src/routes/: Fastify route handlers.
- src/workers/: Background worker logic.
- src/server.js: Entry point for the Fastify API server.
- src/worker.js: Entry point for the BullMQ worker process.
- .env: Stores environment variables (API keys, database URLs, etc.).
- .gitignore: Specifies files/directories to ignore in Git.
- docker-compose.yml: Defines local development services (Postgres, Redis).
5. Create .gitignore:
Create a .gitignore file in the project root to avoid committing sensitive information and unnecessary files:
# .gitignore
node_modules
dist
.env
*.log
# Prisma
prisma/migrations/*/*.sql
prisma/dev.db*
# OS specific
.DS_Store
Thumbs.db
6. Set up Environment Variables (.env):
Create a .env file in the project root. This file will hold your secrets and configuration. Never commit this file to version control.
# .env
# Sinch API Credentials
# Get these from your Sinch Customer Dashboard under SMS -> APIs
SINCH_SERVICE_PLAN_ID="YOUR_SERVICE_PLAN_ID"
SINCH_API_TOKEN="YOUR_API_TOKEN"
SINCH_NUMBER="YOUR_SINCH_VIRTUAL_NUMBER" # e.g., +12025550100
# Sinch API Base URL - adjust region if needed (e.g., us.sms.api.sinch.com)
# See: https://developers.sinch.com/docs/sms/api-reference/regions/
SINCH_API_BASE_URL="https://us.sms.api.sinch.com"
# Redis Connection URL (for BullMQ & Rate Limiting)
# Example for local Docker Redis: redis://localhost:6379
# Can include auth: redis://:yourpassword@localhost:6379
REDIS_URL="redis://localhost:6379"
# Database Connection URL (for Prisma)
# Example for local Docker PostgreSQL: postgresql://user:password@localhost:5432/mydb?schema=public
DATABASE_URL="postgresql://postgres:password@localhost:5432/bulk_sms_db?schema=public"
# Important Security Note: The example uses 'password' for the database.
# NEVER use default or weak passwords in production. Use strong, unique
# passwords and manage secrets securely (e.g., via environment variables
# injected by your deployment platform or a secrets manager), even locally.
# Server Configuration
PORT=3000
HOST="0.0.0.0"
# Queue Name
SMS_QUEUE_NAME="sms_sending_queue"
How to obtain Sinch Credentials:
- Log in to your Sinch Customer Dashboard.
- Navigate to SMS in the left-hand menu, then select APIs.
- You will find your Service plan ID and API token listed here. Click Show to reveal the API token.
- To find your Sinch Number, click on the Service plan ID link. Scroll down the service plan details page to find the phone numbers assigned to your account. Use one of these numbers (in E.164 format, e.g., +12xxxxxxxxxx).
- Note the Region your service plan is associated with (e.g., US, EU) and ensure the SINCH_API_BASE_URL reflects this.
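Optional sanity check: before writing any code, you can verify these credentials with a direct call to the Sinch batches endpoint (the same endpoint the application uses later). Replace the placeholders with your own values; the phone numbers shown are examples only:

```bash
curl -X POST "https://us.sms.api.sinch.com/xms/v1/YOUR_SERVICE_PLAN_ID/batches" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"from": "+12025550100", "to": ["+12025550123"], "body": "Credential test"}'
```

A 2xx response with a JSON description of the created batch indicates the credentials, region, and number are configured correctly.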
7. Set up Local Database and Redis (Using Docker):
For local development, Docker Compose is an excellent way to manage dependencies like Redis and PostgreSQL. Create a docker-compose.yml file in the project root:
# docker-compose.yml
version: '3.8'
services:
postgres:
image: postgres:15
container_name: bulk_sms_postgres
environment:
POSTGRES_DB: bulk_sms_db
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password # Match DATABASE_URL password (use a better password!)
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
restart: unless-stopped
redis:
image: redis:7-alpine
container_name: bulk_sms_redis
ports:
- "6379:6379"
volumes:
- redis_data:/data
restart: unless-stopped
volumes:
postgres_data:
redis_data:
Run docker-compose up -d in your terminal to start the database and Redis containers in the background. Ensure your .env file's REDIS_URL and DATABASE_URL match the credentials and ports defined here (and change the default password!).
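To confirm both containers are healthy before continuing (container names as defined in the compose file above):

```bash
docker-compose ps                                        # both services should show "Up"
docker exec bulk_sms_redis redis-cli ping                # expect: PONG
docker exec bulk_sms_postgres pg_isready -U postgres     # expect: accepting connections
```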
2. Implementing core functionality (Queuing)
Directly sending SMS messages within the API request handler is inefficient and unreliable for bulk operations. A loop sending messages one by one would block the server, timeout easily, and offer no mechanism for retries or status tracking if the server crashes.
The solution is a background job queue. The API endpoint will quickly validate the request and add individual SMS sending tasks (jobs) to a Redis-backed queue (BullMQ). A separate worker process will then consume these jobs independently.
1. Configure BullMQ Queue:
Create a file to manage the queue instance.
// src/config/queue.js
import { Queue, QueueEvents } from 'bullmq';
import { config } from 'dotenv';
config(); // Load .env variables
const redisUrl = process.env.REDIS_URL;
if (!redisUrl) {
throw new Error('REDIS_URL environment variable is not set.');
}
// Use URL constructor for robust parsing of the Redis URL
let connection;
try {
const parsedUrl = new URL(redisUrl);
connection = {
host: parsedUrl.hostname || 'localhost',
port: parseInt(parsedUrl.port || '6379', 10),
password: parsedUrl.password || undefined, // Handle optional password
// Note: BullMQ's connection option accepts either an ioredis instance or an
// options object like this one; parsing the URL keeps the settings explicit.
};
} catch (error) {
console.error(`Invalid REDIS_URL format: ${redisUrl}`, error);
throw new Error(`Invalid REDIS_URL: ${error.message}`);
}
// Create a reusable queue instance
// We define the 'data' type expected for jobs in this queue
const smsQueue = new Queue(process.env.SMS_QUEUE_NAME || 'sms_sending_queue', {
connection, // Use the parsed connection object
defaultJobOptions: {
attempts: 3, // Retry failed jobs 3 times
backoff: {
type: 'exponential', // Exponential backoff strategy
delay: 1000, // Initial delay of 1 second
},
removeOnComplete: true, // Automatically remove jobs when completed
removeOnFail: 500, // Keep last 500 failed jobs for debugging
},
});
console.log(`SMS Queue '${smsQueue.name}' initialized.`);
console.log(`Connected to Redis at ${connection.host}:${connection.port}`);
// Optional: Event listeners for monitoring queue health.
// Note: the Queue instance itself emits only a few events (such as 'error');
// job lifecycle events ('active', 'completed', 'failed') are emitted by a
// QueueEvents instance or by the Worker that processes the jobs.
smsQueue.on('error', (error) => {
  console.error(`Queue Error: ${error.message}`);
});
const smsQueueEvents = new QueueEvents(smsQueue.name, { connection });
smsQueueEvents.on('completed', ({ jobId }) => {
  // console.log(`Job ${jobId} has completed`);
});
smsQueueEvents.on('failed', ({ jobId, failedReason }) => {
  console.error(`Job ${jobId} failed with error: ${failedReason}`);
});
export default smsQueue;
- We load environment variables using dotenv.
- We parse the REDIS_URL using the built-in URL constructor, which is more robust than basic string splitting and handles complex URLs (with auth, non-default ports, etc.).
- We create a Queue instance, naming it based on the environment variable.
- Crucially, we define defaultJobOptions:
  - attempts: 3: If a job fails (e.g., the Sinch API is down), BullMQ will retry it up to 3 times.
  - backoff: Specifies how long to wait between retries. exponential increases the delay after each failure (1s, 2s, 4s).
  - removeOnComplete: Cleans up successful jobs from Redis to save space.
  - removeOnFail: Keeps a history of the last 500 failed jobs for debugging.
- An 'error' listener on the queue and a QueueEvents instance are added for logging queue errors and job failures.
3. Building the API layer (Fastify)
Now, let's create the Fastify server and the API endpoints.
1. Initialize Prisma:
Run the Prisma init command. This creates the prisma directory and a basic schema.prisma file.
npx prisma init --datasource-provider postgresql
Make sure the url in prisma/schema.prisma points to your DATABASE_URL environment variable:
// prisma/schema.prisma
generator client {
provider = "prisma-client-js"
}
datasource db {
provider = "postgresql"
url = env("DATABASE_URL") // Ensures it reads from .env
}
// Data models (SmsBatch and SmsMessageJob) are added later in the guide
2. Set up Prisma Client:
Create a singleton instance of the Prisma client to be reused across the application.
// src/lib/prismaClient.js
import { PrismaClient } from '@prisma/client';
// Initialize Prisma Client
const prisma = new PrismaClient({
log: process.env.NODE_ENV === 'development' ? ['query', 'info', 'warn', 'error'] : ['error'],
});
console.log('Prisma Client initialized.');
export default prisma;
3. Create the Sinch Client:
Create a module to encapsulate interactions with the Sinch API.
// src/lib/sinchClient.js
import axios from 'axios';
import { config } from 'dotenv';
config(); // Load .env variables
const sinchApi = axios.create({
baseURL: process.env.SINCH_API_BASE_URL,
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${process.env.SINCH_API_TOKEN}`,
},
// Add timeout for resilience
timeout: 10000, // 10 seconds
});
/**
* Sends a single SMS message using the Sinch API.
* @param {string} to - The recipient phone number in E.164 format (e.g., +12025550100)
* @param {string} from - The Sinch virtual number in E.164 format
* @param {string} body - The message content
* @returns {Promise<object>} - The response data from Sinch API
* @throws {Error} - If the API request fails
*/
export const sendSms = async (to, from, body) => {
const servicePlanId = process.env.SINCH_SERVICE_PLAN_ID;
const endpoint = `/xms/v1/${servicePlanId}/batches`;
try {
console.log(`Attempting to send SMS via Sinch to: ${to}`);
const response = await sinchApi.post(endpoint, {
to: [to], // Sinch batch endpoint expects an array of recipients
from: from,
body: body,
// Optional: Add delivery report request
// delivery_report: 'summary' // or 'full'
});
console.log(`Sinch API response for ${to}:`, response.status, response.data);
// You might want to return specific fields like batch_id
return response.data;
} catch (error) {
console.error(`Error sending SMS to ${to}:`, error.response?.status, error.response?.data || error.message);
// Re-throw a structured error or the original error
const err = new Error(error.response?.data?.text || `Sinch API error: ${error.message}`);
err.statusCode = error.response?.status;
err.details = error.response?.data;
throw err;
}
};
- We use axios.create to configure a base URL and default headers (including the crucial Authorization bearer token). A timeout is added.
- The sendSms function takes the recipient, sender number, and message body.
- It constructs the correct endpoint using the SINCH_SERVICE_PLAN_ID.
- It makes a POST request with the payload required by the Sinch /batches endpoint. Note that even for a single message, the to field expects an array.
- Basic logging is included.
- Error handling catches potential issues with the API request and throws a more informative error if possible.
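If you want to smoke-test the client on its own, a throwaway script along these lines works (the file path and recipient number are hypothetical, and the shape of the Sinch response is hedged in the comments):

```js
// scratch/test-sinch.js (hypothetical) – run from the project root with: node scratch/test-sinch.js
import { sendSms } from '../src/lib/sinchClient.js';

// sinchClient loads .env on import, so SINCH_NUMBER is available here
const result = await sendSms('+12025550123', process.env.SINCH_NUMBER, 'Test message from the tutorial');

// The Sinch batches endpoint responds with a JSON description of the created batch,
// typically including an id you can use to look the batch up later.
console.log('Sinch batch created:', result.id);
```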
4. Define API Routes:
Create the route handler for submitting bulk SMS jobs.
// src/routes/smsRoutes.js
import smsQueue from '../config/queue.js';
import prisma from '../lib/prismaClient.js';
import { randomUUID } from 'crypto';
// Define JSON schema for request validation
const sendBulkSchema = {
body: {
type: 'object',
required: ['recipients', 'message'],
properties: {
recipients: {
type: 'array',
minItems: 1,
items: {
type: 'string',
// Basic E.164 format check (starts with +, followed by digits)
// Note: This regex is basic. Consider a dedicated library for production.
pattern: '^\\+[1-9]\\d{1,14}$',
},
},
message: {
type: 'string',
minLength: 1,
maxLength: 1600, // Adjust based on Sinch limits / concatenation needs
},
},
},
response: {
202: { // Use 202 Accepted status code
type: 'object',
properties: {
message: { type: 'string' },
batchId: { type: 'string', format: 'uuid' },
jobCount: { type: 'integer' },
},
},
// Add schema for potential error responses
500: {
type: 'object',
properties: {
message: { type: 'string' }
}
}
},
};
const getStatusSchema = {
params: {
type: 'object',
required: ['batchId'],
properties: {
batchId: { type: 'string', format: 'uuid' }
}
},
response: {
200: {
type: 'object',
properties: {
batchId: { type: 'string', format: 'uuid' },
status: { type: 'string', enum: ['PENDING', 'PROCESSING', 'COMPLETED', 'COMPLETED_WITH_ERRORS', 'FAILED'] },
totalJobs: { type: 'integer' },
processedJobs: { type: 'integer' },
successfulJobs: { type: 'integer' },
failedJobs: { type: 'integer' },
createdAt: { type: 'string', format: 'date-time'},
updatedAt: { type: 'string', format: 'date-time'},
jobs: {
type: 'array',
items: {
type: 'object',
properties: {
id: { type: 'string' }, // Job ID in DB
recipient: { type: 'string' },
status: { type: 'string', enum: ['PENDING', 'ACTIVE', 'COMPLETED', 'FAILED'] },
attemptsMade: { type: 'integer' },
failedReason: { type: ['string', 'null'] },
processedOn: { type: ['string', 'null'], format: 'date-time'},
finishedOn: { type: ['string', 'null'], format: 'date-time'}
}
}
}
}
},
404: {
type: 'object',
properties: {
message: { type: 'string'}
}
},
500: {
type: 'object',
properties: {
message: { type: 'string'}
}
}
}
};
async function smsRoutes(fastify, options) {
fastify.post('/send-bulk', { schema: sendBulkSchema }, async (request, reply) => {
const { recipients, message } = request.body;
const sinchNumber = process.env.SINCH_NUMBER;
const batchId = randomUUID(); // Generate a unique ID for this batch
if (!sinchNumber) {
fastify.log.error('Server configuration error: SINCH_NUMBER not set.');
reply.code(500).send({ message: 'Server configuration error: Sinch number not set.' });
return;
}
let batch;
try {
// 1. Create a Batch record in the database
batch = await prisma.smsBatch.create({
data: {
id: batchId,
totalJobs: recipients.length,
messageBody: message, // Optional: store message body with batch
status: 'PENDING', // Initial status
},
});
fastify.log.info(`Created batch ${batchId} with ${recipients.length} recipients.`);
} catch (dbError) {
fastify.log.error(`Database error creating batch ${batchId}: ${dbError.message}`);
reply.code(500).send({ message: 'Failed to initiate bulk SMS batch.' });
return;
}
// 2. Create individual job data and queue jobs
const jobPromises = recipients.map((recipient) => {
const jobId = randomUUID(); // Unique ID for the individual message job
const jobData = {
jobId, // Include our DB job ID in the data
batchId,
to: recipient,
from: sinchNumber,
body: message,
};
// Add job to the queue
// Use jobId as the BullMQ job ID for easier tracking if needed,
// though BullMQ generates its own internal ID too.
return smsQueue.add('send-single-sms', jobData, { jobId: jobId });
});
try {
const addedJobs = await Promise.all(jobPromises);
fastify.log.info(`Successfully added ${addedJobs.length} jobs to queue for batch ${batchId}`);
// Respond with 202 Accepted: The request is accepted for processing, but not yet complete.
reply.code(202).send({
message: 'Bulk SMS request accepted and queued for processing.',
batchId: batchId,
jobCount: addedJobs.length,
});
} catch (queueError) {
fastify.log.error(`Error adding jobs to queue for batch ${batchId}: ${queueError.message}`);
// Attempt to mark the batch as failed since jobs couldn't be queued
try {
await prisma.smsBatch.update({
where: { id: batchId },
data: { status: 'FAILED', failedReason: 'Failed to queue jobs' }
});
} catch (dbError) {
fastify.log.error(`Failed to update batch ${batchId} status to FAILED after queue error: ${dbError.message}`);
}
reply.code(500).send({ message: 'Failed to queue SMS jobs.' });
}
});
// --- Status Endpoint ---
fastify.get('/status/:batchId', { schema: getStatusSchema }, async (request, reply) => {
const { batchId } = request.params;
try {
const batch = await prisma.smsBatch.findUnique({
where: { id: batchId },
include: {
// Include associated message jobs
messageJobs: {
orderBy: { createdAt: 'asc' } // Order jobs if needed
},
}
});
if (!batch) {
reply.code(404).send({ message: 'Batch not found.' });
return;
}
// Calculate summary stats
const processedJobs = batch.messageJobs.filter(j => ['COMPLETED', 'FAILED'].includes(j.status)).length;
const successfulJobs = batch.messageJobs.filter(j => j.status === 'COMPLETED').length;
const failedJobs = batch.messageJobs.filter(j => j.status === 'FAILED').length;
// Determine overall batch status (potentially update DB if status changed)
let overallStatus = batch.status;
// Flag to track if DB needs update (not strictly needed for async update)
// const shouldUpdateDbStatus = false;
if (overallStatus !== 'FAILED') { // Don't override a batch-level failure
const totalJobs = batch.totalJobs; // Use stored total
if (processedJobs === totalJobs && totalJobs > 0) {
const newStatus = failedJobs > 0 ? 'COMPLETED_WITH_ERRORS' : 'COMPLETED';
if (batch.status !== newStatus) {
overallStatus = newStatus;
// Update batch status in DB asynchronously (fire and forget or await)
prisma.smsBatch.update({ where: {id: batchId}, data: { status: overallStatus, updatedAt: new Date() }})
.catch(err => fastify.log.error(`Failed to auto-update batch ${batchId} status to ${overallStatus}: ${err.message}`));
}
} else if (processedJobs > 0 || batch.messageJobs.some(j => j.status === 'ACTIVE')) {
const newStatus = 'PROCESSING';
if (batch.status === 'PENDING') { // Only transition from PENDING to PROCESSING automatically
overallStatus = newStatus;
prisma.smsBatch.update({ where: {id: batchId}, data: { status: overallStatus, updatedAt: new Date() }})
.catch(err => fastify.log.error(`Failed to auto-update batch ${batchId} status to ${overallStatus}: ${err.message}`));
} else if (batch.status !== 'PROCESSING') {
// If already COMPLETED/FAILED, don't revert to PROCESSING
// If already PROCESSING, keep it
overallStatus = batch.status;
} else {
overallStatus = 'PROCESSING'; // Stay in processing if already there
}
}
}
reply.code(200).send({
batchId: batch.id,
status: overallStatus, // Send the calculated current status
totalJobs: batch.totalJobs,
processedJobs: processedJobs,
successfulJobs: successfulJobs,
failedJobs: failedJobs,
createdAt: batch.createdAt,
updatedAt: batch.updatedAt, // Reflects last DB update time
jobs: batch.messageJobs.map(job => ({ // Format job details for response
id: job.id,
recipient: job.recipient,
status: job.status,
attemptsMade: job.attemptsMade,
failedReason: job.failedReason,
processedOn: job.processedOn,
finishedOn: job.finishedOn,
}))
});
} catch (error) {
fastify.log.error(`Error fetching status for batch ${batchId}: ${error.message}`);
reply.code(500).send({ message: 'Internal server error retrieving batch status.' });
}
});
}
export default smsRoutes;
- We import the smsQueue and prisma client.
- sendBulkSchema and getStatusSchema use Fastify's built-in JSON Schema validation. The E.164 regex pattern in sendBulkSchema includes an end anchor ($) so the whole value must match.
- /send-bulk endpoint:
  - Generates a unique batchId.
  - Creates an SmsBatch record in the database first.
  - Loops through the recipients array.
  - For each recipient, it creates a jobData object containing all necessary info (to, from, body, batchId, jobId).
  - It adds the job to the smsQueue using smsQueue.add(). We give the job a name ('send-single-sms') and pass the jobData. We also use our generated jobId as the BullMQ job ID.
  - Promise.all waits for all jobs to be added to the queue.
  - Responds with 202 Accepted, indicating the task is queued, along with the batchId.
  - Includes error handling for database and queue operations.
- /status/:batchId endpoint:
  - Retrieves the batchId from the URL parameters.
  - Queries the database using Prisma to find the SmsBatch and its associated SmsMessageJob records.
  - Calculates summary statistics.
  - Determines an overall status for the batch based on the status of individual jobs, potentially updating the database status asynchronously if it has changed (e.g., from PENDING to PROCESSING or PROCESSING to COMPLETED).
  - Returns the batch details and the status of each individual message job.
  - Handles the case where the batchId is not found (404).
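These handlers (and the worker) rely on two Prisma models that the full guide defines in a later section. For reference, here is a minimal sketch of what they might look like — field names are inferred from the handler code above, and the status enums are simplified to plain strings:

```prisma
// prisma/schema.prisma (sketch – adjust to your needs)
model SmsBatch {
  id           String          @id @default(uuid())
  status       String          @default("PENDING") // PENDING | PROCESSING | COMPLETED | COMPLETED_WITH_ERRORS | FAILED
  totalJobs    Int
  messageBody  String?
  failedReason String?
  createdAt    DateTime        @default(now())
  updatedAt    DateTime        @updatedAt
  messageJobs  SmsMessageJob[]
}

model SmsMessageJob {
  id           String    @id @default(uuid())
  batchId      String
  batch        SmsBatch  @relation(fields: [batchId], references: [id])
  recipient    String
  status       String    @default("PENDING") // PENDING | ACTIVE | COMPLETED | FAILED
  attemptsMade Int       @default(0)
  failedReason String?
  processedOn  DateTime?
  finishedOn   DateTime?
  createdAt    DateTime  @default(now())
  updatedAt    DateTime  @updatedAt

  @@index([batchId])
  @@index([status])
}
```

After adding models, run npm run prisma:migrate and npm run prisma:generate so the generated client used by the routes and the worker stays in sync with the schema.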
5. Create the Fastify Server:
Set up the main server entry point.
// src/server.js
import Fastify from 'fastify';
import { config } from 'dotenv';
import smsRoutes from './routes/smsRoutes.js';
import prisma from './lib/prismaClient.js'; // Import prisma to ensure it connects on start
import rateLimit from '@fastify/rate-limit';
import helmet from '@fastify/helmet';
import Redis from 'ioredis'; // Needed for distributed rate limiting
config(); // Load .env variables
const fastify = Fastify({
logger: {
level: process.env.LOG_LEVEL || 'info', // Use env var for log level
// Use pino-pretty only in development for readability
transport: process.env.NODE_ENV !== 'production'
? { target: 'pino-pretty' }
: undefined,
},
// Enable trust proxy if running behind a load balancer or reverse proxy
// by setting the factory option below:
// trustProxy: true,
});
// --- Register Plugins ---
// Security Headers
await fastify.register(helmet, {
// Example: Configure Content Security Policy if needed
// contentSecurityPolicy: false // Disable CSP if it causes issues initially
});
// Rate Limiting
// Ensure REDIS_URL is set for this to work effectively in scaled environments
if (process.env.REDIS_URL) {
try {
const redisClient = new Redis(process.env.REDIS_URL, {
// Optional: Add error handling for Redis connection used by rate limiter
maxRetriesPerRequest: 3, // Example option
showFriendlyErrorStack: process.env.NODE_ENV !== 'production',
});
redisClient.on('error', (err) => fastify.log.error({ err }, 'Rate Limit Redis Error'));
await fastify.register(rateLimit, {
max: 100, // Max 100 requests per window per IP address (adjust as needed)
timeWindow: '1 minute', // Time window duration
redis: redisClient, // Use Redis for distributed rate limiting
// keyGenerator: function (request) { /* custom key logic if needed */ },
// allowList: ['127.0.0.1'], // IPs to bypass rate limit
});
fastify.log.info('Rate limiting enabled with Redis backend.');
} catch (redisError) {
fastify.log.error({ err: redisError }, 'Failed to initialize Redis for rate limiting');
// Decide if you want to fall back to in-memory rate limiting or exit
// Fallback (less effective when scaled):
// await fastify.register(rateLimit, { max: 100, timeWindow: '1 minute' });
// fastify.log.warn('Rate limiting falling back to in-memory store.');
// Or exit if Redis is critical:
// process.exit(1);
}
} else {
// Fallback to in-memory rate limiting if REDIS_URL is not set
await fastify.register(rateLimit, { max: 100, timeWindow: '1 minute' });
fastify.log.warn('REDIS_URL not set. Rate limiting falling back to in-memory store (less effective when scaled).');
}
// --- Register Routes ---
await fastify.register(smsRoutes, { prefix: '/api/sms' });
// --- Graceful Shutdown ---
const signals = ['SIGINT', 'SIGTERM'];
signals.forEach((signal) => {
process.on(signal, async () => {
fastify.log.info(`Received ${signal}, shutting down gracefully...`);
await fastify.close();
// Close Prisma client connection
await prisma.$disconnect();
// Optional: Close BullMQ queue connections if needed (often handled internally)
// await smsQueue.close(); // Example if direct queue instance is accessible
fastify.log.info('Server shut down.');
process.exit(0);
});
});
// --- Start Server ---
const start = async () => {
try {
const port = parseInt(process.env.PORT || '3000', 10);
const host = process.env.HOST || '0.0.0.0';
await fastify.listen({ port, host });
fastify.log.info(`Server listening on port ${port}`);
// Log routes only in development for clarity
if (process.env.NODE_ENV !== 'production') {
console.log(fastify.printRoutes());
}
// Ensure Prisma client is connected (optional check)
await prisma.$connect().then(() => {
fastify.log.info('Prisma client connected successfully.');
}).catch(err => {
fastify.log.error({ err }, 'Prisma client failed to connect');
// Optionally exit if DB connection is critical
// process.exit(1);
});
} catch (err) {
fastify.log.error(err);
process.exit(1);
}
};
start();
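The worker process (src/worker.js) that consumes these jobs is built out later in the guide. As a rough preview, a minimal version could look like the sketch below; it reuses the queue name, the sendSms helper, and the Prisma models referenced above, and the exact status-update logic is an assumption rather than the guide's final code:

```js
// src/worker.js (sketch)
import { Worker } from 'bullmq';
import { config } from 'dotenv';
import { sendSms } from './lib/sinchClient.js';
import prisma from './lib/prismaClient.js';

config(); // Load .env variables

// Parse REDIS_URL into the connection object BullMQ expects
const redisUrl = new URL(process.env.REDIS_URL || 'redis://localhost:6379');
const connection = {
  host: redisUrl.hostname,
  port: parseInt(redisUrl.port || '6379', 10),
  password: redisUrl.password || undefined,
};

const worker = new Worker(
  process.env.SMS_QUEUE_NAME || 'sms_sending_queue',
  async (job) => {
    const { jobId, batchId, to, from, body } = job.data;

    // Record (or create) the per-message job row and mark it active
    await prisma.smsMessageJob.upsert({
      where: { id: jobId },
      create: { id: jobId, batchId, recipient: to, status: 'ACTIVE', attemptsMade: job.attemptsMade, processedOn: new Date() },
      update: { status: 'ACTIVE', attemptsMade: job.attemptsMade, processedOn: new Date() },
    });

    // Send the SMS; a thrown error here triggers BullMQ's retry/backoff logic
    const result = await sendSms(to, from, body);

    await prisma.smsMessageJob.update({
      where: { id: jobId },
      data: { status: 'COMPLETED', finishedOn: new Date() },
    });
    return result;
  },
  { connection, concurrency: 5 } // tune concurrency to stay within Sinch rate limits
);

// Fires on every failed attempt; a later retry will flip the status back to ACTIVE
worker.on('failed', async (job, err) => {
  if (!job) return;
  await prisma.smsMessageJob
    .update({
      where: { id: job.data.jobId },
      data: { status: 'FAILED', failedReason: err.message, finishedOn: new Date() },
    })
    .catch(() => {});
});

console.log('SMS worker started.');
```

Run it with npm run worker (or npm run dev:worker during development) alongside the API server.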
Frequently Asked Questions (FAQ)
How do I send bulk SMS messages with Fastify and Sinch API?
Build a Fastify API endpoint that accepts a list of recipients and a message body, then use BullMQ to queue individual SMS jobs for background processing. Install dependencies with npm install fastify axios bullmq ioredis @prisma/client, configure Sinch API credentials (Service Plan ID, API Token, and phone number in E.164 format), set up Redis and PostgreSQL, and create a worker process that consumes jobs from the queue and sends SMS via Sinch's REST API. The system handles retries automatically using BullMQ's exponential backoff (3 attempts with 1-second initial delay) and tracks message status in PostgreSQL using Prisma ORM. Learn more about Sinch SMS API integration with Node.js in the official documentation.
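For example, once the stack is running, the two endpoints can be exercised with curl (host, port, and phone numbers below are placeholders):

```bash
# Queue a small batch
curl -X POST http://localhost:3000/api/sms/send-bulk \
  -H "Content-Type: application/json" \
  -d '{"recipients": ["+12025550123", "+12025550124"], "message": "Hello from the bulk sender"}'
# => 202 Accepted with a batchId

# Check progress using the returned batchId
curl http://localhost:3000/api/sms/status/<batchId>
```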
Which Node.js version should I use for a Fastify v5 bulk SMS system?
Use Node.js v20 LTS "Iron" (Maintenance LTS through April 2026) or Node.js v22 LTS "Jod" (Active LTS through October 2025, Maintenance LTS through April 2027). Fastify v5 requires Node.js v20+ minimum and offers 5 – 10% performance improvements over v4. Node.js v18 reached End of Life on April 30, 2025 and no longer receives security updates. Node.js v22 is recommended for new projects as it provides Active LTS support throughout 2025 with the latest features.
How do I configure BullMQ v5 for reliable SMS job processing?
Create a BullMQ Queue instance with explicit connection object (required in v5.x), set retry attempts to 3 with exponential backoff strategy (initial 1-second delay), enable automatic job cleanup (removeOnComplete: true, removeOnFail: 500), and configure Redis connection using ioredis client. BullMQ v5.60.0 no longer supports integer job IDs – use string UUIDs instead. Set up event listeners for monitoring queue health (error, failed, completed events) and implement graceful shutdown by closing queue connections on SIGINT/SIGTERM signals.
What Sinch API credentials do I need for bulk SMS?
Obtain three credentials from your Sinch Customer Dashboard: (1) Service Plan ID found under SMS → APIs, (2) API Token (click "Show" to reveal), and (3) Sinch Virtual Number in E.164 format (e.g., +12025550100) from your service plan details. Configure the API base URL based on your region: https://us.sms.api.sinch.com for US or EU endpoint for Europe. Sinch provides direct connections to 600+ mobile operators worldwide with 95% US population coverage. Store credentials securely in .env file and never commit to version control.
How do I implement error handling and retries for bulk SMS?
Use BullMQ's built-in retry mechanism with attempts: 3 and exponential backoff (type: 'exponential', delay: 1000) to automatically retry failed jobs with increasing delays (1s, 2s, 4s). Wrap Sinch API calls in try-catch blocks, log errors with recipient context, store failure reasons in PostgreSQL via Prisma (failedReason field), and track attempt counts (attemptsMade). Implement job-level error handling in your worker processor to catch transient failures (network issues, rate limits) versus permanent failures (invalid phone numbers). Use BullMQ's failed event listener to log jobs that exceed max retry attempts.
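BullMQ also ships an UnrecoverableError that fails a job immediately and skips the remaining retry attempts — one way to short-circuit permanent failures. A minimal sketch, assuming the statusCode property set by the sinchClient error wrapper shown earlier:

```js
import { UnrecoverableError } from 'bullmq';

// Inside the worker processor, around the Sinch API call:
try {
  await sendSms(to, from, body);
} catch (err) {
  // Client errors other than 429 (e.g. invalid recipient) will not succeed on retry
  if (err.statusCode && err.statusCode >= 400 && err.statusCode < 500 && err.statusCode !== 429) {
    throw new UnrecoverableError(`Permanent failure: ${err.message}`);
  }
  throw err; // transient failures (network, 5xx, rate limits) -> let BullMQ retry with backoff
}
```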
How do I set up rate limiting for a Fastify bulk SMS API?
Install @fastify/rate-limit and configure with Redis backend for distributed rate limiting across multiple server instances. Set max requests per time window (e.g., max: 100, timeWindow: '1 minute') based on your API capacity and Sinch API rate limits. Use Redis connection for stateful tracking: redis: new Redis(process.env.REDIS_URL). Implement IP-based limiting by default or custom key generation for user-based limits. Add allowlist for trusted IPs, configure appropriate HTTP 429 responses, and log rate limit violations for monitoring.
What database schema should I use for tracking bulk SMS batches?
Implement two main tables: (1) SmsBatch with id (UUID), totalJobs, status (PENDING/PROCESSING/COMPLETED/COMPLETED_WITH_ERRORS/FAILED), messageBody, createdAt, updatedAt, and failedReason, and (2) SmsMessageJob with id (UUID), batchId (foreign key), recipient, status (PENDING/ACTIVE/COMPLETED/FAILED), attemptsMade, failedReason, processedOn, finishedOn, and timestamps. Add indexes on batchId and status for faster queries. Use Prisma ORM v6.x with PostgreSQL v12+ for type-safe database access, automatic migrations, and relationship management between batches and individual message jobs.
How do I deploy a Fastify bulk SMS system to production?
Use environment-specific configurations: set NODE_ENV=production, configure robust logging (replace pino-pretty with structured JSON logs), enable Fastify's trust proxy option if behind a load balancer, use managed Redis (AWS ElastiCache, Redis Cloud) and PostgreSQL (AWS RDS, Supabase) services, implement health check endpoints (/health) for load balancer monitoring, configure SSL/TLS termination, set up monitoring with Prometheus/Datadog for queue metrics, implement graceful shutdown handlers for SIGTERM signals, use PM2 or Docker for process management, and run separate worker instances scaled independently from API servers.
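For instance, a basic /health route for load balancer checks might look like this (a sketch to register in src/server.js; add whatever dependency checks your deployment needs):

```js
// Simple health check for load balancers / orchestrators
fastify.get('/health', async () => {
  await prisma.$queryRaw`SELECT 1`; // verifies the database connection is alive
  return { status: 'ok', uptime: process.uptime() };
});
```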
What are the cost considerations for sending bulk SMS with Sinch?
Sinch SMS pricing varies by destination country (US SMS typically $0.01 – $0.02 per message) with volume discounts available. Calculate infrastructure costs: Redis hosting ($10 – $50/month for managed instances), PostgreSQL hosting ($20 – $100/month depending on scale), Node.js hosting (serverless or container-based $20 – $200/month), and network egress fees. Optimize costs by batching messages efficiently, implementing intelligent retry logic to avoid wasted attempts on permanently failed numbers, using connection pooling for database and Redis, and monitoring queue metrics to right-size worker instances. Volume commitments with Sinch can reduce per-message costs significantly for high-volume senders.
How do I scale a Fastify bulk SMS system for millions of messages?
Scale horizontally by running multiple worker instances consuming from the same BullMQ queue, use Redis Cluster for distributed queue backend handling high throughput, implement database connection pooling with Prisma (set connection_limit in DATABASE_URL), partition large batches into smaller chunks (1,000 – 10,000 messages per batch) for better progress tracking, use database read replicas for status queries to reduce load on primary database, implement caching for frequently accessed batch status with Redis, monitor queue metrics (waiting jobs, processing rate, failed jobs) and auto-scale workers based on queue depth, and use CDN/edge caching for API documentation and status endpoints.
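For example, Prisma reads the pool size from a connection_limit parameter on the connection string (the value below is illustrative):

```bash
# .env – cap the Prisma pool at 10 connections per application instance
DATABASE_URL="postgresql://postgres:password@localhost:5432/bulk_sms_db?schema=public&connection_limit=10"
```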
What security measures should I implement for a bulk SMS API?
Implement multiple security layers: (1) Use @fastify/helmet for security headers (CSP, HSTS, X-Frame-Options), (2) Configure @fastify/rate-limit with Redis backend to prevent abuse (100 requests per minute per IP), (3) Implement API key authentication middleware requiring valid tokens in Authorization header, (4) Validate all inputs using Fastify JSON Schema validation with E.164 phone number pattern (^\\+[1-9]\\d{1,14}$), (5) Store sensitive credentials in environment variables never in code, (6) Use HTTPS/TLS for all API communication, (7) Implement CORS policies restricting allowed origins, (8) Log security events (failed auth, rate limit violations) for monitoring, (9) Use prepared statements via Prisma to prevent SQL injection, and (10) Implement IP allowlisting for admin endpoints.
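As an illustration of point (3), a minimal API key check can be added as a Fastify onRequest hook; the API_KEY environment variable here is an assumption, not part of the .env file shown earlier:

```js
// src/server.js (sketch) – register before the routes; exempt public paths such as /health if needed
fastify.addHook('onRequest', async (request, reply) => {
  const apiKey = request.headers['authorization']?.replace('Bearer ', '');
  if (!apiKey || apiKey !== process.env.API_KEY) {
    return reply.code(401).send({ message: 'Unauthorized' });
  }
});
```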
How do I monitor and debug BullMQ job failures?
Enable detailed logging in your worker processor with recipient context, use BullMQ's event listeners (failed, error, stalled) to capture and log job failures, query PostgreSQL for failed jobs with status: 'FAILED' and review failedReason field, implement structured logging with correlation IDs linking batch → job → API call, set up error alerting using monitoring tools (Sentry, Datadog) triggered on high failure rates, use BullMQ's removeOnFail: 500 to retain recent failures for analysis, create admin endpoints to manually retry failed jobs by re-queuing with original data, monitor Redis memory usage to prevent queue overflow, and implement dead letter queue pattern for jobs exceeding max retry attempts requiring manual intervention.
Additional Frequently Asked Questions
How to send bulk SMS messages with Node.js?
Use a robust queuing system like BullMQ with Redis, along with a framework like Fastify and the Sinch SMS API. This architecture handles individual messages reliably in the background, preventing server overload and ensuring delivery even with temporary failures. The provided example uses a Fastify API endpoint to accept bulk requests and queue individual SMS sending tasks, processed by a dedicated Node.js worker.
What is Fastify used for in bulk SMS sending?
Fastify is a high-performance Node.js web framework. It serves as the API layer, receiving bulk SMS requests, validating input, and adding individual message sending jobs to the BullMQ queue. Fastify's speed and efficiency are beneficial for handling a high volume of requests.
Why use a message queue for bulk SMS?
A message queue like BullMQ with Redis decouples the API request from the actual SMS sending. This allows the API to respond quickly without waiting for each message to be sent, improving performance and reliability. It also provides retry mechanisms and handles failures gracefully.
What database is used for tracking SMS message status?
PostgreSQL, combined with the Prisma ORM, provides persistent storage for tracking the status of each individual message, the overall batch status, and any errors encountered during the sending process. Prisma simplifies database interaction and ensures type safety.
How does the bulk SMS system handle failures?
The system leverages BullMQ's built-in retry mechanism, allowing it to automatically retry failed messages multiple times with exponential backoff. Error logs are also stored to assist with identifying and resolving persistent issues. The system uses a database to track each message's status, supporting better error analysis and recovery.
What is the role of Redis in the bulk SMS architecture?
Redis acts as the backend for the BullMQ job queue, storing the individual SMS sending tasks until the worker process can pick them up and send the messages via the Sinch API. Its in-memory nature ensures fast queue operations.
Can I check the status of my bulk SMS messages?
Yes, the provided system includes a dedicated `/status/:batchId` API endpoint. Clients can use this endpoint to retrieve information about a specific batch of SMS messages, including the overall status and the status of each individual message within the batch.
How to install required dependencies for this bulk SMS project?
Use `npm install fastify axios bullmq ioredis dotenv @prisma/client pino-pretty @fastify/rate-limit @fastify/helmet` for production and `npm install --save-dev prisma nodemon` for development dependencies. This installs the web framework, API request library, queueing system, Redis client, and other essentials. You'll also need Docker for local Redis and PostgreSQL setup or access to standalone instances.
What Sinch credentials do I need for sending bulk SMS?
You'll need a Sinch account with a Service Plan ID, an API Token, and a configured Sinch virtual number. This information can be obtained from the Sinch Customer Dashboard under SMS -> APIs. The Sinch number should be in E.164 format (e.g. +12xxxxxxxxxx).
When should I use a bulk SMS sending system like this?
Consider using this system for applications requiring large-scale messaging, such as marketing campaigns, important notifications, or alerts. The queueing system and retry logic ensure reliable delivery, essential when reaching a wide audience. Don't use this approach for sending a small number of messages, as a simple API call would be more efficient in that case.
How to set up a local development environment with Docker?
Use the provided `docker-compose.yml` to start PostgreSQL and Redis containers locally. This simplifies the setup process and ensures consistency across development environments. Ensure your `.env` file's connection URLs match the Docker configuration.
What is the purpose of the Node.js worker process?
The worker process is dedicated to consuming jobs from the Redis queue. It fetches individual SMS sending tasks from the queue, sends the messages via the Sinch API, and updates the database with the status of each message. This asynchronous operation keeps the main API responsive and allows for high throughput.
How does rate limiting work in this bulk SMS setup?
The system uses the `@fastify/rate-limit` plugin. By default, it limits to 100 requests per minute per IP address using an in-memory store. For scaled environments, a Redis backend is highly recommended for distributed rate limiting. You can configure `REDIS_URL` in the `.env` file.
Why is Prisma used in this bulk SMS system?
Prisma is a modern database toolkit that simplifies database operations and provides type safety. It serves as the ORM (Object-Relational Mapper) for interacting with PostgreSQL, managing database migrations, and generating a type-safe client for accessing data.