
Intuji - DevOps Final Assignment

As part of my final assignment, I have prepared documentation of how I completed this task step by step.


Building a Fully Containerized Full Stack Application with CI/CD, Monitoring, and Log Management

In this post, I will share my journey of developing a fully containerized full stack application using React for the frontend, Node.js for the backend, and MySQL for the primary and replica databases. I integrated a CI/CD pipeline with Jenkins, set up monitoring using Grafana, and managed logs with the ELK/EFK stack. Let’s dive into the process step by step.


1. Setting Up the Environment

To get started, I ensured that I had the necessary tools installed on my machine:

  • Docker: To run and manage all components in containers.

  • Docker Compose: To simplify the orchestration of multi-container applications.

  • React: For building the user interface.

  • Node.js: For handling server-side logic.

  • MySQL: As the primary and replica databases.

  • Jenkins: For automating the deployment pipeline.

  • Grafana: For monitoring application performance.

  • ELK/EFK Stack: For centralized logging and analytics.
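Before starting, it is worth confirming that the core tools are installed and on the PATH. A small shell check (the exact version strings on your machine will differ):

```shell
# Check which of the required CLI tools are installed
for tool in docker node npm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $("$tool" --version 2>/dev/null | head -n 1)"
  else
    echo "$tool: missing"
  fi
done
```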


2. Frontend (React) in a Container

I started by creating a React application that would serve as the frontend interface.

Step 1: Create a React App

Using create-react-app, I set up my frontend project:

npx create-react-app frontend
cd frontend

Step 2: Configure Environment Variables

To keep configuration organized, I created a .env file in the React project to store the backend API URL. Note that create-react-app inlines REACT_APP_* variables into the bundle at build time, so this value must be set before npm run build:

REACT_APP_API_URL='http://52.202.190.246:3005'

Step 3: Dockerize the React App

I created a Dockerfile for the frontend to containerize the application:

# Step 1: Use Node.js image to build the project
FROM node:14-alpine AS build-stage

# Set the working directory inside the container
WORKDIR /app

# Copy the package.json and package-lock.json
COPY package*.json ./

# Install the project dependencies
RUN npm install

# Copy the rest of the application source code
COPY . .

# Build the application (assuming this creates a 'build' folder)
RUN npm run build

# Step 2: Use Nginx to serve the build files
FROM nginx:alpine AS production-stage

# Copy the build output from the previous stage to Nginx's html directory
COPY --from=build-stage /app/build /usr/share/nginx/html

# Expose port 80
EXPOSE 80
# Command to run Nginx in the foreground
CMD ["nginx", "-g", "daemon off;"]

Here is my src/App.js file for the React frontend:

import React, { useState } from 'react';
import axios from 'axios';
import './styles.css'; // Assuming styles.css is in the same folder

function App() {
    const [name, setName] = useState('');
    const [email, setEmail] = useState('');
    const [users, setUsers] = useState([]);
    const [errorMessage, setErrorMessage] = useState('');
    const [successMessage, setSuccessMessage] = useState('');

    // Load users from the backend using Axios
    const loadUsers = async () => {
        try {
            const response = await axios.get(`${process.env.REACT_APP_API_URL}/data`);
            setUsers(response.data);
        } catch (error) {
            console.error('Error loading users:', error);
            setErrorMessage('Error loading users. Please try again.');
        }
    };

    const handleSubmit = async (e) => {
        e.preventDefault();
        setErrorMessage(''); // Clear previous error message
        setSuccessMessage(''); // Clear previous success message

        try {
            // Send data to the backend to save it in the database
            await axios.post(`${process.env.REACT_APP_API_URL}/submit`, {
                name,
                email,
            });

            // Clear input fields after successful submission
            setName('');
            setEmail('');
            setSuccessMessage('User registered successfully!!');
        } catch (error) {
            console.error('Error submitting form:', error);
            setErrorMessage('Error submitting form. Please try again!!.');
        }
    };

    // Handle fetching and displaying the data when clicking "Read Data" button
    const handleReadData = () => {
        loadUsers();
    };

    return (
        <div>
            <h2>Registration Form</h2>
            <form onSubmit={handleSubmit}>
                <label htmlFor="name">Name:</label>
                <input
                    type="text"
                    id="name"
                    value={name}
                    onChange={(e) => setName(e.target.value)}
                    required
                />
                <label htmlFor="email">Email:</label>
                <input
                    type="email"
                    id="email"
                    value={email}
                    onChange={(e) => setEmail(e.target.value)}
                    required
                />
                <button type="submit">Submit</button>
            </form>

            {/* Display success or error messages */}
            {successMessage && <p className="success-message">{successMessage}</p>}
            {errorMessage && <p className="error-message">{errorMessage}</p>}

            {/* Button to read data */}
            <button onClick={handleReadData}>Read Data</button>

            <h2>Registered Users</h2>
            <table id="dataTable">
                <thead>
                    <tr>
                        <th>Name</th>
                        <th>Email</th>
                    </tr>
                </thead>
                <tbody>
                    {users.length > 0 ? (
                        users.map((user, index) => (
                            <tr key={index}>
                                <td>{user.name}</td>
                                <td>{user.email}</td>
                            </tr>
                        ))
                    ) : (
                        <tr>
                            <td colSpan="2">No registered users yet.</td>
                        </tr>
                    )}
                </tbody>
            </table>
        </div>
    );
}

export default App;
And here is the styles.css referenced by the component:

body {
    font-family: Arial, sans-serif;
    margin: 20px;
}

h2 {
    color: #4A90E2;
}

form {
    margin-bottom: 20px;
}

input[type="text"],
input[type="email"] {
    padding: 10px;
    margin: 5px 0;
    border: 1px solid #ccc;
    border-radius: 5px;
    width: 100%;
}

button {
    background-color: #4A90E2;
    color: white;
    padding: 10px 15px;
    border: none;
    border-radius: 5px;
    cursor: pointer;
}

button:hover {
    background-color: #357ABD;
}

table {
    width: 100%;
    border-collapse: collapse;
}

th, td {
    padding: 8px;
    text-align: left;
    border-bottom: 1px solid #ddd;
}

th {
    background-color: #4A90E2;
    color: white;
}

tr:hover {
    background-color: #f1f1f1;
}

3. Backend (Node.js) in a Container

Next, I set up the backend using Node.js to process requests and handle business logic.

Step 1: Create a Node.js App

In a separate directory, I initialized the backend project:

mkdir backend
cd backend
npm init -y
npm install express mysql2 body-parser cors dotenv prom-client winston

Step 2: Set Up API Routes and Database Connection

I configured the server in server.js:

require('dotenv').config();
const express = require('express');
const mysql = require('mysql2/promise');
const bodyParser = require('body-parser');
const cors = require('cors');
const client = require('prom-client');
const winston = require('winston'); // Import winston for logging

// Configure the logger
const logger = winston.createLogger({
    level: 'info',
    format: winston.format.combine(
        winston.format.timestamp(),
        winston.format.json()
    ),
    transports: [
        new winston.transports.File({ filename: 'app.log' }), // Log to file
        new winston.transports.Console(), // Log to console
    ],
});

const dbHostPrimary = process.env.DB_HOST; // Primary DB host
const dbHostReplica = process.env.DB_REPLICA_HOST; // Replica DB host (Add this to your .env)
const dbUser = process.env.DB_USER;
const dbPassword = process.env.DB_PASSWORD;
const dbName = process.env.DB_NAME;

const app = express();
app.use(cors());
app.use(bodyParser.json());

// Prometheus metrics setup
const register = new client.Registry();
const httpRequestDurationMicroseconds = new client.Histogram({
    name: 'http_request_duration_seconds',
    help: 'Duration of HTTP requests in seconds',
    labelNames: ['method', 'route', 'code'],
    registers: [register],
});

// Middleware to record request duration
app.use((req, res, next) => {
    const end = httpRequestDurationMicroseconds.startTimer();
    res.on('finish', () => {
        end({ method: req.method, route: req.route ? req.route.path : req.path, code: res.statusCode });
    });
    next();
});

// Endpoint to expose metrics
app.get('/metrics', async (req, res) => {
    res.set('Content-Type', register.contentType);
    res.end(await register.metrics());
});

let primaryPool; // Declare primary pool variable
let replicaPool; // Declare replica pool variable

// Check if the database exists and create it if it doesn't
const checkAndCreateDatabase = async (dbName) => {
    const connection = await mysql.createConnection({
        host: dbHostPrimary,
        user: dbUser,
        password: dbPassword,
    });

    try {
        const [rows] = await connection.query('SHOW DATABASES LIKE ?', [dbName]);
        if (rows.length === 0) {
            await connection.query(`CREATE DATABASE ${dbName}`);
            logger.info(`Database '${dbName}' created.`); // Log database creation
        } else {
            logger.info(`Database '${dbName}' already exists.`); // Log existence
        }
    } catch (error) {
        logger.error('Error checking or creating database:', error); // Log error
    } finally {
        connection.end();
    }
};

// Create users table if it does not exist
const createUsersTable = async (dbName) => {
    const connection = await primaryPool.getConnection();
    try {
        await connection.query(`CREATE TABLE IF NOT EXISTS ${dbName}.users (
            id INT AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(100) NOT NULL,
            email VARCHAR(100) NOT NULL UNIQUE
        );`);
        logger.info('Users table created or already exists.'); // Log users table creation
    } catch (err) {
        logger.error('Error creating users table:', err.message); // Log error
    } finally {
        connection.release();
    }
};

const initDatabase = async (dbName) => {
    await checkAndCreateDatabase(dbName);
    await createUsersTable(dbName);
};

// Start the Express server and initialize the database
const startServer = async () => {
    const DB_NAME = 'aditya'; // Specify your database name
    await checkAndCreateDatabase(DB_NAME); // Check and create database first

    // Create the connection pools after ensuring the database exists
    primaryPool = mysql.createPool({
        host: dbHostPrimary,
        user: dbUser,
        password: dbPassword,
        database: DB_NAME,
        waitForConnections: true,
        connectionLimit: 10,
        queueLimit: 0,
    });

    replicaPool = mysql.createPool({
        host: dbHostReplica, // Use the replica host
        user: dbUser,
        password: dbPassword,
        database: DB_NAME,
        waitForConnections: true,
        connectionLimit: 10,
        queueLimit: 0,
    });

    await createUsersTable(DB_NAME); // Create the users table

    const PORT = 3001;
    app.listen(PORT, () => {
        logger.info(`Server is running on http://localhost:${PORT}`); // Log server start
    });
};

// Endpoint to submit data to the primary database
app.post('/submit', async (req, res) => {
    const { name, email } = req.body;
    const connection = await primaryPool.getConnection(); // Use primary pool
    try {
        const result = await connection.query('INSERT INTO users (name, email) VALUES (?, ?)', [name, email]);
        logger.info('Inserted user:', result[0].insertId); // Log user insertion
        res.status(201).json({ id: result[0].insertId, name, email });
    } catch (error) {
        logger.error('Error submitting user:', error); // Log error
        res.status(500).json({ error: 'Internal Server Error' });
    } finally {
        connection.release();
    }
});

// Endpoint to get users from the replica database
app.get('/data', async (req, res) => {
    const connection = await replicaPool.getConnection(); // Use replica pool
    try {
        const [rows] = await connection.query('SELECT * FROM users');
        res.json(rows);
    } catch (err) {
        logger.error('Error fetching users:', err); // Log error
        res.status(500).send('Error fetching users');
    } finally {
        connection.release();
    }
});

// Start the server
startServer();
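Once the backend is running, the endpoints can be smoke-tested with curl. The example below assumes the server is reachable on localhost:3001, as configured in startServer():

```shell
# Insert a user via the primary pool
curl -X POST http://localhost:3001/submit \
  -H 'Content-Type: application/json' \
  -d '{"name": "Test User", "email": "test@example.com"}'

# Read users back via the replica pool
curl http://localhost:3001/data

# Scrape the Prometheus metrics endpoint
curl -s http://localhost:3001/metrics | head
```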

Step 3: Configure Environment Variables

In the backend directory, I created a .env file:

DB_HOST=db_container
DB_USER=root
DB_PASSWORD=Aditya123!
DB_NAME=aditya
MYSQL_ROOT_PASSWORD=Aditya123!
MYSQL_DATABASE=aditya
DB_REPLICA_HOST=db_replica

Step 4: Dockerize the Backend

I created a Dockerfile for the backend:

# Use Node.js as the base image
FROM node:18

# Install MySQL client
RUN apt-get update && \
    apt-get install -y default-mysql-client netcat-openbsd && \
    rm -rf /var/lib/apt/lists/*

# Create a directory for the backend
RUN mkdir -p /backend

# Set the working directory inside the container
WORKDIR /backend

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install all dependencies listed in package.json
RUN npm install

# Copy the rest of the application code
COPY . .

# Add a script to wait for the database to be ready
#COPY wait-for-it.sh ./
#RUN chmod +x wait-for-it.sh

# Expose the backend port
EXPOSE 3001

# Start the application, ensuring it waits for both databases
CMD ["node", "server.js"]

4. Database (MySQL) in a Container

For data management, I set up MySQL in a container.

Step 1: Run MySQL in Docker

In the docker-compose.yml file, I added a MySQL service; the remaining services shown throughout this post live in the same Compose file.

services:
  db_container:
    container_name: database
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_SERVER_ID: 1
    volumes:
      - mysql_data:/var/lib/mysql
    networks:
      - app-network
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
    command: --server-id=1 --log-bin=mysql-bin
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

I also configured a replica database for redundancy:

db_replica:
    container_name: database_replica
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_SERVER_ID: 2
    volumes:
      - mysql_replica_data:/var/lib/mysql
    networks:
      - app-network
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
    command: --server-id=2
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

To set up replication from the primary DB to the replica DB:

On the primary DB:

SHOW MASTER STATUS;

Note down the binary log file name and position.

On the replica DB:

STOP SLAVE;
RESET SLAVE ALL;
CHANGE MASTER TO
  MASTER_HOST='db_container',
  MASTER_USER='root',
  MASTER_PASSWORD='Aditya123!',
  MASTER_LOG_FILE='mysql-bin.000018',
  MASTER_LOG_POS=1087;

START SLAVE;

The replica now follows the primary: writes to the primary database are replicated to the replica automatically.
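Replication health can be checked from inside the replica container (container name database_replica as defined in the Compose file); both Slave_IO_Running and Slave_SQL_Running should report Yes:

```shell
docker exec -it database_replica \
  mysql -uroot -p'Aditya123!' -e 'SHOW SLAVE STATUS\G' | \
  grep -E 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'
```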

5. CI/CD Pipeline with Jenkins

Next, I set up a CI/CD pipeline using Jenkins for automation.

Step 1: Run Jenkins in a Docker Container

I added a Jenkins service to the docker-compose.yml:

jenkins:
    container_name: jenkins
    image: jenkins/jenkins:lts
    user: root
    restart: always
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      JENKINS_ADMIN_ID: aditya
      JENKINS_ADMIN_PASSWORD: aditya
    privileged: true
    networks:
      - app-network
    depends_on:
      - backend
      - frontend
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
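On first start, Jenkins asks for an initial admin password, which can be read out of the container. (As far as I know, the stock jenkins/jenkins:lts image does not act on the JENKINS_ADMIN_ID/JENKINS_ADMIN_PASSWORD variables above without additional configuration-as-code setup, so the setup wizard still runs.)

```shell
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
```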

Step 2: Configure Jenkins Jobs

In Jenkins, I created jobs for building and deploying the frontend, backend, and database. Each job is triggered on code commits.

After creating a user and password in Jenkins, I installed the necessary plugins, including the following:

  • Slack Notification: for sending build notifications to Slack

  • GitHub Integration: for integrating with GitHub

Provide the necessary credentials for GitHub:

Since my GitHub repo is private, I had to add a PAT (personal access token) and a Slack token as credentials in Jenkins.

Create a token in GitHub and put the PAT in the Secret field.

Make sure you have a workspace and channel in Slack.

Step 3: Set Up Notifications

To keep track of build status, I integrated Slack notifications for build successes and failures.

Note down the app credentials and put your Slack OAuth token in the Secret field below.

In Manage Jenkins → System, go to the Slack configuration.

Now create a new item for our pipeline.

I added the following code to set up the stages in the pipeline:

pipeline {
    agent any  // This will run on any available agent

    environment {
        GITHUB_CREDENTIALS = credentials('github-pat')  // Use your GitHub credentials
        SLACK_CHANNEL = 'C07SR8XAK97'  // Replace with your Slack channel ID
        SLACK_CREDENTIALS = credentials('slack-token')  // Slack credentials for notifications
        //EMAIL_RECIPIENTS = 'aditya.infisec@gmail.com'  // Replace with recipient emails
    }

    triggers {
        // This listens to the webhook
        githubPush()
    }

    stages {
        stage('Checkout') {
            steps {
                script {
                    withCredentials([string(credentialsId: 'github-pat', variable: 'GITHUB_TOKEN')]) {
                        // Clone the repository using the token in the URL
                        git branch: 'main', 
                            url: "https://${GITHUB_TOKEN}@github.com/Adityakafle/full-stack-project.git"
                    }
                }
            }
        }

        stage('Install and Verify Docker') {
            steps {
                script {
                    // Update, upgrade system, install Docker, start and enable Docker service, install Docker Compose, and check versions
                    sh '''
                    apt-get update -y
                    apt-get upgrade -y
                    apt-get install -y docker.io

                    # Check Docker version
                    docker --version

                    # Install Docker Compose
                    curl -L "https://github.com/docker/compose/releases/download/$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep 'tag_name' | cut -d '"' -f 4)/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
                    chmod +x /usr/local/bin/docker-compose

                    # Check Docker Compose version
                    docker-compose --version
                    '''
                }
            }
        }

        stage('Build Database') {
            steps {
                script {
                    echo 'Starting database containers...'

                    // Start the database containers without dependencies
                    sh 'docker-compose up --build -d --no-deps db_container'
                    sh 'docker-compose up --build -d --no-deps db_replica'

                    // Wait for both containers to be healthy
                    waitUntil {
                        script {
                            // Check the health status of the db_container
                            def dbContainerStatus = sh(script: 'docker inspect -f "{{.State.Health.Status}}" $(docker-compose ps -q db_container)', returnStdout: true).trim()
                            echo "db_container health status: ${dbContainerStatus}"

                            // Check the health status of the db_replica
                            def dbReplicaStatus = sh(script: 'docker inspect -f "{{.State.Health.Status}}" $(docker-compose ps -q db_replica)', returnStdout: true).trim()
                            echo "db_replica health status: ${dbReplicaStatus}"

                            // Return true only when both containers are healthy
                            return dbContainerStatus == 'healthy' && dbReplicaStatus == 'healthy'
                        }
                    }

                    echo 'Both db_container and db_replica are healthy!'
                }
            }
        }

        stage('Build Backend') {
            steps {
                script {
                    echo 'Starting backend container...'
                    // Build and start the backend container
                    sh 'docker-compose up --build -d --no-deps backend'
                }
            }
        }

        stage('Build Frontend') {
            steps {
                script {
                    echo 'Starting frontend container...'
                    // Build and start the frontend container
                    sh 'docker-compose up --build -d --no-deps frontend'
                }
            }
        }

        stage('Test and Deploy') {
            steps {
                script {
                    echo 'Running tests...'
                    // Placeholder for your test command
                    // Example: sh 'docker-compose run --rm test_container'
                    echo 'Tests completed, deploying application...'
                    // Deploy commands can be added here if needed
                }
            }
        }
    }

    post {
        success {
            script {
                // Send success notification to Slack
                def message = "Deployment succeeded!"
                sh """
                    curl -X POST -H 'Authorization: Bearer ${SLACK_CREDENTIALS}' \
                    -H 'Content-Type: application/json; charset=utf-8' \
                    -d '{"channel": "${SLACK_CHANNEL}", "text": "${message}"}' \
                    https://slack.com/api/chat.postMessage
                """
            }
        }
        failure {
            script {
                // Send failure notification to Slack
                def message = "Deployment failed!"
                sh """
                    curl -X POST -H 'Authorization: Bearer ${SLACK_CREDENTIALS}' \
                    -H 'Content-Type: application/json; charset=utf-8' \
                    -d '{"channel": "${SLACK_CHANNEL}", "text": "${message}"}' \
                    https://slack.com/api/chat.postMessage
                """
            }
        }
    }
}

Save the configuration.

I also added the correct payload URL to the GitHub webhook:

Now Jenkins is set up to automate the build process of the three-tier app: it will (re)create the four containers from the Compose file whenever someone commits or pushes code.

If there are no errors, the build passes all the stages.

6. Monitoring with Prometheus, Grafana, and cAdvisor

For real-time monitoring and metrics collection, I integrated Prometheus, Grafana, and cAdvisor.

Step 1: Adding Prometheus and cAdvisor to Docker Compose

Prometheus collects metrics from cAdvisor, which monitors container resource usage.

Here’s the setup for Prometheus and cAdvisor in docker-compose.yml:

  cadvisor:
    image: gcr.io/cadvisor/cadvisor
    container_name: cadvisor
    ports:
      - "8082:8080"
    volumes:
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
    networks:
      - app-network

  prometheus:
    image: prom/prometheus
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    networks:
      - app-network
    restart: unless-stopped

  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_USER=aditya
      - GF_SECURITY_ADMIN_PASSWORD=aditya
    volumes:
      - grafana_data:/var/lib/grafana
    networks:
      - app-network
    depends_on:
      - prometheus

I configured Grafana to connect with Prometheus and set up dashboards for real-time monitoring.
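The prometheus.yml mounted into the Prometheus container is not shown above; a minimal sketch that scrapes cAdvisor and the backend's /metrics endpoint might look like this (the targets assume the Compose service names and ports used in this post):

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']

  - job_name: 'backend'
    metrics_path: /metrics
    static_configs:
      - targets: ['backend:3001']
```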

7. Log Management with ELK/EFK Stack

For centralized logging, I used Elasticsearch, Logstash, and Kibana (ELK) to collect, process, and visualize logs.

Step 1: Adding Elasticsearch and Kibana to Docker Compose

Here’s the Elasticsearch and Kibana setup:
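A minimal single-node Elasticsearch and Kibana pair can be sketched in docker-compose.yml as follows; the image versions and memory limits here are illustrative assumptions, not the exact configuration used:

```yaml
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.10
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
    volumes:
      - es_data:/usr/share/elasticsearch/data
    networks:
      - app-network

  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.10
    container_name: kibana
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    networks:
      - app-network
    depends_on:
      - elasticsearch
```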

Architecture