Java Applications with MongoDB on AWS

👉 Italian version.

This article describes an architecture for deploying Java applications on AWS, designed to ensure scalability, security, and automation. Using Docker for packaging, AWS ECS with Fargate for orchestration, and MongoDB Atlas for the database, the infrastructure supports a blue-green deployment model, minimizing downtime. The CI/CD pipeline, built with BitBucket and Maven, integrates unit and integration tests to ensure stability with each release, providing a secure and scalable environment. The pipeline diagram summarizes the main flows described in the article.

Application Packaging with Docker

The architecture relies on Docker to create images of the Java applications, ensuring portability and eliminating issues related to dependencies and configuration drift, so that behavior stays consistent across development, testing, and production. Docker images are automatically generated in BitBucket pipelines and stored in a private registry on AWS ECR, ready for deployment on AWS ECS.
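
As an illustration, a minimal Dockerfile for such an image might look like the sketch below; the base image, jar name, and port are assumptions that depend on the project's actual Maven build.

    # Minimal runtime image for a Java service (names and port are placeholders).
    FROM eclipse-temurin:17-jre

    WORKDIR /app

    # Jar produced by the Maven build in the BitBucket pipeline
    COPY target/app.jar app.jar

    # Port the ECS task exposes to the Application Load Balancer
    EXPOSE 8080

    ENTRYPOINT ["java", "-jar", "app.jar"]

The resulting image is then tagged and pushed to the private ECR registry by the pipeline described later in the article.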

Build and Testing with Maven

The build process is automated with Maven, which handles both code compilation and test execution. Each commit or push to the repository triggers a CI/CD pipeline in BitBucket, which initiates a series of operations:

  1. Compilation and Dependency Verification: Maven verifies that all project dependencies are present and up to date, ensuring a complete build environment.
  2. Unit Testing: A series of automated unit tests is launched using JUnit. Unit tests are essential for verifying the behavior of individual code units in isolation, ensuring each application component works as expected.
  3. Integration Testing: Once the unit tests are complete, the pipeline proceeds with integration tests. Integration tests check that various application components work correctly when combined, covering interactions between services, databases, and other external dependencies.

This testing phase allows proactive detection and resolution of errors, ensuring a stable version before Docker images are created.
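
As an example of what the pipeline runs, a unit test might look like the following JUnit 5 sketch; PriceCalculator and its discount rule are hypothetical stand-ins for the application's own components, and integration tests follow the same style while exercising real services and databases.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.math.BigDecimal;
    import java.math.RoundingMode;
    import org.junit.jupiter.api.Test;

    // Hypothetical component under test, standing in for real application code.
    class PriceCalculator {
        BigDecimal totalWithDiscount(BigDecimal amount) {
            // Orders of 100.00 or more get a 10% discount in this example.
            if (amount.compareTo(new BigDecimal("100.00")) >= 0) {
                return amount.multiply(new BigDecimal("0.90")).setScale(2, RoundingMode.HALF_UP);
            }
            return amount.setScale(2, RoundingMode.HALF_UP);
        }
    }

    // Unit test executed by Maven during the pipeline's test phase.
    class PriceCalculatorTest {

        @Test
        void appliesTenPercentDiscountAboveThreshold() {
            PriceCalculator calculator = new PriceCalculator();
            assertEquals(new BigDecimal("180.00"),
                    calculator.totalWithDiscount(new BigDecimal("200.00")));
        }

        @Test
        void leavesSmallOrdersUnchanged() {
            PriceCalculator calculator = new PriceCalculator();
            assertEquals(new BigDecimal("50.00"),
                    calculator.totalWithDiscount(new BigDecimal("50.00")));
        }
    }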

Container Orchestration with AWS ECS and Fargate

To manage and orchestrate containers, AWS Elastic Container Service (ECS) in Fargate mode is used. Fargate eliminates the need to configure and manage the underlying infrastructure. This approach allows only the necessary resources to be specified for each container, delegating security and resource allocation management to AWS.

Each service running on ECS is defined by a task definition that specifies the Docker image to run, the resources allocated to it (CPU, memory), and its network configuration. This setup allows containers to scale automatically in response to traffic, maintaining efficient resource management.
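
By way of illustration, a Fargate task definition for one of these services could look roughly like the following; the family, image URI, account ID, role, port, and sizing values are all placeholders, and secrets such as the database URI would normally be injected from AWS Secrets Manager or SSM rather than stored in plain text.

    {
      "family": "orders-service",
      "requiresCompatibilities": ["FARGATE"],
      "networkMode": "awsvpc",
      "cpu": "512",
      "memory": "1024",
      "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
      "containerDefinitions": [
        {
          "name": "orders-service",
          "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/orders-service:latest",
          "portMappings": [
            { "containerPort": 8080, "protocol": "tcp" }
          ],
          "environment": [
            { "name": "MONGODB_URI", "value": "placeholder-injected-from-a-secret" }
          ]
        }
      ]
    }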

Network Security with VPC and VPC Peering

The entire ECS cluster is isolated within a Virtual Private Cloud (VPC) and communicates with the MongoDB Atlas VPC via VPC peering. This approach prevents any exposure of the database to the internet, protecting sensitive resources. Communication between ECS containers and MongoDB occurs exclusively within this private network. The only open ports are 80 (HTTP) and 443 (HTTPS), accessible via an Application Load Balancer (ALB) that routes traffic to ECS. This level of protection ensures that only authorized traffic can reach the containers, maintaining a high standard of security.
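
To make the traffic rules concrete, the security groups could be shaped roughly as in the sketch below; the group IDs and the application port are placeholders.

    # Hypothetical security-group rules for the described setup (IDs are placeholders).
    ALB_SG=sg-0123456789abcdef0   # security group attached to the Application Load Balancer
    ECS_SG=sg-0fedcba9876543210   # security group attached to the ECS tasks

    # The ALB accepts HTTP and HTTPS traffic from the internet.
    aws ec2 authorize-security-group-ingress --group-id "$ALB_SG" --protocol tcp --port 80  --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-id "$ALB_SG" --protocol tcp --port 443 --cidr 0.0.0.0/0

    # The containers accept traffic only from the ALB, on the application port.
    aws ec2 authorize-security-group-ingress --group-id "$ECS_SG" --protocol tcp --port 8080 --source-group "$ALB_SG"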

Managed Database with MongoDB Atlas

For database management, MongoDB Atlas is used, a managed platform that offers scalable and simplified infrastructure for MongoDB databases. Atlas provides integrated backup, restore, and monitoring features, ensuring a high level of reliability and advanced scalability. Thanks to VPC peering, traffic between ECS and MongoDB is entirely contained within the private network, improving connection security without requiring advanced configurations.
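
On the application side, the connection to Atlas can be kept minimal: the driver reads a connection string provided by the environment (for example a MONGODB_URI variable injected through the task definition) and all traffic flows over the peered network. The sketch below assumes the MongoDB Java driver and a hypothetical orders database.

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.MongoDatabase;
    import org.bson.Document;

    public class MongoConnectionSketch {

        public static void main(String[] args) {
            // Connection string injected by the ECS task definition, e.g. a
            // mongodb+srv:// URI pointing at the Atlas cluster reached via VPC peering.
            String uri = System.getenv("MONGODB_URI");

            try (MongoClient client = MongoClients.create(uri)) {
                MongoDatabase database = client.getDatabase("orders");

                // Simple connectivity check: ask the cluster to answer a ping command.
                Document pingResult = database.runCommand(new Document("ping", 1));
                System.out.println("MongoDB reachable: " + pingResult.toJson());

                MongoCollection<Document> orders = database.getCollection("orders");
                System.out.println("Documents in 'orders': " + orders.countDocuments());
            }
        }
    }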

Continuous Integration and Continuous Deployment with BitBucket Pipelines

BitBucket hosts the source code and manages the CI/CD process through an automated pipeline. Each push to the repository triggers the BitBucket pipeline, which covers the entire release lifecycle:

  1. Build and Testing: As described, Maven compiles the code and runs unit and integration tests. If all tests pass, a Docker image of the application is created.
  2. Docker Image Distribution: The Docker image is then uploaded to a private Docker repository (AWS ECR), ready for deployment.
  3. Blue-Green Deployment on ECS: Finally, the image is deployed on ECS using a blue-green deployment approach. In this model, the system starts the new application version in a parallel environment (green) while keeping the current version active in production (blue). Once the new version starts responding to health checks, traffic is gradually redirected to it. This approach also allows a quick rollback: in case of issues with the new version, traffic is simply redirected back to the blue cluster.
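
A simplified bitbucket-pipelines.yml tying these steps together might look like the sketch below. The registry address, region, cluster, and service names are placeholders, AWS credentials are assumed to be configured as repository variables, and the actual blue-green traffic switch is assumed to be handled by the ECS service's deployment configuration rather than by the pipeline itself.

    # Simplified pipeline sketch (names, region and registry are placeholders).
    image: maven:3.9-eclipse-temurin-17

    pipelines:
      branches:
        main:
          - step:
              name: Build and test
              caches:
                - maven
              script:
                # Compile and run unit and integration tests before anything is shipped.
                - mvn -B verify
              artifacts:
                - target/*.jar
          - step:
              name: Build Docker image and push to ECR
              services:
                - docker
              script:
                # Assumes the AWS CLI is available in this step's build image.
                - export REGISTRY="123456789012.dkr.ecr.eu-west-1.amazonaws.com"
                - export REPO="${REGISTRY}/orders-service"
                - aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin "${REGISTRY}"
                - docker build -t "${REPO}:${BITBUCKET_COMMIT}" -t "${REPO}:latest" .
                - docker push "${REPO}:${BITBUCKET_COMMIT}"
                - docker push "${REPO}:latest"
          - step:
              name: Deploy to ECS
              script:
                # Starts a new deployment of the service; the blue-green switch and
                # rollback are handled by the service's deployment configuration.
                - aws ecs update-service --cluster prod-cluster --service orders-service --force-new-deployment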

Advantages of Blue-Green Deployment

Blue-green deployment reduces downtime and simplifies rollback in case of errors. By always maintaining two parallel environments, updates can be made in production without interruptions, and new versions can be safely tested. When the new version is ready, simply redirect traffic to the green cluster, with the option to quickly revert to the blue cluster if necessary.

Conclusions

The described architecture provides a secure and scalable solution for deploying Java containerized applications on AWS, combining Docker, AWS ECS with Fargate, MongoDB Atlas, and BitBucket Pipelines. This setup offers:

  • A structured build and testing process with Maven and JUnit.
  • Automated, secure, and scalable deployment.
  • An infrastructure isolated and protected through the VPC and VPC peering setup.
  • A continuous update system and fast rollback with the blue-green deployment model.

This configuration keeps the infrastructure agile and secure, efficiently meeting the deployment needs of distributed applications in cloud environments.
