CUSTOMER SUCCESS

COMMUNITY COLLEGE DISTRICT DEPLOYS ERP SYSTEM IN AWS FOR AUTO-SCALING & DEVOPS CAPABILITIES

About the Company

The Ventura County Community College District (VCCCD) is the college district serving all of Ventura County, California. The district is a member of the California Community Colleges system, which includes 113 community colleges statewide.

The Vision and Challenge

VCCCD was already a Banner ERP user and needed a strategic service provider that could design and build out several Banner 9 ERP environments within the AWS Cloud. The environments needed to be architected to allow for future growth and for reuse of the automated build scripts as additional AWS Cloud Banner 9 environments were created for test, pre-production, and production.

Steps required to ensure a smooth transition of the Banner 9 ERP solution included:

    • Prepare the core servers and services according to VCCCD requirements
    • Migrate the databases
    • Build the servers
    • Coordinate with the VCCCD application development team to containerize application components with Docker
    • Automate the application deployment pipeline
    • Implement single sign-on (SSO) with Banner 9
    • Implement New Relic as the monitoring solution

The Outcome

VCCCD selected InterVision to migrate its Banner 9 environment to AWS because of our extensive cloud services experience, our prescriptive Cloud Migration Lifecycle Assurance (CMLA) program, and our validation in the education space. We proposed and executed a multi-phased approach to the migration:

      • Phase 1 – InterVision replicated the existing VCCCD environment into several new AWS environments.
      • Phase 2 – We created the architectural documentation and migration plan. The architecture included AWS Account Strategy, VPC design and strategy, IP and DNS Strategy, Directory Services, Security Architecture, Auditing, Tagging, CloudFormation and Automation, VPN and WAN Design and Strategy, Compute and Storage, Disaster Recovery, Application Monitoring, and CloudCheckr setup.
      • Phase 3 – We tailored our prebuilt Automation Framework and automation scripts to match their unique needs.
      • Phase 4 – We implemented the AWS architecture, including the creation of the AWS Accounts, VPCs, Tagging, and IAM.
      • Phase 5 – We built and tailored the application pipeline scripts and tooling for five Banner 9 environments: Sandbox, Dev, Test, Pilot, Production.
      • Phase 6 – We migrated the applications into AWS and performed functional and performance testing.

Using our CMLA program, the migration included the following major components:

Oracle Database Server: The Oracle Database server was migrated (replatformed) onto an AWS EC2 M4 large instance with 800 GB of EBS volumes, running Oracle Linux. M4 instances provided a balance of compute, memory, and network resources, and were a good choice for many of the client’s applications. Reporting from the database was offloaded to a read replica: reporting-type queries run on the replica instead of the production primary database, so the primary no longer has to serve reporting workloads alongside real-time updates.
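
As an illustration only, here is a minimal Boto (boto3) sketch of how a database instance of this shape might be provisioned; the region, AMI, subnet, and exact instance size are placeholders, not VCCCD’s actual values.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")  # placeholder region

    # Launch an Oracle Linux instance for the database tier with an 800 GiB
    # encrypted EBS data volume attached.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # placeholder Oracle Linux AMI
        InstanceType="m4.large",              # M4 family per the case study; exact size is an assumption
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",  # placeholder private subnet
        BlockDeviceMappings=[{
            "DeviceName": "/dev/sdf",
            "Ebs": {"VolumeSize": 800, "VolumeType": "gp2", "Encrypted": True},
        }],
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "banner9-oracle-db"}],
        }],
    )
    print(response["Instances"][0]["InstanceId"])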

Banner Job Server: This server, which was rehosted into AWS, allowed the customer to run Banner backend jobs/programs, some of which were written in C++ and COBOL. The server was migrated as an immutable instance and used a shared file system (Amazon Elastic File System, or Amazon EFS).
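
The immutable pattern can be sketched with Boto as a versioned launch template: changes are baked into a new template version and instances are replaced from it rather than patched in place. The AMI, instance type, and EFS file system ID below are illustrative placeholders.

    import base64
    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")  # placeholder region

    # User data that mounts the shared EFS file system at boot (placeholder ID).
    user_data = "\n".join([
        "#!/bin/bash",
        "yum install -y amazon-efs-utils",
        "mkdir -p /banner/jobsub",
        "mount -t efs fs-0123456789abcdef0:/ /banner/jobsub",
    ])

    # Each change becomes a new template version; instances are replaced from it
    # rather than modified in place (the immutable server pattern).
    ec2.create_launch_template(
        LaunchTemplateName="banner9-job-server",
        LaunchTemplateData={
            "ImageId": "ami-0123456789abcdef0",  # placeholder pre-baked job server AMI
            "InstanceType": "m4.large",          # placeholder size
            "UserData": base64.b64encode(user_data.encode()).decode(),
        },
    )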

Load Balancing: The Elastic Load Balancing service was used to provide fault tolerance for the EC2 Job Server by automatically balancing traffic across the Amazon EC2 instances and Availability Zones while ensuring only healthy targets received traffic. If the EC2 Job Server in one Availability Zone was unhealthy, Elastic Load Balancing would route traffic to a healthy instance in another Availability Zone. Once the original target returned to a healthy state, traffic would automatically resume flowing to it.
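
The health-check behavior described above could be expressed with Boto roughly as follows; the protocol, port, path, and VPC ID are assumptions for illustration, not the actual Banner configuration.

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-west-2")  # placeholder region

    # Target group whose health checks determine which instances receive traffic;
    # instances in multiple Availability Zones register against the same group.
    response = elbv2.create_target_group(
        Name="banner9-job-server-tg",
        Protocol="HTTP",                  # placeholder protocol/port
        Port=8080,
        VpcId="vpc-0123456789abcdef0",    # placeholder VPC
        HealthCheckProtocol="HTTP",
        HealthCheckPath="/health",        # placeholder health-check path
        HealthCheckIntervalSeconds=30,
        HealthyThresholdCount=2,
        UnhealthyThresholdCount=3,
    )
    print(response["TargetGroups"][0]["TargetGroupArn"])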

Amazon Elastic File System (Amazon EFS): Amazon EFS was used to provide simple, scalable file storage for VCCCD’s Amazon EC2 instances in AWS. Amazon EFS offered a simple interface that allowed quick and easy creation and configuration of file systems. Amazon EFS’s storage capacity was elastic, growing and shrinking automatically as the team added and removed files, so applications had the storage they needed, when they needed it. The Tomcat servers that required input files or generated significant output beyond reading from or writing to a database could also use Amazon EFS. New servers and containers came online, automatically attached to the EFS shared file system, and read and wrote files as needed.
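
A minimal Boto sketch of creating such a shared file system, assuming illustrative subnet and security group IDs; in practice the file system must reach the "available" state before mount targets are created, and one mount target is created per Availability Zone.

    import boto3

    efs = boto3.client("efs", region_name="us-west-2")  # placeholder region

    # Create an elastic, encrypted shared file system.
    fs = efs.create_file_system(
        CreationToken="banner9-shared-fs",
        PerformanceMode="generalPurpose",
        Encrypted=True,
    )

    # A mount target lets EC2 instances and containers in that subnet's
    # Availability Zone mount the same file system (repeat per AZ).
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId="subnet-0123456789abcdef0",      # placeholder subnet
        SecurityGroups=["sg-0123456789abcdef0"],  # placeholder security group
    )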

Amazon Elastic Container Service (Amazon ECS): Amazon ECS was used to create two Docker clusters (private and public), spread across Availability Zones for resiliency and dependability. The use of Amazon ECS eliminated the need to install and operate VCCCD’s own container orchestration software, while providing the ability to manage and scale a cluster of virtual machines and schedule containers on those virtual machines. With simple API calls, we could launch and stop Docker-enabled applications and query their complete state (a minimal sketch follows the component list below). The following components of Banner were deployed in separate containers (this was a refactoring migration):

      • vcccd.edu (AppNav)
      • vcccd.edu (AdminPages)
      • vcccd.edu (SSB)
      • vcccd.edu (SSO Manager)
      • vcccd.edu (Student API)
      • CCCTC Project Glue Adapter (post implementation)
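
The sketch below shows, with Boto, the kind of API calls involved in creating a cluster and registering one containerized component as a task definition; the cluster name, image URI, and resource sizes are placeholders, not the actual VCCCD values.

    import boto3

    ecs = boto3.client("ecs", region_name="us-west-2")  # placeholder region

    # One of the two clusters (private/public); the name is illustrative.
    ecs.create_cluster(clusterName="banner9-private")

    # Register a task definition for one containerized Banner component.
    ecs.register_task_definition(
        family="banner9-ssb",
        networkMode="bridge",
        containerDefinitions=[{
            "name": "ssb",
            "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/banner9/ssb:latest",  # placeholder image
            "memory": 2048,
            "portMappings": [{"containerPort": 8080, "hostPort": 0}],  # dynamic host port for the load balancer
            "essential": True,
        }],
    )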

AWS Cloud services utilized in this project:

      • EC2
      • ECS
      • Boto – AWS Python Library
      • Route 53
      • EFS Shared File System
      • S3
      • CloudWatch
      • CloudTrail
      • CloudFormation scripts
      • Cloud9 – AWS code IDE in the cloud, used to develop and test Python scripts
      • Application Load Balancers
      • Auto-Scaling and load-balancing across AZs
      • KMS – All data volumes are encrypted
      • SSM – Following best practices, we store credentials in AWS Systems Manager Parameter Store (a retrieval sketch follows this list)
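
For illustration, retrieving a credential from Parameter Store with Boto might look like the following; the parameter name is a placeholder, not the actual naming convention used.

    import boto3

    ssm = boto3.client("ssm", region_name="us-west-2")  # placeholder region

    # Fetch a database credential at deploy time instead of hard-coding it.
    param = ssm.get_parameter(
        Name="/banner9/prod/oracle/app_password",  # placeholder parameter name
        WithDecryption=True,  # SecureString values are decrypted with KMS
    )
    db_password = param["Parameter"]["Value"]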

Third-Party Products/Tools utilized in this project:

      • Docker
      • New Relic
      • GitHub
      • Jenkins
      • Rancher – Container Management

The project went to production in April 2019. After a successful migration of Banner 9 to AWS using InterVision’s CMLA process and our co-managed service plan, VCCCD now benefits from fully automated continuous integration and delivery pipelines for multiple applications. VCCCD now experiences zero-downtime deployments, autoscaling, and savings on technology costs. In addition, enterprise-wide, fully automated application and infrastructure performance monitoring is now built in. InterVision will continue working with VCCCD to support ongoing efforts to implement CIS and NIST security standards.
