Welcome to the Azure Red Hat OpenShift (ARO) Migration Hackathon!
In this challenge, you'll be migrating an "on-premises" application to ARO while implementing modern DevOps practices using GitHub. The goal is to modernize the application deployment, enhance security, and improve the overall development workflow.
Before starting, ensure you have:
- An Azure Account with permissions to create resources
- A GitHub Account
- Docker installed locally
- Azure CLI installed
- Visual Studio Code or your preferred IDE
- Git installed
- OpenShift CLI (oc) installed (https://docs.redhat.com/en/documentation/openshift_container_platform/4.2/html/cli_tools/openshift-cli-oc#cli-about-cli_cli-developer-commands)
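Before you start, you can quickly confirm the tooling is in place (docker-compose is used later for the on-premises app, so it is worth checking too):
# Verify the required tools are installed and on your PATH
az --version
docker --version
docker-compose --version
git --version
oc version --client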
- Fork the repository to your GitHub account
- Then clone:
git clone https://github.com/<yourname>/aro-migration-hackathon.git
cd aro-migration-hackathon
We've provided an interactive script that will create all the necessary Azure resources for the hackathon:
# Make the script executable
chmod +x ./scripts/setup-azure-resources.sh
Note: if you are using WSL, the cluster creation step may fail with a syntax error caused by Windows line endings. In that case, switch to PowerShell and run the following two commands instead:
(Get-Content .\scripts\setup-azure-resources.sh -Raw) -replace "`r`n", "`n" | Set-Content .\scripts\setup-azure-resources.sh -NoNewline
bash ./scripts/setup-azure-resources.sh
# Run the setup script
./scripts/setup-azure-resources.sh
This script will:
- Create a Resource Group
- Set up networking components
- Create an Azure Container Registry
- Optionally create an ARO cluster (or provide instructions for later creation)
- Save all configuration details to a .env file (a rough example of its contents is shown below)
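The exact set of variables comes from the setup script, but based on the commands used later in this guide it will look roughly like this (values are placeholders, not real credentials):
# Example .env contents - illustrative only; your generated file is authoritative
RESOURCE_GROUP=aro-hackathon-rg
ACR_NAME=myacr
REGISTRY_URL=myacr.azurecr.io
REGISTRY_USERNAME=myacr
REGISTRY_PASSWORD=<acr-access-key>
OPENSHIFT_CONSOLE_URL=https://console-openshift-console.apps.<cluster-domain>
Load it into your current shell with "source .env" before running the commands in the later challenges.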
The Task Manager application consists of:
- Frontend: React-based web UI
- Backend API: Node.js/Express
- Database: MongoDB
Docker Compose is a tool for defining and running multi-container Docker applications. It uses a YAML file to configure your application's services and allows you to start all services with a single command.
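For reference, these are the Docker Compose commands you will use most in this challenge (run them from the directory that contains the docker-compose.yml, i.e. on-prem-app/deployment):
docker-compose up -d      # start all services in the background
docker-compose ps         # list the running containers and their published ports
docker-compose logs -f    # follow the logs of all services (Ctrl+C to stop)
docker-compose down       # stop and remove the containers (add -v to also remove the data volume)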
Our Task Manager application consists of three main components, plus a web-based admin tool:
- Frontend: React-based web UI served via Nginx
- Backend API: Node.js/Express REST API
- Database: MongoDB for data storage
- MongoDB Express: Web-based MongoDB admin interface
cd on-prem-app/deployment
docker-compose up
Once the application is running, you can access:
- Frontend: http://localhost
- Backend API: http://localhost:3001/api/tasks
- MongoDB Express: http://localhost:8081
- Open MongoDB Express at http://localhost:8081. Default credentials are username: user and password: pass
- Navigate through the interface to:
- View the database structure
- Create sample tasks
- Modify existing data
- Observe how changes affect the application
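If you prefer the command line to the Mongo Express UI, you can reach the same data with mongosh inside the database container. This is a sketch: the service name mongodb is an assumption, so check docker-compose.yml for the actual name, and older MongoDB images ship the mongo shell instead of mongosh.
# Open a Mongo shell inside the running database container (run from on-prem-app/deployment)
docker-compose exec mongodb mongosh taskmanager
# Then, inside the shell:
# db.tasks.find().pretty()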
You can use tools like cURL, Postman, or your browser to test the API:
# Get all tasks
curl http://localhost:3001/api/tasks
# Create a new task
curl -X POST http://localhost:3001/api/tasks \
-H "Content-Type: application/json" \
-d '{"title":"New Task","description":"Task description","status":"pending"}'
# Update a task (replace TASK_ID with actual ID)
curl -X PUT http://localhost:3001/api/tasks/TASK_ID \
-H "Content-Type: application/json" \
-d '{"status":"completed"}'
# Delete a task (replace TASK_ID with actual ID)
curl -X DELETE http://localhost:3001/api/tasks/TASK_ID
Your team will need to complete the following challenges:
- Build and push the container images to your Azure Container Registry
- Deploy the application to your ARO cluster using the provided Kubernetes manifests
- Configure routes to expose the application externally
- Verify the deployment and ensure it's working correctly
# Navigate back to the repository root
cd ../..
sed -i "s|\${REGISTRY_URL}/task-manager-backend|$REGISTRY_URL/task-manager-backend|g" aro-templates/manifests/backend-deployment.yaml
sed -i "s|\${REGISTRY_URL}/task-manager-frontend|$REGISTRY_URL/task-manager-frontend|g" aro-templates/manifests/frontend-deployment.yaml
Alternatively, you can edit the files manually:
- Open aro-templates/manifests/backend-deployment.yaml and frontend-deployment.yaml
- Find the line with image: ${REGISTRY_URL}/task-manager-backend:latest or similar
- Replace it with your actual ACR URL, e.g., image: myacr.azurecr.io/task-manager-backend:latest
If you are running these commands in WSL, switch to PowerShell, as your build argument may not be passed through correctly.
# Log in to your ACR
# You may need to load your environment file first with: source .env
source .env
az acr login --name $ACR_NAME
# Navigate to the frontend directory
cd on-prem-app/frontend
# Build using the OpenShift-specific Dockerfile
docker build -t $ACR_NAME.azurecr.io/task-manager-frontend:latest --build-arg REACT_APP_API_URL=/api -f Dockerfile.openshift .
# Push the frontend image
docker push $ACR_NAME.azurecr.io/task-manager-frontend:latest
# Navigate to the backend directory
cd ../backend
# Build and tag the backend image
docker build -t $ACR_NAME.azurecr.io/task-manager-backend:latest .
# Push the backend image
docker push $ACR_NAME.azurecr.io/task-manager-backend:latest
# Log in to your ARO cluster through the UI
echo Log in to the OpenShift Portal here: $OPENSHIFT_CONSOLE_URL
echo You can find your username and password in your .env file.
# Navigate to your username in the top right and select "copy login token"
# Login using the command provided with your token and server
oc login --token=********** --server=**********
# Create a project for the application
oc new-project task-manager
# Create a secret for pulling images from ACR
oc create secret docker-registry acr-secret \
--docker-server=$REGISTRY_URL \
--docker-username=$REGISTRY_USERNAME \
--docker-password=$REGISTRY_PASSWORD
# Link the secret to the service account
oc secrets link default acr-secret --for=pull
# Add a security context constraint so the default service account can run the frontend pod with the custom security context required by its nginx.conf
oc adm policy add-scc-to-user anyuid -z default -n task-manager
# Edit the deployment manifests to use your ACR
# Replace ${YOUR_ACR_URL} with your actual ACR URL in the manifests
# Apply the manifests
cd ../..
oc apply -f aro-templates/manifests/mongodb-deployment.yaml
oc apply -f aro-templates/manifests/backend-deployment.yaml
oc apply -f aro-templates/manifests/frontend-deployment.yaml
# Check if pods are running
oc get pods
# Check the created routes
oc get routes
# Test the backend API and expect an empty response (Just no errors)
curl http://$(oc get route backend-api -o jsonpath='{.spec.host}')/api/tasks
# Open the frontend URL in your browser
echo "Frontend URL: http://$(oc get route frontend-route -o jsonpath='{.spec.host}')"Now create a task and check the frontend has called the backend and put the entry in the db.
# Connect to MongoDB pod
MONGO_POD=$(kubectl get pods -l app=mongodb -o jsonpath='{.items[0].metadata.name}')
# Open MongoDB shell
kubectl exec -it $MONGO_POD -- mongosh taskmanager
# In the MongoDB shell, list all tasks
db.tasks.find().pretty()
- Navigate back to the OpenShift Console and go to Workloads > Pods to check that all pods are running
- Go to Networking > Routes to find URLs for your application
- Open the frontend route URL in your browser
- Test the application by creating, editing, and deleting tasks
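If you also want to verify from the command line, here is a minimal smoke-test sketch; the route names match the ones created earlier in this challenge, so adjust them if yours differ:
# Minimal smoke test against the exposed routes
FRONTEND_URL="http://$(oc get route frontend-route -o jsonpath='{.spec.host}')"
BACKEND_URL="http://$(oc get route backend-api -o jsonpath='{.spec.host}')"
# Expect an HTTP 200 from the frontend and a JSON array (possibly empty) from the API
curl -s -o /dev/null -w "frontend: %{http_code}\n" "$FRONTEND_URL"
curl -s "$BACKEND_URL/api/tasks"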
- Set up GitHub Actions for continuous integration and deployment
- Configure GitHub Secrets for secure pipeline execution
- Implement automated testing in the pipeline
- Create a workflow that deploys to ARO when changes are pushed to main
Your CI/CD pipeline will build and push your container images. We will also add some of our ARO cluster details as secrets, since we may want to use them later.
You will need to add the following secrets from your .env file to your repository.
Required GitHub Secrets:
- REGISTRY_URL: The URL of your Azure Container Registry (e.g., myregistry.azurecr.io)
- REGISTRY_USERNAME: Username for your container registry (usually the registry name)
- REGISTRY_PASSWORD: Password or access key for your container registry
- OPENSHIFT_SERVER: The API server URL of your ARO cluster
- OPENSHIFT_TOKEN: Authentication token for your ARO cluster (found in the OpenShift Portal)
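You can add these through your repository's Settings > Secrets and variables > Actions page. Alternatively, if you have the GitHub CLI (gh) installed and authenticated, here is a sketch that sets them from your shell; it assumes you run it from inside your cloned fork, have sourced your .env, and are logged in to the cluster with oc:
# Push the pipeline secrets to your fork with the GitHub CLI
source .env
gh secret set REGISTRY_URL --body "$REGISTRY_URL"
gh secret set REGISTRY_USERNAME --body "$REGISTRY_USERNAME"
gh secret set REGISTRY_PASSWORD --body "$REGISTRY_PASSWORD"
gh secret set OPENSHIFT_SERVER --body "$(oc whoami --show-server)"
gh secret set OPENSHIFT_TOKEN --body "$(oc whoami -t)"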
Your pipelines are triggered on commits, pull requests, or manual triggers.
Advanced Deployment Options:
- Consider using your pipeline to manage the app deployment not just the build.
- Consider setting up multiple environments (dev, staging, production)
- Implement Blue/Green deployment for zero-downtime updates
- Add post-deployment health checks to verify successful deployment (a minimal sketch follows this list)
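As a starting point for the health-check item above, a workflow step could run a script along these lines after oc apply. This is a sketch: the deployment, route, and project names follow the ones used elsewhere in this guide, so adjust them to match your manifests.
#!/usr/bin/env bash
# Post-deployment health check: wait for the rollout, then probe the API
set -euo pipefail
oc rollout status deployment/backend-api -n task-manager --timeout=120s
BACKEND_URL="http://$(oc get route backend-api -n task-manager -o jsonpath='{.spec.host}')"
for i in {1..10}; do
  if curl -sf "$BACKEND_URL/api/tasks" > /dev/null; then
    echo "Health check passed"
    exit 0
  fi
  echo "Waiting for the API to become healthy ($i/10)..."
  sleep 10
done
echo "Health check failed" >&2
exit 1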
While running containerised MongoDB inside ARO works for development, a production-ready architecture should use a managed database service for better scalability, reliability, and operational efficiency.
- Create an Azure Cosmos DB for MongoDB API
- Update the backend application to connect to Cosmos DB
- Commit changes to trigger CI pipeline image build
- Deploy the updated application on ARO
# Create MongoDB-compatible Cosmos DB account
az cosmosdb create \
--name aro-task-manager-db \
--resource-group $RESOURCE_GROUP \
--kind MongoDB \
--capabilities EnableMongo \
--server-version 4.0 \
--default-consistency-level Session
# Get the connection string
CONNECTION_STRING=$(az cosmosdb keys list \
--name aro-task-manager-db \
--resource-group $RESOURCE_GROUP \
--type connection-strings \
--query "connectionStrings[?description=='Primary MongoDB Connection String'].connectionString" -o tsv)
echo "Connection string: $CONNECTION_STRING"Modify backend/src/server.js to handle Cosmos DB connections:
// DB Connection
const DB_URI = process.env.MONGODB_URI || 'mongodb://localhost:27017/taskmanager';
const DB_OPTIONS = {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  ...(process.env.MONGODB_URI?.includes('cosmos.azure.com') ? {
    // Cosmos DB specific options
    retryWrites: false,
    ssl: true,
    tlsAllowInvalidCertificates: false,
  } : {})
};
mongoose.connect(DB_URI, DB_OPTIONS)
  .then(() => console.log('Connected to MongoDB'))
  .catch(err => console.error('MongoDB connection error:', err));
Commit your backend code changes to trigger the CI pipeline:
# Add your changes
git add backend/src/server.js
# Commit with a descriptive message
git commit -m "Update backend to support Azure Cosmos DB"
# Push to trigger CI pipeline
git push origin main
Wait for your CI pipeline to complete and build a new container image.
# Create a secret for the MongoDB connection string
oc create secret generic mongodb-credentials \
--namespace task-manager \
--from-literal=connection-string="$CONNECTION_STRING"
Create a new file called backend-deployment-cosmos.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api
  namespace: task-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-api
  template:
    metadata:
      labels:
        app: backend-api
    spec:
      containers:
        - name: backend-api
          image: ${YOUR_ACR_URL}/taskmanager-backend:latest # Will use the latest image built by CI
          ports:
            - containerPort: 3001
          env:
            - name: MONGODB_URI
              valueFrom:
                secretKeyRef:
                  name: mongodb-credentials
                  key: connection-string
            - name: PORT
              value: "3001"
Apply the updated deployment:
# Delete the existing backend API deployment
kubectl delete deployment backend-api
# Replace ${YOUR_ACR_URL} with your actual ACR URL
sed "s|\${YOUR_ACR_URL}|$REGISTRY_URL|g" backend-deployment-cosmos.yaml | oc apply -f -
# Scale down MongoDB (optional - you can keep it running as a fallback)
oc scale deployment mongodb --replicas=0 -n task-manager
- Check that the backend pod is running with the updated configuration:
oc get pods -n task-manager
- View the backend logs to confirm it is connecting to Cosmos DB:
oc logs -f deployment/backend-api -n task-manager
- Test the application functionality to ensure data operations work correctly.
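To double-check that the deployment is wired to the Cosmos DB secret rather than the in-cluster MongoDB, you can list its environment configuration; secret references are shown by name without printing their values:
# List env vars and where they come from (secret refs are listed, not resolved)
oc set env deployment/backend-api --list -n task-manager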
Moving the database to a managed Cosmos DB account brings several benefits:
- Reduced Cluster Resource Usage: No MongoDB pods in your ARO cluster
- Improved Reliability: Azure's 99.99% SLA for Cosmos DB
- Automatic Scaling: Cosmos DB scales throughput based on demand
- Global Distribution: Option to distribute data globally if needed
- Security Improvements: Managed encryption at rest and in transit
- Enable GitHub Copilot in your development environment
- Use Copilot to add a new feature to the application. Some ideas:
- Task search functionality
- Task categories or tags
- Due date reminders
- User authentication
- Document how Copilot assisted in the development process
- Use GitHub AI Models to:
- Generate documentation for your code
- Create useful comments
- Explain complex sections of the codebase
- Suggest optimizations or improvements
- Compare the suggestions against the original code
- Implement at least one improvement suggested by the AI
- Enable GitHub Advanced Security features:
- Code scanning with CodeQL
- Dependency scanning
- Secret scanning
- Add security scanning to your CI/CD pipeline
- Address any security issues identified by the scans
- Implement dependency management best practices
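For the dependency-management item above, one lightweight option (not required by the hackathon) is npm's built-in audit, run locally or as a pipeline step; this assumes each service keeps a package-lock.json in the directories used earlier:
# Check each service's dependencies for known vulnerabilities
cd on-prem-app/backend && npm audit --audit-level=high
cd ../frontend && npm audit --audit-level=high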
- Set up basic monitoring for your application in ARO
- Implement logging and configure log aggregation
- Create at least one dashboard to visualize application performance
- Configure alerts for critical metrics
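While you design your monitoring setup, the built-in OpenShift tooling already gives a quick look at resource usage and recent events for the project, for example:
# Current CPU and memory usage per pod in the project
oc adm top pods -n task-manager
# Recent events, useful when a pod is failing or crash-looping
oc get events -n task-manager --sort-by=.lastTimestamp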
- Azure Red Hat OpenShift Documentation
- GitHub Actions Documentation
- GitHub Copilot Documentation
- GitHub Advanced Security
- OpenShift Developer Documentation
If you encounter issues during the hackathon, please reach out to the mentors who will be available to assist you.
Good luck and happy hacking!