Containerization with Docker makes it possible to run servers on physical machines, virtual machines, cloud clusters, public instances and so on. Fiorano supports containerization, thereby enhancing scalability, high availability, workload sharing and application deployment.
The Fiorano servers are packaged in a docker image for easy deployment and scaling at the user end. Docker containers bundle the servers together with their environment, removing the need for initial environment configuration. Kubernetes is used as the container orchestration tool to deploy Fiorano SOA in a cluster while managing load balancing and improving fault tolerance. Fiorano SOA is built into an image and then deployed onto the Kubernetes cluster.
This section illustrates deploying Fiorano API servers in a Kubernetes cluster using Minikube.
- 1 Installing Fiorano platform and making changes in the installer
- 2 Installing Docker
- 3 Creating a Docker Image from the Fiorano setup
- 4 Kubernetes Cluster Setup on Local System
- 4.1 Installing Virtual Box
- 4.2 Installing Minikube (for Debian-based Linux)
- 4.3 Setting up a Kubernetes cluster
- 4.4 Scaling
- 4.4.1 Manual Scaling
- 4.4.2 Autoscaling
- 5 Kubernetes Cluster Setup on Google Cloud with Istio
- 5.1 Creating Kubernetes Cluster on Google Cloud
- 5.2 Configuring Istio
- 5.3 Configuration changes to Fiorano installer for cloud setup
- 5.4 Create Persistent Volume Claims
- 5.5 Configure Postgres Login
- 5.6 Create the load balancer services
- 5.7 Configuring Ingress hosts and ports
- 5.8 Create AMS, Cassandra and postgres deployment
- 5.9 Create AGS stateful set deployment
- 5.10 Create the kubernetes gateway service to access services from outside the cluster
- 5.11 Create the kubernetes virtual services which specify the host URI
- 5.12 Load Kiali Dashboard
Installing Fiorano platform and making changes in the installer
- Download the latest Fiorano Installer from the link below and install it:
  https://www.fiorano.com/resources/product_downloads
- Use ConfigDeployer from $FioranoHome/esb/tools to set up the AGS server1 profile by modifying the Cassandra DataStore URL, FESPrimary URL and FESSecondary URL with the customized IP 10.96.0.20 (the Cluster IP in Kubernetes, which redirects requests to the pods in a deployment) and the ports 9042, 2147 and 2148 respectively.
- The Postgres driver must be added to $FioranoHome/esb/server/bin/server.conf as "/postgresql-9.1-902.jdbc4.jar" in the Fiorano sources for Postgres to connect (see the sketch after this list).
- Extract stripInstaller.tar.gz and run the script in the format mentioned in the Readme file to reduce the size of the installer.
- Place a copy of the installer in the ams_build and ags_build folders to create separate AMS and AGS images.
- Refer to the Creating a Docker Image from the Fiorano setup section and make changes to the Dockerfile as necessary.
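As an illustration, the driver entry can be appended to server.conf from the shell. This one-liner assumes server.conf accepts one jar path per line; verify the format against your installation before using it:
# Hypothetical sketch: register the Postgres JDBC driver with the server.
# Assumes server.conf takes one classpath entry per line; adjust the jar path
# to where the driver actually resides in your installation.
echo "/postgresql-9.1-902.jdbc4.jar" >> $FioranoHome/esb/server/bin/server.conf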
Installing Docker
For Debian-based Linux systems
Install Dependency packages
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
Add Docker's official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Add the Docker repository
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(. /etc/os-release; echo "$UBUNTU_CODENAME") stable"
cat /etc/apt/sources.list.d/additional-repositories.list
The file should show an entry such as:
"deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"
Install Docker Engine and Docker Compose
sudo apt-get update
sudo apt-get -y install docker-ce docker-compose
Add a user to the docker group to run docker commands as a non-privileged user.
sudo usermod -aG docker $USER
- Log out and log back in so that the group membership is re-evaluated.
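As a quick sanity check that Docker is installed and the group change took effect, run the standard hello-world smoke test:
# Pulls and runs Docker's hello-world image; prints a confirmation message
docker run hello-world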
Creating a Docker Image from the Fiorano setup
Create a folder with the following structure and place the Dockerfile in it:
- Working directory
  - Fiorano
    - 12.1.0
  - Dockerfile
  - Postgres Driver
Contents of the Dockerfile:
For creating ESB servers
FROM store/oracle/serverjre:1.8.0_241-b07
WORKDIR /
ADD Fiorano Fiorano
EXPOSE 8080 1980 1847 2047 1947 1880 1867 2067
CMD ./Fiorano/12.1.0/esb/server/bin/server.sh -mode fes -profile esb -nobackground
To create docker image of AMS servers
FROM store/oracle/serverjre:1.8.0_241-b07
WORKDIR /
ADD Fiorano Fiorano
ADD postgresql-9.1-902.jdbc4.jar postgresql-9.1-902.jdbc4.jar
EXPOSE 8080 1980 1847 2047 1947 1880 1867 2067 9042 7000 7001 7199 5432 1981
CMD Fiorano/12.1.0/esb/server/bin/server.sh -mode ams -profile server1 -nobackground
To create docker image of AGS servers
FROM store/oracle/serverjre:1.8.0_241-b07
WORKDIR /
ADD Fiorano Fiorano
ADD postgresql-9.1-902.jdbc4.jar postgresql-9.1-902.jdbc4.jar
EXPOSE 8080 1980 1847 2047 1947 1880 1867 2067 9042 7000 7001 7199 5432 1981
CMD Fiorano/12.1.0/esb/server/bin/server.sh -mode ags -profile server1 -serverName `echo $SERV_NAME | tr - _` -serverGroupName serverGroup1 -nobackground
Create the docker image from the setup
Open a terminal in the folder containing the Dockerfile and execute the command:
docker build -t <Image_name> .
Example:
$ docker build -t fiorano_ags .
Save the docker image to a tar file:
docker save <Image_name_from_docker_build> > <Name_of_tarfile>.tar
Example:
$ docker save fiorano_ags > fioranoAGSv12_1.tar
Likewise, docker images can be built and saved for Fiorano AGS, AMS and ESB servers as per requirement.
References and side notes for Docker
| Purpose | Command |
|---|---|
| Pull the Cassandra and Postgres images (the images can then be saved as described in the section above) | docker pull cassandra, docker pull postgres |
| Remove all stopped containers, all dangling images and all unused networks | docker system prune |
| List all containers | docker ps -a |
| List all docker images | docker images |
| Remove all dangling images | docker image prune |
| Remove all unused images | docker image prune -a |
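For example, the database images can be pulled and saved to tar files in the same way as the Fiorano images:
# Pull the official images and save them for transfer into the cluster VM
docker pull cassandra
docker pull postgres
docker save cassandra > cassandra.tar
docker save postgres > postgres.tar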
Kubernetes Cluster Setup on Local System
Installing Virtual Box
Download VirtualBox from https://www.virtualbox.org/wiki/Downloads and install it.
Installing Minikube (for Debian-based Linux)
Install and set up kubectl
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
Install Minikube
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.29.0/minikube-linux-amd64 && chmod +x minikube && sudo cp minikube /usr/local/bin/ && rm minikube
Start Minikube and check
$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
171.87 MB / 171.87 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
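After startup, the standard status commands confirm that the node is ready:
$ minikube status
$ kubectl get nodes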
Setting up a Kubernetes cluster
Set up a Network File System for persisting data
- Install nfs-server on the host machine and nfs-client on the client machine to access the files.
To link a host directory to a mount point, edit /etc/fstab and then give the command below:
$ sudo mount -a -t none
- To export a directory, edit /etc/exports and add the directory (see the illustration after these steps).
Use the command below to export all directories mentioned in /etc/exports:
$ exportfs -ra
Check that it is mounted successfully using the command below:
$ showmount -e
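As an illustration, the fstab bind entry and the export line could look like the following; the directory paths are hypothetical and should be replaced with your own:
# /etc/fstab entry: bind a host directory onto the exported mount point (example paths)
/home/fiorano/runtimedata  /srv/nfs/fiorano  none  bind  0  0
# /etc/exports entry: export the directory to clients (adjust network range and options)
/srv/nfs/fiorano  *(rw,sync,no_subtree_check)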
Start Minikube
For Windows 10 Professional Edition
- Install the Chocolatey package manager for ease of installation.
- Enable Hyper-V from Windows Features
Using chocolatey, install kubernetes-cli using the command:
choco install kubernetes-cli
Using chocolatey, install minikube using the command:
choco install minikube
- To set where the VM needs to be created, add MINIKUBE_HOME=<pathofVMdir> to the system environment variables.
- After the VM is created in that path, configure the Hyper-V Virtual Switch Manager to allow internet access for the VM.
Start the minikube VM using the command:
minikube start --vm-driver=hyperv --cpus=4 --memory=6144 --hyperv-virtual-switch="Primary" --disk-size=40GB
For Debian-based Linux
Start minikube and set up resources as per requirement:
$ minikube start --cpus=4 --memory=6144 --disk-size=40GB
Open dashboard using the command:
$ minikube dashboard
Minikube file setup
SSH into the VM
$ minikube ssh
Set user permissions on the data folder in minikube
$ su -
$ chmod -R 777 /mnt/sda1/data
Also note the <VM_IP> of the minikube node:
$ ifconfig
Transfer the .tar files saved from docker
The tar file can be copied using the scp command:
$ cd
$ scp -i .minikube/machines/minikube/id_rsa <tar_file_location> docker@<VM_IP>:/mnt/sda1/data
Example:
$ scp -i .minikube/machines/minikube/id_rsa /home/fiorano/Documents/postgres.tar docker@192.168.99.100:/mnt/sda1/data
Load the docker images
$ minikube ssh
$ cd /mnt/sda1/data
$ docker load < <tar_file>
Example:
$ docker load < fioranoAGSv12_1.tar
Load the Cassandra, Postgres, AMS and AGS images.
Enable port-forwarding in the minikube
$ minikube stop
Create a script with the required port-forwarding rules and execute it to open port forwarding (a hypothetical sketch follows):
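The original script's contents are not reproduced here; purely as a hypothetical illustration, a VirtualBox NAT rule can forward a host port to the stopped VM (port 2160 is the AGS port used later in this guide; adjust the rule name and ports to your setup):
# Hypothetical sketch, not the original script: add a VirtualBox NAT
# port-forwarding rule while the minikube VM is stopped.
VBoxManage modifyvm minikube --natpf1 "ags,tcp,,2160,,2160"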
Restart minikube with the existing configuration
$ minikube start
$ minikube dashboard
Execute the yaml files in the order mentioned below
The services.yaml file contains the ports that the node must expose for the AMS servers and the database, while also taking care of directing requests. It is of type NodePort and has the cluster IP 10.96.0.20.
$ kubectl apply -f services.yaml
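The actual services.yaml ships with the Fiorano setup; purely for orientation, a minimal sketch of its shape, assuming a hypothetical pod label fiorano-ams and the AMS ports named earlier in this guide, might look like:
apiVersion: v1
kind: Service
metadata:
  name: fiorano-ams
spec:
  type: NodePort
  clusterIP: 10.96.0.20
  selector:
    app: fiorano-ams        # assumed pod label
  ports:
    - name: cassandra
      port: 9042
    - name: fes-primary
      port: 2147
    - name: fes-secondary
      port: 2148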
The ags-services.yaml file contains the ports for accessing the resources created by a project. It is of type LoadBalancer and has the cluster IP 10.96.0.30.
$ kubectl apply -f ags-services.yaml
Set the login credentials for Postgres, comprising the username and password, in postgres-config.yaml.
$ kubectl apply -f postgres-config.yaml
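A minimal sketch of postgres-config.yaml, assuming the POSTGRES_USER and POSTGRES_PASSWORD variables read by the official postgres image (the values below are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  POSTGRES_USER: fiorano       # placeholder username
  POSTGRES_PASSWORD: changeme  # placeholder password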
Create persistent volumes and their respective claims for Cassandra, Postgres and Fiorano AMS runtimedata.
$ kubectl apply -f cassandra_pv_pvc.yaml
$ kubectl apply -f postgres_pv_pvc.yaml
$ kubectl apply -f fiorano_pv_pvc.yaml
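The pv_pvc files ship with the setup; for reference, a minimal NFS-backed volume and claim could be sketched as follows (the server address, path and size are illustrative assumptions):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.99.1      # hypothetical NFS server address (the host)
    path: /srv/nfs/cassandra  # hypothetical exported path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cassandra-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi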
Create the deployment for the multi-container pods containing Fiorano AMS and the databases.
$ kubectl apply -f fiorano-cassandra-deployments.yaml
Wait about 5 minutes for all the containers to start running and for the workloads to turn from yellow to green, which indicates that the deployments are successful.
Check whether the deployment is successful by opening the apimgmt dashboard in the browser.
Common causes of deployment failure:
- Fiorano Installer license expiry
- Failure to link to the persistent volume, which may be due to unavailability of the NFS server
- The docker image not being loaded in minikube ssh, or the docker image name in the yaml file not corresponding to the loaded image
Create a stateful set for the AGS servers. (Ensure that port forwarding has been enabled in the VM before this step.)
$ kubectl apply -f ags-stateful.yaml
Log in to the API Management dashboard and check that the servers (ags_0 and so on) are available in the server group serverGroup1.
Deploy projects and check the accessibility of the resources hosted by the gateway servers by changing the IP to localhost:2160.
Scaling
Manual Scaling
Change the number of replicas manually by clicking the "Edit" option on the stateful set in the dashboard.
Autoscaling
For autoscaling, add resource request and limit values to the AGS stateful set:
resources:
  limits:
    cpu: '4'
    memory: 4G
  requests:
    cpu: '1'
    memory: 2G
Now stop the running minikube:
$ minikube stop
- Start minikube with the following arguments.
For Debian based Linux:
minikube start --extra-config=controller-manager.horizontal-pod-autoscaler-upscale-delay=1m --extra-config=controller-manager.horizontal-pod-autoscaler-downscale-delay=1m --extra-config=controller-manager.horizontal-pod-autoscaler-sync-period=10s --extra-config=controller-manager.horizontal-pod-autoscaler-downscale-stabilization=1m
For Windows:
minikube start --vm-driver=hyperv --hyperv-virtual-switch="Primary" --extra-config=controller-manager.horizontal-pod-autoscaler-upscale-delay=1m --extra-config=controller-manager.horizontal-pod-autoscaler-downscale-delay=1m --extra-config=controller-manager.horizontal-pod-autoscaler-sync-period=10s --extra-config=controller-manager.horizontal-pod-autoscaler-downscale-stabilization=1m
Enable the metrics-server add on using the following command:
minikube addons enable metrics-server
Then wait for some time and create the autoscaler
kubectl autoscale statefulsets ags --cpu-percent=30 --min=1 --max=2
For system details: kubectl top node
For container details: kubectl top pod
For details of the autoscaler created above: kubectl describe hpa
For memory-based autoscaling, create a yaml file with the content below and set targetAverageUtilization in it as per requirement:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: fioranoapi12v1mem
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: StatefulSet
    name: ags
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 40
To deploy this autoscaler use the command below:
$ kubectl apply -f <fileName>.yaml
Use kubectl describe hpa to see what it is doing.
To delete the hpa, use:
$ kubectl delete hpa fioranoapi12v1mem
Kubernetes Cluster Setup on Google Cloud with Istio
Creating Kubernetes Cluster on Google Cloud
Log in and select Kubernetes Engine on the Google Cloud Platform.
Create a cluster by running the following command in the cloud shell:
$ gcloud container clusters create fiorano-api-cluster --cluster-version latest --machine-type=n1-standard-2 --num-nodes 4 --zone asia-south1-b --project esbtest-14082018
Retrieve your credentials for kubectl
$ gcloud container clusters get-credentials fiorano-api-cluster --zone asia-south1-b --project esbtest-14082018
Grant cluster administrator (admin) permissions to the current user
To create the necessary RBAC rules for Istio, the current user requires admin permissions.
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value core/account)
Configuring Istio
Downloading Istio
Go to the Istio release page to download the installation file for your OS, or download and extract the latest release automatically (Linux or macOS) as mentioned in https://istio.io/docs/setup/getting-started/
Run the following in the cloud shell
$ curl -L https://istio.io/downloadIstio | sh -
Adding the istioctl client to your cloud system path
$ cd istio-1.5.0
$ export ISTIO_HOME="/path/to/istio/istio-1.5.0"
$ export PATH=$ISTIO_HOME/bin:$PATH
Configuring Istio Profile
For this installation, we use the demo configuration profile, which provides a good set of defaults for testing along with dashboards such as Kiali and Prometheus.
$ istioctl manifest apply --set profile=demo
Configuring Istio Namespace to allow injection
Add a namespace label to instruct Istio to automatically inject Envoy sidecar proxies when you deploy your application later:
$ kubectl label namespace default istio-injection=enabled
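The label can be verified with a standard kubectl query:
$ kubectl get namespace default --show-labels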
Configuration changes to Fiorano installer for cloud setup
In the latest Fiorano installer, use ConfigDeployer to change the IP for the Cassandra DataStore URL and the Primary and Secondary URLs to 10.35.240.20 (the configured cluster IP for AMS) in the server1 profile for AGS.
Configuring the Docker image
- Upload the tar files created for AMS and AGS, compressed as a zip, to the Google Cloud console; after the upload, extract them.
- Using the cloud shell, change to the directory containing the docker images.
Run the following to load the docker images into the cloud docker:
$ docker load < FIORANO_DOCKER_IMAGE
Add Cred Helper
Add the Docker credHelper entry to Docker's configuration file, or create the file if it doesn't exist. This registers gcloud as the credential helper for all Google-supported Docker registries (refer to https://cloud.google.com/container-registry/docs/pushing-and-pulling):
$ gcloud auth configure-docker
Create tags with the registry name
Example:
$ docker tag fiorano_ams gcr.io/esbtest-14082018/fiorano_ams:latest
$ docker tag fiorano_ags gcr.io/esbtest-14082018/fiorano_ags:latest
Push the tagged images to the container registry
Example:
$ docker push gcr.io/esbtest-14082018/fiorano_ams:latest
$ docker push gcr.io/esbtest-14082018/fiorano_ags:latest
Create Persistent Volume Claims
Run the following commands to execute the yaml files for the persistent volume claim configuration. Navigate to the folder containing the yamls before executing:
$ kubectl apply -f cassandra_pv_pvc.yaml
$ kubectl apply -f postgres_pv_pvc.yaml
$ kubectl apply -f fiorano_pv_pvc.yaml
Configure Postgres Login
Apply the postgres configuration file for login credentials
The template of the file can be found here.
$ kubectl apply -f postgres-config.yaml
Create the load balancer services
Load Balancer configuration for AMS
Sample services.yaml can be found here
$ kubectl apply -f services.yaml
Load Balancer Configuration for AGS
Sample ags-services.yaml can be found here
$ kubectl apply -f ags-services.yaml
Configuring Ingress hosts and ports
$ export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
$ export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')
Create AMS, Cassandra and postgres deployment
Sample yaml file can be found here.
$ kubectl apply -f fiorano-cass-post-deployment.yaml
Create AGS stateful set deployment
Sample yaml file can be found here.
$ kubectl apply -f ags-stateful.yaml
Create the kubernetes gateway service to access services from outside the cluster
Sample yaml file can be found here.
$ kubectl apply -f gateway.yaml
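In case the sample is not at hand, a minimal Istio Gateway could be sketched as below; the resource name, port and host match are illustrative assumptions:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: fiorano-gateway        # hypothetical name
spec:
  selector:
    istio: ingressgateway      # Istio's default ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"                  # illustrative: accept all hosts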
Create the kubernetes virtual services which specify the host URI
Click the file names to get sample virtual.yaml and resource.yaml files
$ kubectl apply -f virtual.yaml
$ kubectl apply -f resource.yaml
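Similarly, a minimal VirtualService sketch (the gateway reference, URI prefix and route destination are illustrative assumptions):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: fiorano-vs             # hypothetical name
spec:
  hosts:
    - "*"                      # illustrative host match
  gateways:
    - fiorano-gateway          # the gateway sketched above
  http:
    - match:
        - uri:
            prefix: /apimgmt   # hypothetical URI prefix
      route:
        - destination:
            host: fiorano-ams  # hypothetical in-cluster service name
            port:
              number: 8080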
To check the INGRESS_HOST and port values, echo the variables in the cloud shell as shown below.
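This assumes the export commands from the Configuring Ingress hosts and ports section have been run in the same shell:
$ echo "host=$INGRESS_HOST http=$INGRESS_PORT https=$SECURE_INGRESS_PORT"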
Load Kiali Dashboard
$ istioctl dashboard kiali