
Docker containers allow servers to run on physical machines, virtual machines, cloud clusters, public cloud instances, and so on. Fiorano supports containerization, which enhances scalability, high availability, workload sharing, and application deployment.

The Fiorano servers are packaged in a Docker image for easy deployment and scaling at the user end. Docker containers bundle the servers together with their environment, removing the need for initial environment configuration. Kubernetes is used as the container orchestration tool to deploy Fiorano SOA in a cluster, managing load balancing and improving fault tolerance. Fiorano SOA is packaged as an image and then deployed onto the Kubernetes cluster.

This section illustrates deploying Fiorano API servers in a Kubernetes cluster, first locally using Minikube and then on Google Cloud with Istio.


Installing the Fiorano platform and making changes in the installer

  1. Click the link below to download the latest Fiorano installer, and install it:
    https://www.fiorano.com/resources/product_downloads
  2. Use ConfigDeployer from $FioranoHome/esb/tools to set up the AGS server1 profile: change the Cassandra DataStore URL, FESPrimary URL, and FESSecondary URL to use the customized IP 10.96.0.20 (the Cluster IP in Kubernetes that redirects requests to the pods in a deployment) with the ports 9042, 2147, and 2148 respectively.
  3. Add the Postgres driver to $FioranoHome/esb/server/bin/server.conf as "/postgresql-9.1-902.jdbc4.jar" under Fiorano Sources so that the server can connect to Postgres.
  4. Extract stripInstaller.tar.gz and run the script as described in its Readme file to reduce the size of the installer.
  5. Place a copy of the installer in the ams_build and ags_build folders to create separate AMS and AGS images.
  6. Refer to Creating a Docker Image from the Fiorano setup below and make changes in the Dockerfile as necessary.

Installing Docker

For Debian-based Linux systems

  1. Install Dependency packages

    sudo apt-get update
    sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
  2. Add Docker's official GPG key:

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -


  3. Add the Docker repository

    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(. /etc/os-release; echo "$UBUNTU_CODENAME") stable"
    cat /etc/apt/sources.list.d/additional-repositories.list
    The file should contain a line such as "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable" (the codename will match your Ubuntu release).
  4. Install Docker Engine and Docker Compose

    sudo apt-get update
    sudo apt-get -y install docker-ce docker-compose
  6. Add your user to the docker group so that docker commands can be run as a non-privileged user.

    sudo usermod -aG docker $USER
  6. Log out and log back in so that the group membership is re-evaluated.

Creating a Docker Image from the Fiorano setup

Note: Configure the Cluster IP of AMS in the AGS configuration correctly before creating the image (see the installer configuration steps above).

Create a folder with the following structure and place the Dockerfile in it:

  • Working directory
    • Fiorano
      • 12.1.0
    • Dockerfile
    • Postgres Driver

Contents of the Dockerfile:

To create a Docker image of the ESB server:
FROM store/oracle/serverjre:1.8.0_241-b07
WORKDIR /
ADD Fiorano Fiorano
EXPOSE 8080 1980 1847 2047 1947 1880 1867 2067
CMD ./Fiorano/12.1.0/esb/server/bin/server.sh -mode fes -profile esb -nobackground
To create a Docker image of the AMS server:
FROM store/oracle/serverjre:1.8.0_241-b07
WORKDIR /
ADD Fiorano Fiorano
ADD postgresql-9.1-902.jdbc4.jar postgresql-9.1-902.jdbc4.jar
EXPOSE 8080 1980 1847 2047 1947 1880 1867 2067 9042 7000 7001 7199 5432 1981
CMD Fiorano/12.1.0/esb/server/bin/server.sh -mode ams -profile server1 -nobackground
To create a Docker image of the AGS server:
FROM store/oracle/serverjre:1.8.0_241-b07
WORKDIR /
ADD Fiorano Fiorano
ADD postgresql-9.1-902.jdbc4.jar postgresql-9.1-902.jdbc4.jar
EXPOSE 8080 1980 1847 2047 1947 1880 1867 2067 9042 7000 7001 7199 5432 1981
CMD Fiorano/12.1.0/esb/server/bin/server.sh -mode ags -profile server1 -serverName `echo $SERV_NAME | tr - _` -serverGroupName serverGroup1 -nobackground
Note:
  • The variable $SERV_NAME is passed as an environment variable from Kubernetes at deployment time to start the gateway servers, so use the same variable name in the deployment (a fragment is sketched after this list).
  • All servers must be run in foreground mode; otherwise the containers may crash.
  • Pulling the Oracle JRE in the Dockerfile may prompt for a login, which requires a Docker Hub user account; alternatively, substitute java:8 for store/oracle/serverjre:1.8.0_241-b07 in the Dockerfile.
  • Ensure that the path used in the CMD instruction matches the folder structure above.
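
The following is a minimal sketch (not taken from the shipped ags-stateful.yaml) of one way the SERV_NAME variable could be populated from the pod name using the Kubernetes downward API; it is a container-spec fragment, so adapt it to your actual stateful set definition.

    env:
    - name: SERV_NAME                  # consumed by the CMD line above; dashes are converted to underscores there
      valueFrom:
        fieldRef:
          fieldPath: metadata.name     # e.g. ags-0, ags-1 for a StatefulSet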

Create the docker image from the setup 

Open a terminal in the folder containing the Dockerfile and execute the command below (note the trailing dot, which sets the build context to the current folder):

docker build -t <Image_name> .
Example
$ docker build -t fiorano_ags .


Save the docker image to a tar file

docker save <Image_name_from_docker_build> > <Name_of_tarfile>.tar
Example
$ docker save fiorano_ags > fioranoAGSv12_1.tar

Likewise, docker images can be built and saved for Fiorano AGS, AMS and ESB servers as per requirement.

References and Side Note for docker

  • To pull the Cassandra and Postgres images (the images can then be saved as described in the section above): docker pull cassandra, docker pull postgres
  • To remove all stopped containers, all dangling images, and all unused networks: docker system prune
  • To list all containers: docker ps -a
  • To list all Docker images: docker images
  • To remove all dangling images: docker image prune
  • To remove all images: docker image prune -a

Kubernetes Cluster Setup on Local System

Note: Docker must have been installed successfully before continuing with this setup.

Installing VirtualBox

Download and install VirtualBox from https://www.virtualbox.org/wiki/Downloads

Installing Minikube (for Debian-based Linux)

Install and set up kubectl

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
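
To verify that the client is installed correctly (an optional check, not part of the original steps):

$ kubectl version --client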

Install Minikube

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.29.0/minikube-linux-amd64 && chmod +x minikube && sudo cp minikube /usr/local/bin/ && rm minikube

Start Minikube and check

$ minikube start

Starting local Kubernetes v1.10.0 cluster...
Starting VM...

Downloading Minikube ISO
171.87 MB / 171.87 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

Minikube

  • Command to ssh into the VM
$ minikube ssh
  • Command to stop
$ minikube stop
  • Command to delete VM
$ minikube delete

Setting up a Kubernetes cluster

Set up a Network File System for persisting data 

  1. Install nfs-server on the host machine and nfs-client on the client machine to access the files (example /etc/exports and /etc/fstab entries are sketched after this list).
  2. To link a host directory to a mount point, edit /etc/fstab and then run the command below:

    $ sudo mount -a -t none
  3. To export a directory over NFS, add it to /etc/exports.
  4. Use the command below to export all directories listed in /etc/exports:

    $ exportfs -ra
  5. Check that the directories are exported successfully using the command below:

    $ showmount -e
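
For illustration only (the paths, mount point, and subnet below are hypothetical; substitute your own), the corresponding entries might look like this:

# /etc/fstab on the NFS host: bind a data directory into the export tree
/home/fiorano/data   /export/data   none   bind   0   0

# /etc/exports on the NFS host: export the directory to the cluster subnet
/export/data   192.168.99.0/24(rw,sync,no_subtree_check)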

Start Minikube

  • For Windows 10 Professional Edition
    1. Install Chocolatey package manager for ease.
    2. Enable Hyper-V from Windows Features
    3. Using chocolatey, install kubernetes-cli using the command:

      choco install kubernetes-cli
    4. Using chocolatey, install minikube using the command:

      choco install minikube
    5. To set where the VM should be created, add MINIKUBE_HOME=<pathofVMdir> to the system environment variables.
    6. After the VM is created in that path, configure the Hyper-V Virtual Switch Manager to allow internet access for the VM.
    7. Start minikube vm using the command:

      minikube start --vm-driver=hyperv --cpus=4 --memory=6144 --hyperv-virtual-switch="Primary" --disk-size=40GB
  • For Debian based Linux
    1. Start minikube and set up resources as per requirement 

      $ minikube start --cpus=4 --memory=6144 --disk-size=40GB
    2. Open dashboard using the command:

      $ minikube dashboard

Minikube file setup 

  1. Ssh into the VM

    $ minikube ssh
  2. Set user permissions in the data folder in minikube

    $ su -
    $ chmod -R 777 /mnt/sda1/data
  3. Also note the <VM_IP> of the minikube node

    $ ifconfig

Transfer the .tar files saved from docker

The tar files can be copied using the scp command:

$ cd
$ scp -i .minikube/machines/minikube/id_rsa <tar_file_location> docker@<VM_IP>:/mnt/sda1/data
Example
scp -i .minikube/machines/minikube/id_rsa /home/fiorano/Documents/postgres.tar docker@192.168.99.100:/mnt/sda1/data 


Load the docker images  

$ minikube ssh
$ cd /mnt/sda1/data
$ docker load < <tar_file>
Usage
docker load < fioranoAGSv12_1.tar

Load the cassandra, postgres, AMS and AGS images

Enable port forwarding in the Minikube VM

$ minikube stop

Create a script with the contents below and execute it to set up port forwarding:

VBoxManage modifyvm "minikube" --natpf1 "cassandra,tcp,,9042,,31042"
VBoxManage modifyvm "minikube" --natpf1 "peer,tcp,,1880,,31880"
VBoxManage modifyvm "minikube" --natpf1 "postgres,tcp,,5432,,31432"
VBoxManage modifyvm "minikube" --natpf1 "server,tcp,,2147,,32147"
VBoxManage modifyvm "minikube" --natpf1 "soa dashboard,tcp,,1980,,31980"
VBoxManage modifyvm "minikube" --natpf1 "tls-intra,tcp,,7001,,31001"
VBoxManage modifyvm "minikube" --natpf1 "jmx,tcp,,7199,,31099"
VBoxManage modifyvm "minikube" --natpf1 "intra-node,tcp,,7000,,31000"
VBoxManage modifyvm "minikube" --natpf1 "https,tcp,,14401,,443"
VBoxManage modifyvm "minikube" --natpf1 "http,tcp,,14400,,80"
VBoxManage modifyvm "minikube" --natpf1 "backup,tcp,,2148,,32148"
VBoxManage modifyvm "minikube" --natpf1 "apimgmt,tcp,,1981,,31981"
VBoxManage modifyvm "minikube" --natpf1 "resources,tcp,,2160,,32160"
VBoxManage modifyvm "minikube" --natpf1 "rmi,tcp,,2367,,32367"
VBoxManage modifyvm "minikube" --natpf1 "conn,tcp,,2167,,32167" 

Restart Minikube with the existing configuration

$ minikube start
$ minikube dashboard 

Execute the yaml files in the order mentioned below

  1. The services.yaml file contains the ports required to be exposed by the node for the AMS servers and databases, and directs requests to them. It is of type NodePort and has cluster IP 10.96.0.20 (a minimal sketch appears after this list).

    $ kubectl apply -f services.yaml
  2. ags-services.yaml contains the ports for accessing the resources created by a project. It is of type LoadBalancer and has cluster IP 10.96.0.30.

    $ kubectl apply -f ags-services.yaml
    Note: Ensure that the service name in the yaml corresponds to the app name of the deployment or stateful set to be created; otherwise it may not work properly.

  3. Set the login credentials (username and password) for Postgres in postgres-config.yaml

    $ kubectl apply -f postgres-config.yaml  
  4. Create persistent volumes and their respective claims for Cassandra, Postgres, and Fiorano AMS runtime data.

    $ kubectl apply -f cassandra_pv_pvc.yaml
    $ kubectl apply -f postgres_pv_pvc.yaml
    $ kubectl apply -f fiorano_pv_pvc.yaml


  5. Create the deployment for multi-container pods containing Fiorano AMS and the databases.

    $ kubectl apply -f fiorano-cassandra-deployments.yaml



    Wait about 5 minutes for all the containers to start and for the workloads to turn from yellow to green, which indicates that the deployments are successful.
    Check that the deployment succeeded by opening the apimgmt dashboard in the browser.
    Common causes of deployment failure:

    1. Expired Fiorano installer license
    2. Failure to bind to the persistent volume, which may be due to the NFS server being unavailable
    3. The Docker image not being loaded inside the Minikube VM, or the image name in the yaml file not matching the loaded image
  6. Create a stateful set for the AGS servers. (Ensure port forwarding has been enabled in the VM before this step.)

    $ kubectl apply -f ags-stateful.yaml
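
For reference, a minimal services.yaml along the lines described in step 1 might look like the sketch below. The service name, selector label, and port list are illustrative assumptions; use the actual file shipped with your setup and keep the clusterIP and app name consistent with your deployment.

apiVersion: v1
kind: Service
metadata:
  name: fiorano-ams              # assumed name; must match the app name of the AMS deployment
spec:
  type: NodePort
  clusterIP: 10.96.0.20          # the cluster IP configured in the AGS profile
  selector:
    app: fiorano-ams             # assumed label; match the labels of your deployment
  ports:
  - name: cassandra
    port: 9042
    targetPort: 9042
    nodePort: 31042
  - name: fes-primary
    port: 2147
    targetPort: 2147
    nodePort: 32147
  - name: soa-dashboard
    port: 1980
    targetPort: 1980
    nodePort: 31980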


Log in to the API Management dashboard and check that the servers (ags_0 and so on) are available in the server group serverGroup1.

Deploy projects and check the accessibility of the resources hosted by the gateway servers by changing the IP to localhost:2160.

Note: This works provided port 2160 has been port-forwarded in the VM and the ags-service is deployed.

Scaling

Manual Scaling

Change the number of replicas manually by clicking the "Edit" option on the stateful set in the dashboard, or from the command line as shown below.
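
A command-line equivalent (assuming the stateful set is named ags, as in the deployment steps above):

$ kubectl scale statefulset ags --replicas=2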

Autoscaling

  1. For autoscaling, add resource request and limit values to the AGS stateful set, as shown below:

    resources:
      limits:
        cpu: '4'
        memory: 4G
      requests:
        cpu: '1'
        memory: 2G



  2. Now stop minikube in current configuration 

    $ minikube stop
  3. Start minikube with the following arguments.
    1. For Debian based Linux:

      minikube start --extra-config=controller-manager.horizontal-pod-autoscaler-upscale-delay=1m --extra-config=controller-manager.horizontal-pod-autoscaler-downscale-delay=1m --extra-config=controller-manager.horizontal-pod-autoscaler-sync-period=10s --extra-config=controller-manager.horizontal-pod-autoscaler-downscale-stabilization=1m
    2. For Windows: 

      minikube start --vm-driver=hyperv --hyperv-virtual-switch="Primary" --extra-config=controller-manager.horizontal-pod-autoscaler-upscale-delay=1m --extra-config=controller-manager.horizontal-pod-autoscaler-downscale-delay=1m --extra-config=controller-manager.horizontal-pod-autoscaler-sync-period=10s --extra-config=controller-manager.horizontal-pod-autoscaler-downscale-stabilization=1m



  4. Enable the metrics-server add on using the following command:

    minikube addons enable metrics-server
  5. Then wait for some time and create the autoscaler

    kubectl autoscale statefulsets ags --cpu-percent=30 --min=1 --max=2

    For system details:
    kubectl top node
    For container details:
    kubectl top pod
    For the above-mentioned autoscaler details:
    kubectl describe hpa



  6. For memory-based autoscaling, create a yaml file with the content below and set targetAverageUtilization in it as per requirement:

    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: fioranoapi12v1mem
      namespace: default
    spec:
      scaleTargetRef:
        apiVersion: apps/v1beta1
        kind: StatefulSet
        name: ags
      minReplicas: 1
      maxReplicas: 2
      metrics:
      - type: Resource
        resource:
          name: memory
          targetAverageUtilization: 40
  7. To deploy this autoscaler use the command below:

    $ kubectl apply -f <fileName>.yaml



  8. Use kubectl describe hpa to see what it is doing.

     
    To delete hpa, use:

    $ kubectl delete hpa fioranoapi12v1mem

Kubernetes Cluster Setup on Google Cloud with Istio

Creating Kubernetes Cluster on Google Cloud

Log in and select Kubernetes Engine on the Google Cloud Platform

Create a cluster by running the following command on cloud shell
$ gcloud container clusters create fiorano-api-cluster --cluster-version latest --machine-type=n1-standard-2 --num-nodes 4 --zone asia-south1-b --project esbtest-14082018

Retrieve your credentials for kubectl
Example
$ gcloud container clusters get-credentials fiorano-api-cluster --zone asia-south1-b --project esbtest-14082018
Grant cluster administrator (admin) permissions to the current user

To create the necessary RBAC rules for Istio, the current user requires admin permissions.

$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value core/account)

Configuring Istio

Downloading Istio

Go to the Istio release page to download the installation file for your OS, or download and extract the latest release automatically (Linux or macOS) as mentioned in https://istio.io/docs/setup/getting-started/

Run the following in the cloud shell

$ curl -L https://istio.io/downloadIstio | sh -
Adding the istioctl client to your Cloud Shell path
Note: Use the version of Istio that you downloaded, and correct the paths below to suit that version.

$ cd istio-1.5.0
$ export ISTIO_HOME="/path/to/istio/istio-1.5.0"
$ export PATH=$ISTIO_HOME/bin:$PATH
Configuring Istio Profile

For this installation, we use the demo configuration profile. It is selected because it provides a good set of defaults for testing, along with dashboards such as Kiali and Prometheus.

$ istioctl manifest apply --set profile=demo

Configuring Istio Namespace to allow injection

Add a namespace label to instruct Istio to automatically inject Envoy sidecar proxies when you deploy your application later

$ kubectl label namespace default istio-injection=enabled

Configuration changes to Fiorano installer for cloud setup

In the latest Fiorano installer, use ConfigDeployer to change the Cassandra, Primary, and Secondary URLs of the AGS server1 profile to 10.35.240.20 (the cluster IP configured for AMS).

Configuring the Docker image
  1. Compress the tar files created for AMS and AGS into a zip, upload it to the Google Cloud console, and extract it after upload.
  2. Using Cloud Shell, change to the directory containing the Docker images.
  3. Run the following command to load the Docker images into the cloud Docker daemon:

    $ docker load < FIORANO_DOCKER_IMAGE

    Note: Change FIORANO_DOCKER_IMAGE to the actual Docker image file name, for both AMS and AGS.

  4. Add Cred Helper
    Add the Docker credHelper entry to Docker's configuration file, creating the file if it doesn't exist. This registers gcloud as the credential helper for all Google-supported Docker registries (refer to https://cloud.google.com/container-registry/docs/pushing-and-pulling).

    $ gcloud auth configure-docker

  5. Create tags with registry name

    Example
    $ docker tag fiorano_ams gcr.io/esbtest-14082018/fiorano_ams:latest
    $ docker tag fiorano_ags gcr.io/esbtest-14082018/fiorano_ags:latest
  6. Push the tagged images to container registry

    Example
    $ docker push gcr.io/esbtest-14082018/fiorano_ams:latest
    $ docker push gcr.io/esbtest-14082018/fiorano_ags:latest

Create Persistent Volume Claims

Navigate to the folder containing the yaml files, then run the following commands to apply the persistent volume and claim configuration (a minimal sketch of such a file appears after this list).

Note: Click the file names below to download sample templates for the respective yaml files.

  1. cassandra_pv_pvc.yaml

    $ kubectl apply -f cassandra_pv_pvc.yaml
  2. postgres_pv_pvc.yaml

    $ kubectl apply -f postgres_pv_pvc.yaml
  3. fiorano_pv_pvc.yaml

    $ kubectl apply -f fiorano_pv_pvc.yaml
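
If one of these files needs to be authored from scratch, a minimal hostPath-based sketch is shown below; the names, storage size, and path are illustrative assumptions, and the downloadable templates above should be preferred.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv              # assumed name
spec:
  capacity:
    storage: 5Gi                 # assumed size; match your requirement
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data/postgres     # assumed path; NFS or cloud disks can be used instead
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc             # referenced by the deployment's volume section
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi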

Configure Postgres Login

Apply the postgres configuration file for login credentials

The template of the file can be found here.

$ kubectl apply -f postgres-config.yaml
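
A minimal sketch of what such a configuration might contain is shown below; the key names and values are illustrative assumptions, so use the linked template for the format actually expected by the deployment.

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config          # assumed name; referenced by the Postgres container
data:
  POSTGRES_USER: postgres        # assumed key and value, for illustration only
  POSTGRES_PASSWORD: changeme    # assumed; store real credentials securely (e.g. in a Secret)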

Create the load balancer services

Note: Update the Cluster IP field in services.yaml and ags-services.yaml based on your setup. For services.yaml, use the same cluster IP that was set in the Fiorano profile while creating the Docker image for AGS.

Load Balancer configuration for AMS

Sample services.yaml can be found here

$ kubectl apply -f services.yaml
Load Balancer Configuration for AGS

Sample ags-services.yaml can be found here

$ kubectl apply -f ags-services.yaml
Note: Wait a few minutes for the LoadBalancer endpoint to be assigned.

Configuring Ingress hosts and ports

$ export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
$ export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')

Create AMS, Cassandra and postgres deployment

Note: Check that the image name in the yaml matches the tag pushed to the gcloud registry; otherwise the pods may fail.

Sample yaml file can be found here.

$ kubectl apply -f fiorano-cass-post-deployment.yaml
Note: Wait a few minutes for the pods to start running.
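
To watch the pods come up (standard kubectl commands, included here as a convenience):

$ kubectl get pods -w
$ kubectl describe pod <pod_name>    # inspect events if a pod stays in Pending or CrashLoopBackOff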

Create AGS stateful set deployment

Note: Check that the image name in the yaml matches the tag pushed to the gcloud registry; otherwise the pods may fail.

Sample yaml file can be found here.

$ kubectl apply -f ags-stateful.yaml

Create the Kubernetes gateway service to access services from outside the cluster

Note: Check that the hosts field is "*", or specify the ingress gateway IP stored in INGRESS_HOST.

Sample yaml file can be found here.

$ kubectl apply -f gateway.yaml
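
A minimal sketch of what gateway.yaml typically contains is shown below; the resource name is an assumption, and the sample file linked above remains the reference.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: fiorano-gateway          # assumed name; use the one from the sample file
spec:
  selector:
    istio: ingressgateway        # use Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                        # or restrict to the IP stored in INGRESS_HOST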

Create the Kubernetes virtual services, which specify the host and URI

Note: Check that the URI prefix is /api and that the hosts field is the ingress gateway IP stored in INGRESS_HOST.

Click the file names to get sample virtual.yaml and resource.yaml files

$ kubectl apply -f virtual.yaml
$ kubectl apply -f resource.yaml
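
For orientation, a virtual service along these lines might look like the sketch below; the gateway name, destination host, and port are placeholders for the names defined in your own yaml files, and the downloadable samples should be treated as authoritative.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: fiorano-api              # assumed name
spec:
  hosts:
  - "*"                          # or the IP stored in INGRESS_HOST
  gateways:
  - fiorano-gateway              # must match the Gateway created above
  http:
  - match:
    - uri:
        prefix: /api             # route API calls through the ingress gateway
    route:
    - destination:
        host: fiorano-ams        # placeholder service name; match your services.yaml
        port:
          number: 1980           # placeholder port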
Note: Now check that external access works by opening http://$INGRESS_HOST:$INGRESS_PORT/apimgmt in a browser.

To check INGRESS_HOST and INGRESS_PORT, run the following command in Cloud Shell:

$ echo $INGRESS_HOST:$INGRESS_PORT

Load Kiali Dashboard

$ istioctl dashboard kiali