Installing the MT4 gRPC API service to a MicroK8s Kubernetes cluster with round-robin balancing and sticky sessions
MicroK8s installation:
sudo apt update
sudo apt install snapd
sudo snap install microk8s --classic
sudo usermod -a -G microk8s $USER
Then restart your user session (log out and log back in) so the new group membership takes effect.
After this, you can check the MicroK8s status:
microk8s.status --wait-ready
microk8s kubectl get all --all-namespaces
Cert-manager installation:
microk8s enable cert-manager
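You can verify that the cert-manager pods came up (the addon deploys them to the cert-manager namespace):
microk8s kubectl get pods -n cert-manager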
Istio installation:
microk8s enable community
microk8s.enable istio
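Verify that the Istio control plane and the ingress gateway are running in the istio-system namespace:
microk8s kubectl get pods -n istio-system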
MetalLB external load balancer installation:
microk8s enable metallb
Set the IP address range to your server's external IP.
For example:
Enter each IP address range delimited by comma (e.g. '10.64.140.43-10.64.140.49,192.168.0.105-192.168.0.111'): 195.201.62.15-195.201.62.15
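After MetalLB is enabled, the Istio ingress gateway service (which is of type LoadBalancer) should receive this external IP. You can check it with:
microk8s kubectl get svc istio-ingressgateway -n istio-system
The EXTERNAL-IP column should show an address from the range you entered instead of <pending>.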
Creating a dedicated workload namespace "grpcss" and setting the current context to it:
microk8s kubectl create namespace grpcss
microk8s kubectl config set-context --current --namespace=grpcss
microk8s kubectl get namespaces
Enabling Istio sidecar injection for the created namespace:
microk8s kubectl label namespace grpcss istio-injection=enabled
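You can confirm the label was applied:
microk8s kubectl get namespace grpcss --show-labels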
Creating the Docker registry authentication secret:
microk8s kubectl create secret docker-registry mtapiregistrykey --docker-server=reg.mtapi.io:5050 --docker-username=<user name> --docker-password='<password>'
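You can confirm the secret exists in the grpcss namespace (it is created there because of the context set above):
microk8s kubectl get secret mtapiregistrykey -n grpcss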
Each time you need to create something from JSON, wrap it in a heredoc like this:
microk8s kubectl apply -o json -f - <<EOF
{
<JSON>
}
EOF
Creating a deployment with 3 replicas:
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "labels": {
      "app": "mt4grpc-ss"
    },
    "name": "mt4grpc-sticky-sessions",
    "namespace": "grpcss"
  },
  "spec": {
    "replicas": 3,
    "selector": {
      "matchLabels": {
        "app": "mt4grpc-ss",
        "task": "mt4grpc-sticky-sessions"
      }
    },
    "template": {
      "metadata": {
        "labels": {
          "app": "mt4grpc-ss",
          "task": "mt4grpc-sticky-sessions"
        }
      },
      "spec": {
        "containers": [
          {
            "env": [
              {
                "name": "LogLevel",
                "value": "Information"
              }
            ],
            "image": "reg.mtapi.io:5050/root/mt4grpc-full/mt4grpc",
            "imagePullPolicy": "Always",
            "name": "mt4grpc-ss-containers",
            "ports": [
              {
                "containerPort": 80,
                "protocol": "TCP"
              }
            ]
          }
        ],
        "imagePullSecrets": [
          {
            "name": "mtapiregistrykey"
          }
        ]
      }
    }
  }
}
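Once applied, wait for the rollout to finish and check the pods. With Istio injection enabled, each pod should report 2/2 ready containers (the application plus the Envoy sidecar):
microk8s kubectl rollout status deployment/mt4grpc-sticky-sessions -n grpcss
microk8s kubectl get pods -n grpcss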
Creating Kubernetes services to expose the deployment inside the cluster.
The first service, "grpc-mt4-ss-nn", will be used for HTTP connections.
The second service, "grpc-mt4-ss-nn-tls", will serve HTTPS connections.
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "grpc-mt4-ss-nn",
    "namespace": "grpcss"
  },
  "spec": {
    "ports": [
      {
        "name": "http2-query",
        "port": 8888,
        "protocol": "TCP",
        "targetPort": 80
      }
    ],
    "selector": {
      "app": "mt4grpc-ss"
    },
    "type": "ClusterIP"
  }
}
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "grpc-mt4-ss-nn-tls",
    "namespace": "grpcss"
  },
  "spec": {
    "ports": [
      {
        "name": "grpc",
        "port": 8888,
        "protocol": "TCP",
        "targetPort": 80
      }
    ],
    "selector": {
      "app": "mt4grpc-ss"
    },
    "type": "ClusterIP"
  }
}
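Check that both services were created and received cluster IPs:
microk8s kubectl get services -n grpcss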
Now we need to prepare a cluster issuer. It is extremely important to link it to the Istio ingress class ("class": "istio"):
{
  "apiVersion": "cert-manager.io/v1",
  "kind": "ClusterIssuer",
  "metadata": {
    "name": "lets-encrypt"
  },
  "spec": {
    "acme": {
      "email": "<your email for notifications from Let's Encrypt>",
      "privateKeySecretRef": {
        "name": "lets-encrypt-private-key"
      },
      "server": "https://acme-v02.api.letsencrypt.org/directory",
      "solvers": [
        {
          "http01": {
            "ingress": {
              "class": "istio"
            }
          }
        }
      ]
    }
  }
}
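Check that the issuer has registered with the ACME server and is ready:
microk8s kubectl get clusterissuer lets-encrypt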
It's worth preparing a TLS Certificate stub. This stub will be filled with the issued certificate by the ClusterIssuer:
{
  "apiVersion": "cert-manager.io/v1",
  "kind": "Certificate",
  "metadata": {
    "name": "istio-gateway-certificate",
    "namespace": "istio-system"
  },
  "spec": {
    "commonName": "grpcya.mtapi.io",
    "dnsNames": [
      "grpcya.mtapi.io"
    ],
    "issuerRef": {
      "kind": "ClusterIssuer",
      "name": "lets-encrypt"
    },
    "secretName": "istio-gateway-certificate"
  }
}
You can check certificate readiness with:
microk8s kubectl get certificate istio-gateway-certificate -n istio-system
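The READY column should become True once the ACME HTTP-01 challenge has completed. If it stays False, inspect the events with:
microk8s kubectl describe certificate istio-gateway-certificate -n istio-system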
And now we need to create the external entry point: an Istio ingress gateway.
Note that this gateway uses the TLS certificate we created before.
{
  "apiVersion": "networking.istio.io/v1beta1",
  "kind": "Gateway",
  "metadata": {
    "name": "my-istio-gateway-nn",
    "namespace": "istio-system"
  },
  "spec": {
    "selector": {
      "istio": "ingressgateway"
    },
    "servers": [
      {
        "hosts": [
          "*"
        ],
        "port": {
          "name": "http2",
          "number": 80,
          "protocol": "HTTP2"
        }
      },
      {
        "hosts": [
          "grpcya.mtapi.io"
        ],
        "port": {
          "name": "https",
          "number": 443,
          "protocol": "HTTPS"
        },
        "tls": {
          "credentialName": "istio-gateway-certificate",
          "mode": "SIMPLE"
        }
      }
    ]
  }
}
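You can verify the gateway object:
microk8s kubectl get gateway my-istio-gateway-nn -n istio-system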
Now we need to create two Istio VirtualServices to route requests from the Istio gateway to the Kubernetes services.
{
  "apiVersion": "networking.istio.io/v1beta1",
  "kind": "VirtualService",
  "metadata": {
    "name": "grpc-service-route",
    "namespace": "grpcss"
  },
  "spec": {
    "gateways": [
      "istio-system/my-istio-gateway-nn"
    ],
    "hosts": [
      "*"
    ],
    "http": [
      {
        "match": [
          {
            "port": 80
          }
        ],
        "route": [
          {
            "destination": {
              "host": "grpc-mt4-ss-nn.grpcss.svc.cluster.local",
              "port": {
                "number": 8888
              }
            }
          }
        ]
      }
    ]
  }
}
{
  "apiVersion": "networking.istio.io/v1beta1",
  "kind": "VirtualService",
  "metadata": {
    "name": "grpc-service-route-tls",
    "namespace": "grpcss"
  },
  "spec": {
    "gateways": [
      "istio-system/my-istio-gateway-nn"
    ],
    "hosts": [
      "grpcya.mtapi.io"
    ],
    "http": [
      {
        "match": [
          {
            "port": 443
          }
        ],
        "route": [
          {
            "destination": {
              "host": "grpc-mt4-ss-nn-tls.grpcss.svc.cluster.local",
              "port": {
                "number": 8888
              }
            }
          }
        ]
      }
    ]
  }
}
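Verify that both routes were created:
microk8s kubectl get virtualservices -n grpcss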
This is not enough: we also need to specify a load-balancing option that sticks requests to a particular pod based on a request header.
We need such an option (a DestinationRule) for each Kubernetes service:
{
  "apiVersion": "networking.istio.io/v1beta1",
  "kind": "DestinationRule",
  "metadata": {
    "name": "grpc-mt4-ss-nn",
    "namespace": "grpcss"
  },
  "spec": {
    "host": "grpc-mt4-ss-nn",
    "trafficPolicy": {
      "loadBalancer": {
        "consistentHash": {
          "httpHeaderName": "mt4-sticky-session-header"
        }
      }
    }
  }
}
{
  "apiVersion": "networking.istio.io/v1beta1",
  "kind": "DestinationRule",
  "metadata": {
    "name": "grpc-mt4-ss-nn-tls",
    "namespace": "grpcss"
  },
  "spec": {
    "host": "grpc-mt4-ss-nn-tls",
    "trafficPolicy": {
      "loadBalancer": {
        "consistentHash": {
          "httpHeaderName": "mt4-sticky-session-header"
        }
      }
    }
  }
}
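You can verify both rules:
microk8s kubectl get destinationrules -n grpcss
As a quick external smoke test (this assumes grpcurl is installed on your machine and the service exposes gRPC server reflection; skip it otherwise):
grpcurl grpcya.mtapi.io:443 list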
And now, if we send the same value in the "mt4-sticky-session-header" header with every request, we will be routed to the same pod as before.
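The snippet below assumes the gRPC channel and the generated clients have already been created, and that a connectRequest with account credentials has been built. A minimal setup with Grpc.Core might look like this (the Connection.ConnectionClient and MT4.MT4Client class names are assumptions about the generated mt4grpc stubs and may differ in your code):

using Grpc.Core;

// TLS channel to the Istio ingress gateway host configured above.
var channel = new Channel("grpcya.mtapi.io", 443, new SslCredentials());
var connection = new Connection.ConnectionClient(channel); // assumed generated stub name
var mt4 = new MT4.MT4Client(channel);                      // assumed generated stub name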
// The header name must match httpHeaderName in the DestinationRules above.
const string StickySessionHeaderName = "mt4-sticky-session-header";
var currentGuid = Guid.NewGuid(); // one GUID per logical session

// Send the same header value with every call so Istio routes them all to the same pod.
var headers = new Metadata { { StickySessionHeaderName, currentGuid.ToString() } };
var reply = connection.Connect(connectRequest, headers: headers);
if (reply.Error != null)
    throw new Exception(reply.Error.Message);
Console.WriteLine("Connect response: " + reply);
var id = reply.Result;
var summaryReq = new AccountSummaryRequest { Id = id };
var summaryReply = mt4.AccountSummary(summaryReq, headers);
if (summaryReply.Error != null)
    throw new Exception(summaryReply.Error.Message);
Console.WriteLine("AccountBalance: " + summaryReply.Result.Balance);
Just for your information: if you need to edit a Kubernetes object in place, use:
KUBE_EDITOR=nano microk8s kubectl edit <object type> <object name> -o json -n <namespace>
For example:
KUBE_EDITOR=nano microk8s kubectl edit service/grpc-mt4-ss-nn -o json