The DITAS operator is in charge of maintaining the DITAS platform, which includes all the tools to manage the deployment of the VDC/VDM as well as to monitor the execution of these elements. Although the deployed VDCs, and in particular the VDM, are designed to be adaptive, i.e., they react when the data utility exposed to the user is not in line with what has been promised in the abstract blueprint, there may be situations in which manual intervention is required.

This part of the SDK contains guidelines on how to deploy, operate, and monitor the VDC. First, the Deployment Engine services are documented; these services are used to deploy the VDC in any Kubernetes environment. After the deployment, it is important to configure and deploy the monitoring services so that the system can trigger data or computation movement according to the users' SLA agreement.

Deployment Engine Services

The DITAS operator is responsible for deploying and operating the VDC. All the services described here help the DITAS operator deploy, debug, and troubleshoot the Kubernetes clusters that are created during the deployment.

This operation is usually executed by the resolution engine after an abstract blueprint is selected and the intermediate blueprint is created. However, it is also available as a REST operation that the DITAS operator can invoke with arbitrary blueprints.

Name of the Method/service: Deploy blueprint
Description/Usage Deploys a VDC over a set of infrastructures. If the infrastructures are not initialized, it will initialize them, creating Kubernetes clusters and deploying the VDC and the VDM. On second and further invocations with the same blueprint, it will deploy another VDC in the same infrastructures that were previously initialized.
Input
URL POST   /blueprint
Parameters
Payload Request Body

  • Description: A JSON object containing the blueprint to deploy and a list of clusters to create
  • Content-Type: JSON
  • Example:

{
  "blueprint": <blueprint>,
  "resources": [
    {
      "name": "CloudSigma FRA deployment",
      "description": "Deployment in CloudSigma FRA",
      "type": "cloud",
      "on-line": true,
      "provider": {
        "api_endpoint": "https://fra.cloudsigma.com/api/2.0",
        "api_type": "cloudsigma",
        "credentials": {
          "username": <cloudsigma_username>,
          "password": <cloudsigma_password>
        }
      },
      "resources": [
        {
          "name": "master",
          "type": "vm",
          "cpu": 4000,
          "ram": 4096,
          "disk": 40960,
          "generate_ssh_keys": false,
          "ssh_keys_id": "uuid",
          "role": "master",
          "image_id": "5cbae035-d977-4df0-afa7-ff1ed5877453",
          "drives": [
            {
              "name": "data_main",
              "type": "SSD",
              "size": 20480
            },
            {
              "name": "data_sec",
              "type": "SSD",
              "size": 5120
            }
          ],
          "extra_properties": {
            "cloudsigma_boot_drive_type": "custom"
          }
        },
        {
          "name": "slave",
          "type": "vm",
          "cpu": 2000,
          "ram": 4096,
          "disk": 40960,
          "generate_ssh_keys": false,
          "ssh_keys_id": "uuid",
          "role": "slave",
          "image_id": "5cbae035-d977-4df0-afa7-ff1ed5877453",
          "drives": [
            {
              "name": "data_main",
              "type": "SSD",
              "size": 20480
            }
          ],
          "extra_properties": {
            "cloudsigma_boot_drive_type": "custom"
          }
        }
      ],
      "extra_properties": {
        "kubeadm_preinstalled_image": "true",
        "ditas_glusterfs_client_installed": "true",
        "ditas_git_installed": "true",
        "k3s_curl_installed": "true"
      }
    }
  ]
}

Output\Response
Content Type JSON
Body
{
  "_id": "9f1aecc7-abe5-465f-9512-80e43f3b6c62",
  "name": "Test deployment",
  "infrastructures": {
    "7d346da4-6ebc-4e9c-83c8-4682acd917d2": {
      "id": "7d346da4-6ebc-4e9c-83c8-4682acd917d2",
      "name": "CloudSigma FRA deployment",
      "type": "cloudsigma",
      "provider": {
        "apiendpoint": "https://fra.cloudsigma.com/api/2.0",
        "apitype": "cloudsigma",
        "secretid": "",
        "credentials": {
          "password": <cloudsigma_password>,
          "username": <cloudsigma_username>
        }
      },
      "nodes": {
        "master": [
          {
            "hostname": "cloudsigmafradeployment-master",
            "role": "master",
            "ip": <node_ip>,
            "username": "cloudsigma",
            "uuid": "f2fcddc7-2a74-415e-95bd-3b63a7711de4",
            "drive_uuid": "c0026fc7-44c6-4ac0-b5ed-8ac0ddacbba2",
            "drive_size": 42949672960,
            "data_drives": [
              {
                "uuid": "c7dcf12e-aa54-45fe-a034-ab22e26f1fb6",
                "name": "data-cloudsigmafradeployment-master-data_main",
                "size": 21474836480
              },
              {
                "uuid": "d3f29f73-c6f7-47a5-b0a8-c270ddb1dc40",
                "name": "data-cloudsigmafradeployment-master-data_sec",
                "size": 5368709120
              }
            ],
            "extraproperties": {
              "cloudsigma_boot_drive_type": "custom"
            }
          }
        ],
        "slave": [
          {
            "hostname": "cloudsigmafradeployment-slave",
            "role": "slave",
            "ip": <node_ip>,
            "username": "cloudsigma",
            "uuid": "8439cdea-5157-4516-a264-69f0c0518ccc",
            "drive_uuid": "5deb0aff-52ee-4765-8e04-1fc90a3a60f6",
            "drive_size": 42949672960,
            "data_drives": [
              {
                "uuid": "8e7b9c25-9ee2-4dfc-8b86-639a3899da2a",
                "name": "data-cloudsigmafradeployment-slave-data_main",
                "size": 21474836480
              }
            ],
            "extraproperties": {
              "cloudsigma_boot_drive_type": "custom"
            }
          }
        ]
      },
      "status": "running",
      "products": {
        "kubernetes": {
          "configurationfile": "/home/dep-eng/deployment-engine/deployments/9f1aecc7-abe5-465f-9512-80e43f3b6c62/7d346da4-6ebc-4e9c-83c8-4682acd917d2/config",
          "registriessecret": "docker-registries",
          "lastnodeport": 30000,
          "freednodeports": [],
          "registriessecrets": {},
          "deploymentsconfiguration": {}
        }
      },
      "extraproperties": {
        "k3s_curl_installed": "true",
        "kubeadm_preinstalled_image": "true",
        "ditas_git_installed": "true",
        "ditas_glusterfs_client_installed": "true"
      }
    }
  },
  "extraproperties": null,
  "status": "starting"
}
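As a sketch of how this operation might be invoked programmatically: the deployment-engine host name and the minimal payload below are illustrative assumptions, not part of the API contract, and only the request construction is shown being executed.

```python
import json
import urllib.request

# Hypothetical deployment-engine host; substitute your own installation's URL.
DEPLOYMENT_ENGINE = "http://deployment-engine.example.com:8080"

def make_deploy_request(blueprint, resources):
    """Build the POST /blueprint request body: the blueprint to deploy
    plus the list of clusters (infrastructures) to create."""
    return {"blueprint": blueprint, "resources": resources}

def deploy(payload):
    """Send the deployment request (not executed in this sketch)."""
    req = urllib.request.Request(
        DEPLOYMENT_ENGINE + "/blueprint",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Minimal resources entry modelled on the CloudSigma example above.
payload = make_deploy_request(
    blueprint={"ID": "some-abstract-blueprint"},
    resources=[{
        "name": "CloudSigma FRA deployment",
        "type": "cloud",
        "provider": {"api_type": "cloudsigma"},
        "resources": [{"name": "master", "type": "vm", "role": "master"}],
    }],
)
print(json.dumps(payload)[:60])
```

Calling `deploy(payload)` would then return the deployment description shown in the response example above.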

Once the clusters are created, the response and the MongoDB database contain the IPs of the nodes that were created. These nodes are accessible over SSH using the private key of the deployment engine, so in order to debug or troubleshoot a cluster you need to:

  • SSH into the deployment engine node and/or container
  • From there, SSH into the master node of the cluster
  • The user in the node description is configured to be able to execute any kubectl command
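The two SSH hops above can be combined into a single command with OpenSSH's ProxyJump (`-J`). A minimal sketch that only assembles the command; the addresses are hypothetical placeholders for the deployment-engine node and the master IP reported in the deployment response:

```python
import subprocess

# Hypothetical addresses: the deployment-engine node and the master node IP
# taken from the deployment response ("nodes" -> "master" -> "ip").
DEPLOYMENT_ENGINE = "dep-eng@deployment-engine.example.com"
MASTER = "cloudsigma@10.0.0.5"

def kubectl_on_master(args):
    """Build an ssh command that jumps through the deployment engine
    (-J, OpenSSH ProxyJump) into the cluster master and runs kubectl there."""
    return ["ssh", "-J", DEPLOYMENT_ENGINE, MASTER, "kubectl"] + list(args)

cmd = kubectl_on_master(["get", "nodes"])
print(" ".join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```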

Monitoring Services

The DITAS operator should have access to monitoring data to ensure the smooth operation of the VDC. To this end, DITAS uses the industry-proven Elastic Stack: multiple logging agents send information to the Elasticsearch logging database.

 

Elasticsearch API
Description API to access VDC monitoring data using Elasticsearch.
Input
URL GET   https://<elasticurl>/{Blueprint ID}-{VDC ID}-{year}-{month}-{day}/_search?q={query}&pretty
Parameters
  • Blueprint ID: unique identifier of the blueprint
  • VDC ID¹: unique identifier of a VDC deployment
  • Year¹: year of the events
  • Month¹: month of the events
  • Day¹: day of the events
  • Query²: Elasticsearch query string

¹: can be replaced with '*' for a query about all deployments

²: see the following paragraphs for examples and the following table for possible properties

Output\Response
Content Type application/json
Body
{
  "took": 2,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {#results},
    "max_score": ...,
    "hits": [
      {
        "_index": "...",
        "_type": "data",
        "_id": "...",
        "_score": ...,
        "_source": {
          <results>
        }
      },
      ...
    ]
  }
}

 

Additional documentation can be found in the official Elasticsearch reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html.

Examples:

Get all data related to a specific property: /_search?q={property name}
Get all data with a specific property value: /_search?q={property name}:{value}
Get all data in a timeframe: /_search?q=@timestamp:[{from}+TO+{to}]

  • Time format: [YYYY-MM-DDTHH:MM:SS]
  • Open-ended ranges: {from} and {to} can be replaced with * to get all events before {to} or after {from}
Query multiple properties: /_search?q={field}:{value}+AND+{field}:{value}
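The index name and query URL can be assembled programmatically. A small sketch, assuming a hypothetical Elasticsearch host; the helper is illustrative, not part of DITAS:

```python
from urllib.parse import quote

ELASTIC_URL = "elastic.example.com"  # hypothetical monitoring host

def search_url(blueprint_id, vdc_id, year, month, day, query):
    """Compose the per-day monitoring index name and the _search URL.
    vdc_id/year/month/day may each be '*' to span all deployments/dates."""
    index = f"{blueprint_id}-{vdc_id}-{year}-{month}-{day}"
    # Keep the query-syntax characters (:, *, +, brackets, @) unescaped.
    q = quote(query, safe="*:+[]()@")
    return f"https://{ELASTIC_URL}/{index}/_search?q={q}&pretty"

# All responses with HTTP status 200 for one blueprint, across all VDCs and days:
url = search_url("bp-123", "*", "*", "*", "*", "response.status:200")
print(url)
```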

Properties:

Name Type Description
@timestamp date-time Time the data was generated (included in all documents)
log.value string Arbitrary log message that was sent by a VDC component.
meter.value string Value of the custom measurement
meter.unit string Unit of the custom measurement
meter.name string Custom name for any non-standard measurement
meter.operationID string operationID of the blueprint method; might be empty if the data could not be calculated in time
request.operationID string operationID of the blueprint method; might be empty if the data could not be calculated in time
request.id string Unique ID (per client) for a request. Can be used for session aggregation.
request.length number Length of the request payload, e.g. Content-Length, in bytes
request.path string URL used in a request, e.g. /find?gender=m
request.method string HTTP method used, e.g. GET
request.requestTime number Time it took for the request to be answered by the VDC, in ns
response.status number HTTP status code of the response, e.g. 200
response.length number Length of the response payload, e.g. Content-Length, in bytes
traffic.component string Name or IP of a component used by the VDC
traffic.received number Number of bytes received from that component in the set time window
traffic.send number Number of bytes sent to that component in the set time window
traffic.total number Number of bytes transferred between the component and the VDC

Repository Scaling Services

The DITAS operator needs a real-time management tool for the repositories in use, in order to scale, create, or destroy them in an automated manner. To facilitate this, DITAS has created the set of RESTful APIs described here.

Repository Creation API
Description API to create a new MongoDB cluster, using the provided configuration.
Input
URL POST   https://<hosturl>/manager/runCluster
Parameters
  • username: the name of the cluster’s creator
  • base_url: a docker host url
  • default_host*: the default url for containers to listen on
  • app: list of Mongos server configurations containing a host*, replica set and a port. If the host is missing the default_host is used.
  • config: list of configuration server configurations containing a host*, replica set and a port. If the host is missing the default_host is used.
  • shard: list of sharding server configurations containing a host*, replica set and a port. If the host is missing the default_host is used.

* : Optional.

Example:

{
  "username": "DITAS_OP_1",
  "base_url": "tcp://localhost:2375",
  "default_host": "0.0.0.0",
  "app": [
    {
      "rs": "A",
      "port": "50117"
    }
  ],
  "config": [
    {
      "rs": "C",
      "port": "50118"
    },
    {
      "rs": "C",
      "port": "50119",
      "host": "192.168.1.2"
    }
  ],
  "shard": [
    {
      "rs": "0",
      "port": "50120"
    },
    {
      "rs": "0",
      "port": "50121"
    },
    {
      "rs": "1",
      "port": "50122"
    },
    {
      "rs": "1",
      "port": "50123",
      "host": "192.168.1.2"
    }
  ]
}

Output\Response
Content Type application/json
Body
{
  "access_points": [
    "0.0.0.0:50117"
  ]
}

 

Repository Scale Out API
Description API to add a new server to a running MongoDB cluster.
Input
URL POST   https://<hosturl>/manager/addServer
Parameters
  • username: the name of the cluster’s creator
  • base_url: a docker host url
  • mongo_host: the MongoDB cluster access point host.
  • mongo_port: the MongoDB cluster access point port.
  • host: the url for containers to listen on
  • rs: the replica set to add this new server to.
  • port: the port that this server will listen on.
  • type: the type of the server {app|config|shard}

Example:

{
  "username": "DITAS_OP_1",
  "base_url": "tcp://localhost:2375",
  "default_host": "0.0.0.0",
  "rs": "1",
  "port": "50123",
  "host": "192.168.2.5",
  "mongo_port": "50117",
  "mongo_host": "0.0.0.0",
  "type": "shard"
}

Output\Response
Content Type application/json
Body
{"success": true, "debug": []}
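A minimal client-side check of an addServer payload against the parameters listed above; the helper and its validation rules are illustrative assumptions, not part of the API:

```python
VALID_TYPES = {"app", "config", "shard"}
REQUIRED = {"username", "base_url", "mongo_host", "mongo_port",
            "host", "rs", "port", "type"}

def validate_add_server(payload):
    """Verify that all documented addServer parameters are present
    and that the server type is one of the allowed values."""
    missing = REQUIRED - payload.keys()
    if missing:
        raise ValueError(f"missing parameters: {sorted(missing)}")
    if payload["type"] not in VALID_TYPES:
        raise ValueError(f"type must be one of {sorted(VALID_TYPES)}")
    return True

payload = {
    "username": "DITAS_OP_1", "base_url": "tcp://localhost:2375",
    "rs": "1", "port": "50123", "host": "192.168.2.5",
    "mongo_port": "50117", "mongo_host": "0.0.0.0", "type": "shard",
}
print(validate_add_server(payload))  # True
```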

 

Repository Scale In API
Description API to remove a server or replica set from a running MongoDB cluster.
Input
URL POST   https://<hosturl>/manager/removeServer
Parameters
  • username: the name of the cluster’s creator
  • base_url: a docker host url
  • id*: the containerID to remove
  • mongo_host: the MongoDB cluster access point host.
  • mongo_port: the MongoDB cluster access point port.
  • host: the url the container listens on
  • rs: the replica set of the server to remove.
  • port: the port that the server listens on.
  • type: the type of the server {app|config|shard}

*: Optional, if it is missing the API searches for a container matching the host, rs, port and type combination.

Example:

{
  "username": "DITAS_OP_1",
  "base_url": "tcp://localhost:2375",
  "id": "8eb57f0abc",
  "default_host": "0.0.0.0",
  "rs": "1",
  "port": "50123",
  "host": "192.168.2.5",
  "mongo_port": "50117",
  "mongo_host": "0.0.0.0",
  "type": "shard"
}

Output\Response
Content Type application/json
Body
{"success": "8eb57f0abc", "debug": []}
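The fallback in the footnote (matching on host, rs, port, and type when id is missing) can be sketched as follows; the helper mirrors the documented matching behaviour but is not the actual implementation, and the container list is made up:

```python
def find_container(containers, selector):
    """If "id" is given use it directly; otherwise match a container on the
    host/rs/port/type combination, as documented for removeServer."""
    if "id" in selector:
        return selector["id"]
    for c in containers:
        if all(c.get(k) == selector.get(k) for k in ("host", "rs", "port", "type")):
            return c["id"]
    return None  # no matching container

containers = [
    {"id": "8eb57f0abc", "host": "192.168.2.5", "rs": "1",
     "port": "50123", "type": "shard"},
    {"id": "ed948c229b", "host": "0.0.0.0", "rs": "0",
     "port": "50120", "type": "shard"},
]
print(find_container(containers, {"id": "8eb57f0abc"}))
print(find_container(containers, {"host": "0.0.0.0", "rs": "0",
                                  "port": "50120", "type": "shard"}))
```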

 

Repository Deletion API
Description API to stop and/or delete a MongoDB cluster.
Input
URL POST   https://<hosturl>/manager/destroyCluster
Parameters
  • username: the name of the cluster’s creator
  • base_url: a docker host url
  • mongo_host: the MongoDB cluster access point host.
  • mongo_port: the MongoDB cluster access point port.

Example:

{
  "username": "DITAS_OP_1",
  "base_url": "tcp://localhost:2376",
  "mongo_host": "0.0.0.0",
  "mongo_port": "50117"
}

Output\Response
Content Type application/json
Body
A JSON object containing all the container IDs that were deleted.
Example:
{
  "success": [
    { "success": "3246195818" },
    { "success": "8eb57f0abc" },
    { "success": "ed948c229b" },
    { "success": "f959a60682" }
  ]
}
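A small sketch for collecting the deleted container IDs out of this response; the helper name is illustrative:

```python
def deleted_ids(response):
    """Flatten the destroyCluster response into a plain list of the
    container IDs that were deleted."""
    return [entry["success"] for entry in response.get("success", [])]

response = {
    "success": [
        {"success": "3246195818"},
        {"success": "8eb57f0abc"},
        {"success": "ed948c229b"},
        {"success": "f959a60682"},
    ]
}
print(deleted_ids(response))
```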

 

Repository Image Preloading API
Description API to pre-load a docker image in order to support real time scaling.
Input
URL POST   https://<hosturl>/docker/buildImage
Parameters
  • username: the name of the cluster’s creator
  • base_url: a docker host url
  • type: the server type that this image will be used to create {app|config|shard|monitor|base}. The monitor type is used for monitoring the cluster and the containers in order to support scaling decisions. The base type is used to pre-load a base image in order to create mongodb server images (app,config and shard) faster.
  • rs: the replica set that containers created from this image will belong to.
  • config_host: the URI of the configuration host that containers from this image will report to.

Example:

{
  "username": "DITAS_OP_1",
  "base_url": "tcp://localhost:2376",
  "type": "shard",
  "rs": "1",
  "config_host": "rsC/0.0.0.0:50119"
}

Output\Response
Content Type application/json
Body {"success": true, "debug": []}
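A sketch for assembling buildImage payloads for the different image types; the helper and its checks are illustrative assumptions, not part of the API:

```python
IMAGE_TYPES = {"app", "config", "shard", "monitor", "base"}

def build_image_payload(username, base_url, type_, rs=None, config_host=None):
    """Assemble a buildImage request body. rs and config_host only apply to
    the mongodb server images (app, config, shard), so they are optional."""
    if type_ not in IMAGE_TYPES:
        raise ValueError(f"type must be one of {sorted(IMAGE_TYPES)}")
    payload = {"username": username, "base_url": base_url, "type": type_}
    if rs is not None:
        payload["rs"] = rs
    if config_host is not None:
        payload["config_host"] = config_host
    return payload

print(build_image_payload("DITAS_OP_1", "tcp://localhost:2376", "shard",
                          rs="1", config_host="rsC/0.0.0.0:50119"))
```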

 

More information can be found in the component’s GitHub page: https://github.com/DITAS-Project/DITAS_Marketplace_Repository_Scaling.