Int J Performability Eng, 2021, Vol. 17, Issue 7: 579-588. doi: 10.23940/ijpe.21.07.p2.579588


Kubernetes Virtual Warehouse Placement based on Reinforcement Learning

Haoran Li a, Dongcheng Li b,*, W. Eric Wong b, Deze Zeng a, and Man Zhao a

  a School of Computer Science, China University of Geosciences, Wuhan, 430074, China;
  b Department of Computer Science, University of Texas at Dallas, 75082, USA
  * Corresponding author. E-mail address: dxl170030@utdallas.edu

Abstract: As a method for building and running applications, cloud native supports frequent and predictable changes to a system, is closely tied to fast iteration and automated deployment, and suits an era in which large volumes of data change at high speed. Nevertheless, cloud native is still maturing, and many problems remain to be solved. This paper selects Kubernetes, the container orchestration system that serves as the cornerstone of the cloud native ecosystem, together with Docker, to deploy a Virtual Warehouse for managing image resources. With the rapid development of artificial intelligence (AI), reinforcement learning (RL) has been widely applied by virtue of two features: trial-and-error learning and the pursuit of long-term reward. RL is applicable to many problem scenarios, and the object studied in this paper meets its requirements while operating under constantly changing environmental conditions. Given that Kubernetes automates container operations, we propose learning in the existing environment through RL, with the environment updated as demand changes, until the Virtual Warehouse converges to the optimal location. The cloud native process is modeled in simulation, and the model data are trained in this environment with the RL algorithm to obtain the optimal warehouse placement. The resulting warehouse location parameters are substituted back into the simulation environment, and the abstract task class is pulled according to the extended image to obtain the delays of different tasks, thereby verifying the advantage of the RL algorithm for Kubernetes warehouse placement.

Key words: Cloud native, CloudSim, reinforcement learning, Kubernetes, virtual warehouse placement
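To make the placement idea in the abstract concrete, the following is a minimal sketch of tabular Q-learning over a small set of candidate warehouse placements. The node names, the delay() stand-in, and all hyperparameters are assumptions for illustration only; the paper's actual training runs against the CloudSim-based simulation environment rather than this toy delay model.

```python
import random

# Hypothetical candidate nodes for the Virtual Warehouse placement (assumed names).
NODES = ["node-0", "node-1", "node-2"]
ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.9, 0.2, 500  # assumed hyperparameters

def delay(node):
    """Stand-in for the simulated image-pull delay of a task served from `node`."""
    base = {"node-0": 120.0, "node-1": 80.0, "node-2": 150.0}[node]
    return base + random.uniform(-10.0, 10.0)  # noisy observation from the simulator

# Single-state Q-table: one value per candidate placement.
q = {n: 0.0 for n in NODES}

for _ in range(EPISODES):
    # Epsilon-greedy trial-and-error over placements.
    node = random.choice(NODES) if random.random() < EPSILON else max(q, key=q.get)
    reward = -delay(node)  # lower task delay corresponds to higher reward
    q[node] += ALPHA * (reward + GAMMA * max(q.values()) - q[node])

print("learned placement:", max(q, key=q.get))
```

In this sketch the learned placement is simply the node whose estimated long-term reward (negative delay) is highest; in the paper the same selection is performed inside the simulated cloud native environment and then validated by measuring per-task delays.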