Introduction
Today, server infrastructure machines live in on-premise data centers, private data centers, or public cloud data centers. These machines are either physical bare-metal hosts, virtual machines (VMs) running on hypervisors, or lightweight containers (such as Docker containers) on top of physical or virtual machines. Physically, the machines might sit in a local lab; in a private data center scenario, your own procured hosts are placed in shared space at a third-party facility and accessed remotely; and in public clouds such as AWS, GCP, Azure, and OCI, machines are either reserved or created on demand for highly scalable needs and accessed remotely. Each option has its own trade-offs in scalability, security, reliability, management, and cost.
Product development and test teams may need many servers during the SDLC. Suppose one has chosen a private data center with their own physical machines running Xen Server. The challenge then is how to manage the VM lifecycle, provisioning and termination, with lean and agile processes similar to what cloud environments offer.
This document aims to provide a basic infrastructure model, architecture, minimal APIs, and sample code snippets so that one can easily build dynamic infrastructure environments.
Benefits
First, let us understand the typical sequence of steps in this server infrastructure process:
- Procurement of new machines by IT
- Host virtualization: install Xen Server and create VM templates (IT)
- Static VM requests by dev and test teams via tickets (say, JIRA) to IT
- Maintain the received VM IPs in a database, a static file, hardcoded configuration files, or CI tool configs such as Jenkins config.xml
- Monitor the VMs with health checks to make sure they are healthy before using them to install the products
- Clean up or uninstall before or after server installations
- Windows might need some registry cleanup before installing the product
- Fixed allocation of VMs to an area, a team, or a dedicated engineer
Now, how can you make this process leaner and more agile? Can you eliminate most of the above steps with simple automation?
Yes. In our environment, with more than 1000 VMs, we achieved mainly the following:
“Disposable VMs on demand, as required during test execution. Solve Windows cleanup issues seen with regular test cycles.”
As you can see below, using the dynamic VM server manager API service, 6 of the 8 steps can be eliminated, giving the entire product team a view of effectively unlimited infrastructure. Only the first two steps, procurement and host virtualization, are still needed. In effect, this saves both time and cost!

Typical flow to get infrastructure
Dynamic Infrastructure model
The picture below shows our proposed infrastructure for a typical server product environment: 80% Docker containers, 15% dynamic VMs, and 5% static pooled VMs for special cases. This distribution can be adjusted to whatever works best for your environment.

Infrastructure model
From here on, we will focus on the dynamic VM server manager.
Dynamic Server Manager architecture
The dynamic VM server manager is a simple API service that exposes the REST APIs below, which can be used anywhere in the automated process. As the tech stack shows, Python 3 and the Python-based Xen APIs are used for the actual creation of VMs on the XenServer host, and Flask provides the REST service layer. The OS can be any of your product-supported platforms, such as windows2016, centos7, centos8, debian10, ubuntu18, oel8, or suse15.

Dynamic VMs server manager architecture
Save the history of the VMs so that usage and time-to-provision or time-to-terminate can be analyzed later. Couchbase Enterprise Server, a NoSQL document database, can be used to store the JSON documents.
Simple REST APIs
Method | URI(s) | Purpose |
GET | /showall | Lists all VMs in JSON format |
GET | /getavailablecount/<os> | Gets the count of available VMs for the given <os> |
GET | /getservers/<name>?os=<os>
/getservers/<name>?os=<os>&count=<count>
/getservers/<name>?os=<os>&count=<count>&cpus=<cpus>&mem=<memsize>
/getservers/<name>?os=<os>&expiresin=<minutes> | Provisions the given <count> VMs of <os>. CPU count and memory size can also be specified. The expiresin parameter (in minutes) sets an expiry for auto-termination of the VMs. |
GET | /releaseservers/<name>?os=<os>
/releaseservers/<name>?os=<os>&count=<count> | Terminates the given <count> VMs of <os> |
Prerequisites for the Xen Hosts targeted for dynamic VMs
- Identify the Xen Hosts targeted for dynamic VMs
- Copy/create the VM templates
- Move these Xen Hosts to a separate VLAN/subnet (work with IT) so that IPs can be recycled
Implementation
At a high level:
- Create a function for each REST API
- Call a common service to perform the different REST actions
- Understand Xen session creation, getting records, cloning a VM from a template, attaching the right disk, waiting for VM creation and the IP to be received, and deleting VMs and their disks
- Start a thread to expire VMs automatically
- Read the common configuration (e.g., in .ini format)
- Understand working with the Couchbase database and saving documents
- Test all APIs with the required OSes and parameters
- Fix issues, if any
- Perform a POC with a few Xen Hosts
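The expiry step in the list above has no snippet later in this document, so here is a minimal sketch of one way to auto-terminate a VM after its lease runs out. The `schedule_vm_expiry` name and the `delete_callback` hook are illustrative, not part of the service; in the real service the callback would invoke the same path as /releaseservers.

```python
import threading


def schedule_vm_expiry(vm_name, expiry_minutes, delete_callback):
    """Arrange for delete_callback(vm_name) to run once the lease expires.

    Uses a daemon threading.Timer so pending expiries never block
    service shutdown. Returns the timer so callers can cancel it if the
    VM is released early.
    """
    timer = threading.Timer(expiry_minutes * 60, delete_callback, args=[vm_name])
    timer.daemon = True  # do not keep the process alive just for expiries
    timer.start()
    return timer
```

A production version would also need to cancel the timer when the user releases the VM before the lease ends, to avoid a double delete.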
The code snippets below should help you understand these steps even better.
APIs creation
@app.route('/showall/<string:os>')
@app.route("/showall")
def showall_service(os=None):
    count, _ = get_all_xen_hosts_count(os)
    log.info("--> count: {}".format(count))
    all_vms = {}
    for xen_host_ref in range(1, count + 1):
        log.info("Getting xen_host_ref=" + str(xen_host_ref))
        all_vms[xen_host_ref] = perform_service(xen_host_ref, service_name='listvms', os=os)
    return json.dumps(all_vms, indent=2, sort_keys=True)


@app.route('/getavailablecount/<string:os>')
@app.route('/getavailablecount')
def getavailable_count_service(os='centos'):
    """
    Calculate the available count:
        Get Total CPUs, Total Memory
        Get Free CPUs, Free Memory
        Get all the VMs - CPUs and Memory allocated
        Get each OS template - CPUs and Memory
        Available count1 = (Free CPUs - VMs CPUs)/OS_Template_CPUs
        Available count2 = (Free Memory - VMs Memory)/OS_Template_Memory
        Return min(count1, count2)
    """
    count, available_counts, xen_hosts = get_all_available_count(os)
    log.info("{},{},{},{}".format(count, available_counts, xen_hosts, reserved_count))
    if count > reserved_count:
        count -= reserved_count
        log.info("Less reserved count: {},{},{},{}".format(count, available_counts,
                                                           xen_hosts, reserved_count))
    return str(count)


# /getservers/username?count=number&os=centos&ver=6&expiresin=30
@app.route('/getservers/<string:username>')
def getservers_service(username):
    global reserved_count
    if request.args.get('count'):
        vm_count = int(request.args.get('count'))
    else:
        vm_count = 1
    os_name = request.args.get('os')
    if request.args.get('cpus'):
        cpus_count = request.args.get('cpus')
    else:
        cpus_count = "default"
    if request.args.get('mem'):
        mem = request.args.get('mem')
    else:
        mem = "default"
    if request.args.get('expiresin'):
        exp = int(request.args.get('expiresin'))
    else:
        exp = MAX_EXPIRY_MINUTES
    if request.args.get('format'):
        output_format = request.args.get('format')
    else:
        output_format = "servermanager"
    xhostref = None
    if request.args.get('xhostref'):
        xhostref = request.args.get('xhostref')
    reserved_count += vm_count
    if xhostref:
        log.info("--> VMs on given xenhost " + xhostref)
        vms_ips_list = perform_service(xhostref, 'createvm', os_name, username, vm_count,
                                       cpus=cpus_count, maxmemory=mem, expiry_minutes=exp,
                                       output_format=output_format)
        return json.dumps(vms_ips_list)
    ...


# /releaseservers/{username}
@app.route('/releaseservers/<string:username>/<string:available>')
@app.route('/releaseservers/<string:username>')
def releaseservers_service(username):
    if request.args.get('count'):
        vm_count = int(request.args.get('count'))
    else:
        vm_count = 1
    os_name = request.args.get('os')
    delete_vms_res = []
    for vm_index in range(vm_count):
        if vm_count > 1:
            vm_name = username + str(vm_index + 1)
        else:
            vm_name = username
        xen_host_ref = get_vm_existed_xenhost_ref(vm_name, 1, None)
        log.info("VM to be deleted from xhost_ref=" + str(xen_host_ref))
        if xen_host_ref != 0:
            delete_per_xen_res = perform_service(xen_host_ref, 'deletevm', os_name, vm_name, 1)
            for deleted_vm_res in delete_per_xen_res:
                delete_vms_res.append(deleted_vm_res)
    if len(delete_vms_res) < 1:
        return "Error: VM " + username + " doesn't exist"
    else:
        return json.dumps(delete_vms_res, indent=2, sort_keys=True)


def perform_service(xen_host_ref=1, service_name='list_vms', os="centos", vm_prefix_names="",
                    number_of_vms=1, cpus="default", maxmemory="default",
                    expiry_minutes=MAX_EXPIRY_MINUTES, output_format="servermanager",
                    start_suffix=0):
    xen_host = get_xen_host(xen_host_ref, os)
    ...


def main():
    # options = parse_arguments()
    set_log_level()
    app.run(host='0.0.0.0', debug=True)
Creating a Xen session
def get_xen_session(xen_host_ref=1, os="centos"):
    xen_host = get_xen_host(xen_host_ref, os)
    if not xen_host:
        return None
    url = "http://" + xen_host['host.name']
    log.info("\nXen Server host: " + xen_host['host.name'] + "\n")
    try:
        session = XenAPI.Session(url)
        session.xenapi.login_with_password(xen_host['host.user'], xen_host['host.password'])
    except XenAPI.Failure as f:
        error = "Failed to acquire a session: {}".format(f.details)
        log.error(error)
        return error
    return session
List VMs
def list_vms(session):
    vm_count = 0
    vms = session.xenapi.VM.get_all()
    log.info("Server has {} VM objects (this includes templates):".format(len(vms)))
    log.info("-----------------------------------------------------------")
    log.info("S.No.,VMname,PowerState,Vcpus,MaxMemory,Networkinfo,Description")
    log.info("-----------------------------------------------------------")
    vm_details = []
    for vm in vms:
        network_info = 'N/A'
        record = session.xenapi.VM.get_record(vm)
        if not (record["is_a_template"]) and not (record["is_control_domain"]):
            log.debug(record)
            vm_count = vm_count + 1
            name = record["name_label"]
            name_description = record["name_description"]
            power_state = record["power_state"]
            vcpus = record["VCPUs_max"]
            memory_static_max = record["memory_static_max"]
            if record["power_state"] != 'Halted':
                ip_ref = session.xenapi.VM_guest_metrics.get_record(record['guest_metrics'])
                network_info = ','.join([str(elem) for elem in ip_ref['networks'].values()])
            else:
                continue  # Listing only Running VMs
            vm_info = {'name': name, 'power_state': power_state, 'vcpus': vcpus,
                       'memory_static_max': memory_static_max, 'networkinfo': network_info,
                       'name_description': name_description}
            vm_details.append(vm_info)
            log.info(vm_info)
    log.info("Server has {} VM objects and {} templates.".format(vm_count, len(vms) - vm_count))
    log.debug(vm_details)
    return vm_details
Create VM
def create_vm(session, os_name, template, new_vm_name, cpus="default", maxmemory="default",
              expiry_minutes=MAX_EXPIRY_MINUTES):
    error = ''
    vm_os_name = ''
    vm_ip_addr = ''
    prov_start_time = time.time()
    try:
        log.info("\n--- Creating VM: " + new_vm_name + " using " + template)
        pifs = session.xenapi.PIF.get_all_records()
        lowest = None
        for pifRef in pifs.keys():
            if (lowest is None) or (pifs[pifRef]['device'] < pifs[lowest]['device']):
                lowest = pifRef
        log.debug("Choosing PIF with device: {}".format(pifs[lowest]['device']))
        ref = lowest
        mac = pifs[ref]['MAC']
        device = pifs[ref]['device']
        mode = pifs[ref]['ip_configuration_mode']
        ip_addr = pifs[ref]['IP']
        net_mask = pifs[ref]['netmask']
        gateway = pifs[ref]['gateway']
        dns_server = pifs[ref]['DNS']
        log.debug("{},{},{},{},{},{},{}".format(mac, device, mode, ip_addr, net_mask,
                                                gateway, dns_server))

        # List all the VM objects
        vms = session.xenapi.VM.get_all_records()
        log.debug("Server has {} VM objects (this includes templates)".format(len(vms)))
        templates = []
        all_templates = []
        for vm in vms:
            record = vms[vm]
            res_type = "VM"
            if record["is_a_template"]:
                res_type = "Template"
                all_templates.append(vm)
                # Look for a given template
                if record["name_label"].startswith(template):
                    templates.append(vm)
            log.debug("  Found %8s with name_label = %s" % (res_type, record["name_label"]))
        log.debug("Server has {} Templates and {} VM objects.".format(
            len(all_templates), len(vms) - len(all_templates)))
        log.debug("Choosing a {} template to clone".format(template))
        if not templates:
            log.error("Could not find any {} templates. Exiting.".format(template))
            sys.exit(1)
        template_ref = templates[0]
        log.debug("  Selected template: {}".format(session.xenapi.VM.get_name_label(template_ref)))

        # Retries when a 169.x address is received
        ipaddr_max_retries = 3
        retry_count = 1
        is_local_ip = True
        vm_ip_addr = ""
        while is_local_ip and retry_count != ipaddr_max_retries:
            log.info("Installing new VM from the template - attempt #{}".format(retry_count))
            vm = session.xenapi.VM.clone(template_ref, new_vm_name)
            network = session.xenapi.PIF.get_network(lowest)
            log.debug("Chosen PIF is connected to network: {}".format(
                session.xenapi.network.get_name_label(network)))
            vifs = session.xenapi.VIF.get_all()
            log.debug(("Number of VIFs=" + str(len(vifs))))
            for i in range(len(vifs)):
                vmref = session.xenapi.VIF.get_VM(vifs[i])
                a_vm_name = session.xenapi.VM.get_name_label(vmref)
                log.debug(str(i) + "." + session.xenapi.network.get_name_label(
                    session.xenapi.VIF.get_network(vifs[i])) + " " + a_vm_name)
                if a_vm_name == new_vm_name:
                    session.xenapi.VIF.move(vifs[i], network)
            log.debug("Adding non-interactive to the kernel commandline")
            session.xenapi.VM.set_PV_args(vm, "non-interactive")
            log.debug("Choosing an SR to instantiate the VM's disks")
            pool = session.xenapi.pool.get_all()[0]
            default_sr = session.xenapi.pool.get_default_SR(pool)
            default_sr = session.xenapi.SR.get_record(default_sr)
            log.debug("Choosing SR: {} (uuid {})".format(default_sr['name_label'],
                                                         default_sr['uuid']))
            log.debug("Asking server to provision storage from the template specification")
            description = new_vm_name + " from " + template + " on " + \
                str(datetime.datetime.utcnow())
            session.xenapi.VM.set_name_description(vm, description)
            if cpus != "default":
                log.info("Setting cpus to " + cpus)
                session.xenapi.VM.set_VCPUs_max(vm, int(cpus))
                session.xenapi.VM.set_VCPUs_at_startup(vm, int(cpus))
            if maxmemory != "default":
                log.info("Setting memory to " + maxmemory)
                session.xenapi.VM.set_memory(vm, maxmemory)  # 8GB="8589934592" or 4GB="4294967296"
            session.xenapi.VM.provision(vm)
            log.info("Starting VM")
            session.xenapi.VM.start(vm, False, True)
            log.debug("  VM is booting")
            log.debug("Waiting for the installation to complete")

            # Get the OS Name and IPs
            log.info("Getting the OS Name and IP...")
            config = read_config()
            vm_network_timeout_secs = int(config.get("common", "vm.network.timeout.secs"))
            if vm_network_timeout_secs > 0:
                TIMEOUT_SECS = vm_network_timeout_secs
            log.info("Max wait time in secs for VM OS address is {0}".format(str(TIMEOUT_SECS)))
            if "win" not in template:
                maxtime = time.time() + TIMEOUT_SECS
                while read_os_name(session, vm) is None and time.time() < maxtime:
                    time.sleep(1)
                vm_os_name = read_os_name(session, vm)
                log.info("VM OS name: {}".format(vm_os_name))
            else:
                # TBD: Wait for network to refresh on Windows VM
                time.sleep(60)
            log.info("Max wait time in secs for IP address is " + str(TIMEOUT_SECS))
            maxtime = time.time() + TIMEOUT_SECS
            # Wait until IP is not None or 169.xx (when no IPs available, this is default)
            # and timeout is not reached.
            while (read_ip_address(session, vm) is None or
                   read_ip_address(session, vm).startswith('169')) and \
                    time.time() < maxtime:
                time.sleep(1)
            vm_ip_addr = read_ip_address(session, vm)
            log.info("VM IP: {}".format(vm_ip_addr))
            if vm_ip_addr.startswith('169'):
                log.info("No Network IP available. Deleting this VM ... ")
                record = session.xenapi.VM.get_record(vm)
                power_state = record["power_state"]
                if power_state != 'Halted':
                    session.xenapi.VM.hard_shutdown(vm)
                delete_all_disks(session, vm)
                session.xenapi.VM.destroy(vm)
                time.sleep(5)
                is_local_ip = True
                retry_count += 1
            else:
                is_local_ip = False
        log.info("Final VM IP: {}".format(vm_ip_addr))
Delete VM
def delete_vm(session, vm_name):
    log.info("Deleting VM: " + vm_name)
    delete_start_time = time.time()
    vm = session.xenapi.VM.get_by_name_label(vm_name)
    log.info("Number of VMs found with name - " + vm_name + " : " + str(len(vm)))
    for j in range(len(vm)):
        record = session.xenapi.VM.get_record(vm[j])
        power_state = record["power_state"]
        if power_state != 'Halted':
            # session.xenapi.VM.shutdown(vm[j])
            session.xenapi.VM.hard_shutdown(vm[j])
        # print_all_disks(session, vm[j])
        delete_all_disks(session, vm[j])
        session.xenapi.VM.destroy(vm[j])
        delete_end_time = time.time()
        delete_duration = round(delete_end_time - delete_start_time)

        # delete from CB
        uuid = record["uuid"]
        doc_key = uuid
        cbdoc = CBDoc()
        doc_result = cbdoc.get_doc(doc_key)
        if doc_result:
            doc_value = doc_result.value
            doc_value["state"] = 'deleted'
            current_time = time.time()
            doc_value["deleted_time"] = current_time
            if doc_value["created_time"]:
                doc_value["live_duration_secs"] = round(current_time - doc_value["created_time"])
            doc_value["delete_duration_secs"] = delete_duration
            cbdoc.save_dynvm_doc(doc_key, doc_value)
Historic Usage of VMs
It is better to maintain the history of all VMs created and terminated, along with other useful data. Here is an example of a JSON document stored in Couchbase, a free NoSQL database server. Insert a new document keyed by the uuid from the Xen VM record whenever a new VM is provisioned, and update the same document whenever the VM is terminated. This tracks each VM's live usage time as well as how provisioning and termination were done by each user.
record = session.xenapi.VM.get_record(vm)
uuid = record["uuid"]
# Save as doc in CB
state = "available"
username = new_vm_name
pool = "dynamicpool"
doc_value = {"ipaddr": vm_ip_addr, "origin": xen_host_description, "os": os_name,
             "state": state, "poolId": pool, "prevUser": "", "username": username,
             "ver": "12", "memory": memory_static_max, "os_version": vm_os_name,
             "name": new_vm_name, "created_time": prov_end_time,
             "create_duration_secs": create_duration, "cpu": vcpus, "disk": disks_info}
# doc_value["mac_address"] = mac_address
doc_key = uuid
cb_doc = CBDoc()
cb_doc.save_dynvm_doc(doc_key, doc_value)
from couchbase.cluster import Cluster, PasswordAuthenticator


class CBDoc:
    def __init__(self):
        config = read_config()
        self.cb_server = config.get("couchbase", "couchbase.server")
        self.cb_bucket = config.get("couchbase", "couchbase.bucket")
        self.cb_username = config.get("couchbase", "couchbase.username")
        self.cb_userpassword = config.get("couchbase", "couchbase.userpassword")
        try:
            self.cb_cluster = Cluster('couchbase://' + self.cb_server)
            self.cb_auth = PasswordAuthenticator(self.cb_username, self.cb_userpassword)
            self.cb_cluster.authenticate(self.cb_auth)
            self.cb = self.cb_cluster.open_bucket(self.cb_bucket)
        except Exception as e:
            log.error('Connection Failed: %s ' % self.cb_server)
            log.error(e)

    def get_doc(self, doc_key):
        try:
            return self.cb.get(doc_key)
        except Exception as e:
            log.error('Error while getting doc %s !' % doc_key)
            log.error(e)

    def save_dynvm_doc(self, doc_key, doc_value):
        try:
            log.info(doc_value)
            self.cb.upsert(doc_key, doc_value)
            log.info("%s added/updated successfully" % doc_key)
        except Exception as e:
            log.error('Document with key: %s saving error' % doc_key)
            log.error(e)
"dynserver-pool": {
  "cpu": "6",
  "create_duration_secs": 65,
  "created_time": 1583518463.8943903,
  "delete_duration_secs": 5,
  "deleted_time": 1583520211.8498628,
  "disk": "75161927680",
  "ipaddr": "x.x.x.x",
  "live_duration_secs": 1748,
  "memory": "6442450944",
  "name": "Win2019-Server-1node-DynVM",
  "origin": "s827",
  "os": "win16",
  "os_version": "",
  "poolId": "dynamicpool",
  "prevUser": "",
  "state": "deleted",
  "username": "Win2019-Server-1node-DynVM",
  "ver": "12"
}
Configuration
The dynamic VM server manager service configuration, such as the Couchbase server, Xen Host servers, template details, default expiry, and network timeout values, can be maintained in a simple .ini format. When a new Xen Host is received, just add it as a separate section. The config is loaded dynamically without restarting the dynamic VM server manager service.
Sample config file: .dynvmservice.ini
[couchbase]
couchbase.server=<couchbase-hostIp>
couchbase.bucket=<bucket-name>
couchbase.username=<username>
couchbase.userpassword=<password>

[common]
vm.expiry.minutes=720
vm.network.timeout.secs=400

[xenhost1]
host.name=<xenhostip1>
host.user=root
host.password=xxxx
host.storage.name=Local Storage 01
centos.template=tmpl-cnt7.7
centos7.template=tmpl-cnt7.7
windows.template=tmpl-win16dc - PATCHED (1)
centos8.template=tmpl-cnt8.0
oel8.template=tmpl-oel8
deb10.template=tmpl-deb10-too
debian10.template=tmpl-deb10-too
ubuntu18.template=tmpl-ubu18-4cgb
suse15.template=tmpl-suse15

[xenhost2]
host.name=<xenhostip2>
host.user=root
host.password=xxxx
host.storage.name=ssd
centos.template=tmpl-cnt7.7
windows.template=tmpl-win16dc - PATCHED (1)
centos8.template=tmpl-cnt8.0
oel8.template=tmpl-oel8
deb10.template=tmpl-deb10-too
debian10.template=tmpl-deb10-too
ubuntu18.template=tmpl-ubu18-4cgb
suse15.template=tmpl-suse15

[xenhost3]
...

[xenhost4]
...
Examples
Sample REST API calls using curl
$ curl 'http://127.0.0.1:5000/showall'
{
  "1": [
    {
      "memory_static_max": "6442450944",
      "name": "tunable-rebalance-out-Apr-29-13:43:59-7.0.0-19021",
      "name_description": "tunable-rebalance-out-Apr-29-13:43:59-7.0.0-19021 from tmpl-win16dc - PATCHED (1) on 2020-04-29 20:43:33.785778",
      "networkinfo": "172.23.137.20,172.23.137.20,fe80:0000:0000:0000:0585:c6e8:52f9:91d1",
      "power_state": "Running",
      "vcpus": "6"
    },
    {
      "memory_static_max": "6442450944",
      "name": "nserv-nserv-rebalanceinout_P0_Set1_compression-Apr-29-06:36:12-7.0.0-19022",
      "name_description": "nserv-nserv-rebalanceinout_P0_Set1_compression-Apr-29-06:36:12-7.0.0-19022 from tmpl-win16dc - PATCHED (1) on 2020-04-29 13:36:53.717776",
      "networkinfo": "172.23.136.142,172.23.136.142,fe80:0000:0000:0000:744d:fd63:1a88:2fa8",
      "power_state": "Running",
      "vcpus": "6"
    }
  ],
  "2": [ .. ]
  "3": [ .. ]
  "4": [ .. ]
  "5": [ .. ]
  "6": [ .. ]
}
$ curl 'http://127.0.0.1:5000/getavailablecount/windows'
2
$ curl 'http://127.0.0.1:5000/getavailablecount/centos'
10
$ curl 'http://127.0.0.1:5000/getservers/demoserver?os=centos&count=3'
["172.23.137.73", "172.23.137.74", "172.23.137.75"]
$ curl 'http://127.0.0.1:5000/getavailablecount/centos'
7
$ curl 'http://127.0.0.1:5000/releaseservers/demoserver?os=centos&count=3'
[
  "demoserver1",
  "demoserver2",
  "demoserver3"
]
$ curl 'http://127.0.0.1:5000/getavailablecount/centos'
10
$
Jenkins jobs with single VM
curl -s -o ${BUILD_TAG}_vm.json "http://<host:port>/getservers/${BUILD_TAG}?os=windows&format=detailed"
VM_IP_ADDRESS="`cat ${BUILD_TAG}_vm.json |egrep ${BUILD_TAG}|cut -f2 -d':'|xargs`"
if [[ $VM_IP_ADDRESS =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]; then
  echo Valid IP received
else
  echo NOT a valid IP received
  exit 1
fi
Jenkins jobs with multiple VMs needed
curl -s -o ${BUILD_TAG}.json "${SERVER_API_URL}/getservers/${BUILD_TAG}?os=${OS}&count=${COUNT}&format=detailed"
cat ${BUILD_TAG}.json
VM_IP_ADDRESS="`cat ${BUILD_TAG}.json |egrep ${BUILD_TAG}|cut -f2 -d':'|xargs|sed 's/,//g'`"
echo $VM_IP_ADDRESS
ADDR=()
INDEX=1
for IP in `echo $VM_IP_ADDRESS`
do
  if [[ $IP =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]; then
    echo Valid IP=$IP received
    ADDR[$INDEX]=${IP}
    INDEX=`expr ${INDEX} + 1`
  else
    echo NOT a valid IP=$IP received
    exit 1
  fi
done
echo ${ADDR[1]}
echo ${ADDR[2]}
echo ${ADDR[3]}
echo ${ADDR[4]}
Key considerations
Here are a few observations noted during the process; it is better to handle these to make the service more reliable.
- Handle storage names/IDs that differ among Xen Hosts
- Keep track of the VM storage device name in the service's input config file
- Handle templates that are only available on some Xen Hosts while provisioning
- When no network IPs are available, the Xen APIs return the default 169.254.xx.yy on Windows. Wait until a non-169 address is received or the timeout is reached.
- Release servers should ignore the OS template, as some templates might not be present on all Xen Hosts
- Support provisioning on a specific given Xen Host reference
- Handle cases where no IPs are available or some of the created VMs never get network IPs
- Plan a separate subnet for the Xen Hosts targeted for dynamic VMs. The default network's DHCP IP lease expiry might be in days (say, 7 days), in which case no new IPs are provided.
- In the capacity check, count in-progress provisions as reserved IPs and report less than full capacity at that moment. Otherwise, both in-progress and incoming requests might run into issues. Keep one or two VMs' worth of resources (CPUs/memory/disk) as a buffer while creating, and check for parallel requests.
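The last point can be sketched as a small thread-safe tracker. This is an illustrative helper, not code from the service (which keeps a module-level reserved_count instead); it shows the idea of subtracting in-progress reservations plus a safety buffer from the raw capacity before advertising it.

```python
import threading


class CapacityTracker:
    """Track in-progress provisions so the available count reported to
    callers excludes VMs that are still being created."""

    def __init__(self, buffer_vms=1):
        self._lock = threading.Lock()
        self._reserved = 0
        self._buffer = buffer_vms  # safety margin for parallel requests

    def reserve(self, count):
        """Call when a provision request starts."""
        with self._lock:
            self._reserved += count

    def release(self, count):
        """Call when the provision finishes (success or failure)."""
        with self._lock:
            self._reserved = max(0, self._reserved - count)

    def available(self, raw_count):
        """Capacity to advertise: raw capacity minus reservations and buffer."""
        with self._lock:
            return max(0, raw_count - self._reserved - self._buffer)
```

Reserving at request start and releasing only after the VM is fully up (or the attempt fails) is what prevents two parallel requests from both claiming the last slot.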
References
Some of the key references that helped while creating the dynamic VM server manager service:
- https://www.couchbase.com/downloads
- https://wiki.xenproject.org/wiki/XAPI_Command_Line_Interface
- https://xapi-project.github.io/xen-api/
- https://docs.citrix.com/en-us/citrix-hypervisor/command-line-interface.html
- https://github.com/xapi-project/xen-api-sdk/tree/master/python/samples
- https://www.citrix.com/community/citrix-developer/citrix-hypervisor-developer/citrix-hypervisor-developing-products/citrix-hypervisor-staticip.html
- https://docs.ansible.com/ansible/latest/modules/xenserver_guest_module.html
- https://github.com/terra-farm/terraform-provider-xenserver
- https://github.com/xapi-project/xen-api/blob/master/scripts/examples/python/renameif.py
- https://xen-orchestra.com/forum/topic/191/single-device-not-reporting-ip-on-dashboard/14
- https://xen-orchestra.com/blog/xen-orchestra-from-the-cli/
- https://support.citrix.com/article/CTX235403
Hope you had a good read!
Disclaimer: Please view this as a reference if you are dealing with Xen Hosts. Feel free to share if you learned something new that can help us. Your positive feedback is appreciated!
Thanks to Raju Suravarjjala, Ritam Sharma, Wayne Siu, Tom Thrush, James Lee for their help during the process.