Provision::Unix::VirtualOS - Provision virtual computers (VPS,VM,VE,Jail,etc)
version 1.08
    use Provision::Unix;
    use Provision::Unix::VirtualOS;

    my $prov = Provision::Unix->new();
    my $vos  = Provision::Unix::VirtualOS->new( prov => $prov );

    $vos->create(
        name         => 42,
        password     => 't0ps3kretWerD',
        ip           => '10.1.1.43',
        hostname     => 'test_debian_5.example.com',
        disk_size    => 1000,
        ram          => 512,
        template     => 'debian-5-i386-default.tar.gz',
        nameservers  => '10.1.1.2 10.1.1.3',
        searchdomain => 'example.com',
    )
    or $prov->error( "unable to create VE" );
Provision::Unix::VirtualOS aims to provide a clean, consistent way to manage virtual machines on a variety of virtualization platforms including Xen, OpenVZ, VMware, FreeBSD jails, and others. P:U:V provides a command line interface (prov_virtual) and a stable programming interface (API) for provisioning virtual machines on supported platforms. To start a VE on any supported virtualization platform, you run this command:
prov_virtual --name=42 --action=start
Versus this:
    xen: xm create /home/xen/42/42.cfg
    ovz: vzctl start 42
    ezj: ezjail-admin start 42
P:U:V tries very hard to ensure that every valid command that can succeed will. There is abundant code for handling common errors, such as unmounting xen volumes before starting a VE, making sure a disk volume is not in use before mounting it, and making sure connectivity to the new HW node exists before attempting to migrate.
In addition to the pre-flight checks, there are also post-action checks to determine whether the action succeeded. When actions fail, they provide reasonably good error messages geared towards comprehension by sysadmins. Where feasible, actions that fail are rolled back so that once the problem(s) are corrected, the action can be safely retried.
If you are looking for a command line utility, have a look at the docs for prov_virtual. If you are looking to mate an existing Customer Relationship Manager (CRM) or billing system (like Ubersmith, WHMCS, Modernbill, etc.) with a rack full of hardware nodes, this class is it. There are two existing implementations: the prov_virtual CLI, and an RPC agent. The CLI and the remote portion of the RPC agent are included in the distribution as bin/remoteagent.
The best way to interface with P:U:V is to use an RPC agent to drive this class directly. However, doing so requires a programmer to write an application that accepts/processes requests from your CRM system and formats them into P:U:V requests.
If you don't have the resources to write your own RPC agent, and your CRM/billing software supports it, you may be able to dispatch the requests to the HW nodes via a terminal connection. If you do this, your CRM software will need to inspect the result code of the script to determine success or failure.
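A terminal-dispatch setup like the one described above reduces to running prov_virtual remotely and inspecting its exit status. The following sketch illustrates the idea; the node hostname and the SSH-based transport are assumptions for the example, not part of the distribution:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical dispatcher fragment: run prov_virtual on a HW node over
# SSH and use the exit status to report success or failure to the CRM.
my $node = 'xen5.example.com';    # assumed HW node hostname
my @cmd  = ( 'ssh', $node, 'sudo', '/usr/bin/prov_virtual',
             '--name=42', '--action=start' );

system @cmd;
if ( $? == 0 ) {
    print "request succeeded\n";
}
else {
    my $exit = $? >> 8;    # the script's exit code, per perlvar
    warn "request failed with exit code $exit\n";
}
```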
P:U calls are quiet by default. If you want to see all the logging, append each CLI request with --verbose. Doing so will dump the audit and error reports to stdout.
My implementation runs RPC over SSH. The billing system we use is rather 'limited', so I wrote a separate request brokering application (maitred). It uses the billing system's API as a trigger to perform basic account management actions (create, destroy, enable, disable). We also provide a control panel so our clients can manage their VEs. The control panel also generates requests (start, stop, install, reinstall, reboot, upgrade, etc). Administrators have their own control panel, which also generates requests.
When a request is initiated, the broker allocates any necessary resources (IPs, licenses, a slot on a HW node, etc) and then dispatches the request. The dispatcher builds an appropriate SSH invocation that connects to the remote HW node and runs the remoteagent. Once connected to the remoteagent, the P:U:V class is loaded and its methods are invoked directly. The RPC agent checks the result code of each call, as well as the audit and error logs, feeding those request events back. The local RPC agent logs the request events into the request broker's database, so there is a complete audit trail.
RPC is often implemented over HTTP, using SOAP or XML::RPC. However, our VEs are deployed with local storage, and we needed the ability to move a VE from one node to another. In addition to the broker-to-node relationship, we would also have needed temporary trust relationships between the nodes, in order to move files between them with root permissions.
The trust relationships are much easier to manage with SSH keys. In our environment, only the request brokers are trusted. In addition to being able to connect to any node, they can also connect from node to node using ssh-agent and key forwarding.
The $vos->migrate() function expects to be running as a user that has the ability to initiate an SSH connection from the node on which it's running to the node to which you are moving the VE. Our RPC agent connects to the HW nodes as the maitred user and then invokes the remoteagent using sudo. Our sudo config on the HW nodes looks like this:
    Cmnd_Alias MAITRED_CMND=/usr/bin/remoteagent, /usr/bin/prov_virtual
    maitred ALL=NOPASSWD: SETENV: MAITRED_CMND
Since the RPC remoteagent is running as root, the request broker has access to a wide variety of tools (tar over ssh pipe, rsync, etc) to move files from one node to another, without the nodes having any sort of trust relationship between them.
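As an illustration of one such tool, a root-owned copy of a VE's files from the current node to a new one can be done with a tar pipe over SSH. This is a hedged sketch, not the distribution's actual migrate code; the VE home directory and destination address are hypothetical:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch: copy a VE's files to a new node via tar over an
# SSH pipe. Run as root by the remoteagent; the SSH credentials come
# from the broker's forwarded agent, so the nodes need not trust each
# other directly.
my $ve_home  = '/home/xen/42';    # assumed VE home directory
my $new_node = '10.1.1.13';       # assumed destination node

my $cmd = "tar -C $ve_home -cf - . "
        . "| ssh $new_node 'mkdir -p $ve_home && tar -C $ve_home -xf -'";

system($cmd) == 0
    or die "file copy to $new_node failed: $?\n";
```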
    $vos->create(
        name         => 42,
        ip           => '10.0.0.42',
        hostname     => 'vps.example.com',
        disk_size    => 4096,    # 4GB
        ram          => 512,
        template     => 'debian-5-i386-default',
        password     => 't0ps3kretWerD',
        nameservers  => '10.0.0.2 10.0.0.3',
        searchdomain => 'example.com',
    );

    $vos->start(   name => 42 );
    $vos->stop(    name => 42 );
    $vos->restart( name => 42 );
    $vos->enable(  name => 42 );
    $vos->disable( name => 42 );

    $vos->set_hostname(    name => 42, hostname    => 'new-host.example.com' );
    $vos->set_nameservers( name => 42, nameservers => '10.1.1.3 10.1.1.4' );
    $vos->set_password(    name => 42, password    => 't0ps3kretWerD' );
    $vos->set_ssh_key(     name => 42, ssh_key     => 'ssh-rsa AAAAB3N..' );

    $vos->modify(
        name      => 42,
        disk_size => 4000,
        hostname  => 'new-host.example.com',
        ip        => '10.1.1.43 10.1.1.44',
        ram       => 768,
    );

    $vos->get_status( name => 42 );
    $vos->migrate( name => 42, new_node => '10.1.1.13' );
    $vos->destroy( name => 42 );
    ##############
    # Usage      : $vos->create( name => '42', ip => '127.0.0.2' );
    # Purpose    : create a virtual OS instance
    # Returns    : true or undef on failure
    # Parameters :
    #    Required : name - name/ID of the virtual OS
    #             : ip - IP address(es), space delimited
    #    Optional : hostname - the FQDN of the virtual OS
    #             : disk_root - the root directory of the virt os
    #             : disk_size - disk space allotment in MB
    #             : ram - in MB
    #             : cpu - how many CPU cores the VE can use/see
    #             : template - a 'template' or tarball the OS is patterned after
    #             : config - a config file with virtual specific settings
    #             : password - the root/admin password for the virtual
    #             : ssh_key - ssh public key for root user
    #             : mac_address - the MAC address to assign to the vif
    #             : nameservers -
    #             : searchdomain -
    #             : kernel_version -
    #             : skip_start - do not start the VE after creation
    # Usage      : $vos->start( name => '42' );
    # Purpose    : start a virtual OS instance
    # Returns    : true or undef on failure
    # Parameters :
    #    Required : name

    # Usage      : $vos->stop( name => '42' );
    # Purpose    : stop a virtual OS instance
    # Returns    : true or undef on failure
    # Parameters :
    #    Required : name

    # Usage      : $vos->restart( name => '42' );
    # Purpose    : restart a virtual OS instance
    # Returns    : true or undef on failure
    # Parameters :
    #    Required : name

    # Usage      : $vos->enable( name => '42' );
    # Purpose    : enable/reactivate/unsuspend a virtual OS instance
    # Returns    : true or undef on failure
    # Parameters :
    #    Required : name

    # Usage      : $vos->disable( name => '42' );
    # Purpose    : disable a virtual OS instance
    # Returns    : true or undef on failure
    # Parameters :
    #    Required : name
    # Usage      : $vos->set_hostname(
    #                  name     => '42',
    #                  hostname => '42.example.com',
    #              );
    # Purpose    : update the hostname of a VE
    # Returns    : true or undef on failure
    # Parameters :
    #    Required : name
    #             : hostname - the new FQDN for the virtual OS

    # Usage      : $vos->set_nameservers(
    #                  name         => '42',
    #                  nameservers  => '10.0.1.4 10.0.1.5',
    #                  searchdomain => 'example.com',
    #              );
    # Purpose    : update the nameservers in /etc/resolv.conf
    # Returns    : true or undef on failure
    # Parameters :
    #    Required : name
    #             : nameservers - space delimited list of IPs
    #    Optional : searchdomain - space delimited list of domain names
    # Usage      : $vos->set_password(
    #                  name     => '42',
    #                  password => 't0ps3kretWerD',
    #              );
    # Purpose    : update the password of a user inside a VE
    # Returns    : true or undef on failure
    # Parameters :
    #    Required : name
    #             : password - the plaintext password to store in /etc/shadow|passwd
    #    Optional : user - /etc/passwd user name, defaults to 'root'
    #             : ssh_key - an ssh public key, to install in ~/.ssh/authorized_keys
    #             : disk_root - the full path to the VE root (ie, / within the VE)

    # Usage      : $vos->set_ssh_key(
    #                  name    => '42',
    #                  ssh_key => 'ssh-rsa AAAA.....',
    #              );
    # Purpose    : install an SSH key for a user inside a VE
    # Returns    : true or undef on failure
    # Parameters :
    #    Required : name
    #             : ssh_key - an ssh public key, to install in ~/.ssh/authorized_keys
    #    Optional : user - /etc/passwd user name, defaults to 'root'
    #             : disk_root - the full path to the VE root (ie, / within the VE)
    # Usage      : $vos->modify( name => '42' );
    # Purpose    : modify a VE
    # Returns    : true or undef on failure
    # Parameters :
    #    Required : name
    #             : disk_size
    #             : hostname
    #             : ip
    #             : ram
    #    Optional : config
    #             : cpu
    #             : disk_root
    #             : mac_address
    #             : nameservers
    #             : password
    #             : searchdomain
    #             : ssh_key
    #             : template
    # Usage      : $vos->get_status( name => '42' );
    # Purpose    : get information about a VE
    # Returns    : a hashref with state info about a VE
    # Parameters :
    #    Required : name
    #
    # Example result object:
    #   {
    #     'dom_id'   => '42',
    #     'disk_use' => 560444,
    #     'disks'    => [
    #         'phy:/dev/vol00/42_rootimg,sda1,w',
    #         'phy:/dev/vol00/42_vmswap,sda2,w'
    #     ],
    #     'ips'      => '10.0.1.42',
    #     'cpu_time' => '2699.9',
    #     'mem'      => 256,
    #     'cpus'     => '2',
    #     'state'    => 'running'
    #   }
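A caller can inspect fields of the returned hashref to make decisions. This sketch assumes a configured Provision::Unix installation and an existing VE named 42:

```perl
use strict;
use warnings;
use Provision::Unix;
use Provision::Unix::VirtualOS;

my $prov = Provision::Unix->new();
my $vos  = Provision::Unix::VirtualOS->new( prov => $prov );

my $status = $vos->get_status( name => 42 )
    or die "unable to get status for VE 42\n";

# report the state and memory fields from the result hashref
printf "VE 42 is %s with %d MB RAM\n", $status->{state}, $status->{mem};

# start the VE if it is not already running
$vos->start( name => 42 ) if $status->{state} ne 'running';
```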
    # Usage      : $vos->migrate( name => '42', new_node => 'xen5' );
    # Purpose    : move a VE from one HW node to another
    # Returns    : true or undef on failure
    # Parameters :
    #    Required : name
    #             : new_node - hostname of the new node
    #    Optional : connection_test - don't migrate, just test SSH connectivity
    #                  between the existing and new HW node
    # Usage      : $vos->destroy( name => 42 );
    # Purpose    : destroy a virtual OS instance
    # Returns    : true or undef on failure
    # Parameters :
    #    Required : name

    # Usage      : $vos->publish_arp( ip => '10.1.0.42' );
    # Purpose    : update our neighbors with an ARP request for the provided IP(s)
    # Parameters :
    #    Required : ip - can be a string with one IP, or an arrayref
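Since ip accepts either a single IP as a string or several as an arrayref, both call forms below are valid. The IPs are placeholders; the sketch assumes a configured Provision::Unix installation:

```perl
use strict;
use warnings;
use Provision::Unix;
use Provision::Unix::VirtualOS;

my $prov = Provision::Unix->new();
my $vos  = Provision::Unix::VirtualOS->new( prov => $prov );

# a single IP, as a string
$vos->publish_arp( ip => '10.1.0.42' );

# several IPs, as an arrayref
$vos->publish_arp( ip => [ '10.1.0.42', '10.1.0.43' ] );
```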
Create a snapshot of the VE. Only applies to VEs backed by LVM logical volumes.
Destroy disk snapshots. The opposite of create_snapshot.
After a snapshot is created, it can be mounted with this method. For Xen VEs, the volume is mounted beneath the VE home directory, which usually looks like this: /home/xen/42/snap
Unmounts a snapshot.
Returns an array, with each line of the VE config file as an element.
Please report any bugs or feature requests to bug-unix-provision-virtualos at rt.cpan.org, or through the web interface at http://rt.cpan.org/NoAuth/ReportBug.html?Queue=Provision-Unix. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
You can find documentation for this module with the perldoc command.
perldoc Provision::Unix::VirtualOS
You can also look for information at:
RT: CPAN's request tracker
http://rt.cpan.org/NoAuth/Bugs.html?Dist=Provision-Unix-VirtualOS
AnnoCPAN: Annotated CPAN documentation
http://annocpan.org/dist/Provision-Unix-VirtualOS
CPAN Ratings
http://cpanratings.perl.org/d/Provision-Unix-VirtualOS
Search CPAN
http://search.cpan.org/dist/Provision-Unix-VirtualOS
Matt Simerson <msimerson@cpan.org>
This software is copyright (c) 2015 by The Network People, Inc.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
To install Provision::Unix, copy and paste the appropriate command into your terminal.
cpanm
cpanm Provision::Unix
CPAN shell
    perl -MCPAN -e shell
    install Provision::Unix
For more information on module installation, please visit the detailed CPAN module installation guide.