

  1. Set up the following networks for your KVM environment. You may need to manually modify certain files to suit your environment. Refer to the respective documentation for your KVM Hypervisor.
    • MGMT_network
    • LAN1_network
    • LAN2_network
  2. Create XML files for the vNIOS virtual appliance you want to deploy, as described in the Defining XML Files for vNIOS Appliances section, and then open the KVM console and execute the following commands to define and start the vNIOS virtual appliance. Include the full paths to the XML files if you saved them in a different directory, and repeat the virsh net-define command for each additional network (such as LAN2) that you created in step 1.
    virsh net-define <Name of the MGMT XML file>
    virsh net-define <Name of the LAN1 XML file>
    virsh define <Name of the vnios XML file>
    virsh start <VM Name>
  3. Optionally, if you are deploying a vNIOS reporting server, create an XML file as described in the Defining an XML File for Reporting Servers section, and then open the KVM console and execute the following commands to define and start the vNIOS reporting instance:
    chown -R qemu:qemu <the directory to which you uploaded the qcow2 files> (For example, if you stored the qcow2 files in /storage/vm/reporting800, enter chown -R qemu:qemu /storage/vm/reporting800.)
    virsh define <XML file> (Include the path for the XML file if you saved it in a different directory.)
    virsh start <VM Name>
  4. Configure the vNIOS instance as described in Configuring the vNIOS Instance in the KVM Environment.
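Putting the steps above together, the define-and-start sequence looks like the following sketch. The file and VM names are hypothetical; substitute the XML files and VM name from your own deployment. Note that a network created with virsh net-define is persistent but initially inactive, so it may also need virsh net-start before the VM boots:

```shell
# Hypothetical file and VM names; substitute your own.
virsh net-define MGMT.xml        # define the MGMT network
virsh net-define LAN1.xml        # define the LAN1 network
virsh net-start MGMT             # activate the networks (net-define alone
virsh net-start LAN1             # leaves them inactive)
virsh define vnios.xml           # register the vNIOS domain with libvirt
virsh start Infoblox-TE-825      # boot the instance
```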

Defining XML Files for vNIOS Appliances

Instead of deploying a vNIOS virtual appliance through the GUI, you can create the following XML files for the appliance and then execute commands to define and start the appliance in KVM. The network names you define in the network XML files must match the names referenced by the <source network='...'/> elements in the vNIOS domain XML file.

Sample XML File for MGMT

<network>
    <name>MGMT</name>
    <forward mode='bridge'/>
    <bridge name='virbr0' />
</network>

Sample XML File for LAN1

<network>
    <name>LAN1</name>
    <forward mode='bridge'/>
    <bridge name='virbr0' />
</network>
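The two network files above differ only in the <name> element. If you are scripting the setup, a minimal helper like the following (our own sketch, not part of NIOS or libvirt) can generate such definitions; virbr0 is the bridge name from the samples and should be changed to match the bridge on your host:

```shell
#!/bin/sh
# Write a libvirt bridge-network definition for a given network name
# and host bridge. The output mirrors the sample XML files above.
net_xml() {
    cat <<EOF
<network>
    <name>$1</name>
    <forward mode='bridge'/>
    <bridge name='$2'/>
</network>
EOF
}

net_xml MGMT virbr0 > MGMT_network.xml
net_xml LAN1 virbr0 > LAN1_network.xml
```

The generated files can then be passed to virsh net-define as usual.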

Sample XML File for a vNIOS Appliance

Following is a sample XML file for defining a vNIOS virtual appliance in KVM. The VM name, memory, vCPU count, and location of the qcow2 file vary by deployment; change these parameters to match yours.
<domain type='kvm'>
  <name>Infoblox-TE-825</name>
  <memory unit='KiB'>2097152</memory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/nios-7.3.2-316478-2016-02-17-19-34-52-55G-820-disk1.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <interface type='network'>
      <rom bar='off'/>
      <source network='MGMT'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='network'>
      <source network='LAN1'/>
      <rom bar='off'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>
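Before registering the domain, you can optionally check the file against libvirt's RELAX NG schema, assuming your libvirt installation ships the virt-xml-validate utility (vnios.xml is a hypothetical file name):

```shell
virt-xml-validate vnios.xml domain   # validate against libvirt's domain schema
virsh define vnios.xml               # then register the domain
```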

Defining an XML File for Reporting Servers

If you are deploying a reporting server in KVM, you must define an XML file that includes information such as the VM name, memory size, number of CPUs, location of the qcow2 files, and file format before you can spin up the vNIOS instance.
Following is a sample XML file for defining a reporting server in KVM. The VM name, memory, vCPU count, and locations of the qcow2 files vary by deployment; change these parameters to match yours.
Depending on the KVM tool you are using to deploy the reporting server, use the following sample file as a reference to create your own XML file.
<domain type='kvm'>
  <name>reporting805</name>
  <memory unit='KiB'>8388608</memory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/storage/vm/reporting800/nios-7.2.6-316673-2016-02-18-14-00-10-300G-800-disk1.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/storage/vm/reporting800/nios-7.2.6-316673-2016-02-18-14-00-10-300G-800-disk2.qcow2'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>
    <interface type='network'>
      <rom bar='off'/>
      <source network='MGMT_network'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='network'>
      <source network='LAN1_network'/>
      <rom bar='off'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
    <interface type='network'>
      <source network='HA_network'/>
      <rom bar='off'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </interface>
    <interface type='network'>
      <source network='LAN2_network'/>
      <rom bar='off'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>

Configuring the vNIOS Instance in the KVM Environment

To configure the vNIOS instance:

  1. From the KVM administration tool, such as Virtual Machine Manager, select the vNIOS instance.
  2. Open the KVM console.
  3. When the Infoblox login prompt appears, log in with the default user name and password.
    login: admin
    password: infoblox
    The Infoblox prompt appears: Infoblox >
  4. You must have valid licenses before you can configure the vNIOS appliance. To obtain permanent licenses, first use the show version command at the Infoblox > prompt to obtain the serial number of the vNIOS appliance, and then visit the Infoblox Support web site at https://support.infoblox.com. Log in with the user ID and password you received when you registered your product online at http://www.infoblox.com/support/customer/evaluation-and-registration.
    If the vNIOS virtual appliance does not have the Infoblox licenses required to run NIOS services and to join a Grid, you can use the set temp_license command to generate and install a temporary 60-day license.
  5. From the list of licenses, select to add the Grid, NIOS, and other relevant licenses for your vNIOS virtual appliance. For the vNIOS reporting appliance, you must also select the Reporting license.

    Note

    You must have both the Grid and NIOS licenses for the vNIOS virtual appliance to join a Grid.

  6. Use the CLI command set network to configure the network settings.
    Infoblox > set network
    NOTICE: All HA configurations are performed from the GUI. This interface is used only to
    configure a standalone node or to join a Grid.
    Enter IP address: 10.1.1.22
    Enter netmask [Default: 255.255.255.0]: 255.255.255.0
    Enter gateway address [Default: 10.1.1.1]: 10.1.1.1
    Become Grid member? (y or n): n

    Note

    For an HA Grid Master, ensure that you specify these settings for both the active and passive nodes.

    After you confirm your network settings, the Infoblox Grid Manager automatically restarts. You can then proceed to setting up a Grid, as described in Setting Up a Grid.


Managing vNIOS Instances

In addition to a GUI tool such as Virtual Machine Manager (virt-manager), you can use the following virsh commands to manage vNIOS for KVM from the command line:

  • virsh console <VM name>: Access the VM console.
  • virsh start <VM name>: Start a VM.
  • virsh list and virsh list --all: List the VMs defined on the host. The --all option also includes VMs that are currently offline.
  • virsh undefine <VM name>: Remove the VM definition from the KVM environment. Use this command only after the VM has been shut down.
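For example, a typical lifecycle for a vNIOS VM named Infoblox-TE-825 (a hypothetical name; use your own) might look like:

```shell
virsh list --all                  # verify the VM is defined (offline VMs included)
virsh start Infoblox-TE-825
virsh console Infoblox-TE-825     # press Ctrl+] to detach from the console
virsh shutdown Infoblox-TE-825    # request a graceful shutdown
virsh undefine Infoblox-TE-825    # remove the definition after shutdown completes
```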
