#libvirt
gnome-boxes vms using virt-manager bridged network
I started with using gnome-boxes.
it was flaky.
i've since started primarily using virt-manager (with virsh --connect qemu:///session to be able to talk to the domains).
i wanted to share services from my hypervisor host (ubuntu 20.04 currently). primarily because i'm still on old enough versions of libvirt / the hypervisor kernel / whatever else that virtiofs isn't available to me without extra work, i want to NFS-share host files, since sshfs doesn't support symlinks.
i found a couple of really good explanations of the process.
stack overflow ( https://bit.ly/47xx3S0 )
blog post ( https://bit.ly/3H9myJJ )
unfortunately, the blog post's walkthrough of creating /etc/qemu/bridge.conf explicitly sets the file's permissions to 640. AND thanks to bad error handling in virt-manager, all i was told when trying to start the domain was:
"Error starting domain: 'utf-8' codec can't decode byte 0xab in position 159: invalid start byte"
thankfully i came to my senses and tried virsh start instead, because that got me a helpful error explaining that there was no access to read /etc/qemu/bridge.conf. makes sense if it's not a world-readable file and i'm starting the domain in "user mode"...
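for reference, a minimal sketch of the fix implied by that error; the bridge name br0 and the domain name myvm are placeholders, not taken from the posts above:
# make the bridge ACL readable from a user session so qemu's bridge helper can use it
echo "allow br0" | sudo tee /etc/qemu/bridge.conf
sudo chmod 644 /etc/qemu/bridge.conf
# start the domain in session ("user") mode; virsh prints the clearer error if access is still denied
virsh --connect qemu:///session start myvm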
Ubuntu / Debian 11: install libvirt and related packages – for Vagrant use
Step 1: Update the package index on the Ubuntu / Debian system.
sudo apt update
Step 2: Install libvirt and the relevant packages. If you have not installed Vagrant yet, add vagrant to the following line.
sudo apt install -y libvirt-daemon-system libvirt-dev libvirt-clients virt-manager qemu-kvm ebtables libguestfs-tools ruby-fog-libvirt
Step 3: Start the libvirt service
sudo systemctl start…
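Since this setup targets Vagrant, a hedged follow-up sketch; the provider-plugin step below is an assumption about the intended workflow rather than part of the original steps:
vagrant plugin install vagrant-libvirt      # install the libvirt provider plugin for Vagrant
vagrant up --provider=libvirt               # bring a box up using the libvirt provider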
>Spend 50k in top of the line computer
>Install windows 10
>Install Ubuntu on windows 10 using wsl
>Install ArchLinux on Ubuntu using libvirt/qemu
>Install Docker on Archlinux
>Install windows 2000 on Docker
>Install DosBox on windows 2000
>Install Doom on windows 2000
Experience Doom thru 4 layers of virtualization.
OpenShift Virtualization Architecture: Inside KubeVirt and Beyond
OpenShift Virtualization, powered by KubeVirt, enables organizations to run virtual machines (VMs) alongside containerized workloads within the same Kubernetes platform. This unified infrastructure offers seamless integration, efficiency, and scalability. Let’s delve into the architecture that makes OpenShift Virtualization a robust solution for modern workloads.
The Core of OpenShift Virtualization: KubeVirt
What is KubeVirt?
KubeVirt is an open-source project that extends Kubernetes to manage and run VMs natively. By leveraging Kubernetes' orchestration capabilities, KubeVirt bridges the gap between traditional VM-based applications and modern containerized workloads.
Key Components of KubeVirt Architecture
Virtual Machine (VM) Custom Resource Definition (CRD):
Defines the specifications and lifecycle of VMs as Kubernetes-native resources.
Enables seamless VM creation, updates, and deletion using Kubernetes APIs (a minimal example follows this list).
Virt-Controller:
Ensures the desired state of VMs.
Manages operations like VM start, stop, and restart.
Virt-Launcher:
A pod that hosts the VM instance.
Ensures isolation and integration with Kubernetes networking and storage.
Virt-Handler:
Runs on each node to manage VM-related operations.
Communicates with the Virt-Controller to execute tasks such as attaching disks or configuring networking.
Libvirt and QEMU/KVM:
Underlying technologies that provide VM execution capabilities.
Offer high performance and compatibility with existing VM workloads.
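To make the VM CRD idea concrete, here is a minimal, hedged sketch of defining a VirtualMachine and applying it with kubectl; the API version, names, image, and sizes are illustrative and may differ from what your KubeVirt / OpenShift Virtualization version expects:
cat <<'EOF' | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: false                  # create the object stopped; start it later
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
      - name: rootdisk
        containerDisk:
          image: quay.io/kubevirt/cirros-container-disk-demo   # demo image; replace with your own
EOF
The VM is then started and stopped through the same Kubernetes API, for example with KubeVirt's virtctl client (virtctl start demo-vm).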
Integration with Kubernetes Ecosystem
Networking
OpenShift Virtualization integrates with Kubernetes networking solutions, such as:
Multus: Enables multiple network interfaces for VMs and containers.
SR-IOV: Provides high-performance networking for VMs.
Storage
Persistent storage for VMs is achieved using Kubernetes StorageClasses, ensuring that VMs have access to reliable and scalable storage solutions (see the sketch after this list), such as:
Ceph RBD
NFS
GlusterFS
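As a rough illustration of the storage point above, a PersistentVolumeClaim backed by any such StorageClass can serve as a VM disk; the claim name, class name, and size below are placeholders:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-rootdisk
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ceph-rbd      # placeholder; use a StorageClass that exists in your cluster
  resources:
    requests:
      storage: 20Gi
EOF
# the claim is then referenced from the VirtualMachine spec as a persistentVolumeClaim volume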
Security
Security is built into OpenShift Virtualization with:
SELinux: Enforces fine-grained access control.
RBAC: Manages access to VM resources via Kubernetes roles and bindings.
Beyond KubeVirt: Expanding Capabilities
Hybrid Workloads
OpenShift Virtualization enables hybrid workloads by allowing applications to:
Combine VM-based legacy components with containerized microservices.
Transition legacy apps into cloud-native environments gradually.
Operator Framework
OpenShift Virtualization leverages Operators to automate lifecycle management tasks like deployment, scaling, and updates for VM workloads.
Performance Optimization
Supports GPU passthrough for high-performance workloads, such as AI/ML.
Leverages advanced networking and storage features for demanding applications.
Real-World Use Cases
Dev-Test Environments: Developers can run VMs alongside containers to test different environments and dependencies.
Data Center Consolidation: Consolidate traditional and modern workloads on a unified Kubernetes platform, reducing operational overhead.
Hybrid Cloud Strategy: Extend VMs from on-premises to cloud environments seamlessly with OpenShift.
Conclusion
OpenShift Virtualization, with its KubeVirt foundation, is a game-changer for organizations seeking to modernize their IT infrastructure. By enabling VMs and containers to coexist and collaborate, OpenShift bridges the past and future of application workloads, unlocking unparalleled efficiency and scalability.
Whether you're modernizing legacy systems or innovating with cutting-edge technologies, OpenShift Virtualization provides the tools to succeed in today’s dynamic IT landscape.
For more information visit: https://www.hawkstack.com/
Becoming a Red Hat Enterprise Linux (RHEL) Expert: Your Path to Mastery and Market Value
In the rapidly evolving landscape of IT, specialization in specific technologies can set you apart and significantly boost your market value. For system administrators and IT professionals, mastering Red Hat Enterprise Linux (RHEL) is a strategic move that can open doors to numerous career opportunities. Here’s a comprehensive guide to becoming a RHEL expert, highlighting the key certifications and the technologies they cover.
1. Red Hat Certified System Administrator (RHCSA) - EX200
The journey to becoming a RHEL expert begins with the RHCSA certification. This foundational certification is designed to ensure you have the essential skills required for basic system administration tasks.
Key Technologies Covered:
System Configuration and Management: Install and configure software, manage basic networking, and set up local storage.
User and Group Management: Create and manage user and group accounts, implement security policies, and manage user privileges.
File System Management: Understand and manage file systems and permissions, including configuring and mounting file systems.
Achieving the RHCSA certification demonstrates your ability to perform essential system administration tasks across a wide variety of environments and deployment scenarios.
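For a flavor of what those tasks look like in practice, a few illustrative commands (examples of the kind of work covered, not an official exam syllabus; the package, user, and device names are placeholders):
sudo dnf install -y httpd                          # install and configure software
sudo useradd -G wheel alice && sudo passwd alice   # create a user and add it to a group
sudo mkdir /data && sudo mount /dev/vdb1 /data     # create a mount point and mount a filesystem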
2. Red Hat Certified Engineer (RHCE) - EX294
Building on the RHCSA, the RHCE certification focuses on automation and DevOps skills. This certification is ideal for those looking to advance their ability to manage and automate Red Hat Enterprise Linux systems.
Key Technologies Covered:
Automation with Ansible: Install and configure Ansible, create and manage Ansible playbooks, and use Ansible to automate system configuration and management tasks.
Shell Scripting and Command Line Automation: Develop and execute shell scripts, understand and use Bash and other scripting languages to automate routine tasks.
System Security: Implement advanced security measures, manage SELinux policies, and ensure compliance with security standards.
The RHCE certification is a testament to your ability to automate Red Hat Enterprise Linux tasks, streamlining operations, and increasing efficiency.
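As a small, hedged taste of that automation focus (the inventory file and package name are placeholders):
# write a one-task playbook and run it against an inventory
cat > site.yml <<'EOF'
- hosts: all
  become: true
  tasks:
    - name: Ensure chrony is installed
      ansible.builtin.dnf:
        name: chrony
        state: present
EOF
ansible-playbook -i inventory site.yml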
3. Red Hat Certified Specialist in Security: Linux (EX415)
For those with a keen interest in security, the Red Hat Certified Specialist in Security certification validates your ability to secure RHEL systems.
Key Technologies Covered:
SELinux: Configure and manage Security-Enhanced Linux (SELinux) policies and contexts to secure a Red Hat system.
Firewall Management: Set up and manage firewalls using firewalld and other related tools.
Cryptography: Implement and manage encryption techniques, including GPG and SSL/TLS, to secure data in transit and at rest.
This certification highlights your expertise in hardening and securing Red Hat Enterprise Linux environments, making you an invaluable asset in any organization’s security framework.
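A few hedged examples of the kind of hardening tasks involved (the SELinux boolean and firewall service names are illustrative):
getenforce                                          # check the current SELinux mode
sudo setsebool -P httpd_can_network_connect on      # persistently toggle an SELinux boolean
sudo firewall-cmd --permanent --add-service=https   # open a service in the firewall
sudo firewall-cmd --reload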
4. Red Hat Certified Specialist in Virtualization (EX318)
Virtualization is a critical component of modern IT infrastructure. The Red Hat Certified Specialist in Virtualization certification proves your skills in managing virtualized environments.
Key Technologies Covered:
KVM and libvirt: Install and configure Kernel-based Virtual Machine (KVM) and manage virtual machines using libvirt.
Virtual Network Configuration: Set up and manage virtual networks, ensuring optimal performance and security.
Storage Management for Virtualization: Configure and manage storage for virtual machines, including iSCSI and NFS.
With this certification, you can demonstrate your ability to deploy and manage virtualized environments, a crucial skill in today’s IT landscape.
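A hedged sketch of the kind of KVM/libvirt workflow this covers; the guest name, sizes, ISO path, and OS variant are placeholders:
sudo virt-install --name rhel9-guest --memory 2048 --vcpus 2 \
  --disk size=20 --cdrom /var/lib/libvirt/images/rhel9.iso --os-variant rhel9.0
sudo virsh list --all                               # confirm the guest was defined and is running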
5. Red Hat Certified Specialist in OpenShift Administration (EX280)
As containerization and Kubernetes become mainstream, proficiency in Red Hat OpenShift is increasingly valuable. The Red Hat Certified Specialist in OpenShift Administration certification validates your skills in managing Red Hat’s Kubernetes-based container platform.
Key Technologies Covered:
OpenShift Installation and Configuration: Install, configure, and manage OpenShift clusters.
Container Management: Deploy, manage, and troubleshoot containerized applications.
OpenShift Security: Implement security best practices in an OpenShift environment, including user management and role-based access control.
This certification ensures you are equipped to manage and deploy applications in a cloud-native environment, aligning with modern DevOps practices.
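For illustration, a few typical oc commands for this kind of administration (the cluster URL, project, and user names are placeholders):
oc login https://api.cluster.example.com:6443 -u admin
oc new-project demo
oc adm policy add-role-to-user edit developer1 -n demo   # role-based access control for a project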
Conclusion
Embarking on the Red Hat Enterprise Linux Expert Track is a strategic move for any IT professional. Each certification builds on the previous one, gradually enhancing your skills and knowledge. By achieving these certifications, you position yourself as a highly skilled and marketable expert in RHEL technologies.
Whether you’re starting with basic system administration or moving towards advanced automation, security, virtualization, or containerization, the Red Hat certification path provides a clear and rewarding route to career advancement. Invest in your future by becoming a Red Hat Enterprise Linux expert and unlock a world of opportunities in the IT industry.
For more details click www.qcsdclabs.com
#redhatcourses #docker #linux #information technology #containerorchestration #container #kubernetes #containersecurity #dockerswarm #aws
Is a Bangkok VPS VM Available?
If you’re looking for more power than shared hosting but can’t afford a dedicated server, consider Thailand VPS. This type of web hosting offers a lot of benefits, including high speed connectivity and unlimited bandwidth.
VPS also provides you with root access to your server, allowing you to install software and programs as you please. This is a huge advantage over shared hosting, which limits your flexibility and control.
KVM VMs
KVM is an open-source hypervisor that works with Linux. It offers many benefits, including performance and cost savings. It also has a large community. Its source code is available, which means that issues are easy to fix. It's also possible to use a paid version of the hypervisor with technical support. There are several tools for managing KVM VMs, but the most popular is libvirt.
Unlike VMware, KVM doesn’t require an underlying OS to virtualize hardware. Its bare metal architecture gives it an inherent performance advantage over other hypervisors. It also supports a wide range of certified Linux hardware platforms and enables kernel same-page merging. It also uses the qcow2 disk image format, which supports zlib-based compression and optional AES encryption.
The KVM virtual machine is an ideal solution for developers who want to run multiple websites on one server. This feature allows them to increase the functional capacity of their sites and improve website speed. It also helps them to create a secure environment for their websites.
To manage a KVM VM, you can use virt-manager, which is a graphical application for connecting to and configuring a KVM host. virt-manager is available for Windows, MacOS, and Linux. It can be used to connect to a VM, edit its settings, and clone it. It can also be used to perform VM live migration.
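As a hedged illustration of the command-line side of that kind of management (the image path, guest names, and destination host are placeholders):
qemu-img create -f qcow2 /var/lib/libvirt/images/site1.qcow2 20G   # qcow2 image (supports compression and optional encryption)
sudo virt-clone --original site1 --name site2 --auto-clone         # clone an existing (shut-off) guest
sudo virsh migrate --live site1 qemu+ssh://otherhost/system        # live-migrate a running guest to another host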
InterWorx Control Panel
InterWorx is a control panel software for servers that makes it simpler for web site users and system administrators to supervise operations on their websites, servers, and domains. It also offers tools for high availability and server clustering. There are two main sections of the platform: NodeWorx and SiteWorx.
Unlike cPanel, which uses a traditional UI, InterWorx focuses on functionality and performance. Its sleek interface is easy to navigate, and its features are well documented. It is also available in multiple languages. It has many useful plugins for Webalizer, AWStats and Analog, which help with monitoring and analysis. It is also easy to install WordPress.
In addition, InterWorx provides a number of useful features that are not available in other control panels, such as disk usage quotas and the ability to set up password-protected directories. It is also able to automatically renew SSL certificates, ensuring that all of your website’s visitors have a secure connection.
Another important feature of the platform is its ability to manage multiple reseller accounts. This allows you to resell your server space to multiple users and create their own SiteWorx accounts. This feature is especially helpful for developers and end users who use their sites to host other websites. It is easy to customize and configure the account settings for each user.
1Gbps High Performance Connections
Bangkok web hosting is a highly flexible type of web hosting. It is ideal for users who want more scalability, security, and backup resources than a Shared Server or dedicated server can offer. The VPS server also offers high-end hardware for faster performance. This type of server can be used to host a variety of applications, including document serving and website rendering. It is also suitable for gamers who want to play online games.
While a VPS isn’t an actual physical server, it does have a lot of power and can handle high traffic websites. This type of hosting is more stable and reliable than shared hosting plans, but it can also be more expensive than a bare metal server. VPS hosting is often a better option for medium-sized businesses that can’t afford a dedicated server.
The VPS server in Thailand also allows for complete root access, meaning that you can install your own software and programs. This is especially useful for webmasters who want to run multiple websites on one server. It also allows you to reboot the server without affecting any other programs or websites. This flexibility makes it a popular choice for small and mid-sized business owners. It is also cheaper than a dedicated server and more scalable. It is also ideal for businesses with remote offices and locations around the world.
Unlimited Bandwidth
Unlimited bandwidth is one of the key benefits that make economical KVM VPS hosting a compelling choice for businesses of all sizes. This feature is essential for ensuring that visitors to your website can access content quickly without any restrictions or delays. Bandwidth is a measure of the amount of data that can be relayed between your website and its visitors in a given time period.
TheServerHost offers a variety of plans for its VPS hosting services. All of these plans come with unmetered bandwidth and disk space. This means that you can host a high-traffic website without incurring any overage charges. Moreover, these plans offer full root access. This allows you to install software, programs, and applications on your server. This is a big advantage over basic web hosting, where neighboring websites can cause performance issues on your site. VPS servers are a good option for people who cannot afford a dedicated server but need more scalability, security, and backup resources than shared hosting. Using the latest technology, they are designed to provide maximum performance and reliability. They also include a control panel that makes it easy to manage and monitor your server. The control panel is based on InterWorx, which has a wide range of features to make life easier for system administrators and website owners.
Why libvirt supports only 14 PCIe hotplugged devices on x86-64
https://dottedmag.net/blog/libvirt-14-pcie-devices/
This is the correct way to share a folder from a Linux Host to Linux Guests without changing the permissions of KVM or Libvirt folders
virtio -
it is definitely present in Windows 10.
There you will be fighting bugs, yeah, "unknown monsters." It has now been adopted by the technology corporations, along with magnetostriction and ultrasound, and with Discord thrown in on top - pure genocide.
That is what you should take note of.
it is just another OS driver, for PCI devices; people should be shot for things like this - that is what should be done with the developers and their accomplices who embed malicious code into a product.
No normal, sane person would shove things like this into an operating system. Only fascist filth, bought off by certain parties, could do something like that.
Installing and configuring QEMU/KVM on Ubuntu
In this article I want to walk through installing and configuring QEMU/KVM on Ubuntu.
Table of contents: Introduction, Checking hardware support, Preparing the server, Installation and startup, Network configuration, Configuring a network bridge, Virtual networks (NAT forwarding), Creating a virtual machine, Managing a virtual machine.
Introduction to QEMU/KVM
KVM (Kernel-based Virtual Machine) is a set of software for hardware-assisted virtualization on Linux x86. Virtualization lets us install fully isolated operating systems that run side by side on the same hardware. The KVM hypervisor is a loadable Linux kernel module, and it provides only the device abstraction layer. The KVM hypervisor alone is therefore not enough to run a guest OS: you also need emulation of the CPU, disks, network, video and buses. That is what QEMU is for. QEMU (Quick Emulator) emulates various devices and makes it possible to run operating systems built for one architecture on another. This virtualization stack is usually referred to as QEMU/KVM.
Checking hardware virtualization support
First, before setting up KVM, check that the server supports hardware virtualization:
cat /proc/cpuinfo | egrep -c "(vmx|svm)"
A non-zero number means the processor supports Intel VT-x or AMD-V hardware virtualization.
Preparing the server
Second, for convenience, create directories to hold the virtual machines' hard disk images and the ISO images the operating systems will be installed from:
sudo mkdir -p /kvm/{hdd,iso}
This creates /kvm/hdd for virtual hard disks and /kvm/iso for ISO images.
Installing and starting QEMU/KVM on Ubuntu
As the interface to QEMU/KVM virtualization on Ubuntu we will use the libvirt library. The following command installs the hypervisor, the emulator, the library and the management utilities:
sudo apt-get install qemu-kvm libvirt-bin virtinst libosinfo-bin
Here qemu-kvm is the hypervisor itself; libvirt-bin is the hypervisor management library; virtinst is the virtual machine provisioning utility; libosinfo-bin is a utility for listing the available guest operating system variants.
After all packages install successfully, enable the service to start automatically:
sudo systemctl enable libvirtd
Start it:
sudo systemctl start libvirtd
Add the user who will work with the virtual machines to the libvirt group:
sudo usermod -aG libvirt user
And set permissions on the directories created earlier:
sudo chgrp libvirt /kvm/{hdd,iso}
sudo chmod g+w /kvm/hdd
Virtual machines, storage and networks can be configured either from the command line or with the virt-manager GUI tool. It can be installed on the server itself or on another computer, for example your laptop; in the latter case you will need to add a remote libvirt connection. Installing virt-manager and using it to work with libvirt and virtual machines will be covered in the next article.
Network configuration
Virtual machines can either use their own virtual NAT network or obtain IP addresses from the local network through a network bridge, which we need to set up.
Configuring a network bridge
In older Ubuntu releases most of the Ethernet network configuration lives in /etc/network/interfaces.
Just in case, back it up:
mkdir -p ~/backup && sudo cp /etc/network/interfaces ~/backup
Then install the Ethernet bridge configuration utilities:
sudo apt-get install bridge-utils
Open /etc/network/interfaces in your favourite editor (vim, nano):
sudo vim /etc/network/interfaces
and bring it to roughly this form:
source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback
#allow-hotplug eno1
#iface eno1 inet static
#    address 192.168.7.2/24
#    gateway 192.168.7.1
#    dns-nameservers 127.0.0.1 192.168.7.1 8.8.8.8
#    dns-search home.lan
auto br0
iface br0 inet static
    address 192.168.7.2/24
    gateway 192.168.7.1
    bridge_ports eno1
    bridge_stp on
    bridge_fd 2
    bridge_hello 2
    bridge_maxage 20
    dns-nameservers 127.0.0.1 192.168.7.1 8.8.8.8
    dns-search home.lan
Everything commented out is the old network configuration; br0 is the name of the bridge interface being created; eno1 is the network interface the bridge will run over. If you get an address dynamically over DHCP, the configuration shrinks to:
source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback
auto br0
iface br0 inet dhcp
    bridge_ports eno1
    bridge_stp on
    bridge_fd 2
    bridge_hello 2
    bridge_maxage 20
Check the configuration carefully and restart the networking service:
sudo systemctl restart networking
Starting with Ubuntu 17.10, network configuration is managed by default with the Netplan utility, which adds a new layer of abstraction over network interface configuration. The configuration is stored in YAML files and handed to backends (network renderers) such as NetworkManager or systemd-networkd. Netplan configuration files live in /etc/netplan. To configure the network, open 01-netcfg.yaml in an editor:
vim /etc/netplan/01-netcfg.yaml
and bring it to this form:
network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      dhcp4: false
      dhcp6: false
      interfaces: [ens3]
      addresses: [192.168.7.2/24]
      gateway4: 192.168.7.1
      nameservers:
        search: [home.lan]
        addresses: [127.0.0.1, 192.168.7.1, 8.8.8.8]
      parameters:
        stp: true
        forward-delay: 2
        hello-time: 2
        max-age: 20
With dynamic addressing the configuration file looks like this:
network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      dhcp4: true
      dhcp6: true
      interfaces: [ens3]
      parameters:
        stp: true
        forward-delay: 2
        hello-time: 2
        max-age: 20
Put your own addresses, interface names and domains into the configuration files and, after checking everything carefully, apply the network settings:
sudo netplan apply
Virtual networks (NAT forwarding)
Every standard libvirt installation provides NAT-based connectivity for virtual machines out of the box. This is the so-called default virtual network. You can check that it is available like this:
sudo virsh net-list --all
 Name      State    Autostart   Persistent
 ------------------------------------------
 default   active   yes         yes
For virtual machines on the NAT network to be able to reach the internet, you need to enable forwarding of network traffic. To do that, uncomment the line #net.ipv4.ip_forward=1 in /etc/sysctl.d/99-sysctl.conf and apply the settings:
sudo vim /etc/sysctl.d/99-sysctl.conf
sudo sysctl -p /etc/sysctl.d/99-sysctl.conf
Creating a virtual machine
To create a virtual machine we need two utilities: osinfo-query, to get the list of operating system variants available for installation, and virt-install, to perform the installation itself.
So, let's create our first virtual machine with Ubuntu 16.04, 1024 MiB of RAM, 1 CPU, bridged networking and a 12 GiB hard disk:
sudo virt-install \
  --name ubuntu1604s \
  --os-type=linux --os-variant=ubuntu16.04 \
  --vcpus=1 \
  --ram 1024 \
  --network bridge=br0 \
  --disk path=/kvm/hdd/ubuntu1604s.qcow2,format=qcow2,size=12,bus=virtio \
  --cdrom=/kvm/iso/ubuntu-16.04.6-server-amd64.iso \
  --graphics vnc,listen=0.0.0.0 --noautoconsole \
  --hvm --virt-type=kvm
Note the --os-variant parameter: it tells the hypervisor which OS the settings should be tuned for. You can get the list of available variants by running:
osinfo-query os
The virt-install man pages describe its parameters in more detail; here is the command to create a VM with NAT networking:
sudo virt-install \
  --name ubuntu1604s \
  --os-type=linux --os-variant=ubuntu16.04 \
  --autostart \
  --vcpus=2 --cpu host --check-cpu \
  --ram 2048 \
  --network network=default,model=virtio \
  --disk path=/kvm/vhdd/ubuntu1604s.qcow2,format=qcow2,size=12,bus=virtio \
  --cdrom=/kvm/iso/ubuntu-16.04.6-server-amd64.iso \
  --graphics vnc,listen=0.0.0.0,password=vncpwd --noautoconsole \
  --hvm --virt-type=kvm
After the installation starts, the server console will show text like this:
Domain installation still in progress. Waiting for installation to complete.
That means everything is fine, and to continue installing the OS inside the virtual machine we need to connect to it over VNC. To find out which port it is listening on, open a new console (or send the current job to the background with CTRL+Z and bg) and run:
sudo virsh dumpxml ubuntu1604s | grep graphics
In my case it is port 5903. Alternatively, run:
sudo virsh vncdisplay ubuntu1604s
which returns something like ":3"; add that number to the base port 5900. Then connect to the server on that port with a VNC client (Remmina, TightVNC) and install Ubuntu 16.04 inside the VM.
After the installation completes successfully you will see something like this in the console:
Domain has shutdown. Continuing.
Domain creation completed. Restarting guest.
Managing the virtual machine
The virsh command-line utility is used to manage guests and the hypervisor. It uses the libvirt API and serves as an alternative to the graphical virt-manager. I will only touch on the basic VM management commands, since describing everything the utility can do is a topic for a separate article. You can see the list of all available commands with:
virsh help
and the description of a particular command's options with:
virsh help command
where command is a command from the list above. To list all virtual machines, use:
sudo virsh list --all
Here is what it shows for me:
 Id   Name               State
 ----------------------------------
 1    ubuntu16           running
 3    centos8            running
 6    ubuntu18           running
 10   ubuntu1604server   running
 -    win10              shut off
 -    win2k16            shut off
 -    win2k16-2          shut off
 -    win7               shut off
If you only need the virtual machines that are currently running, type:
sudo virsh list
To start a virtual machine, run:
sudo virsh start domain
where domain is the virtual machine name from the list above. To shut it down:
sudo virsh shutdown domain
And to reboot it:
sudo virsh reboot domain
To edit the VM configuration:
sudo virsh edit domain
Libvirt KVM high CPU usage with USB attached device
Today I needed to permanently attach a USB device to a Linux virtual machine hosted on Ubuntu 18.04 running the KVM hypervisor.
Everything worked fine, except that the CPU usage on the host was oddly high even though the guest OS was idle.
The host CPU usage was always over 12%, which is very high when the guest is doing nothing: while the guest CPU sat between 0% and 2%, the host thread was between 12% and 15%.
The only apparent solution I found was switching the 'hpet' timer from 'no' to 'yes', as discussed in a thread on the Proxmox forum.
But that didn't do the trick (it was related to a Windows guest).
The solution
In the end I found that switching the USB controller model from USB2 to USB3 changed the picture, bringing host CPU usage (with the guest idle) down to between 2% and 5%, which is definitely acceptable. So easy!
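A hedged sketch of how that controller switch is typically done with libvirt's tools; the domain name is a placeholder, and ich9-ehci1 / qemu-xhci are the usual libvirt model names for USB2 / USB3 controllers (your XML may use different ones):
sudo virsh dumpxml myvm | grep -A3 "controller type='usb'"   # inspect the current USB controller model
sudo virsh edit myvm   # change the <controller type='usb' ... model='...'> entry to a USB3 model such as qemu-xhci, then restart the guest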
Vagrantfile for installing Nginx unit in a Vagrant VM
Nginx Unit is a lightweight universal web app server that can run static files, PHP, Python, Ruby, Go, NodeJS, Java and Perl. It is fantastic, and you should learn to use it, is all I can say. Requirements before running this Vagrantfile: you need Vagrant installed. On Linux, I prefer libvirt instead of VirtualBox. How to install libvirt. You can uninstall VirtualBox, if you have libvirt…
KubeVirt v0.3.0-alpha.3: Kubernetes native networking and storage
First post for quite some time. A side effect of being busy streamlining the KubeVirt user experience.
KubeVirt v0.3.0 was not released at the beginning of the month.
That release was intended to be a little bigger, because it included a large architecture change (for the good). The change itself went in amazingly smoothly and without many problems, even if it took some time.
But the work building upon this patch in the storage and network areas was delayed and didn't make it in time. Thus we skipped the release in order to let storage and network catch up.
The important thing about these two areas is that KubeVirt was already able to connect a VM to a network and to boot off an iSCSI target, but this was not really tightly integrated with Kubernetes.
Now, just this week two patches landed which actually do integrate these areas with Kubernetes.
Storage
The first is storage - mainly written by Artyom, and finalized by David - which allows a user to use a persistent volume as the backing storage for a VM disk:
metadata:
  name: myvm
apiVersion: kubevirt.io/v1alpha1
kind: VirtualMachine
spec:
  domain:
    devices:
      disks:
      - name: mypvcdisk
        volumeName: mypvc
        lun: {}
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: mypvc
This means that any storage solution supported by Kubernetes to provide PVs can be used to store virtual machine images. This is a big step forward in terms of compatibility.
This works by taking the claim and attaching it to the VM's pod definition, so that the kubelet mounts the respective volume into the VM's pod. Once that is done, KubeVirt takes care of connecting the disk image within that PV to the VM itself. This is only possible because the architecture change moved libvirt to run inside every VM pod, which allows the VM to consume the pod's resources.
Side note, another project is in progress to actually let a user upload a virtual machine disk to the cluster in a convenient way: https://github.com/kubevirt/containerized-data-importer.
Network
The second change is about network which Vladik worked on for some time. This change also required the architectural changes, in order to allow the VM and libvirt to consume the pod's network resource.
Just like with pods, the user does not need to do anything to get basic network connectivity. KubeVirt will connect the VM to the NIC of the pod in order to give it the most compatible integration. Thus you are now able to expose a TCP or UDP port of the VM to the outside world using regular Kubernetes Services.
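For illustration, a hedged sketch of exposing a guest's SSH port with a plain Kubernetes Service; it assumes the VM's virt-launcher pod carries a kubevirt.io/domain label matching the VM name, which may differ between KubeVirt versions:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: myvm-ssh
spec:
  selector:
    kubevirt.io/domain: myvm     # assumed pod label; check the labels on your virt-launcher pod
  ports:
  - protocol: TCP
    port: 22
    targetPort: 22
EOF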
A side note here is that, despite this integration, we are now looking to enhance this further to allow the use of sidecars such as Istio's.
Alpha Release
These three changes, and their delay, caused the delay of v0.3.0, which will now be released at the beginning of March. But we have done a few pre-releases in order to allow interested users to try this code right now:
KubeVirt v0.3.0-alpha.3 is the most recent alpha release and should work fairly well.
More
But these items were just a small fraction of what we have been doing.
If you look at the kubevirt org on GitHub you will notice many more repositories there, covering storage, Cockpit, and deployment with Ansible. How all of this fits together will be the subject of another post.
Welcome aboard!
KubeVirt is really speeding up and we are still looking for support. So if you are interested in working on a bleeding-edge project tightly coupled with Kubernetes, but one that also has its own notion and a great team, then just reach out to me.
How to install QEMU/KVM and create a Windows 10 virtual machine on Debian
If you’re using Linux, you don’t need VirtualBox or VMware to create virtual machines.
You can use KVM – the kernel-based virtual machine – to run both Windows and Linux in virtual machines.
You can use KVM directly or with other command-line tools, but the graphical Virtual Machine Manager (Virt-Manager) application will feel most familiar to people that have used other virtual machine programs.
Virtual machines are amazing for two reasons! They completely defuse the argument over having to choose which operating system to use, because you can use them all.
However, online tutorials are only useful as long as they stay up-to-date, and the reason for this blog entry is that I've noticed a lot of KVM tutorials online are from around 2017, use old packages and old commands, or are just plain obsolete. This blog post attempts to remedy that by offering a solution that is current for Debian in 2021.
I wanted to install a Windows 10 virtual machine on my Debian build. To begin with, you’re going to need an up-to-date copy of Windows 10 in an ISO. You can download that directly from Microsoft at https://www.microsoft.com/en-us/software-download/windows10ISO
Now open your Terminal.
Can You Go Virtual?
KVM only works if your CPU has hardware virtualization support – either Intel VT-x or AMD-V. There are two commands that can be run to determine if you have hardware virtualization support.
egrep -c "(svm|vmx)" /proc/cpuinfo
This command will count the number of processor cores that can run svm or vmx virtualization. The resulting answer MUST be higher than 0.
egrep --color -i "svm|vmx" /proc/cpuinfo
This command lists all the processor modes in /cpuinfo and highlights either svm or vmx. If you don’t see at least one of those modes highlighted, abort!
Let’s Install QEMU/KVM
Once you’ve confirmed that your computer supports virtualization, let’s move on to the actual install process.
QEMU/KVM needs libvirt installed to work correctly. Older tutorials advised installing the package libvirt-bin; this package, it seems, no longer exists in Debian repos.
Instead of trying to figure out by trial and error which libvirt package to install, we're going to ignore the libvirt requirement for the moment and let APT choose which package and version to install:
sudo apt install qemu-kvm bridge-utils virt-manager
And it looks like that worked! APT traced the package tree and figured out that we need libvirt0 and will install it to satisfy the dependencies.
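As a quick sanity check after the install (not part of the original walkthrough, just a common verification step):
sudo systemctl status libvirtd   # the libvirt daemon should be active (running)
sudo virsh list --all            # an empty table means libvirt is reachable and no VMs exist yet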
Don’t Get More Complicated
Other, older, tutorials I’ve seen advise adding the user to the libvirtd group. However, as we’ve already seen, libvirt packages moved on, and it seems libvirtd did too.
There is no need to add your user to the libvirtd group, because there is no libvirtd group, and you don’t need to be a member of this group to run virtual machines.
The only requirement at this point is that you must run any virtual-machine as root or sudo.
When you first run virt-manager, it will ask you to enter your root or sudo password to connect to QEMU/KVM
Installing Windows 10 in a Virtual Environment
Now, down to business!
The Virtual Machine Manager window likely looks like this.
Do what it says. Double Click! Once the “Not Connected” message goes away, QEMU/KVM is ready to be built and virtualized on.
Right-click on QEMU/KVM and click “New”
This will create a New Virtual Machine, that will have QEMU/KVM as its hypervisor.
Remember that Windows 10 ISO we downloaded at the beginning of all this? We’re going to Browse to find it... and click “Choose Volume” when it’s selected.
This should automatically detect that it's a Windows 10 ISO and select the operating system type below. However, if it doesn't, uncheck the box "Automatically detect from the installation media / source" and find the operating system you are installing.
You can change your memory and CPU settings to whatever you prefer at this point. Remember to keep a little in reserve for the host system. I changed mine to 8192 MiB of memory and 3 CPUs.
For Step 4 set up your storage! 40 GiB should be considered a minimum for Windows 10, as Microsoft loves their bloatware!
Step 5 summarizes all the details you entered on steps 1-4.
If this pops up when you hit finish, YES, you need the Virtual Network active...
This brings you to the main configuration screen. Click “Begin Installation” at the top of the window... Windows 10 will begin installing.
Now we wait...
And wait...
And wait...
Windows 10 takes a long time to install compared to Linux distros doesn’t it?
Making External Connections
So now that Windows 10 is installed, you’ll likely need more than just the barebones operating system to do your work.
Would you like to use Zoom? You'll need a webcam for that, and that means you'll need to connect a USB device to your virtual machine and set up the webcam.
Need to move files quickly in and out of the virtual machine? Plug a USB stick into the computer, and set up a USB connection in the virtual machine so it can access the USB as a drive.
This all happens after Windows 10 is installed, and the virtual machine has been powered off and is inactive. Your hypervisor QEMU/KVM window should look like this.
Double-click on the Virtual Machine to bring up the Overview / Launcher window.
Here you can change all the initial setup options you made, as well as add new options! On the left-side pane at the bottom of the screen, you will see “+ Add Hardware”. Click that to set up a new USB connection.
I’m going to add my existing webcam from Debian into this virtual machine.
Click USB Host Device on the left side, and find the existing USB device on the right side that you want to add to the virtual machine.
As you can see from the picture, I’m adding my webcam, but the entry right below the webcam is for a SanDisk Cruzer USB stick that I could also add (that would work as an external drive in Windows).
When the device is added, you’ll see a new hardware connection in the left-side pane of the virtual machine overview screen.
Switch over to the Graphical Console...
...then click the Play button to start booting your virtual machine and test to make sure the changes you made are working correctly.