Abstract coming soon.
The year is 2012, and QEMU still faces a challenge the Linux kernel has already overcome: the BQL (Big QEMU Lock). This lock limits the scaling of userspace I/O paths and hurts their latency, preventing the use of QEMU for hard or even soft real-time workloads.
This talk will refresh the problem statement, analyze the achievements of the last year, and then look into current proposals for proceeding with breaking the BQL. Aspects to be covered include:
The presentation will be enriched with traps and pitfalls discovered via prototype implementations over the past year, and should trigger further discussions and ideas.
There is a lot of experience with KVM, and it has done quite well in some common virtualization use cases, like server consolidation, but we are observing increasing demand for big SMP VMs and resource-heavy enterprise workloads. These conditions exercise KVM in different ways, requiring dozens of vCPUs, terabytes of memory, and thousands of IOPS, sometimes within just one VM. This talk will discuss our experience analyzing multiple workloads in big VM configurations.
This talk is intended for both KVM developers, to discuss what we can do to improve large VM performance, and for KVM users, to show current best practices. The audience should have a good technical background in virtualization and ideally some familiarity with enterprise workloads.
This talk will provide an overview of the two new testing frameworks in QEMU: qtest and qemu-test. I will discuss how to construct tests using qtest, walking through an example, and will also cover how to write tests using qemu-test. Finally, the talk will outline my plans for requiring test cases from future contributions.
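To give a flavour of what a qtest-style test looks like, here is a rough sketch assuming libqtest helpers such as qtest_start(), outb()/inb() and qtest_add_func(); treat the exact names, arguments and port usage as illustrative rather than a definitive example from the talk:

    #include <glib.h>
    #include <stdint.h>
    #include "libqtest.h"

    /* Hypothetical test: read the RTC seconds register through guest PIO. */
    static void test_rtc_seconds(void)
    {
        outb(0x70, 0x00);              /* select the CMOS/RTC seconds register */
        uint8_t seconds = inb(0x71);   /* read it back via emulated port I/O */
        g_assert_cmpuint(seconds, <=, 0x59);   /* bounded whether BCD or binary */
    }

    int main(int argc, char **argv)
    {
        g_test_init(&argc, &argv, NULL);
        qtest_start("-display none");  /* boot a throwaway QEMU instance */
        qtest_add_func("/rtc/seconds", test_rtc_seconds);
        int ret = g_test_run();
        qtest_end();
        return ret;
    }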
This talk will target current and future QEMU contributors. The expectation is that the audience has pre-existing knowledge of QEMU internals.
This talk will dive into the architecture and design of oVirt Node with discussions of its major features. We’ll look at the different aspects of the image including deployment methods, extensibility, and the advantages and disadvantages of this packaging model. We’ll also explore some of the major recent additions, like plugin support and stateless operation, as well as some of the features on our current roadmap.
This talk is primarily an overview. It's geared toward people looking to deploy, use, or extend ovirt-node.
oVirt provides data center virtualization management capabilities. In this session Itamar Heim will review the architecture and components comprising oVirt. The architecture review will build on the oVirt overview session, focusing on the roles and interactions of the engine, UI, VDSM, node, etc.
Enlightenments are enhancements made to the operating system to help reduce the cost of certain operating system functions. Presently, all recent Microsoft OSes support Hyper-V enlightened I/O and hypervisor-aware kernels. A number of Hyper-V enlightenments, such as virtual APIC, spinlocks, and invariant TSC, can be implemented in KVM.
This presentation should be interesting to a wide audience, but is mostly targeted at developers.
Anthony Liguori contributed the QEMU Object Model (QOM) as new infrastructure for device modeling and inspection at the beginning of this year. This talk will highlight some of the changes this requires of device authors and provide an outlook on the new possibilities it offers over the former qdev. The focus of the talk will be my ongoing CPU remodeling: vision, achievements for v1.1 and v1.2, and next goals.
I assume that Anthony will say some words about QOM in his keynote. This presentation will not cover the why/how but rather the how-to and where-to for device authors in the status quo, as well as some DOs and DON'Ts concerning CPU*State for all contributors. Depending on upstream progress, this talk might also include a brief overview of the differences between softmmu and linux-user with respect to the CPU. I don't plan to go into x86 CPU hotplug details; that could well be covered by Igor/Eduardo in a separate talk.
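For orientation, a minimal sketch of what registering a device type with QOM can look like is shown below; it assumes the TypeInfo/type_register_static interfaces and a hypothetical "my-device" type, and header paths and callback details may differ between QEMU versions:

    #include "hw/qdev.h"   /* header layout differs between QEMU versions */

    typedef struct MyDeviceState {
        DeviceState parent_obj;   /* QOM: the parent object comes first */
        uint32_t level;
    } MyDeviceState;

    static void my_device_reset(DeviceState *dev)
    {
        MyDeviceState *s = (MyDeviceState *)dev;
        s->level = 0;
    }

    static void my_device_class_init(ObjectClass *klass, void *data)
    {
        DeviceClass *dc = DEVICE_CLASS(klass);
        dc->reset = my_device_reset;  /* class methods replace qdev info tables */
    }

    static const TypeInfo my_device_info = {
        .name          = "my-device",
        .parent        = TYPE_DEVICE,
        .instance_size = sizeof(MyDeviceState),
        .class_init    = my_device_class_init,
    };

    static void my_device_register_types(void)
    {
        type_register_static(&my_device_info);
    }

    type_init(my_device_register_types)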
This talk covers major network-related enhancements from oVirt 3.0 to the upcoming oVirt 3.2, starting from the host networking topology supported in 3.0 and moving up to the 3.2 topology, while going over the new drag-and-drop user interface, infrastructure, and features.
The current Linux bridging code is over 12 years old, but it has evolved into a major component of many virtualized cloud infrastructures. This talk will cover recently added features. The presentation will also announce several new security features that protect the network from hostile guests. Several other software networking solutions, like macvlan and Open vSwitch, will also be discussed.
This is a technical talk intended for developers familiar with networking and/or virtualization.
QEMU has supported live migration for several years now, but maintaining backward/forward migration compatibility between versions has been a challenge due to the close coupling of its network protocol to the data structures representing migratable guest state, which are prone to change as new features are added and code is refactored.
The QEMU Interface Description Language, QIDL, is a simple language that can be used to annotate data structures so that we can easily serialize them into arbitrary formats, including dynamic data structures which can be manipulated at runtime.
This talk provides an overview of QIDL, and explores how we can leverage the data structures it generates to improve live-migration compatibility by better decoupling the network protocol from QEMU's internal representations of guest state.
A deep dive into recently added storage features in oVirt (hot plug disk, live snapshot, storage live migration, shared disk, POSIX domains, NFSv4 and domain options, floating disks, direct LUN, multiple storage domains, etc.).
As early adopters we have seen KVM mature very rapidly, from almost its inception to the point that it now offers a very solid foundation for the datacenter and the cloud. That said, using KVM in the enterprise can be a rough ride at times, due to both KVM-specific issues and a virtualization ecosystem still very much in flux. In this talk we will discuss the types of problems we faced and how we can avoid or mitigate them in the future.
This talk will hopefully be of interest to both developers and IT architects.
In this session we cover two aspects of live migration:
We analyze the state of live migration performance:
This talk gives an overview of the state of the QEMU USB subsystem. What happened last year? What are the plans for the future? Where do we stand in terms of USB 3.0 support?
Postcopy live migration is yet another migration mechanism, one that allows users to change the execution host of a VM within one second while keeping visible disruption to a minimum. In addition, the whole migration process is basically shorter than with normal live migration. It will provide great benefits for load balancing and energy saving with VMs. We have developed a postcopy live migration implementation of production quality and have evaluated its characteristics. In particular, our implementation takes advantage of asynchronous page fault features, which cannot be utilized by the precopy approach.
In this talk, I will show new evaluation results of postcopy live migration and our analysis. The target audience is virtualization developers and advanced users who are looking for new features.
Come and discover oVirt first-hand during this practical lab session. oVirt community members will be on hand to help guide you through the oVirt management interface, running off your oVirt Live USB key. To use oVirt effectively during the lab, you will need a laptop with a 64-bit processor including AMD-V or VT-x virtualization extensions. We recommend a minimum of 4GB of memory for good performance of the live system.
This presentation first introduces what we have achieved using KVM's reverse mappings: fast dirty logging and a scalable algorithm for invalidating huge pages. The former is important for live migration and the latter is used by mmu_notifier.
Then, after explaining what the key to achieving these was, we will talk about what we can expect from reverse mappings and what we should care about to make our system scalable.
Finally, we present our idea of using reverse mappings more to improve the scalability of live migration. This will become more important when QEMU's dirty bitmap refactoring is completed.
Spice, the open source remote virtual desktop protocol, aims to provide a complete open source solution for interaction with virtualized desktop devices. As such, Spice is undergoing rapid development at the moment. This talk will look back at what has been achieved over the last year, and look forward to what the Spice team plans to work on for the coming year.
The talk is for developers and users who are using Spice, plan to deploy Spice in the future, or are interested in Spice in general. The audience is expected to be familiar with generic virtualization concepts, but no deep technical knowledge is required.
QEMU's default PC machine is currently based on a Pentium Pro era chipset, first released in 1996. It still continues to serve us quite well, but there are a number of limitations, especially in the PCI space. I am currently updating a patchset first brought forward by Isaku Yamahata to add a new machine model based on Intel's Q35 chipset. I will discuss the new features that Q35 introduces, including the topology, the chipset devices, and the PCI Express features (AER, ARI, hotplug, power management). I will provide an update on its status: testing, performance, and any remaining merge hurdles.
The intended audience is QEMU/KVM developers. I'd like to get them interested in the new chipset, and to suggest potential new development areas that Q35 opens.
Libguestfs is a C library that provides a way to access and modify virtual machine disk images. It uses QEMU and the Linux kernel, so it can manipulate just about any disk image: filesystems, partitioning schemes, LVM, Windows disks, and more. Above this layer are many specialized "virt-*" tools for carrying out specific tasks. In this talk, Richard Jones will give a live demonstration of libguestfs and the virt tools, and talk about the new features available in libguestfs 1.20.
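As a taste of the C API (not part of the talk itself), a minimal program using libguestfs to inspect an image might look like the following; "guest.img" is a placeholder and error handling is kept to a bare minimum:

    #include <stdio.h>
    #include <stdlib.h>
    #include <guestfs.h>

    int main(void)
    {
        guestfs_h *g = guestfs_create();
        if (!g)
            exit(EXIT_FAILURE);

        /* Attach the disk image read-only and boot the libguestfs appliance. */
        if (guestfs_add_drive_opts(g, "guest.img",
                                   GUESTFS_ADD_DRIVE_OPTS_READONLY, 1,
                                   -1) == -1 ||
            guestfs_launch(g) == -1)
            exit(EXIT_FAILURE);

        /* Let inspection find the guest's root filesystem, mount it, list /etc. */
        char **roots = guestfs_inspect_os(g);
        if (roots && roots[0] && guestfs_mount_ro(g, roots[0], "/") == 0) {
            char **files = guestfs_ls(g, "/etc");
            for (size_t i = 0; files && files[i]; i++)
                printf("%s\n", files[i]);
        }

        guestfs_close(g);
        return 0;
    }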
Historically most usage of virtualization has focused on running entire operating systems in virtual machines or containers. The libvirt-sandbox toolkit builds on libvirt, KVM & LXC, to provide a high level API and command line tools to facilitate the use of virtualization as a technology for creating secure application sandboxes, without the burden of maintaining additional OS installations. The talk will cover the architecture of the sandbox technology, the challenges faced in its design & implementation, use cases it can address and the scope for future development.
The talk is suitable for a broad audience, covering system administrators, application developers and virtualization platform developers. A basic understanding of virtualization and security concepts is assumed. The audience will learn what capabilities the API and tools provide and how they can be applied to their environment.
KVM autotest is a large set of functional and performance tests for KVM (both kernel and userspace). The design goals of the project were to provide infrastructure for extensive and systematic testing, and it has largely been considered a QA-only affair.
However, during the last couple of years, we've been working on bringing the benefits of this flexible test framework to developers, a fundamentally different use case. This required re-thinking the structure of the project.
This presentation aims to show the work that has been done to make the tests more approachable and usable for KVM developers:
Separation of tests from autotest core
We'll talk about what has been done and what's in the pipeline, with a demo.
Abstract coming soon.
This talk will present a high-level description of current work on virtio and vhost in general, with a focus on paravirtualized networking in particular.
The talk will start with a quick overview of paravirtualized networking in KVM. It will next describe new enhancements in this field developed over the last year, most of them performance-related.
The talk will include a description of upcoming challenges in enhancing paravirtualized networking in KVM. For a selected subset of the enhancements, the talk will include some background and motivation, an architecture-level view of the implementation, and a short description of the benefits to the user.
The talk is targeted at developers with a high-level understanding of KVM and networking, and an interest in their internals.
This talk introduces the three options for integrating with the oVirt engine: direct API calls, the SDK, and the command shell. The session includes a live demo of SDK and CLI usage.
In this session we will give an overview of Deltacloud and show how one can use it to work with standard cloud APIs such as EC2 and CIMI on top of oVirt. We will describe the motivation and show some examples of using EC2 and CIMI to perform basic operations on top of oVirt. Relevant audience: users and integrators.
Rik van Riel and Andrea Arcangeli will go over the KVM memory management changes from the last year, as well as possible changes for the next year. Topics include THP, ballooning, NUMA and more. The goal is a shorter presentation, with plenty of time for open discussion.
The oVirt web administration application (WebAdmin) is a powerful tool for managing the various assets of a virtualization infrastructure. In addition to the existing functionality, there can be times when administrators want to expose additional features of their infrastructure through the WebAdmin user interface.
In this session, Vojtech will present the concept and implementation of UI plugins, an upcoming oVirt feature that allows third-party developers to extend the WebAdmin user interface and related functionality. UI plugins integrate with WebAdmin directly on the client through the JavaScript programming language, which makes the plugin infrastructure simple and flexible.
Attend this session to learn more about UI plugins, get an update on the current implementation, and see a live demo showing how to write and deploy a custom plugin. This session is intended for anyone interested in extending oVirt WebAdmin functionality.
QEMU's original memory API was complicated, hard to use, incorrect, insecure, did not scale, and consumed a lot of memory. None of this was particularly problematic for the original use cases of emulating embedded boards, or perhaps running a virtualized desktop system to use "the other OS". However, for enterprise and cloud users running hundreds of untrusted guests on a single host, the API and its implementation present a problem.
This talk will cover the new QEMU memory API, its design considerations, and how it addresses the limitations of the old implementation.
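For readers who have not seen the new API, a hedged sketch of registering an MMIO region is shown below; MyDevState, the register layout and the base address are hypothetical, and exact signatures (e.g. hwaddr vs. target_phys_addr_t, the later owner argument) have shifted between QEMU versions:

    typedef struct MyDevState {
        MemoryRegion iomem;
        uint32_t regs[16];
    } MyDevState;

    static uint64_t mydev_read(void *opaque, hwaddr addr, unsigned size)
    {
        MyDevState *s = opaque;
        return s->regs[(addr >> 2) & 0xf];   /* dispatch is done by the memory core */
    }

    static void mydev_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
    {
        MyDevState *s = opaque;
        s->regs[(addr >> 2) & 0xf] = val;
    }

    static const MemoryRegionOps mydev_ops = {
        .read = mydev_read,
        .write = mydev_write,
        .endianness = DEVICE_NATIVE_ENDIAN,
    };

    /* During device init: carve out a 4 KB MMIO window and attach it. */
    static void mydev_init_mmio(MyDevState *s, hwaddr base_addr)
    {
        memory_region_init_io(&s->iomem, &mydev_ops, s, "mydev-mmio", 0x1000);
        memory_region_add_subregion(get_system_memory(), base_addr, &s->iomem);
    }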
This talk will describe how oVirt support was added to GNOME Boxes, a Vala/C application. It will present the libgovirt library, a GObject library wrapping the oVirt REST API, and then expand on the work that needed to be done in Boxes. Finally, we will talk about future improvements that can be made to this support.
The audience should have basic development experience, as the talk will describe my experience wrapping the oVirt REST API in C and then using it in a Vala application.
This talk should be about 20 minutes long.
An introduction to, and samples of, oVirt custom hooks for extending and changing the behavior of oVirt/VDSM.
The block layer is one of QEMU's most complex subsystems, and it has seen very high and even increasing development activity recently. This talk will give an overview of the features of the block layer and its basic objects, highlighting the changes since last year and outlining some plans for the future.
It will span the whole area from guest devices (IDE, AHCI, virtio-blk/scsi) to block drivers implementing different image formats and protocols (especially qcow2) and background jobs operating on block devices, referring to the more detailed talks that may be given on some of the topics.
When communicating with an emulated device from a guest, we usually use MMIO or PIO accesses to program an operation, and DMA and interrupts for the back channel.
DMA is fast, as QEMU has full access to the guest's memory. Interrupts have been accelerated before using the in-kernel interrupt controller. But how about port I/O? Is PIO fast when exiting to user space? Is MMIO fast when exiting to user space? How much performance do we lose by going through user space?
This talk will show performance numbers on the overhead that handling PIO/MMIO incurs on each read/write. It will also show methods for avoiding an exit to QEMU on every access.
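For context, the user-space half of such an exit boils down to a run loop over the KVM ioctl API, roughly as sketched below; handle_pio() and handle_mmio() are hypothetical helpers standing in for the VMM's device dispatch:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/kvm.h>

    /* Hypothetical helpers standing in for the VMM's device emulation. */
    void handle_pio(uint16_t port, int is_write, void *data, int size, int count);
    void handle_mmio(uint64_t gpa, void *data, int len, int is_write);

    void vcpu_loop(int vcpu_fd, size_t mmap_size)
    {
        struct kvm_run *run = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, vcpu_fd, 0);
        for (;;) {
            ioctl(vcpu_fd, KVM_RUN, 0);            /* enter the guest */
            switch (run->exit_reason) {
            case KVM_EXIT_IO:                      /* guest in/out instruction */
                handle_pio(run->io.port,
                           run->io.direction == KVM_EXIT_IO_OUT,
                           (uint8_t *)run + run->io.data_offset,
                           run->io.size, run->io.count);
                break;
            case KVM_EXIT_MMIO:                    /* access to emulated MMIO */
                handle_mmio(run->mmio.phys_addr, run->mmio.data,
                            run->mmio.len, run->mmio.is_write);
                break;
            default:
                break;                             /* other exit reasons elided */
            }
        }
    }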
SLA@oVirt is quite challenging. Allowing users to have policies that prioritize virtual machines, limit CPU and RAM consumption, and allow overcommitment is not an easy task. Now throw in VM affinity and VM high availability and see what we're up against.
In this talk, oVirt users, developers and others will get a review of existing SLA and scheduling elements in today's oVirt, as well as new features added and being added into current and future versions of oVirt. Relevant architecture and API changes across oVirt project will be discussed, and feedback is more than welcome.
QEMU (and hence KVM) has long supported thin provisioning, through both sparse raw files and image formats such as qcow2. However, there are several limitations in the implementation of this feature, which make it much less effective as the lifetime of a virtual machine image grows. This talk will cover how thin provisioning can help both virtual machine and host administrators, as well as when/how it can be used now. It will also present a plan for making this feature more generally, effectively and easily usable.
This talk is aimed at system administrators and developers. While relevant concepts will be introduced during the talk, some familiarity with storage technology is expected.
The VFIO userspace driver interface is now available in Linux v3.6 release candidates, and the matching QEMU driver will be merged into the QEMU 1.3 release. By the time of this talk, VFIO will be available in the latest stable kernels and the QEMU development tree. VFIO breaks physical device assignment free from KVM, making it available to more architectures, more platforms and more device types. In this talk we'll take a high-level look at VFIO and IOMMU grouping, with a focus on how to make use of it, the restrictions and benefits it adds, and how it compares to KVM PCI device assignment in setup, functionality, and performance.
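For a sense of what the interface looks like, the basic user-space flow roughly follows the kernel's VFIO documentation, as in the sketch below; the group number, the device address and the guest_ram/ram_size variables are placeholders, and error handling is omitted:

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    int assign_device(void *guest_ram, uint64_t ram_size)
    {
        int container = open("/dev/vfio/vfio", O_RDWR);
        int group = open("/dev/vfio/26", O_RDWR);   /* the device's IOMMU group */

        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
        ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

        /* Map guest RAM into the device's IOVA space via the IOMMU. */
        struct vfio_iommu_type1_dma_map map = {
            .argsz = sizeof(map),
            .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
            .vaddr = (uint64_t)(uintptr_t)guest_ram,
            .iova  = 0,
            .size  = ram_size,
        };
        ioctl(container, VFIO_IOMMU_MAP_DMA, &map);

        /* Finally get a file descriptor for the device itself. */
        return ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");
    }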
With the fast adoption of Infrastructure-as-a-Service stacks, many are beginning to realize that virtual networking is needed to automate, self-provision, and scale the network layer. There are two oft-mentioned approaches to network virtualization: an overlay-based virtual networking model and an OpenFlow-controlled switch fabric model.
This session will explain both models and make the case that an overlay-based approach is best suited for IaaS networking.
This talk gives an overview of GlusterFS for scale-out storage management of KVM disk images. GlusterFS creates network-attached storage on commodity hardware, including features for elastically adding/removing nodes and geo-replication. Recent improvements in GlusterFS and KVM make it easy to run VM disk images on GlusterFS volumes. We also focus on the GlusterFS architecture and how it could be extended for virtualization-specific needs.
Previous experience with KVM or GlusterFS is not necessary, but a general understanding of virtualization and disk images is required. Users of NFS and iSCSI may be particularly interested in this talk to see how GlusterFS approaches networked storage differently and is uniquely flexible.
Recently quite a lot of development has been done to implement VFIO, an architecture-independent device passthrough interface for QEMU.
This infrastructure has been merged into Linux 3.6 and QEMU upstream.
Its primary goal is to support SR-IOV with QEMU, so that real device passthrough will be available on all architectures.
However, some devices, like those handled by the megasas driver, already have a semi-virtualized interface to the hardware. So with VFIO we can lift that hardware interface directly into the guest with just minimal processing. This should give us near bare-metal performance.
This talk will give an overview of VFIO and how megasas can operate on top of it.
We are enabling the new VT features for interrupt/APIC virtualization in KVM. Although we have reduced the unique overheads of virtualization over time, we still see some cases where virtualization of interrupts and the APIC is a major source of overhead and latency. The new features will eliminate or reduce the overhead of a significant portion of the VM exits associated with interrupt handling for virtualization. This talk explains the new VT extensions in detail, including 1) APIC register virtualization, 2) virtual interrupt delivery, and 3) posted-interrupt processing, and then we discuss how we enable those in KVM.
The audience is expected to know about the internals of KVM and x86 virtualization, especially I/O and interrupt handling. Attendees interested in I/O-intensive or real-time virtualized systems should gain good insights from this talk.
Over the last year, QEMU's support for live block operations has grown to encompass atomic snapshots of multiple disks, merging of snapshots via block streaming and block commit, and block mirroring support.
While this talk is suitable for technical end-users, it deals with features that are primarily accessible by means of QAPI and QMP commands. It will focus on the snapshot and merging commands, how these operations are performed, and their limitations. Block mirroring will also be covered in similar detail. In addition, this talk will feature a demonstration of live atomic snapshots of multiple devices, and subsequent live merging of the resulting images by means of block commit and block streaming.
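For readers unfamiliar with the QMP side, the sketch below shows what an atomic two-disk snapshot request can look like on the wire, driven from a small C program; the socket path, device names and snapshot file names are placeholders, and real tooling would parse the JSON replies instead of discarding them:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strcpy(addr.sun_path, "/tmp/qmp.sock");      /* placeholder QMP socket */
        connect(fd, (struct sockaddr *)&addr, sizeof(addr));

        const char *cmds[] = {
            "{ \"execute\": \"qmp_capabilities\" }",
            /* Atomic snapshot of two disks via a single 'transaction' command. */
            "{ \"execute\": \"transaction\", \"arguments\": { \"actions\": ["
            "  { \"type\": \"blockdev-snapshot-sync\", \"data\": {"
            "      \"device\": \"drive-virtio-disk0\","
            "      \"snapshot-file\": \"/images/disk0-snap.qcow2\" } },"
            "  { \"type\": \"blockdev-snapshot-sync\", \"data\": {"
            "      \"device\": \"drive-virtio-disk1\","
            "      \"snapshot-file\": \"/images/disk1-snap.qcow2\" } } ] } }",
        };
        char buf[4096];
        for (size_t i = 0; i < sizeof(cmds) / sizeof(cmds[0]); i++) {
            write(fd, cmds[i], strlen(cmds[i]));
            read(fd, buf, sizeof(buf));    /* greeting/reply, discarded here */
        }
        close(fd);
        return 0;
    }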
Gluster management is integrated into oVirt. This session will cover Gluster basics and introduce using Gluster as a storage backend from oVirt.
Sharing the physical devices on the processor between KVM guests and the host without significant performance impact continues to be a challenge. Queues are one of the fundamental building blocks for providing a pass-through interface and sharing a physical device. QorIQ processors support multiple transmit and receive queues in hardware. These queues provide the interface to different hardware accelerators (e.g. crypto engine, pattern matching engine) and I/O ports (e.g. Ethernet, RapidIO). Each such queue can be independently assigned to a virtual machine to provide direct access to the physical device. In this paper we describe the use of this architecture for sharing hardware accelerators and I/O ports with device pass-through. In addition, we describe how this architecture enables the creation of virtual Ethernet ports for efficient inter-VM communication.
1) A very short overview of storage choices in KVM
+ IDE, AHCI, SCSI, virtio-scsi, virtio-blk, device assignment, network based (glusterfs, sheepdog, etc.)
+ performance comparison (esp. virtio-scsi vs. virtio-blk)
+ why improve virtio-blk
2) Host side improvement for virtio-blk
+ userspace based virtio-blk solution
- QEMU current vs. QEMU data-plane vs. kvm tool's virtio-blk
+ vhost based virtio-blk solution
- using existing kernel aio interface
- using new in kernel aio interface
- using in kernel bio interface
+ userspace solution vs. vhost solution
3) Guest side improvement for virtio-blk
+ bio-based virtio-blk
+ bio-based vs. request-based virtio-blk
4) Future work
+ multiqueue virtio-blk
This talk will focus on:
Multiqueue networking for KVM guests was introduced to eliminate the bottleneck of the current single queue model and to scale performance for SMP guests running on hosts with multiqueue NICs. A multiqueue-capable KVM guest will have higher network performance compared to a single queue one. This presentation discusses the design and implementation of extending the kernel/QEMU components of both host and guest to be multiqueue capable. Performance numbers and pending issues will also be covered in the talk.
This talk targets developers, customers and hardware vendors who are interested in high-performance virtualized networking. They can expect a high-performance solution based on multiqueue and virtio-net. Some basic knowledge of KVM, virtio and high-performance networking is required for this talk.
Microsoft has developed an extensive framework for certifying hardware and device drivers for Windows.
In many cases guests running WHQL tests can be used as a great test case for QEMU and/or underlying host subsystems.
The talk will go over the WHQL certification process and dive deep into the technical details of the existing tests. As part of the presentation I would welcome an open discussion regarding what can be learned from those tools in order to benefit the stability and robustness of open source software.
An overview of the various infrastructure tools and services available in the ovirt.org domain. We’ll discuss various aspects of how the different tools are leveraged, with a heavy focus on the use of Jenkins for build and test automation, Gerrit for source code management, and Puppet for configuring the various servers for different uses. We’ll also discuss how we grew the infrastructure from just a couple of EC2 hosts to where we are today, and where we’re planning to go in the future.
This is primarily geared toward people interested in how we go about managing and coordinating the various pieces of infrastructure in the oVirt site. It will range from a high-level discussion of what we're trying to accomplish to diving into some of the technical details. I'd like this talk to be very interactive, but will be prepared to present in the event there aren't a lot of questions.
ARM has introduced hardware virtualization extensions to the Cortex-A15 and Cortex-A7 cores. This talk will briefly introduce the ARM Virtualization Extensions and then cover the core ARM kernel work related to KVM/ARM, including modifying the boot process for Linux to enter the new Hyp processor mode. The talk will cover challenging implementation aspects specific to the ARM port, such as in-kernel MMIO-instruction decoding, Thumb-2 support, CP15 (control register) emulation and support, second-stage page table management, identity mappings, and support for both ARM and Thumb-2 instruction sets. Further, the talk will cover the virtualization support for the ARM Generic Interrupt Controller (VGIC) and Generic Timers.
The talk is highly technical and intended for kernel and KVM developers. The audience does not require prior knowledge of the ARM architecture, and we plan to show a live demo of the system.
Modularizing and aggregating physical resources in a datacenter depends not only on low-latency networking, but also on software techniques to deliver such capabilities. In the session we will present some practical features and results of our work, as well as discuss implementation details. Among these features are delivering high-performance, transparent, and partially fault tolerant memory aggregation; and reducing the downtime of live migration using post-copy implementations.
We will present and discuss methods of transparently integrating with the MMU at the OS level, without impacting a running VM; and feature some performance benchmarks, including low overhead, scalability, and high bandwidth consumption. In addition, we demonstrate methods of seamlessly handling fail-overs, using RAID-1 features; and experimental ideas and benchmarks regarding page prefetching in such a system.
This talk will dive into the method and implementation of automated testing with oVirt Node. We’ll discuss the challenges and problems with testing in an automated fashion. We’ll then explore how the challenges have been met and overcome. We’ll dive into the framework and design of the various test cases and how they can be run on both physical hardware and virtual machines.
IBM mainframes use a unique I/O mechanism different from those on other architectures: channel I/O. This talk will present an overview of the basic concepts (subchannels, channel paths, channel programs) and how they are exploited today by Linux. It will also discuss the challenges of modelling these concepts in light of the existing KVM infrastructure, and how to build a virtual channel subsystem that offers the facilities needed by Linux.
The target audience is developers and other technically-minded people who would like to spend half an hour learning more about what makes mainframes different and interesting. Knowledge of the basic workings of KVM and QEMU is required; familiarity with mainframes is not.
Having recently passed its second birthday, OpenStack is a relatively new entrant into the world of open-source virtualization. Since its announcement, it has gained incredible traction and momentum, with hundreds of developers contributing to each release. OpenStack's success, and the potential for it to be deployed pervasively at massive scale, particularly in public clouds, presents an opportunity for KVM's continued growth and adoption.
Mark, a former KVM developer, will introduce the OpenStack project, its architecture and current status. Mark will then talk in some detail about how OpenStack currently uses KVM and libvirt before setting the scene for a discussion about how OpenStack could adopt more of KVM's unique features to the benefit of both projects.
oVirt has a comprehensive API and SDK that provide an interface to ovirt-engine, but no such API exists at the node level. This presentation describes libvdsm, which replaces the current internal protocol to provide a stable and supportable API. Libvdsm can provide many benefits to oVirt, including better modularization, integration of third-party add-ons, standalone VDSM deployments, and a foundation for REST and message queue brokers. The design and implementation will be discussed, with specific attention given to design choices and their impact on the supportability and usability of the API. Libvdsm is designed to evolve; examples of managing backwards compatibility, capabilities, new features, and deprecation will be presented. This work is under active development and the presenter will report on progress and next steps.
Boxes is a new GNOME application for easily handling other systems: local virtual machines and remote desktops. Local machines are powered by KVM and SPICE, and you can access remote desktops via libvirt, SPICE and VNC. During this talk I will demonstrate the latest version and describe the design of Boxes. I will discuss the importance of Boxes as part of the GNOME project. Finally, a list of missing features and the roadmap for the next cycle will be presented.
The talk is intended for Linux users, system administrators and developers alike. The audience is expected to have a very basic understanding of, and experience with, virtual machines, Linux and GNOME. Experience with virtual machine managers like virt-manager will be an advantage but is not required.
This talk will bring an update on the status of the KVM port to IBM POWER server machines. Additionally, I will describe the POWER I/O architecture, specifically around PCI, how it differs from x86, our support for paravirtualized IOMMUs, how our Enhanced Error Handling infrastructure works, and the challenges related to integrating this with KVM.
Learn about VDSM internals.
oVirt engine internals.
This session will review end-to-end the implementation of storage live migration.
Introduction to ovirt GWT UI internals.
A talk diving into the details of using the oVirt Node framework with projects other than oVirt. We’ll dive into the details of how oVirt Node differs depending on the environment in which it is used. There will be a heavy focus on how it is or can be used with OpenStack as the IaaS platform.
The VM scheduler is the heart of a private cloud: it selects hosts for your virtual machines, decides about VM migrations and balances the load on the hosts. This is not a simple task, and the performance and cost of the virtualization system largely depend on the decisions this component makes. JBoss Drools can help us make the decision logic more readable and easier to extend by specifying rules rather than trying to rewrite an existing algorithm. This approach promises better performance for your private cloud through better decisions.
This BoF will look into how we can improve the process of troubleshooting an oVirt environment. The session will start with an overview of techniques currently employed downstream and the issues faced with these techniques, before branching out into possible improvements that can be made upstream within oVirt.
Given the nature of this BoF, both users and developers are welcome to attend and share their ideas.