DevConf.CZ 2020
Storage / Ceph / Gluster
Friday, January 24
 

10:30am CET

Using Zoned Block Devices in Fedora
Shingled Magnetic Recording (SMR) hard drives are a new type of "zoned" storage device; they behave differently from standard block storage devices but provide greater capacity per drive. While the different interface presents some challenges, these devices can be very useful for "near-line" large object storage applications.

In the past year, several improvements in the Linux kernel and utility programs have made it easier to use zoned block devices. This session provides a practical overview on how to use SMR drives, focusing on the developer experience in Fedora, while concentrating on the mechanics and restrictions of using a zoned block storage device.
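The central restriction the session covers can be sketched with a toy model: in a host-managed zone, writes must land exactly at the zone's write pointer, and a zone can only be rewritten after an explicit reset. This is purely illustrative; real zoned devices are driven through the kernel's zoned block device interfaces (e.g. zone report/reset ioctls or libzbd), and the names here are invented.

```python
# Toy model of one zone on a host-managed SMR drive (illustrative only).

class Zone:
    def __init__(self, start, length):
        self.start = start          # first LBA of the zone
        self.length = length        # zone size in blocks
        self.write_pointer = start  # next writable LBA

    def write(self, lba, nblocks):
        """Writes must start at the write pointer and fit in the zone."""
        if lba != self.write_pointer:
            raise IOError("unaligned write: zones only accept sequential writes")
        if self.write_pointer + nblocks > self.start + self.length:
            raise IOError("write exceeds zone capacity")
        self.write_pointer += nblocks

    def reset(self):
        """A zone can only be rewritten after an explicit zone reset."""
        self.write_pointer = self.start

zone = Zone(start=0, length=256)
zone.write(0, 16)      # OK: starts at the write pointer
zone.write(16, 16)     # OK: sequential
try:
    zone.write(0, 16)  # rewriting block 0 without a reset fails
except IOError as e:
    print(e)
zone.reset()
zone.write(0, 16)      # OK again after the reset
```

Filesystems and device-mapper targets that support zoned devices (e.g. f2fs, dm-zoned) exist precisely to hide this append-only discipline from applications.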

Speakers

Bryan Gurney

Senior Software Engineer, Red Hat
I'm a software engineer on Virtual Data Optimizer, a Linux kernel module that provides block-level deduplication and compression. I specialize in testing, performance, and advanced support of VDO, as well as hardware performance and behavior.



Friday January 24, 2020 10:30am - 10:55am CET
E104 Faculty of Information Technology Brno University of Technology, Božetěchova, Brno-Královo Pole, Czechia

11:00am CET

Estimating dm-vdo storage savings
dm-vdo is a device mapper target that provides block level
deduplication, compression, and thin provisioning. It can be difficult
to predict how much storage can be saved with dm-vdo because
deduplication and compression are very data dependent. Now there is an
open source vdoestimator tool that can be used to scan a filesystem or
block device for duplicate and compressible blocks using the same
indexing and compression technology used by dm-vdo. While there are
general guidelines for provisioning a dm-vdo volume for some common
workloads, the vdoestimator may help to get a better estimate for
specific data.

This session will demonstrate how to use the vdoestimator tool and
interpret the results with some examples. A brief overview of how
dm-vdo and the vdoestimator work will help to clarify how best to do
that.
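The scanning idea the abstract describes can be sketched in a few lines: hash each fixed-size block to spot duplicates, and trial-compress each unique block to gauge compressibility. The function name and the 4 KiB block size here are illustrative assumptions; the real vdoestimator uses dm-vdo's own indexing and compression code, not SHA-256 and zlib.

```python
# Toy dedup/compression savings estimator (not the real vdoestimator).
import hashlib
import zlib

BLOCK_SIZE = 4096

def estimate_savings(data: bytes):
    seen = set()
    stored = 0   # bytes we would actually store
    total = 0    # logical bytes scanned
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        total += len(block)
        digest = hashlib.sha256(block).digest()
        if digest in seen:
            continue        # duplicate block: already stored once
        seen.add(digest)
        # Store the compressed form only if it is actually smaller.
        stored += min(len(block), len(zlib.compress(block)))
    return total, stored

# Dup-heavy sample data: eight identical blocks plus one patterned block.
data = b"A" * BLOCK_SIZE * 8 + bytes(range(256)) * 16
total, stored = estimate_savings(data)
print(f"logical {total} B, stored ~{stored} B, "
      f"savings ~{100 * (1 - stored / total):.0f}%")
```

As the abstract stresses, results are highly data dependent: the same code run on already-compressed media files would report almost no savings.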

Speakers

John Wiele

Senior Developer, Red Hat
Software developer since time immemorial.



Friday January 24, 2020 11:00am - 11:25am CET
E104 Faculty of Information Technology Brno University of Technology, Božetěchova, Brno-Královo Pole, Czechia

5:30pm CET

Using LVM writecache for faster write performance
LVM recently introduced a second form of caching focused on improving write performance to a volume. LVM has previously supported a "hot spot" cache that is used for both reading and writing. The hot spot cache (using the dm-cache kernel component) adjusts cache content over time so that the most used parts of a volume are kept on a faster device. With the new writecache feature (using the dm-writecache kernel component), all writes go to a faster device (PMEM or SSD) prior to being written to the slower primary device. The writecache is especially beneficial for programs like databases where low commit latency is needed. The session will describe the differences between the two forms of caching, and demonstrate how to configure each using LVM. Attendees should be familiar with basic LVM terms and usage.
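The write-back behavior described above can be modeled in a few lines: every write is acknowledged once it hits the fast device, and a later flush migrates the data to the slow primary. This is a conceptual sketch only (class and method names are invented); the actual feature is configured with LVM commands on real devices, and dm-writecache operates on blocks, not a Python dict.

```python
# Toy model of the dm-writecache idea: low-latency commits to a fast
# device, with deferred writeback to the slow primary device.

class Writecache:
    def __init__(self):
        self.fast = {}   # pending (dirty) writes on the PMEM/SSD device
        self.slow = {}   # the slow primary device

    def write(self, block, data):
        """Commit touches only the fast device, so latency stays low."""
        self.fast[block] = data

    def read(self, block):
        """Reads must check the fast device first for unflushed data."""
        return self.fast.get(block, self.slow.get(block))

    def flush(self):
        """Writeback: migrate dirty blocks down to the slow device."""
        self.slow.update(self.fast)
        self.fast.clear()

cache = Writecache()
cache.write(7, b"commit record")   # acknowledged from the fast device
assert 7 not in cache.slow         # not yet on the primary device
cache.flush()                      # background writeback
assert cache.slow[7] == b"commit record"
```

The contrast with dm-cache falls out of the model: a hot-spot cache would also populate `fast` on reads and only promote frequently accessed blocks, whereas writecache unconditionally absorbs every write.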

Speakers

Nikhil Kshirsagar

Red Hat
SSME in Storage Technologies within CEE at Red Hat



Friday January 24, 2020 5:30pm - 5:55pm CET
A113 Faculty of Information Technology Brno University of Technology, Božetěchova, Brno-Královo Pole, Czechia
 
Saturday, January 25
 

2:00pm CET

MD cluster raid
MD Cluster RAID is host-based RAID. It avoids single points of failure and provides highly available storage for an HA cluster.
In this presentation, I'll cover the following points:
1) The coming availability of MD Cluster RAID
2) What it is and what it can be used for (e.g. active/active cluster file systems like GFS, multi-site high availability)
3) How it works and a bit about the technologies it relies on (e.g. DLM)
*) this may include some explanation of what each layer needs to protect in a cluster setting
4) History: comparisons to cmirror and how/why MD Cluster RAID evolved
5) Perhaps a demo, depending on time

Speakers

Xiao Ni

Red Hat, Beijing
2009-2010: worked on website development. 2010-2013: learned kernel internals and worked on cache block devices. 2014-2016: joined Red Hat as a QE. 2017-2019: working on mdadm/md.



Saturday January 25, 2020 2:00pm - 2:25pm CET
A113 Faculty of Information Technology Brno University of Technology, Božetěchova, Brno-Královo Pole, Czechia

2:30pm CET

Protecting your resources with sanlock
Have you ever wondered how exactly sanlock protects your resources, e.g. disks,
to prevent multiple VMs from writing to the same disk? Have you used sanlock
and been afraid of changing sanlock timeouts because you have no idea what they
actually do? In this talk we will outline the ideas sanlock is built upon,
namely Disk Paxos and delta leases. This will clarify how sanlock works, as
well as some of its configuration parameters.
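The delta-lease half of that design can be sketched compactly: each host periodically renews a timestamp in its own slot on the shared disk, and a host whose slot has not changed within the expiry timeout is presumed dead, so its resource leases can be reclaimed. The class names and timing constants below are illustrative, not sanlock's real io_timeout-derived values.

```python
# Sketch of the delta-lease liveness idea underlying sanlock.
import time

EXPIRY_TIMEOUT = 8  # slot unchanged this long => host presumed dead (illustrative)

class DeltaLeaseArea:
    """One renewal slot per host on the shared disk."""
    def __init__(self):
        self.slots = {}  # host_id -> timestamp of last renewal

    def renew(self, host_id, now=None):
        """Each live host rewrites its own slot periodically."""
        self.slots[host_id] = time.monotonic() if now is None else now

    def is_live(self, host_id, now=None):
        """Other hosts watch the slot; a stale slot means the host is dead."""
        now = time.monotonic() if now is None else now
        last = self.slots.get(host_id)
        return last is not None and (now - last) < EXPIRY_TIMEOUT

area = DeltaLeaseArea()
area.renew("host1", now=0.0)
print(area.is_live("host1", now=5.0))   # True: renewed within the timeout
print(area.is_live("host1", now=9.0))   # False: lease has expired
```

This is also why the talk's point about timeouts matters: the renewal interval, expiry timeout, and the host's own watchdog must stay consistent, or a host could be declared dead while it can still write.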

Speakers

Vojtech Juranek

Developer, Red Hat
Works at Red Hat on storage part of oVirt project and is a contributor to various open source projects.



Saturday January 25, 2020 2:30pm - 2:55pm CET
A113 Faculty of Information Technology Brno University of Technology, Božetěchova, Brno-Královo Pole, Czechia

3:00pm CET

Explicitly Supporting Stretch Clusters in Ceph
Ceph is an open source distributed object store, network block device, and file system designed for reliability, performance, and scalability. While Ceph is designed for use in a single data center, users have deployed “stretch” clusters across multiple data centers for many years, and deploying Ceph to back Red Hat’s OpenShift Container Storage product required us to support that workload explicitly and well — in particular, in the face of netsplits.
This requires improvements to our “monitor” leader elections and to the “OSD” peering process to keep data available without breaking our data integrity guarantees. This talk presents the whole cycle of that work from an algorithm and programmer perspective: the dangers we identified, the changes we needed, the architecture changes to support faster test iteration and coding, and the results.

Speakers

Gregory Farnum

Software Engineering Manager, IBM
Greg Farnum has been in the core Ceph development group since 2009. Greg has done major work on all components of the Ceph ecosystem, previously served as the CephFS tech lead, and manages IBM’s CephFS engineering team.



Saturday January 25, 2020 3:00pm - 3:25pm CET
A113 Faculty of Information Technology Brno University of Technology, Božetěchova, Brno-Královo Pole, Czechia

3:30pm CET

Petabyte CephFS for Satellite Data Processing
Imagine receiving about 7 terabytes of Earth observation satellite data every day that needs to be stored somewhere.
Imagine that this data needs to be processed every day.
Imagine that you are not always in control of the source code of the processing algorithms.
Imagine that this data adds up to a huge total of 30 petabytes.

There are not many storage solutions out there that can fulfill these demands.

During the session I will present our approach in terms of:
- evaluating solutions
- evaluating hardware
- load and performance testing the solution
- bringing the beast to production and keeping it there

I will also tell you something about:
- problems we encountered (and their solutions)
- our concrete usage patterns
- security implications and concerns

Speakers

Martin Strigl

Head of Operations, Catalysts - a Cloudflight Company
After buying my first CD pack of SUSE Linux 4.3 back in 1996 and compiling almost every interesting bit of software (including X) from scratch, I switched distribution and quickly fell in love with Red Hat Linux, using it regularly beginning with Zoot. Ever since then I have tried to contribute...



Saturday January 25, 2020 3:30pm - 4:25pm CET
A113 Faculty of Information Technology Brno University of Technology, Božetěchova, Brno-Královo Pole, Czechia

4:30pm CET

Evolution of Geo-replication in Gluster
As data becomes more and more important in the world, we can't afford to lose it, even in a natural calamity. We will see how Geo-replication came in to solve this problem and how it has evolved over time.
Through this session, users will learn how easy it is to set up Geo-replication for Gluster and use it to back up their data with minimal understanding of storage and Linux. Basic Gluster knowledge will make it even easier.

Speakers

Hari Gowtham Gopal

Software Engineer, Red Hat
Hari Gowtham is a Software Engineer at Red Hat working on GlusterFS, a distributed filesystem. He has worked on a number of components in and around Gluster, is an active community member, and also takes care of release management for Gluster. To know more: https://github.com/harigowtham...



Saturday January 25, 2020 4:30pm - 4:55pm CET
A113 Faculty of Information Technology Brno University of Technology, Božetěchova, Brno-Královo Pole, Czechia

4:30pm CET

Reworking Observability In Ceph
Jaeger and OpenTracing provide ready-to-use tracing services for distributed systems and are becoming a widely used de-facto standard because of their ease of use. By making use of these libraries, Ceph can reach a much-improved monitoring state, with visibility into its background distributed processes. This would in turn improve the way Ceph is debugged, making Ceph more transparent in identifying abnormalities.
In this session, the audience will learn about using distributed tracing in large-scale distributed systems like Ceph, get an overview of Jaeger tracing in Ceph, and see how it can be used for debugging Ceph.
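The data model behind this is simple: named, timed spans nested into a trace. The sketch below is a toy stand-in to show the shape of that model, not the Jaeger client API that Ceph actually links against; the span names and structure are invented.

```python
# Minimal illustration of the span/trace model used by Jaeger/OpenTracing.
import time
from contextlib import contextmanager

finished_spans = []  # stand-in for the spans a tracer would report

@contextmanager
def span(operation, parent=None):
    """Record a named, timed unit of work; nesting forms a trace."""
    record = {"op": operation, "parent": parent, "start": time.monotonic()}
    try:
        yield record
    finally:
        record["duration"] = time.monotonic() - record["start"]
        finished_spans.append(record)

# Trace a nested operation, the way an OSD request could wrap its sub-steps
# (the operation names here are hypothetical):
with span("osd_op") as parent:
    with span("journal_write", parent=parent["op"]):
        time.sleep(0.01)

for s in finished_spans:
    print(s["op"], "parent:", s["parent"])
```

A real tracer additionally propagates span context across process and network boundaries, which is exactly what makes the technique useful for a distributed system like Ceph.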

Speakers

Deepika Upadhyay

Software Engineer, Red Hat
Deepika Upadhyay was an Outreachy intern in summer 2019, during which she worked on adding Jaeger and OpenTracing (distributed tracing libraries) to Ceph; she is now continuing her work on Ceph as a full-time employee.



Saturday January 25, 2020 4:30pm - 4:55pm CET
D0206 Faculty of Information Technology Brno University of Technology, Božetěchova, Brno-Královo Pole, Czechia

5:00pm CET

BitLocker disk encryption on Linux
Working with encrypted devices in both GNU/Linux and Microsoft Windows requires installing additional tools on one or both of the systems. This will change with the addition of support for BitLocker, the full disk encryption technology for Microsoft Windows, to cryptsetup, which is currently being worked on. It is possible to support BitLocker devices on Linux using existing technologies like Device Mapper and, with only relatively small changes to existing tools, present these devices to users the same way we present native LUKS/dm-crypt devices. This will make using encrypted devices such as flash drives much easier and more user friendly in environments with both GNU/Linux and Microsoft Windows.

Speakers

Vojtech Trefny

Software Engineer, Red Hat
Vojtech works as a Software Engineer at Red Hat on storage management tools and libraries such as UDisks, blivet (the Python library used by the Anaconda installer), and libblockdev.



Saturday January 25, 2020 5:00pm - 5:25pm CET
A113 Faculty of Information Technology Brno University of Technology, Božetěchova, Brno-Královo Pole, Czechia

5:00pm CET

Managed Block Storage in oVirt
How oVirt leverages storage vendors' offloading APIs to perform fast storage-side actions.

oVirt is a mature open source datacenter virtualization solution used by many organizations, including Brussels Airport.

oVirt supports a broad range of storage backends, including iSCSI, FC, NFS and Gluster. There are certain operations in oVirt that require exclusive access to the disks, and when working with large volumes this prevents any other operations on that volume for a long time, greatly impacting performance.

So how can we reduce the locking time in oVirt for operations such as snapshotting and cloning?

In this talk, we will present how oVirt leverages storage vendors' offloading APIs to perform fast storage-side actions.

Audience: virtualization users, developers, and admins interested in new oVirt features that boost storage operation performance.

Speakers

Fred Rolland

Principal Software Engineer, Red Hat
Freddy is a Principal Software Engineer, currently working in Red Hat's OpenShift KNI edge group. Before that, he was part of the RHV and OpenShift Virtualization storage team. Besides coding, Freddy has a great interest in education, teaching middle school students about Linux and Python...

Eyal Shenitzky

Eyal is a Software Engineer at Red Hat, working on the Red Hat Virtualization storage team. He has been involved in many product features such as Managed Block Storage, VM leases, and disaster recovery, and contributes to VDSM. He holds a B.Sc in Software Engineering from...



Saturday January 25, 2020 5:00pm - 5:25pm CET
D0206 Faculty of Information Technology Brno University of Technology, Božetěchova, Brno-Královo Pole, Czechia
 
Sunday, January 26
 

1:00pm CET

A SMART-er Ceph: Predicting Hard Drive Failure
More than a million terabytes of data gets generated every day, and every bit of that can be valuable. Therefore, modern data storage solutions need to be reliable, scalable, and efficient. Storage systems like RAID and Ceph use replicas or erasure-coded redundancy to provide fault-tolerance. So, while scaling up to exabyte-level is possible, it can be resource-intensive and expensive.
However, these issues can be mitigated by some clever use of machine learning. We can use ML models to predict the remaining useful life or time-to-failure of hard drives, and then create or destroy replicas according to those predictions. In this way, storage can be made more resource-efficient. This talk will discuss the techniques we used and the models we built for this task. Our open-source-built model outperforms the ProphetStor model that is currently in upstream Ceph. Additionally, we frame the problem in a Kaggle competition format to provide a platform for the community to contribute their ideas.
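In the spirit of the models the talk describes, failure prediction can be framed as classification over SMART attributes. The sketch below trains a plain logistic regression by gradient descent on fabricated data; the features, values, and labels are all invented for illustration (real models train on fleet telemetry such as the Backblaze SMART dataset), and this is not the model shipped with Ceph.

```python
# Toy SMART failure classifier: logistic regression from scratch.
import math

# Fabricated samples: (reallocated_sectors, pending_sectors) -> failed soon?
train = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0),
         ((8, 2), 1), ((12, 3), 1), ((6, 1), 1)]

w = [0.0, 0.0]
b = 0.0
lr = 0.05

def predict(x):
    """Probability of failure within the prediction horizon."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

# Per-sample gradient descent on the logistic loss.
for _ in range(2000):
    for x, y in train:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print(f"healthy-looking drive: {predict((0, 0)):.2f}")
print(f"degraded drive:        {predict((12, 3)):.2f}")
```

A storage system could then act on such probabilities, e.g. re-replicating data off drives whose predicted time-to-failure falls below a threshold.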

Speakers

Karanraj Chauhan

Data Scientist, Red Hat
I like math, machine learning, and deep learning. Big fan of CPUs, GPUs, FPGAs, and other such lightning powered stones.



Sunday January 26, 2020 1:00pm - 1:55pm CET
E104 Faculty of Information Technology Brno University of Technology, Božetěchova, Brno-Královo Pole, Czechia

2:00pm CET

Building Blocks: Raw Block PVs in Rook-Ceph
In Kubernetes, raw block PersistentVolumes (PVs) allow applications to consume storage in a new way. In particular, Rook-Ceph now makes use of them to provide the backing store for its clustered storage in a more Kubernetes-like fashion and with improved security. Now we can rethink the notion of how we structure our storage clusters, moving the focus away from static nodes and basing them on more dynamic, resilient devices.

This talk will go over how we incorporated raw block PVs, how the operator manages them, and how we can now define storage clusters. It will also include a demo of the resiliency of these new types of devices. By the end of the talk, you'll not only know how to use raw block PVs but also why and when to use them.
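The Kubernetes mechanism involved is `volumeMode: Block` on a PersistentVolumeClaim, which hands the consuming pod a raw device instead of a mounted filesystem. A minimal sketch of such a claim follows; the claim name, storage class, and size are illustrative assumptions, not values from Rook-Ceph.

```yaml
# Sketch of a PVC requesting a raw block volume (names are illustrative).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: osd-block-claim
spec:
  volumeMode: Block          # raw device for the pod, no filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: local-devices
```

A Ceph OSD is a natural consumer of such a claim, since it manages its own on-disk format and gains nothing from a filesystem layered underneath.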

Speakers

Jose Rivera

Senior Software Engineer, Red Hat, Inc.
Jose Rivera is a Senior Software Engineer at Red Hat. He's worked in and around storage for over 10 years, with experience spanning multiple networked and software-defined storage projects such as Samba (SMB) and GlusterFS. Currently he works on OpenShift Container Storage...

Rohan Gupta

Rohan Gupta graduated from college in 2018 and is currently an intern at Red Hat. He previously participated in Google Summer of Code 2018 with CNCF, working on the Rook project.



Sunday January 26, 2020 2:00pm - 2:55pm CET
E104 Faculty of Information Technology Brno University of Technology, Božetěchova, Brno-Královo Pole, Czechia
 