Ceph Days Korea
Save the Date - Ceph is coming to Korea!
A full-day event dedicated to sharing Ceph’s transformative power and fostering the vibrant Ceph community in South Korea.
The expert Ceph team, Ceph's customers and partners, and the Ceph community join forces to discuss the status of the Ceph project, recent improvements and the roadmap, and Ceph community news. The day ends with a networking reception to foster further Ceph learning.
Important Dates
- CFP Opens: 2023-02-24
- CFP Closes: 2023-04-28
- Speakers receive confirmation of acceptance: 2023-05-12
- Schedule Announcement: 2023-05-16
- Sponsorship Deadline: 2023-04-28
- Event Date: 2023-06-14
Keynote: The journey of memory innovation with Ceph
Keynote: The present and future of hard drives and storage
Keynote: IBM and IBM Storage Ceph's Future
Congratulations on Ceph becoming a member of the IBM family in the open source community. This talk discusses Ceph's plans and the synergy with IBM.
Distributed storage system architecture and Ceph's strengths
From local storage systems to conventional NAS, this talk examines reliability from a structural standpoint and discusses design considerations from a distributed-systems perspective. Finally, it covers the advantages and disadvantages of Ceph and which workloads it suits best.
Role of RocksDB in Ceph
In Ceph, RocksDB is used by default as the metadata store for stored objects. It not only provides critical features for higher-level components such as RadosGW and the MDS, but also has a significant impact on performance. Surprisingly, however, many people treat RocksDB as a black box and pay little attention to it. This talk walks through the internal logic of Ceph and RocksDB, examines RocksDB's impact on performance, and introduces some points to watch out for.
Ceph case study and large-scale cluster operation plan in NAVER
In this presentation, we will look at NAVER's Ceph case study and how NAVER operates its storage. We will explain the problems we struggled with when introducing Ceph and how we solved them, and provide useful information for companies that want to adopt Ceph.
A New MDS Partitioning for CephFS
This talk will present a new MDS partitioning strategy for CephFS that combines static pinning and dynamic partitioning with the bal_rank_mask option based on user metadata workload analysis. We will also share our experiences with the implementation of these optimizations in our production service and the results of our experiments. Finally, we will discuss how we can contribute our work to the Ceph community.
Revisiting S3 features on Ceph Rados
In this presentation, we will first explore the S3 API execution path from a client to the Ceph Object Storage Daemon (OSD). It will cover how RGW translates S3 requests into internal Rados requests and how the OSD stores S3 objects and metadata in the case of BlueStore. Second, we will analyze S3 performance with and without versioning-related features on three different S3-compatible storage platforms: Ceph, MinIO, and OpenStack Swift with Swift3. We conducted a synthetic benchmark to measure S3 performance, especially ListObjects performance, while considering versioning-related features.
Seoul National Univ.
Join the Ceph announcement list or follow Ceph on social media for Ceph event updates.