
CEPH: Architecture Overview

Documentation
Name: CEPH: Architecture Overview
Description: This document is the starting point for understanding this technology
Modification date: 11/02/2019
Owner: dodger
Notify changes to: dodger
Tags: ceph, oss
Escalate to: The_fucking_bofh

Initial concepts

What is:

What is NOT:


You can read/watch a basic introduction here: https://ceph.com/ceph-storage/

Basic architecture

VERY BASIC (2-minute approach)

From a very simple point of view: Ceph acts as a disk.
With the right library (analogous to the kernel module for “ext4”/“btrfs”), you'll be able to read/write directly.
This library is called librados.
It interacts with RADOS (the Reliable Autonomic Distributed Object Store), which is the object store itself.
Continuing with this simple view: just as ext4 has data blocks and journal blocks to maintain consistency, Ceph has OSD (Object Storage Daemon) and MON (Monitor) nodes:
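To make that concrete, here is a minimal sketch of reading/writing an object directly through librados (it assumes the python-rados bindings are installed; the pool name and object name are made-up placeholders):

import rados

# Connect to the cluster using the local ceph.conf (adjust the path/keyring as needed).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Open an I/O context on a pool and write/read an object straight into RADOS.
ioctx = cluster.open_ioctx('mypool')          # 'mypool' is a placeholder pool name
ioctx.write_full('hello_object', b'hello from librados')
print(ioctx.read('hello_object'))

ioctx.close()
cluster.shutdown()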


So you'll have something like this:

BASIC Architecture

Going deeper, you'll find that the data placement across the OSD nodes is calculated by an algorithm called CRUSH (Controlled Replication Under Scalable Hashing), which is:


So when a client wants to write data to the CEPH cluster through RADOS, librados on the client side invokes CRUSH to calculate which of the available OSDs the data should be written to.
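To give an idea of why that matters, here is a toy sketch of calculation-based placement. It is NOT the real CRUSH algorithm; it only illustrates that every client can compute the same object-to-OSD mapping on its own, without asking a central metadata server (the OSD list, PG count and replica count are made-up values):

import hashlib

OSDS = ['osd.0', 'osd.1', 'osd.2', 'osd.3', 'osd.4']  # hypothetical cluster map
PG_COUNT = 128                                         # placement groups in the pool
REPLICAS = 3                                           # the "N copies" config option

def place(object_name):
    # 1. Hash the object name into a placement group (PG).
    pg = int(hashlib.md5(object_name.encode()).hexdigest(), 16) % PG_COUNT
    # 2. Deterministically map the PG to REPLICAS distinct OSDs.
    start = pg % len(OSDS)
    return [OSDS[(start + i) % len(OSDS)] for i in range(REPLICAS)]

print(place('invoice-2019-02-11.pdf'))   # same answer on every client, no lookup needed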

That results in a very strong architecture with no single point of failure, because you won't have one (or many) dedicated node(s) taking care of metadata.
It's also really fast: you'll have N OSD servers performing reads/writes in parallel.
It's robust: if any OSD node fails, the data is replicated N times (where N is a config option) across other OSDs and remains accessible through the CRUSH calculation.
Also, if any OSD fails, the MONitors will re-map the cluster and the OSDs will re-replicate the data so that N copies are kept in the cluster.

In a graph

Use Cases

CEPH as REST Object Storage

The only difference in this case is that there's a new component involved: the gateway, which translates HTTP/REST calls into librados calls.
That's all.
You'll have a noticeable overhead/performance gap using the gateway instead of using librados directly…
So if you take the previous graph, simplified, you'll have:
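For illustration, a client talking to the gateway through its S3-compatible REST API could look like this with boto3 (the endpoint, port, bucket name and credentials below are made-up placeholders):

import boto3

# Talk to the RADOS Gateway through its S3-compatible REST API.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.local:7480',   # placeholder gateway endpoint
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

s3.create_bucket(Bucket='demo-bucket')
s3.put_object(Bucket='demo-bucket', Key='hello.txt', Body=b'hello through the gateway')
print(s3.get_object(Bucket='demo-bucket', Key='hello.txt')['Body'].read())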

CEPH as Filesystem Architecture

Official documentation: https://docs.ceph.com/docs/master/cephfs/

Again, the difference when using CEPH as a “filesystem” is that there's another component: the “Metadata Server” (MDS).
The Metadata Server plays a role similar to the Monitor:
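As a rough sketch (assuming the python3-cephfs bindings and a working client keyring; the file path is just an example), the MDS serves the directory tree and file metadata, while the file data still ends up on the OSDs:

import cephfs

# Connect and mount the filesystem through libcephfs.
fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
fs.mount()

# Create a file and write to it; metadata goes through the MDS, data to the OSDs.
fd = fs.open('/hello.txt', 'w', 0o644)
fs.write(fd, b'hello through cephfs', 0)
fs.close(fd)

fs.unmount()
fs.shutdown()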

Real Life

ePayments PRO/PRE

Node list:

Clover schematics (here comes the monster)

HAproxies:

Object Gateways:

Monitors:

Metadata servers:

Data servers:

Clover

As object gateway

As Filesystem (cephfs)



Public ceph schema



Considerations for newcomers

When requesting access to any of our object storage clusters, or if you're a newcomer, you should know that:



Here you have a template to request a new user for the object storage:

Good morning #infrastructure,
We're facing a new project that involves storing tons of objects, and we want to use our incredible Ceph installation.
Please provide us with a new user so we can store all the data from this project.

Name of the project: "This_template_sucks"
Environment: DEVELOPMENT
Expected number of buckets: 666


Thanks for your effort, best regards!