Ceph Introduction

Overview:
A Ceph storage cluster consists mainly of OSD, MON, and MDS components.
The OSD component stores data and handles data replication, recovery, and rebalancing; it also reports monitoring information.
The MON component maintains the cluster state, including the monitor map, the OSD map, the placement group (PG) map, and the CRUSH map. Every state change is recorded.
MDS is an optional component, needed only when the Ceph Filesystem is used; it serves filesystem metadata for the POSIX (userspace) interface.
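The roles of the MON and OSD daemons can be seen from a client's point of view. Below is a minimal sketch using the librados Python binding (python3-rados); it assumes a running cluster with a readable /etc/ceph/ceph.conf and the client.admin keyring, and simply asks the monitors for summaries of the monitor map and the OSD map.

import json
import rados  # librados Python binding (python3-rados)

# Assumes /etc/ceph/ceph.conf and the client.admin keyring are readable.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# The monitors are the authority for cluster state: ask them for a
# summary of the monitor map and the OSD map.
for prefix in ('mon stat', 'osd stat'):
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({'prefix': prefix, 'format': 'json'}), b'')
    print(prefix, '->', outbuf.decode())

cluster.shutdown()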

User data is split into objects. The CRUSH algorithm determines which placement group an object maps to, and then which OSDs store that placement group. CRUSH is also what lets the cluster scale, rebalance, and recover dynamically.
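As a rough illustration of that two-step mapping (object -> PG -> OSDs), here is a toy Python sketch. It is not the real CRUSH algorithm (Ceph uses an rjenkins hash plus a weighted hierarchy of buckets); the function names and the modulo-based placement below are simplifications for illustration only.

import hashlib

def object_to_pg(object_name, pg_num):
    # Toy stand-in for Ceph's object->PG step: hash the object name,
    # then take it modulo the pool's PG count.
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return h % pg_num

def pg_to_osds(pg_id, osd_ids, replicas=2):
    # Toy stand-in for CRUSH's PG->OSD step: deterministically pick
    # `replicas` distinct OSDs. Real CRUSH walks a weighted hierarchy
    # (host/rack/row) so that topology changes move a minimal amount of data.
    start = pg_id % len(osd_ids)
    return [osd_ids[(start + i) % len(osd_ids)] for i in range(replicas)]

pg = object_to_pg("rbd_data.1234", pg_num=128)
print("object -> PG", pg, "-> OSDs", pg_to_osds(pg, osd_ids=[0, 1, 2, 3]))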

A Ceph storage cluster needs at least one monitor and at least two OSDs to run (with two data replicas configured, two OSDs are required for the cluster to reach the active+clean state).
Whether you want to provide Ceph Object Storage or Ceph Block Devices to a cloud platform (such as OpenStack), or to use the Ceph Filesystem, you first have to deploy a Ceph Storage Cluster.
Deploying a Ceph Storage Cluster starts with setting up each Ceph Node; the sketch below shows one way to verify that the cluster has reached a healthy state.
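Once monitors and OSDs are up, you can ask the monitors whether all PGs are active+clean. This is a minimal sketch, again assuming python3-rados and a readable admin keyring; note that the exact JSON layout returned by the status command varies somewhat between Ceph releases.

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
# 'status' returns the same information 'ceph -s' prints, as JSON.
ret, outbuf, outs = cluster.mon_command(
    json.dumps({'prefix': 'status', 'format': 'json'}), b'')
status = json.loads(outbuf.decode('utf-8'))
pgmap = status['pgmap']  # key names may differ slightly by release
all_clean = all(s.get('state_name') == 'active+clean'
                for s in pgmap.get('pgs_by_state', []))
print('num_pgs:', pgmap.get('num_pgs'), 'all active+clean:', all_clean)
cluster.shutdown()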


[Reference]

INTRO TO CEPH

Whether you want to provide Ceph Object Storage and/or Ceph Block Device services to Cloud Platforms, deploy a Ceph Filesystem or use Ceph for another purpose, all Ceph Storage Cluster deployments begin with setting up each Ceph Node, your network and the Ceph Storage Cluster. A Ceph Storage Cluster requires at least one Ceph Monitor and at least two Ceph OSD Daemons. The Ceph Metadata Server is essential when running Ceph Filesystem clients.

  • Ceph OSDs: A Ceph OSD Daemon (Ceph OSD) stores data, handles data replication, recovery, backfilling, rebalancing, and provides some monitoring information to Ceph Monitors by checking other Ceph OSD Daemons for a heartbeat. A Ceph Storage Cluster requires at least two Ceph OSD Daemons to achieve an active + clean state when the cluster makes two copies of your data (Ceph makes 2 copies by default, but you can adjust it).
  • Monitors: A Ceph Monitor maintains maps of the cluster state, including the monitor map, the OSD map, the Placement Group (PG) map, and the CRUSH map. Ceph maintains a history (called an “epoch”) of each state change in the Ceph Monitors, Ceph OSD Daemons, and PGs.
  • MDSs: A Ceph Metadata Server (MDS) stores metadata on behalf of the Ceph Filesystem (i.e., Ceph Block Devices and Ceph Object Storage do not use MDS). Ceph Metadata Servers make it feasible for POSIX file system users to execute basic commands like ls, find, etc. without placing an enormous burden on the Ceph Storage Cluster.

Ceph stores a client’s data as objects within storage pools. Using the CRUSH algorithm, Ceph calculates which placement group should contain the object, and further calculates which Ceph OSD Daemon should store the placement group. The CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and recover dynamically.
