Ceph on CentOS (RADOS Gateway, S3 Buckets)

  • Category: Computers
  • Last Updated: Tuesday, 11 July 2017 17:09
  • Published: Monday, 10 July 2017 13:32
  • Written by sam

Goal and scope

Hosts: ceph1, ceph2, ceph3

10.0.252.117-119

3 MONs, one on each of the three hosts

1 admin node, on ceph1

3 OSDs, one per host, 200 GB each (test-sized)

1 RADOS Gateway; the goal this time is to test object storage

This setup uses CentOS 7.

Once the OS is installed, set the IP address and hostname on each node first.

Mine are as follows:

10.0.252.117    ceph1
10.0.252.118    ceph2
10.0.252.119    ceph3
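
These entries belong in /etc/hosts on all three nodes so the names resolve everywhere. A minimal sketch, run on each host (adjust the hostnamectl target per node):

cat << EOF | sudo tee -a /etc/hosts
10.0.252.117    ceph1
10.0.252.118    ceph2
10.0.252.119    ceph3
EOF
sudo hostnamectl set-hostname ceph1   # ceph2 / ceph3 on the other nodes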

First, add the firewall rules. All three hosts will run both an OSD and a MON, so add the rules on every one of them:

firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
firewall-cmd --reload
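
To double-check, the open ports can be listed afterwards; this should show 6789/tcp and 6800-7300/tcp:

firewall-cmd --zone=public --list-ports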

Install the Ceph repository package

yum -y install --enablerepo=extras centos-release-ceph

Create a deployment user

[root@ceph1 ~]# adduser sam
[root@ceph1 ~]# echo 123456 | passwd sam --stdin
[root@ceph1 ~]# cat << EOF >/etc/sudoers.d/sam
sam ALL = (root) NOPASSWD:ALL
Defaults:sam !requiretty
EOF
[root@ceph1 ~]# chmod 440 /etc/sudoers.d/sam

Switch to the user and set up SSH keys

[root@ceph1 ~]# su - sam
[sam@ceph1 ~]$ ssh-keygen -b 4096
[sam@ceph1 ~]$ ssh-agent bash
[sam@ceph1 ~]$ ssh-add
[sam@ceph1 ~]$ for node in ceph2 ceph3; do ssh-copy-id $node; done

Test it:

[sam@ceph1 ~]$ ssh ceph2
[sam@ceph2 ~]$ logout
Connection to ceph2 closed.
[sam@ceph1 ~]$ ssh ceph3
[sam@ceph3 ~]$ logout
Connection to ceph3 closed.
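
Optionally, an ~/.ssh/config entry (as the upstream quick-start guide suggests) pins the user name so ceph-deploy never needs it on the command line; a sketch, assuming the same user on every node:

cat << EOF >> ~/.ssh/config
Host ceph1 ceph2 ceph3
    User sam
EOF
chmod 600 ~/.ssh/config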

Next, add the disk that will back the OSD on each node. The plan is XFS; at the end, verify the filesystem type is what we expect. (Note the mkfs below targets the whole device; ceph-deploy will zap and repartition it later anyway, so this is only a scratch check.)

[sam@ceph1 ~]$ sudo fdisk -l /dev/sdb

Disk /dev/sdb: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[sam@ceph1 ~]$ sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
[sam@ceph1 ~]$ sudo mkfs.xfs /dev/sdb -f
meta-data=/dev/sdb               isize=512    agcount=4, agsize=13107200 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=52428800, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=25600, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[sam@ceph1 ~]$ sudo blkid -o value -s TYPE /dev/sdb
xfs
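
The same preparation is needed on the other two nodes; a sketch, assuming the OSD disk is /dev/sdb everywhere:

for node in ceph2 ceph3; do
  ssh $node 'sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100% && sudo mkfs.xfs -f /dev/sdb && sudo blkid -o value -s TYPE /dev/sdb'
done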

Next, install Ceph itself (the Jewel release repo, then ceph-deploy):

[sam@ceph1 ~]$ sudo rpm -Uhv http://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-1.el7.noarch.rpm
[sam@ceph1 ~]$ sudo yum update -y && sudo yum install ceph-deploy -y

Install the MON and OSD packages

[sam@ceph1 ~]$ ceph-deploy install --mon ceph1 ceph2 ceph3
[sam@ceph1 ~]$ ceph-deploy install --osd ceph1 ceph2 ceph3

Create the cluster configuration

[sam@ceph1 ~]$ ceph-deploy new ceph1 ceph2 ceph3

Append the detailed settings to ceph.conf

cat << EOF >> ceph.conf
osd_journal_size = 10000
osd_pool_default_size = 3
osd_pool_default_min_size = 3
osd_crush_chooseleaf_type = 1
osd_crush_update_on_start = true
max_open_files = 131072
osd_pool_default_pg_num = 128
osd_pool_default_pgp_num = 128
mon_pg_warn_max_per_osd = 0
EOF
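
The PG count follows the usual rule of thumb: total PGs ≈ (OSDs × 100) / replica count = (3 × 100) / 3 = 100, rounded up to the next power of two, which gives 128.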

Create the MONs

[sam@ceph1 ~]$ ceph-deploy --overwrite-conf config push ceph2 ceph3
[sam@ceph1 ~]$ ceph-deploy mon create-initial
[sam@ceph1 ~]$ ceph-deploy install --cli ceph1
[sam@ceph1 ~]$ ceph-deploy admin ceph1
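
If ceph commands later fail with a permission error on the admin keyring, the upstream quick start's fix is to make it readable:

[sam@ceph1 ~]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring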

Add the OSDs

[sam@ceph1 ~]$ ceph-deploy disk list ceph1 ceph2 ceph3

[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO  ] Running command: sudo /usr/sbin/ceph-disk list
[ceph1][DEBUG ] /dev/dm-0 other, xfs, mounted on /
[ceph1][DEBUG ] /dev/dm-1 swap, swap
[ceph1][DEBUG ] /dev/sda :
[ceph1][DEBUG ]  /dev/sda2 other, LVM2_member
[ceph1][DEBUG ]  /dev/sda1 other, xfs, mounted on /boot
[ceph1][DEBUG ] /dev/sdb other, xfs  -->> the XFS we made earlier
[ceph1][DEBUG ] /dev/sr0 other, iso9660

[sam@ceph1 ~]$ ceph-deploy disk zap ceph1:/dev/sdb ceph2:/dev/sdb ceph3:/dev/sdb

[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on ceph1
[ceph1][DEBUG ] connection detected need for sudo
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph1][DEBUG ] zeroing last few blocks of device
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO  ] Running command: sudo /usr/sbin/ceph-disk zap /dev/sdb
[ceph1][DEBUG ] Creating new GPT entries.
[ceph1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph1][DEBUG ] other utilities.
[ceph1][DEBUG ] Creating new GPT entries.
[ceph1][DEBUG ] The operation has completed successfully.

[sam@ceph1 ~]$ ceph-deploy osd prepare ceph1:/dev/sdb ceph2:/dev/sdb ceph3:/dev/sdb
[ceph3][INFO  ] checking OSD status...
[ceph3][DEBUG ] find the location of an executable
[ceph3][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph3 is now ready for osd use. -->> the disk is ready

[sam@ceph1 ~]$ ceph-deploy osd activate ceph1:/dev/sdb1 ceph2:/dev/sdb1 ceph3:/dev/sdb1

[sam@ceph1 ~]$ ceph-deploy disk list ceph1 ceph2 ceph3
[ceph1][DEBUG ]  /dev/sdb2 ceph journal, for /dev/sdb1
[ceph1][DEBUG ]  /dev/sdb1 ceph data, active, cluster ceph, osd.0, journal /dev/sdb2
[ceph1][DEBUG ] /dev/sr0 other, iso9660

[sam@ceph1 ~]$ sudo fdisk -l /dev/sdb
#         Start          End    Size  Type            Name
 1     20482048    419430366  190.2G  unknown         ceph data
 2         2048     20482047    9.8G  unknown         ceph journal

Check the status: 3 MONs and 3 OSDs. (The 9.8 G journal partition above matches the osd_journal_size = 10000 MB set earlier.)

[sam@ceph1 ceph]$ sudo ceph -s
    cluster 587284b0-c1fd-4c66-bdf2-cb561ec511be
     health HEALTH_OK
     monmap e3: 3 mons at {ceph1=10.0.252.117:6789/0,ceph2=10.0.252.118:6789/0,ceph3=10.0.252.119:6789/0}
            election epoch 10, quorum 0,1,2 ceph1,ceph2,ceph3
     osdmap e25: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v63: 64 pgs, 1 pools, 0 bytes data, 0 objects
            102 MB used, 570 GB / 570 GB avail
                  64 active+clean
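
As an optional sanity check that each host contributes one OSD:

[sam@ceph1 ceph]$ sudo ceph osd tree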

Next, the RADOS Gateway (RGW)

[sam@ceph1 ~]$ ceph-deploy install --rgw ceph1 ceph2 ceph3
[ceph1][DEBUG ] Running transaction
[ceph1][DEBUG ]   Installing : mailcap-2.1.41-2.el7.noarch                                  1/2
[ceph1][DEBUG ]   Installing : 1:ceph-radosgw-10.2.8-0.el7.x86_64                           2/2
[ceph1][DEBUG ]   Verifying  : mailcap-2.1.41-2.el7.noarch                                  1/2
[ceph1][DEBUG ]   Verifying  : 1:ceph-radosgw-10.2.8-0.el7.x86_64                           2/2
[ceph1][DEBUG ]
[ceph1][DEBUG ] Installed:
[ceph1][DEBUG ]   ceph-radosgw.x86_64 1:10.2.8-0.el7
[ceph1][DEBUG ]
[ceph1][DEBUG ] Dependency Installed:
[ceph1][DEBUG ]   mailcap.noarch 0:2.1.41-2.el7
[ceph1][DEBUG ]
[ceph1][DEBUG ] Complete!

Open the default port (7480) in the firewall

[sam@ceph1 ~]$ sudo firewall-cmd --zone=public --add-port=7480/tcp --permanent && sudo firewall-cmd --reload
success
success

Start it and test

[sam@ceph1 ceph]$ ceph-deploy rgw create ceph1 ceph2 ceph3
[sam@ceph1 ceph]$ curl ceph1:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
[sam@ceph1 ceph]$ curl ceph2:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
[sam@ceph1 ceph]$ curl ceph3:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

Create an account

[sam@ceph1 ceph]$ radosgw-admin user create --uid=aft --display-name=aft --email=<email>
{
    "user_id": "aft",
    "display_name": "aft",
    "email": "This email address is being protected from spambots. You need JavaScript enabled to view it.",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "aft",
            "access_key": "4DADYQMRB8XGI7JOKXBG",
            "secret_key": "FNyonL8ywC0H14P35a6tx54tPwUk8nvBdsiXTU3g"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}
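
The access_key and secret_key pair is what s3cmd will need. It can be re-displayed at any time:

[sam@ceph1 ceph]$ radosgw-admin user info --uid=aft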

s3cmd

[sam@ceph1 ceph]$ sudo yum install -y s3cmd
[sam@ceph1 ceph]$ s3cmd --configure
[sam@ceph1 ceph]$ vi ~/.s3cfg
Change these to point at your own gateway:
host_base = ceph1:7480
host_bucket = %(bucket)s.ceph1:7480
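
For reference, the relevant lines of a working ~/.s3cfg would look roughly like this, using the keys created above (a sketch, not the full file):

access_key = 4DADYQMRB8XGI7JOKXBG
secret_key = FNyonL8ywC0H14P35a6tx54tPwUk8nvBdsiXTU3g
host_base = ceph1:7480
host_bucket = %(bucket)s.ceph1:7480
use_https = False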

Create and delete a bucket

[sam@ceph1 ceph]$ s3cmd mb s3://mybucket
Bucket 's3://mybucket/' created
[sam@ceph1 ceph]$ s3cmd rb s3://mybucket
Bucket 's3://mybucket/' removed

With a domain name set up, test from another machine (a Mac):

samde-MacBook-Pro:~ sam$ s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: 4DADYQMRB8XGI7JOKXBG
Secret Key: FNyonL8ywC0H14P35a6tx54tPwUk8nvBdsiXTU3g
Default Region [US]: 

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: aft-buckettt.gotdns.com:7480

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: aft-buckettt.gotdns.com:7480

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: 
Path to GPG program: 

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: no

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name: 

New settings:
  Access Key: 4DADYQMRB8XGI7JOKXBG
  Secret Key: FNyonL8ywC0H14P35a6tx54tPwUk8nvBdsiXTU3g
  Default Region: US
  S3 Endpoint: aft-buckettt.gotdns.com:7480
  DNS-style bucket+hostname:port template for accessing a bucket: aft-buckettt.gotdns.com:7480
  Encryption password: 
  Path to GPG program: None
  Use HTTPS protocol: False
  HTTP Proxy server name: 
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] n

Save settings? [y/N] y
Configuration saved to '/Users/sam/.s3cfg'
samde-MacBook-Pro:~ sam$ vi .s3cfg 
samde-MacBook-Pro:~ sam$ s3cmd mb s3://abc
Bucket 's3://abc/' created
samde-MacBook-Pro:~ sam$ s3cmd rb s3://abc
Bucket 's3://abc/' removed

Install s3cmd on a client machine and test an upload:

samde-MacBook-Pro:~ sam$ s3cmd mb s3://test
samde-MacBook-Pro:~ sam$ touch test-sam1060710.txt
samde-MacBook-Pro:~ sam$ s3cmd put test-sam1060710.txt s3://test
upload: 'test-sam1060710.txt' -> 's3://test/test-sam1060710.txt'  [1 of 1]
 0 of 0     0% in    0s     0.00 B/s  done
samde-MacBook-Pro:~ sam$ cd Downloads/
samde-MacBook-Pro:Downloads sam$ s3cmd get s3://test/test-sam1060710.txt 
download: 's3://test/test-sam1060710.txt' -> './test-sam1060710.txt'  [1 of 1]
 0 of 0     0% in    0s     0.00 B/s  done
samde-MacBook-Pro:Downloads sam$ ls
test-sam1060710.txt

Test permissions

samde-MacBook-Pro:~ sam$ curl http://aft-buckettt.gotdns.com:7480/test/test-sam1060710.txt
<?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><BucketName>test</BucketName><RequestId>tx000000000000000000032-0059631838-3768-default</RequestId><HostId>3768-default-default</HostId></Error>

Change it to public (the test file was given some content, "123", in the meantime):
samde-MacBook-Pro:~ sam$ s3cmd put --acl-public test-sam1060710.txt s3://test
upload: 'test-sam1060710.txt' -> 's3://test/test-sam1060710.txt'  [1 of 1]
 4 of 4   100% in    0s    52.60 B/s  done
Public URL of the object is: http://aft-buckettt.gotdns.com:7480/test/test-sam1060710.txt

samde-MacBook-Pro:~ sam$ curl http://aft-buckettt.gotdns.com:7480/test/test-sam1060710.txt
123  -->> the correct content
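
Re-uploading with --acl-public works; s3cmd can also flip the ACL on an existing object without re-uploading (a sketch):

samde-MacBook-Pro:~ sam$ s3cmd setacl --acl-public s3://test/test-sam1060710.txt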

Let's switch the port to 80 instead. Add this to ceph.conf and push it out:

[client.rgw.ceph1]
rgw_frontends = "civetweb port=80"
[root@ceph1 ~]# sudo -u sam ceph-deploy --overwrite-conf config push ceph2 ceph3
[root@ceph1 ~]# firewall-cmd --zone=public --add-port 80/tcp --permanent
[root@ceph1 ~]# firewall-cmd --reload
[root@ceph1 ~]# firewall-cmd --list-all

Then just restart the RGW service.
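
A sketch of the restart, assuming the default instance name rgw.<hostname> that ceph-deploy creates:

[root@ceph1 ~]# systemctl restart ceph-radosgw@rgw.ceph1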

This box happens to have nginx on it, so let's set up a proxy while we're at it.

root@debian:~# vi /etc/nginx/sites-enabled/sam
root@debian:~# cat /etc/nginx/sites-enabled/sam
upstream aft-buckettt.gotdns.com {
    # round-robin across the three RGW instances
    server 10.0.252.117:7480;
    server 10.0.252.118:7480;
    server 10.0.252.119:7480;
}

server {
    listen 80;
    location / {
        # the hostname here refers to the upstream block above
        proxy_pass http://aft-buckettt.gotdns.com/;
    }
}

Now just point the external IP at this nginx box and everything goes through the proxy.
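
A quick check through the proxy (assuming the domain resolves to the nginx host) should return the same ListAllMyBucketsResult XML as before:

root@debian:~# curl http://aft-buckettt.gotdns.com/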

###############

I just tried an upload through the proxy and got a 413 (Request Entity Too Large) error.

Raise the maximum request body size in the nginx configuration:

client_max_body_size 79979M;
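
This directive goes in the http, server, or location block; test and reload nginx afterwards:

root@debian:~# nginx -t && nginx -s reload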