Assumption and purpose:
This is an attempt to compare two promising open-source object store technologies purely on the basis of performance. The use case kept in mind is a large- or small-scale public cloud storage provider, and the attempt here is to evaluate the better-suited technology for that use case.
The feature delta between OpenStack Swift and Ceph Object Store is ignored here. Ceph is viewed only as an object store serving objects via the Swift REST API (not RADOS objects); Ceph's other interfaces, which provide file- and block-based access, are ignored.
The assumption here is that the two technologies are best compared when deployed on the same hardware and topology and tested with the same kind of workload. Data caching is suppressed while collecting numbers (the page cache, dentries and inodes are flushed every minute on each server). COSBench is used as the benchmarking tool.
Note:
I received some suggestions from the Ceph community to improve Ceph-RGW performance. I tried all of them; they do have a minor impact on overall Ceph-RGW performance (<3%), but nothing that changes the overall conclusion of the study.
It would not be called an apples-to-apples comparison, but with multiple RGW-CivetWeb instances behind HAProxy I was able to get better results from Ceph-RGW. I will be posting them soon.
Hardware:
Two flavors of Dell PowerEdge R620 servers are used in the study. For simplicity I will call them T1 and T2.
T1:
CPU: 2x Intel E5-2680 10C 2.8GHz, 25MB cache (40 logical CPUs with HT enabled)
RAM: 4x 16GB RDIMM, dual rank x4 (64GB)
NIC1: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (For Management)
NIC2: Mellanox ConnectX-3, 40 Gigabit Ethernet, Dual Port Full Duplex (For Data)
Storage: 160 GB HDD (For OS).
T2:
CPU: 2x Intel E5-2680 10C 2.8GHz, 25MB cache (40 logical CPUs with HT enabled)
RAM: 8x 16GB RDIMM, dual rank x4 (128GB)
NIC1: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (For Management)
NIC2: Mellanox ConnectX-3, 40 Gigabit Ethernet, Dual Port Full Duplex (For Data)
Storage1: 160 GB HDD (For OS)
Storage2: 10x 400GB Optimus Eco™ 2.5” SAS SSDs (4TB)
Interface: SAS (4 Phy, 6Gb/s)
Interface Ports: Dual/Wide
Network Bandwidth Check:
Host-A$ date ; sudo iperf -c XXX.XXX.XXX.B -p 5001 -P4 -m ; date
Tue Nov 4 13:16:58 IST 2014
------------------------------------------------------------
Client connecting to XXX.XXX.XXX.B , TCP port 5001
TCP window size: 325 KByte (default)
------------------------------------------------------------
[ 5] local XXX.XXX.XXX.A port 43892 connected with XXX.XXX.XXX.B port 5001
[ 3] local XXX.XXX.XXX.A port 43891 connected with XXX.XXX.XXX.B port 5001
[ 6] local XXX.XXX.XXX.A port 43893 connected with XXX.XXX.XXX.B port 5001
[ 4] local XXX.XXX.XXX.A port 43890 connected with XXX.XXX.XXX.B port 5001
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 10.9 GBytes 9.35 Gbits/sec
[ 5] MSS size 8960 bytes (MTU 9000 bytes, unknown interface)
[ 3] 0.0-10.0 sec 9.17 GBytes 7.88 Gbits/sec
[ 3] MSS size 8960 bytes (MTU 9000 bytes, unknown interface)
[ 6] 0.0-10.0 sec 16.5 GBytes 14.2 Gbits/sec
[ 6] MSS size 8960 bytes (MTU 9000 bytes, unknown interface)
[ 4] 0.0-10.0 sec 8.72 GBytes 7.49 Gbits/sec
[ 4] MSS size 8960 bytes (MTU 9000 bytes, unknown interface)
[SUM] 0.0-10.0 sec 45.3 GBytes 38.9 Gbits/sec
Tue Nov 4 13:17:08 IST 2014
Host-B$ date ; sudo iperf -c XXX.XXX.XXX.A -p 4001 -P4 -m ; date
Tue Nov 4 13:17:01 IST 2014
------------------------------------------------------------
Client connecting to XXX.XXX.XXX.A, TCP port 4001
TCP window size: 325 KByte (default)
------------------------------------------------------------
[ 4] local XXX.XXX.XXX.B port 59130 connected with XXX.XXX.XXX.A port 4001
[ 3] local XXX.XXX.XXX.B port 59131 connected with XXX.XXX.XXX.A port 4001
[ 6] local XXX.XXX.XXX.B port 59133 connected with XXX.XXX.XXX.A port 4001
[ 5] local XXX.XXX.XXX.B port 59132 connected with XXX.XXX.XXX.A port 4001
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 14.6 GBytes 12.6 Gbits/sec
[ 4] MSS size 8960 bytes (MTU 9000 bytes, unknown interface)
[ 3] 0.0-10.0 sec 7.90 GBytes 6.79 Gbits/sec
[ 3] MSS size 8960 bytes (MTU 9000 bytes, unknown interface)
[ 6] 0.0-10.0 sec 14.7 GBytes 12.7 Gbits/sec
[ 6] MSS size 8960 bytes (MTU 9000 bytes, unknown interface)
[ 5] 0.0-10.0 sec 8.40 GBytes 7.21 Gbits/sec
[ 5] MSS size 8960 bytes (MTU 9000 bytes, unknown interface)
[SUM] 0.0-10.0 sec 45.7 GBytes 39.2 Gbits/sec
Tue Nov 4 13:17:11 IST 2014
So the total available bandwidth is ~39 Gbps (~5 GB/s) for inbound and ~39 Gbps (~5 GB/s) for outbound traffic.
Topology & Setup:
The Ceph setup has two more monitor nodes, which are not shown here.
Ceph RGW Setup:
Software Details:
General Configuration:
- Ubuntu 14.04 (3.13.0-24-generic)
- Linux tuning options for networking are configured on all the nodes:
# Configs recommended for Mellanox ConnectX-3
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_low_latency = 1
net.core.netdev_max_backlog = 250000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
kernel.core_uses_pid = 1
An MTU of 9000 is used along with the above options (a sketch of how these settings might be applied follows this list).
- A cron job is configured to flush the DRAM (page) cache every minute on each node:
sudo sh -c "sync; echo 3 > /proc/sys/vm/drop_caches"
Ceph Configurations:
- Ceph Version: 0.87
- RGW is used with Apache + FastCGI as well as with CivetWeb.
- Apache version: 2.4.7-1ubuntu4.1, with libapache2-mod-fastcgi 2.4.7~0910052141-1.1
- The ceph.conf used is placed here; it contains all the Ceph optimization settings applied in the experiment.
- Default region, zone and pools are used. All .rgw* pools created with the default zone are set to use a PG_NUM of 4096 (see the sketch after this list).
- Replica count is set to 3 (size=3, min_size=2).
- Apache configuration parameters:
ServerLimit 4096
ThreadLimit 200
StartServers 20
MinSpareThreads 30
MaxSpareThreads 100
ThreadsPerChild 128
MaxClients 4096
MaxRequestsPerChild 10000
- CivetWeb is used with all default configurations. However, ‘rgw_op_thread’ seems to control CivetWeb’s ‘num_op_thread’ option, which is set to 128. Increasing this parameter beyond that point seems to degrade performance in terms of response time; setting it to 256/512 resulted in more and more HTTP connections stuck in CLOSE_WAIT. I am hitting a CivetWeb bug related to this problem.
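As a rough sketch, the pool settings described above could be applied with commands along these lines (shown only for the .rgw.buckets pool; applying them to the remaining .rgw* pools would follow the same pattern):
# Raise placement groups for the data pool (pgp_num must follow pg_num)
ceph osd pool set .rgw.buckets pg_num 4096
ceph osd pool set .rgw.buckets pgp_num 4096
# Three replicas, writes acknowledged once at least two copies exist
ceph osd pool set .rgw.buckets size 3
ceph osd pool set .rgw.buckets min_size 2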
OpenStack Swift Configurations:
- OpenStack Swift version: Icehouse / Swift 2.0
- Webserver: Default WSGI
- An inode size of 256 bytes is used; all other XFS formatting and mounting options are as per the recommendations in the Swift Deployment Guide.
- The WSGI pipeline is trimmed down and contains only essential middleware. The proxy-server WSGI pipeline looks like this:
pipeline = healthcheck cache tempauth proxy-server
- Each storage node is configured as a zone in a single region, and on each node one disk is dedicated to the account and container databases. All other disks hold objects only. Ring files are populated based on this layout (see the sketch after this list).
- The proxy node runs only the proxy-server and memcached.
- Storage nodes run all the other Swift services, i.e. account-server, container-server and object-server, along with supporting services such as auditors, updaters and replicators.
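As a rough sketch, the rings implied by this layout could be built along these lines (IP addresses, ports, device names, the partition power of 18 and the weights are assumptions for illustration):
# Object ring: part_power=18, 3 replicas, min_part_hours=1
swift-ring-builder object.builder create 18 3 1
# One entry per data disk on each storage node / zone
swift-ring-builder object.builder add r1z1-192.168.1.11:6000/sdc 100
swift-ring-builder object.builder rebalance
# Account and container rings point only at the dedicated DB disk on each node
swift-ring-builder account.builder create 18 3 1
swift-ring-builder account.builder add r1z1-192.168.1.11:6002/sdb 100
swift-ring-builder account.builder rebalance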
COSBench & Workload Details
- COSBench Version: 0.4.0.e1
- The COSBench controller and driver are both configured on the same machine, as the hardware is capable of sustaining the workload.
- Small File/Object workload is as follows:
Object Size: 1MB
Containers: 100
Objects Per Container: 1000
- Large File/Object workload is as follows:
Object Size: 1GB
Containers: 10
Objects Per Container: 100
- Objects are written once in both cases.
- Each workload is run with different COSBench worker counts.
- For the small-file workload, worker counts are: 32, 64, 128, 256, 512
- For the large-file workload, worker counts are: 8, 16, 32, 64, 128
- Each workload is executed for 900 seconds, and objects are read randomly from the available set of Swift objects.
- There is no difference between the Ceph and Swift workloads except the value of the generated token. A token was generated after creating Swift users in both cases; this token is provided along with the storage URL in the workload configurations (see the sketch after this list).
- Ceph puts all the Swift objects in a single Ceph pool called ‘.rgw.buckets’.
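For reference, the token and storage URL fed into the COSBench configuration could be obtained with a v1.0 auth request along these lines (host, user and key are placeholders, not the actual test credentials; the exact auth path may differ slightly between the Swift proxy and RGW):
# Works against both the Swift proxy (tempauth) and Ceph-RGW's Swift API
curl -i -H "X-Auth-User: test:tester" -H "X-Auth-Key: testing" http://<proxy-or-rgw-host>:8080/auth/v1.0
# The X-Auth-Token and X-Storage-Url response headers go into the workload files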
Results:
Small File Workload:
Large File Workload:
Additional Details:
90%RT: the response time within which 90% of requests completed
Max RT: the maximum response time across all successful requests
Small Files:
| Worker Count | RGW-Apache | RGW-CivetWeb | Swift-Native |
| 32 | 90%RT < 20 ms, Max RT = 1,450 ms | 90%RT < 20 ms, Max RT = 1,440 ms | 90%RT < 30 ms, Max RT = 10,230 ms |
| 64 | 90%RT < 50 ms, Max RT = 2,000 ms | 90%RT < 60 ms, Max RT = 1,460 ms | 90%RT < 30 ms, Max RT = 16,336 ms |
| 128 | 90%RT < 110 ms, Max RT = 3,090 ms | 90%RT < 120 ms, Max RT = 1,480 ms | 90%RT < 70 ms, Max RT = 16,380 ms |
| 256 | 90%RT < 210 ms, Max RT = 1,760 ms | 90%RT < 120 ms, Max RT = 3,280 ms | 90%RT < 90 ms, Max RT = 17,020 ms |
| 512 | 90%RT < 330 ms, Max RT = 33,120 ms | 90%RT < 200 ms, Max RT = 20,040 ms | 90%RT < 160 ms, Max RT = 16,760 ms |
Large Files:
| Worker Count | RGW-Apache | RGW-CivetWeb | Swift-Native |
| 8 | 90%RT < 3,110 ms, Max RT = 4,540 ms | 90%RT < 3,060 ms, Max RT = 5,380 ms | 90%RT < 6,740 ms, Max RT = 11,210 ms |
| 16 | 90%RT < 5,550 ms, Max RT = 7,980 ms | 90%RT < 5,780 ms, Max RT = 18,150 ms | 90%RT < 8,150 ms, Max RT = 13,710 ms |
| 32 | 90%RT < 10,860 ms, Max RT = 11,900 ms | 90%RT < 10,970 ms, Max RT = 12,120 ms | 90%RT < 9,800 ms, Max RT = 17,810 ms |
| 64 | 90%RT < 21,370 ms, Max RT = 24,200 ms | 90%RT < 21,190 ms, Max RT = 22,080 ms | 90%RT < 19,530 ms, Max RT = 38,760 ms |
| 128 | 90%RT < 42,410 ms, Max RT = 43,340 ms | 90%RT < 41,590 ms, Max RT = 44,210 ms | 90%RT < 46,800 ms, Max RT = 74,810 ms |
Conclusion:
- Native Swift’s behaviour and results curve seem sane; a clear relationship between concurrency and throughput is established.
- Ceph-RGW seems to have a problem with the RGW threading model; a flat throughput curve with increasing concurrency is certainly not a good sign.
- Native Swift in general performs better in high-concurrency environments.
- Ceph-RGW gives better bandwidth at lower concurrency.
- Ceph-RGW response time is excellent for large objects.
- For small objects at lower concurrency, Ceph-RGW seems very promising; however, there is much to do, as concurrency plays a big role in a web-server environment.
- Ceph-RGW’s major bottleneck is the web server. CivetWeb and Apache FastCGI give comparable numbers, though CivetWeb is better than Apache+FastCGI in terms of response time at high concurrency. CivetWeb has an inherent design limitation, which is already reported here.
- Digging further, I also made an attempt to benchmark Ceph using RADOS bench, which uses Ceph objects directly (as opposed to the Swift object interface Ceph provides). I ran the benchmark from the same node used as the COSBench controller + driver, so in this case RGW is out of the picture (a sample invocation is sketched after the summary below). In summary, my observations are as follows:
| Object Size & Threads | Avg Bandwidth (MB/s) | Avg Latency (s) | Runtime (s) |
| 1M, t=128 | 3428.361 | 0.0373327 | 300 |
| 1M, t=256 | 3485.405 | 0.0734383 | 300 |
| 4M, t=128 | 4015.811 | 0.127454 | 300 |
| 4M, t=256 | 4080.127 | 0.250806 | 300 |
| 10M, t=128 | 3628.700 | 0.352318 | 300 |
| 10M, t=256 | 3609.026 | 0.701526 | 300 |
The bandwidth numbers map directly to IOPS (bandwidth divided by object size; for example, 4015.811 MB/s at a 4M object size is roughly 1,000 ops/sec). So, in summary, even RADOS bench is not giving bandwidth beyond ~4 GB/s. The strange thing here is that it appears optimized for a 4 MB object size; increasing the object size beyond this does not give higher ops (bandwidth).
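For reference, a RADOS bench run along these lines would produce numbers of the kind shown above (the pool name and the write-then-read sequence are assumptions for illustration):
# Write 4 MB objects for 300 seconds with 128 concurrent ops, keep them for the read phase
rados bench -p rados-bench-test 300 write -b 4194304 -t 128 --no-cleanup
# Random reads of those objects for 300 seconds at the same concurrency
rados bench -p rados-bench-test 300 rand -t 128
# Remove the benchmark objects afterwards
rados -p rados-bench-test cleanup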
Other Remarks:
- Swift is more feature-rich in terms of its REST API.
- The S3 API is supported by both.
- Finding good documentation is a big pain when setting up Ceph.