It’s been almost three years that I’ve been on Azure and, to be honest, I never had any issues. However, I noticed something interesting recently: every time my server gets hit by a high traffic volume (either because a post went somewhat viral for some unknown reason – I mean come on, my posts are so badly written even I don’t know why anyone but me would read them... – or because people used my own guide to DoS my own server ( ¯\_(ツ)_/¯ ) – yeah, my bad!), everything just slows down. Ideally this wouldn’t be a problem for me, because my server is over-specced to deal with 5k-10k users per day, but Azure recently seems to have changed something, putting limitations on IOPS and disk R/W priority. Again, not a big deal, but it bothers me when something is out of my control. I like having that control over my own server and resources.

Almost out of curiosity, I looked around and found UpCloud.com, who claim they have the best SSD performance and the highest IOPS! Now, every time I see claims like these I get very skeptical, but since signup was free (well, kind of – I got a referral code from a friend that adds $25 credit to the account), I signed up hesitantly and decided to create a server, run some tests and compare the results to an Azure Virtual Machine. The results are (very) interesting!
UpCloud VM Config
I went in and deployed a standard VM in UpCloud with the following config:
- RAM: 8192 MB
- CPU: 4 CPU
- Disk: 160 GB
- Transfer: 5120 GB
- Network Controller: VirtIO(Default)
- Disk Controller: VirtIO(Default)
- OS: Ubuntu 18.04 LTS
- Firewall: Not Included
Benchmark CPU with Geekbench 3
It’s pretty standard: you either use Geekbench 3 or 4, but since 3 is more widely used, I went for Geekbench 3. Installation is really straightforward in Ubuntu.
root@www: # uname -a
Linux technoused.blogspot.com 4.15.0-32-generic #35-Ubuntu SMP Friday Aug 10 17:58:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
root@www: #
root@www: # sudo dpkg --add-architecture i386
root@www: # sudo apt-get update
root@www: # sudo apt-get install libc6:i386 libstdc++6:i386 -y
After this I just needed to download Geekbench 3, untar it and run it.
root@www: # wget http://cdn.primatelabs.com/Geekbench-3.4.1-Linux.tar.gz
--2018-09-19 07:21:07-- http://cdn.primatelabs.com/Geekbench-3.4.1-Linux.tar.gz
Resolving cdn.primatelabs.com (cdn.primatelabs.com)... 52.85.112.102, 52.85.112.71, 52.85.112.143, ...
Connecting to cdn.primatelabs.com (cdn.primatelabs.com)|52.85.112.102|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 9990361 (9.5M) [application/x-gzip]
Saving to: 'Geekbench-3.4.1-Linux.tar.gz'
Geekbench-3.4.1-Lin 100%[===================>] 9.53M 28.2MB/s in 0.3s
2018-09-19 07:21:08 (28.2 MB/s) - 'Geekbench-3.4.1-Linux.tar.gz' saved [9990361/9990361]
FINISHED --2018-09-19 07:21:08--
Total wall clock time: 0.6s
Downloaded: 1 files, 9.5M in 0.3s (28.2 MB/s)
Untarring it:
root@www: # tar -zxvf /Geekbench-3.4.1-Linux.tar.gz && cd /dist/Geekbench-3.4.1-Linux/
Running Geekbench 3
root@www: /dist/Geekbench-3.4.1-Linux# ./geekbench
Geekbench 3 runs multiple tests to measure CPU performance, including floating point calculations and memory speed checks. In short, it does its thing and creates a web report that it uploads to a website. My test results for the UpCloud VM are available here:
Uploading results to the Geekbench Browser. This could take a minute or two
depending on the speed of your network connection.
Upload succeeded. Visit the following link and view your results online:
http://browser.primatelabs.com/geekbench3/8681159
Visit the following link and add this result to your profile:
http://browser.primatelabs.com/geekbench3/claim/8681159?key=466728
Benchmark Disk speed with fio
This is where my Azure VM was struggling! It’s important that I figure out the disk lag by testing as much as I can. fio is a small but fantastic tool for disk I/O benchmarking and stress testing. Installation is really simple:
root@www: /dist/Geekbench-3.4.1-Linux# sudo apt-get install fio -y
Now let’s run some IOPS tests. BTW, IOPS means I/O operations per second. Usually you want to test IOPS through random reads and writes to the disk. The higher the IOPS, the faster the storage. For comparison, a standard 7,200 rpm SATA HDD would score around 75-100 IOPS.
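Before the results, a quick note on the fio flags used in the commands below, as I understand them:
- --direct=1 bypasses the page cache (O_DIRECT), so we measure the disk rather than RAM
- --ioengine=libaio uses Linux native asynchronous I/O
- --bs=4k reads and writes in 4 KiB blocks, the classic block size for IOPS testing
- --iodepth=64 keeps up to 64 I/Os in flight at once
- --size=4G runs against a 4 GiB test file
- --rw=randrw/randread/randwrite sets the access pattern, and --rwmixread=75 makes the mixed test 75% reads / 25% writes
- --gtod_reduce=1 cuts down on timing syscalls to reduce measurement overhead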
Random read/write performance
root@www: /dist/Geekbench-3.4.1-Linux# fio --name=randrw --ioengine=libaio --direct=1 --bs=4k --iodepth=64 --size=4G --rw=randrw --rwmixread=75 --gtod_reduce=1
randrw: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.1
Starting 1 process
randrw: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=284MiB/s,w=93.5MiB/s][r=72.6k,w=23.9k IOPS][eta 00m:00s]
randrw: (groupid=0, jobs=1): err= 0: pid=2973: Wed Sep 19 07:23:34 2018
read: IOPS=70.7k, BW=276MiB/s (290MB/s)(3070MiB/11111msec)
bw ( KiB/s): min=261488, max=295496, per=99.92%, avg=282700.18, stdev=8544.88, samples=22
iops : min=65372, max=73874, avg=70675.23, stdev=2136.26, samples=22
write: IOPS=23.6k, BW=92.3MiB/s (96.8MB/s)(1026MiB/11111msec)
bw ( KiB/s): min=86056, max=98592, per=99.90%, avg=94466.68, stdev=2923.62, samples=22
iops : min=21514, max=24648, avg=23616.64, stdev=730.91, samples=22
cpu : usr=9.46%, sys=29.91%, ctx=27208, majf=0, minf=8
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwt: total=785920,262656,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=276MiB/s (290MB/s), 276MiB/s-276MiB/s (290MB/s-290MB/s), io=3070MiB (3219MB), run=11111-11111msec
WRITE: bw=92.3MiB/s (96.8MB/s), 92.3MiB/s-92.3MiB/s (96.8MB/s-96.8MB/s), io=1026MiB (1076MB), run=11111-11111msec
Disk stats (read/write):
vda: ios=781617/261218, merge=0/11, ticks=412260/184136, in_queue=589192, util=98.54%
Summary of Random Read/Write Test
read: IOPS=70.7k, BW=276MiB/s (290MB/s)(3070MiB/11111msec)
write: IOPS=23.6k, BW=92.3MiB/s (96.8MB/s)(1026MiB/11111msec)
READ: bw=276MiB/s (290MB/s), 276MiB/s-276MiB/s (290MB/s-290MB/s), io=3070MiB (3219MB), run=11111-11111msec
WRITE: bw=92.3MiB/s (96.8MB/s), 92.3MiB/s-92.3MiB/s (96.8MB/s-96.8MB/s), io=1026MiB (1076MB), run=11111-11111msec
Random read performance
root@www: /dist/Geekbench-3.4.1-Linux# fio --name=randread --ioengine=libaio --direct=1 --bs=4k --iodepth=64 --size=4G --rw=randread --gtod_reduce=1
randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.1
Starting 1 process
randread: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [r(1)][100.0%][r=390MiB/s,w=0KiB/s][r=99.9k,w=0 IOPS][eta 00m:00s]
randread: (groupid=0, jobs=1): err= 0: pid=2979: Wed Sep 19 07:24:05 2018
read: IOPS=99.3k, BW=388MiB/s (407MB/s)(4096MiB/10558msec)
bw ( KiB/s): min=384696, max=405384, per=99.85%, avg=396673.14, stdev=4738.98, samples=21
iops : min=96174, max=101346, avg=99168.29, stdev=1184.75, samples=21
cpu : usr=8.67%, sys=31.31%, ctx=24115, majf=0, minf=71
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwt: total=1048576,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=388MiB/s (407MB/s), 388MiB/s-388MiB/s (407MB/s-407MB/s), io=4096MiB (4295MB), run=10558-10558msec
Disk stats (read/write):
vda: ios=1033147/4, merge=0/2, ticks=537228/4, in_queue=529948, util=96.65%
read: IOPS=99.3k, BW=388MiB/s (407MB/s)(4096MiB/10558msec)
READ: bw=388MiB/s (407MB/s), 388MiB/s-388MiB/s (407MB/s-407MB/s), io=4096MiB (4295MB), run=10558-10558msec
Random write performance
root@www: /dist/Geekbench-3.4.1-Linux# fio --name=randwrite --ioengine=libaio --direct=1 --bs=4k --iodepth=64 --size=4G --rw=randwrite --gtod_reduce=1
randwrite: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.1
Starting 1 process
randwrite: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [w(1)][93.3%][r=0KiB/s,w=279MiB/s][r=0,w=71.4k IOPS][eta 00m:01s]
randwrite: (groupid=0, jobs=1): err= 0: pid=2983: Wed Sep 19 07:24:28 2018
write: IOPS=70.7k, BW=276MiB/s (289MB/s)(4096MiB/14836msec)
bw ( KiB/s): min=256336, max=312784, per=99.79%, avg=282123.59, stdev=14126.73, samples=29
iops : min=64084, max=78196, avg=70530.90, stdev=3531.68, samples=29
cpu : usr=6.65%, sys=58.88%, ctx=22261, majf=0, minf=8
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwt: total=0,1048576,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
WRITE: bw=276MiB/s (289MB/s), 276MiB/s-276MiB/s (289MB/s-289MB/s), io=4096MiB (4295MB), run=14836-14836msec
Disk stats (read/write):
vda: ios=0/1041069, merge=0/236522, ticks=0/654296, in_queue=646988, util=97.84%
write: IOPS=70.7k, BW=276MiB/s (289MB/s)(4096MiB/14836msec)
WRITE: bw=276MiB/s (289MB/s), 276MiB/s-276MiB/s (289MB/s-289MB/s), io=4096MiB (4295MB), run=14836-14836msec
I’ve repeated the read and write IOPS summary lines below each code section so they’re easy to spot.
Benchmark disk latency with IOPing
Now, just because a disk can read and write fast doesn’t mean it will do well in real life. There’s a latency test you can run that tells you the delay between each request, and IOPing does exactly that. It’s a very small tool that sends I/O requests to the disk and benchmarks the time to respond; the results show disk latency the same way a ping test measures network latency. (It’s similar to running any command with time in front of it: you get the time taken for that command.) Installing it is once again simple (ioping is also in the Ubuntu repositories, so apt-get install ioping should work too, but I grabbed the .deb directly).
root@www: /dist/Geekbench-3.4.1-Linux# cd
root@www: # pwd
/root
root@www: # wget https://launchpad.net/ubuntu/+archive/primary/+files/ioping_0.9-2_amd64.deb
--2018-09-19 07:25:25-- https://launchpad.net/ubuntu/+archive/primary/+files/ioping_0.9-2_amd64.deb
Resolving launchpad.net (launchpad.net)... 91.189.89.223, 91.189.89.222
Connecting to launchpad.net (launchpad.net)|91.189.89.223|:443... connected.
HTTP request sent, awaiting response... 303 See Other
Location: https://launchpadlibrarian.net/238178369/ioping_0.9-2_amd64.deb [following]
--2018-09-19 07:25:25-- https://launchpadlibrarian.net/238178369/ioping_0.9-2_amd64.deb
Resolving launchpadlibrarian.net (launchpadlibrarian.net)... 91.189.89.228, 91.189.89.229
Connecting to launchpadlibrarian.net (launchpadlibrarian.net)|91.189.89.228|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13320 (13K) [application/x-debian-package]
Saving to: 'ioping_0.9-2_amd64.deb'
ioping_0.9-2_amd64. 100%[===================>] 13.01K --.-KB/s in 0s
2018-09-19 07:25:25 (170 MB/s) - 'ioping_0.9-2_amd64.deb' saved [13320/13320]
Install it:
root@www: # dpkg -i ioping_0.9-2_amd64.deb
Selecting previously unselected packet ioping.
(Reading database ... 67958 files and directories currently installed.)
Preparing to unpack ioping_0.9-2_amd64.deb ...
Unpacking ioping (0.9-2) ...
Setting upwardly ioping (0.9-2) ...
Processing triggers for man-db (2.8.3-2) ...
Run the test
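The test itself is just ioping pointed at the directory you want to measure, with a request count. Something like this minimal sketch (the -c 20 request count is arbitrary):

ioping -c 20 .

ioping prints one latency line per request and a min/avg/max/mdev summary at the end.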
The time values show the I/O latency, measured in microseconds. The lower the delay, the better the performance. So we are looking at an average of 199ms, I guess!
Azure VM Config
I decided to pretty much clone my VM in Azure: create a new one and run the same tests. Azure doesn’t offer the exact same VM sizes, so the CPU and memory don’t match exactly, but I went for the Premium SSD option. My Azure VM config is a D2s_v3, which costs almost $65 USD per month.
- RAM: 8192 MB
- CPU: 2 CPU
- Disk: 16 GB
- Transfer: (unlimited?)
- Network Controller: (doesn’t say) Default
- Disk Controller: (doesn’t say) Default
- OS: Ubuntu 18.04 LTS
- Firewall: Included
One thing I’ve always liked about Azure is that it’s super stable, and I never had any issues other than random slow disk response times. But let’s test away…
Benchmark CPU with Geekbench 3
root@ubuntu18-ws01: /dist/Geekbench-3.4.1-Linux# ./geekbench
Results here:
Uploading results to the Geekbench Browser. This could take a minute or two
depending on the speed of your network connection.
Upload succeeded. Visit the following link and view your results online:
http://browser.primatelabs.com/geekbench3/8681168
Visit the following link and add this result to your profile:
http://browser.primatelabs.com/geekbench3/claim/8681168?key=542136
I went in and created a comparison between the UpCloud and Azure Geekbench results, which can be found here: http://browser.geekbench.com/geekbench3/compare/8681168?baseline=8681159
Well, that’s something I wasn’t expecting: a difference that big!
Benchmark Disk speed with fio
Random read/write performance
root@ubuntu18-ws01: /dist/Geekbench-3.4.1-Linux# fio --name=randrw --ioengine=libaio --direct=1 --bs=4k --iodepth=64 --size=4G --rw=randrw --rwmixread=75 --gtod_reduce=1
randrw: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.1
Starting 1 process
randrw: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [m(1)][99.8%][r=11.9MiB/s,w=4116KiB/s][r=3055,w=1029 IOPS][eta 00m:01s]
randrw: (groupid=0, jobs=1): err= 0: pid=2875: Wed Sep 19 08:08:39 2018
read: IOPS=1409, BW=5640KiB/s (5775kB/s)(3070MiB/557409msec)
bw ( KiB/s): min= 392, max=13384, per=100.00%, avg=11865.60, stdev=1735.60, samples=529
iops : min= 98, max= 3346, avg=2966.38, stdev=433.90, samples=529
write: IOPS=471, BW=1885KiB/s (1930kB/s)(1026MiB/557409msec)
bw ( KiB/s): min= 8, max= 4568, per=100.00%, avg=3957.91, stdev=621.97, samples=530
iops : min= 2, max= 1142, avg=989.46, stdev=155.49, samples=530
cpu : usr=0.50%, sys=1.66%, ctx=101279, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwt: total=785920,262656,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=5640KiB/s (5775kB/s), 5640KiB/s-5640KiB/s (5775kB/s-5775kB/s), io=3070MiB (3219MB), run=557409-557409msec
WRITE: bw=1885KiB/s (1930kB/s), 1885KiB/s-1885KiB/s (1930kB/s-1930kB/s), io=1026MiB (1076MB), run=557409-557409msec
Disk stats (read/write):
sda: ios=785945/262856, merge=0/184, ticks=26366708/9518024, in_queue=15619640, util=44.79%
read: IOPS=1409, BW=5640KiB/s (5775kB/s)(3070MiB/557409msec)
write: IOPS=471, BW=1885KiB/s (1930kB/s)(1026MiB/557409msec)
READ: bw=5640KiB/s (5775kB/s), 5640KiB/s-5640KiB/s (5775kB/s-5775kB/s), io=3070MiB (3219MB), run=557409-557409msec
WRITE: bw=1885KiB/s (1930kB/s), 1885KiB/s-1885KiB/s (1930kB/s-1930kB/s), io=1026MiB (1076MB), run=557409-557409msec
Random read performance
root@ubuntu18-ws01: /dist/Geekbench-3.4.1-Linux# fio --name=randread --ioengine=libaio --direct=1 --bs=4k --iodepth=64 --size=4G --rw=randread --gtod_reduce=1
randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.1
Starting 1 process
randread: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [r(1)][100.0%][r=15.9MiB/s,w=0KiB/s][r=4080,w=0 IOPS][eta 00m:00s]
randread: (groupid=0, jobs=1): err= 0: pid=3769: Wed Sep 19 08:18:35 2018
read: IOPS=4077, BW=15.9MiB/s (16.7MB/s)(4096MiB/257192msec)
bw ( KiB/s): min=14939, max=17952, per=100.00%, avg=16362.75, stdev=155.96, samples=514
iops : min= 3734, max= 4488, avg=4090.48, stdev=38.95, samples=514
cpu : usr=1.06%, sys=3.55%, ctx=106825, majf=0, minf=72
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwt: total=1048576,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=15.9MiB/s (16.7MB/s), 15.9MiB/s-15.9MiB/s (16.7MB/s-16.7MB/s), io=4096MiB (4295MB), run=257192-257192msec
Disk stats (read/write):
sda: ios=1047864/624, merge=0/86, ticks=16321336/8056, in_queue=15811120, util=98.31
read: IOPS=4077, BW=15.9MiB/s (16.7MB/s)(4096MiB/257192msec)
READ: bw=15.9MiB/s (16.7MB/s), 15.9MiB/s-15.9MiB/s (16.7MB/s-16.7MB/s), io=4096MiB (4295MB), run=257192-257192msec
Random write performance
root@ubuntu18-ws01: /dist/Geekbench-3.4.1-Linux# fio --name=randwrite --ioengine=libaio --direct=1 --bs=4k --iodepth=64 --size=4G --rw=randwrite --gtod_reduce=1
randwrite: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.1
Starting 1 process
randwrite: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=15.0MiB/s][r=0,w=4084 IOPS][eta 00m:00s]
randwrite: (groupid=0, jobs=1): err= 0: pid=4120: Wed Sep 19 08:42:24 2018
write: IOPS=764, BW=3059KiB/s (3133kB/s)(4096MiB/1370953msec)
bw ( KiB/s): min= 24, max=17064, per=100.00%, avg=13249.36, stdev=4911.17, samples=632
iops : min= 6, max= 4266, avg=3312.32, stdev=1227.79, samples=632
cpu : usr=0.21%, sys=1.42%, ctx=11177, majf=0, minf=6
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwt: total=0,1048576,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
WRITE: bw=3059KiB/s (3133kB/s), 3059KiB/s-3059KiB/s (3133kB/s-3133kB/s), io=4096MiB (4295MB), run=1370953-1370953msec
Disk stats (read/write):
sda: ios=0/1049237, merge=0/26555, ticks=0/170290988, in_queue=15404152, util=18.95%
write: IOPS=764, BW=3059KiB/s (3133kB/s)(4096MiB/1370953msec)
WRITE: bw=3059KiB/s (3133kB/s), 3059KiB/s-3059KiB/s (3133kB/s-3133kB/s), io=4096MiB (4295MB), run=1370953-1370953msec
Benchmark disk latency with IOPing
Run the test
Running the same ioping test on the Azure VM gives an average of 217ms!
What is going on?
If you compare the read/write values for Azure and UpCloud, it’s just absurd. I could not believe Azure was so slow!
UpCloud:
- READ: bw=276MiB/s (290MB/s), 276MiB/s-276MiB/s (290MB/s-290MB/s), io=3070MiB (3219MB), run=11111-11111msec
- WRITE: bw=92.3MiB/s (96.8MB/s), 92.3MiB/s-92.3MiB/s (96.8MB/s-96.8MB/s), io=1026MiB (1076MB), run=11111-11111msec
Azure:
- READ: bw=5640KiB/s (5775kB/s), 5640KiB/s-5640KiB/s (5775kB/s-5775kB/s), io=3070MiB (3219MB), run=557409-557409msec
- WRITE: bw=1885KiB/s (1930kB/s), 1885KiB/s-1885KiB/s (1930kB/s-1930kB/s), io=1026MiB (1076MB), run=557409-557409msec
I mean seriously, we are comparing 290MB/s with 5775kB/s read speed and 96.8MB/s with 1930kB/s write speed, which makes Azure roughly 50x slower on both. This just cannot be true! I mean, if this were true, I honestly don’t know how Azure even works and stays in business. So I went and did some research.
Apparently, Azure, AWS and other big hosts who don’t care much about competition from smaller players limit this kind of test. My guess is that they simply cap IOPS and throughput per disk size and per VM size (both Azure Premium SSD disks and the VM sizes themselves have documented limits), which would produce exactly this kind of ceiling and might explain why the test results are just so bad. I thought, well… let’s put that to the test anyway! How about I download a 4GB ISO file and then try to copy, move and delete it on my Azure VM? Surely that is not stress testing; it’s more like normal file operations.
More Azure testing
I decided to download a Debian ISO (4.35GB in size) and time it.
root@ubuntu18-ws01: # time wget https://cdimage.debian.org/debian-cd/current/amd64/iso-dvd/debian-9.5.0-amd64-DVD-3.iso
--2018-09-19 08:43:58-- https://cdimage.debian.org/debian-cd/current/amd64/iso-dvd/debian-9.5.0-amd64-DVD-3.iso
Resolving cdimage.debian.org (cdimage.debian.org)... 194.71.11.173, 194.71.11.165, 2001:6b0:19::173, ...
Connecting to cdimage.debian.org (cdimage.debian.org)|194.71.11.173|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://gemmei.ftp.acc.umu.se/debian-cd/current/amd64/iso-dvd/debian-9.5.0-amd64-DVD-3.iso [following]
--2018-09-19 08:43:59-- https://gemmei.ftp.acc.umu.se/debian-cd/current/amd64/iso-dvd/debian-9.5.0-amd64-DVD-3.iso
Resolving gemmei.ftp.acc.umu.se (gemmei.ftp.acc.umu.se)... 194.71.11.137, 2001:6b0:19::137
Connecting to gemmei.ftp.acc.umu.se (gemmei.ftp.acc.umu.se)|194.71.11.137|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4674906112 (4.4G) [application/x-iso9660-image]
Saving to: ‘debian-9.5.0-amd64-DVD-3.iso’
debian-9.5.0-amd64-DVD-3.iso 100%[====================================================================================================>] 4.35G 24.2MB/s in 3m 16s
2018-09-19 08:47:16 (22.7 MB/s) - ‘debian-9.5.0-amd64-DVD-3.iso’ saved [4674906112/4674906112]
real 3m17.655s
user 0m5.235s
sys 0m10.549s
Hmm, so it’s 3m17.655s to download a 4.35GB file at 24.2MB/s. Bad? No. It could be better, but then again it could be limited by the Debian mirror’s rate limiting.
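As a side note, if you want to take the network completely out of the equation, a plain local write with dd is a quick sanity check. This is just a sketch (not part of the runs above); it writes a 1GB file with the page cache bypassed and prints the throughput, and you can rm ddtest.img afterwards:

time dd if=/dev/zero of=ddtest.img bs=1M count=1024 oflag=direct status=progress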
Now, let’s try to make a copy of the file.
root@ubuntu18-ws01: # time cp debian-9.5.0-amd64-DVD-3.iso debian-9.5.0-amd64-DVD-3-copy.iso
real 2m31.754s
user 0m0.028s
sys 0m4.797s
Seriously, it took 2m31.754s to copy a 4.35GB file!
How about moving that file to the /tmp directory?
root@ubuntu18-ws01: # time mv debian-9.5.0-amd64-DVD-3-copy.iso /tmp
real 0m43.087s
user 0m0.006s
sys 0m0.000s
0m43.087s to move a file? The VMs running on my own desktop, on a spinning HDD, do that faster! (To be fair, if /tmp sits on a different filesystem, mv has to copy the data rather than just rename it.)
How about deleting the file?
root@ubuntu18-ws01: # time rm /tmp/debian-9.5.0-amd64-DVD-3-copy.iso
real 0m8.040s
user 0m0.000s
sys 0m0.614s
0m8.040s to delete a file!
Summary
I don’t know what’s going on, but Azure’s Premium SSD doesn’t look like it lives up to the standard, or to the promises. Azure might be very stable and big, but there are things that can be improved. And it’s not just the disk: even the CPU benchmark was all over the place; Azure is nowhere near UpCloud. The only benefit right now I can think of is the included firewall, but Azure costs me around $65 USD per month while UpCloud is $40 USD, plus maybe an added firewall service. Since I run my own firewalls, I don’t think that’s a problem for me. I also get 4 CPUs and 8GB RAM along with 160GB of disk (not that I need that much), and the savings come to roughly $25 a month, or about $300 a year.
Moving to UpCloud.com seems like the right move. I’ve got a few websites hosted on Azure, so it will take some time, but I am honestly thinking of moving my web server. BTW, I’ve added my promo code that should give you $25 credit! ……
Also, I received multiple messages about my server being slow (mostly from Brazilian readers, so it could be region specific). I could use some favours here: let me know in the comments section whether my current hosting feels fast or not, or leave any general comments about it. Comments are, as usual, anonymous and don’t require signup, so feel free. Also, if you are using UpCloud, let me know about your experience! I am really keen to move and I will probably write a new article after I’ve moved. Keep in touch!