BOOTSTORM AND VM DISK ACCESS: NVME VS. SATA
Technical Report
ABSTRACT
As solid state drive (SSD) technology matures, the performance it delivers is constrained by old disk controller technologies. The NVMe protocol on the PCIe interface bypasses the controller by attaching the disk drive directly to the bus. In this report we present the results of comparing the performance of NVMe and SATA drives in the context of virtualization. In the experiment we observed how the different kinds of drives react to a VM bootstorm and to intense simultaneous drive access from all the running VMs.
Results show that the NVMe interface offers significant benefits over SATA: bootstorm time is on average 75% slower when using SATA; moreover, when all the VMs perform disk access with diskspd simultaneously, NVMe outperforms SATA by a factor greater than 4, offering on average more than 250MB/s of bandwidth to each VM.
Antonio Cisternino, Maurizio Davini TR-03-16
IT Center Università di Pisa P.I.: 00286820501 C.F.: 80003670504 Largo Bruno Pontecorvo, 3, I-56127 Pisa Italy
*Other names and brands may be claimed as the property of others.
Executive Summary
NVMe SSD drives provide more bandwidth and lower latency than their SAS and SATA based counterparts (see for instance [5]). However, it is always difficult to understand whether this increased performance corresponds to a real benefit for a real world application. In this report we have focused our attention on two important aspects of virtualization: boot storm, and disk performance from a VM standpoint.
We present the results of our experiments testing the performance of virtual machines running on Microsoft Hyper-V with virtual drives stored on PCIe® and SATA SSD drives (20nm MLC). The solid state technology used in both kinds of drives tested is the same, therefore the tests allow us to appreciate the effective difference that the interface makes in terms of performance. The tests aim at understanding the potential impact of adopting controller-less disks on a virtualization platform and consist of:
- checking the impact of the PCIe interface when a boot storm1 occurs;
- verifying the throughput that can be achieved concurrently by running the open source diskspd tool [7] on all the virtual machines hosted by a single hypervisor.
The results have shown that employing PCIe drives with NVMe offers significant improvements in the virtualization scenarios examined. In particular, we observed a speedup of 80% in the boot storm of 50 virtual machines, but the improvement already shows up when booting just 5 VMs simultaneously. Once the VMs finished booting they started generating sequential and random access on the virtual disk drive using the diskspd tool [7]; we measured the average bandwidth provided by a PCIe drive and compared it with a SATA drive. Results show that when 50 VMs access their virtual drives, stored on the same physical SSD, random access on a PCIe interface delivers on average 273MB/s to every VM, whereas a SATA interface is capable of providing only 90MB/s under the same conditions. Also in this case we noticed significant benefits with smaller numbers of virtual machines in execution.
Introduction
This report is the third in a series dedicated to evaluating solid state drive (SSD) performance. In the first report [5] we compared the performance of SATA and NVMe PCIe SSD connections using HammerDB, showing that the NVMe protocol used for attaching drives directly to the PCIe bus offers significant improvements with respect to a SATA controller in terms of bandwidth and latency under a database-like workload.
In the second report [6] we tested the impact of the PCIe interface with respect to SATA when aggregating drives using different forms of software defined RAID: Microsoft Storage Spaces [9] and Intel RSTe [8]; experiments have shown superior software RAID performance for the PCIe interface, capable of delivering up to 10GB/s when aggregating four drives using RAID0.
We are interested in exploring how changing the hardware interface used for accessing an SSD drive affects the performance of a server acting as hypervisor for a number of virtual machines. We focused our attention on two important areas for virtualization: boot storm (i.e. simultaneous boot of many virtual machines) and virtual disk benchmarking (i.e. all VMs performing a disk benchmark simultaneously). Tests have been measured by combining performance counters of the hypervisor host running Windows Server 2012 R2 with the output generated by each VM running the diskspd tool [7]. We used Windows 10 Enterprise as the guest OS executed by the virtual machines.
1 i.e., when, under certain conditions, the system has to start a large number of virtual machines simultaneously
We monitored several performance counters, including CPU, memory, power absorbed, disk throughput and disk queue length. Virtual drives have been tested for 40 seconds by executing diskspd to generate sequential access with 128KB blocks for read and write, and random access with 4KB blocks for read and write.
Measuring boot time is not easy since VMs are opaque to the hypervisor. We configured guest VMs for auto-logon and to execute a PowerShell script at startup: a file indicating the current date and time is generated before starting the disk test procedure. We considered the boot of all VMs complete at the latest boot completion time indicated by the generated files.
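A minimal sketch of this measurement in PowerShell (following the .start file convention used by the scripts in the appendix; the actual parsing is performed by the F# script there):

# Boot of all VMs is considered complete at the latest timestamp
# recorded in the .start files written by the guests.
$bootEnd = Get-ChildItem "$sharedir\*.start" |
    ForEach-Object { [datetime]::ParseExact((Get-Content $_.FullName -TotalCount 1), "yyyy-MM-dd HH:mm:ss", $null) } |
    Sort-Object | Select-Object -Last 1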
Virtual machines stored the files containing the test results on a network share exposed by the hypervisor on an internal virtual network defined in Hyper-V. We used a DHCP server to assign IP addresses to the virtual machines, and the IP of every virtual machine has been used as its identifier.
Since every VM test generates 6 files and 7 additional files contain the beginning time and the performance counters recorded for the experiment, the experiment running 50 VMs generates 307 files. We analyzed the 2536 files generated by repeating the experiment for the different VM counts and drive kinds using an F# script designed to parse text files and access the binary log format generated by the perfmon tool.
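The file count follows directly from the test matrix; a quick sanity check of the arithmetic (a sketch, not part of the test scripts):

# Each experiment with N VMs produces 6*N result files plus 7 bookkeeping
# files (the begin marker and six performance counter logs).
$counts = @(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50)
$perDrive = ($counts | ForEach-Object { 6 * $_ + 7 } | Measure-Object -Sum).Sum   # 1268
$total = 2 * $perDrive   # two drive kinds (SATA, NVMe) -> 2536 files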
Boot storm analysis
A boot storm happens when a hypervisor starts a number of virtual machines simultaneously. Operating system boot generates a significant amount of disk operations that may thrash performance, leading to a very slow start of the virtual machines and of the services they deliver. A possible mitigation is a deferred power-on of less critical virtual machines.
Previous experiments have already shown the superiority of the NVMe interface over SATA, so it comes as no surprise that boot is faster when VM virtual drives reside on an NVMe drive. However, what the experiment has clearly shown is that the benefits are significant starting from a boot storm of just 5 virtual machines, where we found the boot time on NVMe taking half the time of the SATA drive. With 50 virtual machines the boot time is 251s when using NVMe and 452s when using SATA. The disk bandwidth usage in the graph explains the benefits: after 5 VMs the SATA drive reaches its maximum capacity, delaying the requests made by booting VMs.
During the boot storm we monitored power absorbed, memory and CPU usage. As expected, a shorter boot time corresponds to significant energy savings: boot time using NVMe is 44% less than boot time using SATA when booting 50 VMs (251s vs. 452s), and the energy used is 41% less.
[Figure: Boot duration (s) vs. #VM, NVMe vs. SATA]
[Figure: Disk bandwidth (bytes/s) vs. #VM, NVMe (NDisk) vs. SATA (DDisk)]
By using an NVMe SSD the boot storm gets a significant speedup, leading to faster boot and reduced downtime for VMs after a reboot
It can easily be observed that the system uses more resources in terms of memory and CPU when booting VMs using the NVMe interface: this is due to less idle time spent waiting for data from the drive; in particular, page faults are the basic primitive used by Windows to implement memory mapping, and therefore file reading.
We can conclude that boot storm benefits from the superior bandwidth offered by NVMe drives, and the advantage is tangible both in boot time and in system efficiency, leading to better use of the energy necessary for the task.
The NVMe interface offers benefits to the boot storm starting from just 5 VMs
Virtual disk test analysis
Once the virtual machines finish booting, diskspd is run to simulate sequential and random access (with 128KB and 4KB block sizes respectively2), both for read and write. Each of the four possible combinations has been run for 10 seconds. However, since VMs do not finish booting simultaneously, the I/O generated by the four tests only partly overlapped, leading to disk access patterns more realistic than may appear at first sight.
2 These block sizes have been used as representative of many workloads and benchmarks.
[Figure: Energy (mJ) vs. #VM during the boot storm, NVMe vs. SATA]
[Figure: Page faults/sec vs. #VM during the boot storm, NVMe vs. SATA]
[Figure: % CPU, % Interrupt CPU and % Privileged CPU during boot vs. #VM, NVMe vs. SATA]
We measured several parameters during the test execution and computed their average and standard deviation. For the sake of brevity and clarity we discuss the average bandwidth measured within the running virtual machines, and the power consumption as measured from the operating system. The rest of the data will be made available as part of this technical report.
In the following graph we report the average bandwidth measured for the 8 different tests (four tests for each kind of interface). For both the NVMe and SATA interfaces the average bandwidth decreases as the number of virtual machines competing for the drive increases. From the graph it is evident that virtual drives residing on an NVMe SSD always perform significantly better, as the SATA controller constrains the potential throughput of the SSD media.
NVMe SSDs offer significant improvements in bandwidth and latency when accessing virtual drives, key elements in scenarios such as VDI
A single disk accessed through the NVMe interface offers almost 3x more virtual disk bandwidth to each VM with respect to SATA. This is very important since, with 50 virtual machines accessing the drive simultaneously, the average bandwidth available to each VM on an NVMe SSD (almost 300MB/s) is more than half the bandwidth of a dedicated SATA SSD. This number is significant since nowadays SATA SSD performance is often taken as the benchmark for a high performance disk drive.
The following table shows the data used to generate the graph above, detailing the measured bandwidth for the different configurations. We compared the standard deviations of the data sets and found that the values obtained measuring the NVMe and SATA disks are in similar ranges, indicating that comparing averages is meaningful.
[Figure: Virtual disk bandwidth (MB/s) vs. #VM for the eight tests: NVMe and SATA, Rand 4K read/write and Seq 128K read/write]

Average bandwidth per VM (MB/s):

#VM | NVMe Rand 4K read | NVMe Rand 4K write | NVMe Seq 128K read | NVMe Seq 128K write | SATA Rand 4K read | SATA Rand 4K write | SATA Seq 128K read | SATA Seq 128K write
  1 | 2188 | 2183 | 2504 | 2519 | 511 | 512 | 513 | 515
  2 | 1123 | 1478 | 1680 | 1220 | 371 | 399 | 373 | 357
  3 |  745 | 1164 | 1212 |  805 | 253 | 290 | 298 | 229
  4 |  737 | 1034 |  951 |  679 | 130 | 222 | 241 | 159
  5 |  462 |  777 |  715 |  482 | 277 | 235 | 223 | 255
  6 |  600 |  785 |  672 |  509 | 189 | 165 | 196 | 199
  7 |  432 |  690 |  625 |  394 | 172 | 199 | 173 | 150
  8 |  416 |  565 |  546 |  420 | 148 | 168 | 169 | 163
  9 |  307 |  490 |  535 |  321 | 133 | 151 | 159 | 148
 10 |  427 |  575 |  534 |  379 | 128 | 147 | 143 | 134
 20 |  333 |  419 |  319 |  347 | 147 | 159 | 138 | 138
 30 |  323 |  433 |  354 |  312 | 117 | 123 | 100 | 110
 40 |  283 |  324 |  285 |  291 | 100 | 105 | 102 | 102
 50 |  274 |  280 |  290 |  281 |  91 |  98 |  89 |  95
We measured the power of the host during the virtual disk test using Windows performance counters. The following graph shows the difference in power absorption between the tests conducted using the NVMe drive and those conducted on the SATA drive.

[Figure: Power (mW) vs. #VM during the virtual disk test, NVMe vs. SATA]

The noticeable aspect is that the system processes more than four times the data at the cost of a less than 2% increase in the power budget.
We found the virtual disk test very interesting, since disk I/O is often the cause of VM slowdown. The bandwidth available using a single drive is impossible to achieve through a 10Gbps connection, which carries at most about 1.25GB/s.
Disk interface comparison
In this section we briefly compare the disk bandwidth usage recorded using one NVMe drive and one SATA drive. Using the information collected by the scripts during the whole test we have been able to separate the boot storm phase from the virtual disk performance test phase.
In the following graph we report the ratio between the bandwidth measured using the NVMe drive and that measured using the SATA drive. The three lines show the bandwidth ratio for the whole run (blue line), for the bootstorm test (red line), and for the virtual disk test (green line).

[Figure: Bw NVMe / Bw SATA vs. #VM for the whole run (Disk), the boot phase (Disk - Boot), and the test phase (Disk - Test)]
As we can see from the graph, the NVMe interface always offers better performance with respect to SATA, and over the whole test we measured an effective bandwidth usage more than four times higher in favor of the former. During the boot storm phase, when booting a single VM, a single SATA drive was able to deliver enough performance to fulfill the demand. However, even booting just 2 VMs is enough to saturate the SATA interface and make the benefits of the NVMe one obvious.
The ability of NVMe to feed the CPU with data, reducing idle time, contributes to improving the energy efficiency of the system
Test description
The test has been fully automated using two PowerShell scripts. The first script has been used to replicate the drive of a preconfigured instance of Windows 10 Enterprise edition and create the virtual machines on each drive (drive D has been mapped to a single SATA SSD drive, drive N has been mapped to a single NVMe drive). The second script performed the testing procedure as follows:
- Let S be the sequence @(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50)
- For each N in S perform the following steps:
1. Record the time of the experiment beginning in a file
2. Start the performance counter recording
3. Start N virtual machines on drive D (SATA)
4. Wait for all the VMs to generate a .done file indicating the completion of the VM test script
5. Stop the performance counter recording
6. Shut down the virtual machines
7. Repeat steps 3-6 for drive N (NVMe)
After booting, the pre-configured Windows 10 Enterprise image performed autologon and started a PowerShell script performing the virtual disk test using diskspd, an open source tool developed by Microsoft and used by its engineering team to evaluate storage performance. It is designed to generate a wide variety of disk patterns through a large number of command line switches. We restrained ourselves to just four tests (read and write) executed for 10 seconds each:
- Seq128K56Q: Sequential R/W, 128KB block size, 56 queues, single thread
- Rand4K56Q: Random R/W, 4KB block size, 56 queues, single thread
- Seq1M1Q: Sequential R/W, 1MB block size, 1 queue, single thread
- Rand4K1Q: Random R/W, 4KB block size, 1 queue, single thread
The number of queues has been determined empirically by testing the Seq128K bandwidth on a particular volume with different queue counts; on the server we tested, 56 seemed to be a good value.
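As an illustration, the Seq128K56Q read test and the Rand4K56Q write test correspond to diskspd invocations of the following shape (a sketch with placeholder paths; the exact command lines appear in the VM diskspd test script below):

# Sequential read: 128KB blocks, 10s duration, 56 outstanding I/Os,
# 1 thread, caching disabled, 0% writes.
diskspd.exe -b128K -d10 -o56 -t1 -h -w0 C:\test\testfile.dat
# Random write: 4KB blocks; -r selects random access, -w100 makes all I/Os writes.
diskspd.exe -b4K -d10 -o56 -t1 -r -h -w100 C:\test\testfile.dat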
The data recorded by the diskspd instances executed in each virtual machine, together with the files recording the beginning and the end of the execution, are saved on a network share provided by the hypervisor host through an internal virtual network switch. A DHCP server ensures that all the virtual machines receive an IP address on this internal network.
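For reference, a minimal sketch of how such an internal network and coordination share can be set up on the host (standard Hyper-V, NetTCPIP and SMB cmdlets; the switch name, address and path match those in the test scripts, but this is an illustration, not the exact setup procedure used):

# Internal virtual switch, reachable only by the host and its guests.
New-VMSwitch -Name InternalNet -SwitchType Internal
# Host-side address on the internal network (guests get theirs via DHCP).
New-NetIPAddress -InterfaceAlias "vEthernet (InternalNet)" -IPAddress 192.168.10.2 -PrefixLength 24
# Share where every VM drops its .start/.done markers and diskspd outputs.
New-SmbShare -Name Coord -Path "C:\Users\cisterni\Desktop\BootStormLogs\Coord" -FullAccess Everyone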
The files containing the binary logs generated by perfmon and by the PowerShell script running on each VM have been analyzed with the help of an F# script included in the appendix. The script reads all the counters and diskspd outputs and generates a CSV file with the following columns (list grouped by category):
- VMCount: number of VMs tested
- DriveKind: kind of drive interface tested (SATA or NVMe)
- Begin: date and time of test
- End boot: date and time of boot conclusion (i.e. latest date and time of disk test start)
- End test: date and time of test conclusion (i.e. latest date and time of disk test end)
- Boot duration: boot duration in seconds
- Average CPU: all, boot, test
  o CPU Idle %
  o CPU Interrupt %
  o CPU User %
  o CPU %
  o CPU Privileged %
- Average Disk (C, D - SATA, N - NVMe): all, boot, test
  o Disk Read (B/s)
  o Queue Length
  o Disk (B/s)
  o Disk Write (B/s)
  o Disk Transfers /sec
- Average Page faults /sec: all, boot, test
- Average Available memory (MB): all, boot, test
- Average Power: all, boot, test
- Rand4KR, Rand4KW, Seq128KR, Seq128KW
  o Bytes (average and variance): bytes transferred
  o IOs (average and variance): number of I/O operations
  o MB/s (average and variance): transfer bandwidth
  o IOps (average and variance): I/O operations per second
The performance counter data have been analyzed for the whole test and for the boot and test phases separately; in all cases the average value has been computed. The data relative to the diskspd execution on the different VMs are reported as average and variance.
Conclusions
In this report we discussed the results of the experiments on the impact of PCIe NVMe SSD drives with respect to SATA SSD drives in the boot storm and virtual disk benchmarking scenarios of virtual machines. In both scenarios NVMe drives have shown significant improvements, capable of affecting real world workloads.
The boot storm test showed how the interface leads to a substantial speedup of the simultaneous boot of many virtual machines, starting from just 5 concurrent VMs.
When we performed the diskspd benchmark from within the running virtual machines we found that the NVMe interface is capable of delivering significant bandwidth to each VM. In particular, with 50 VMs running on a single hypervisor, the average virtual disk bandwidth recorded is almost 300MB/s, which is equivalent to assigning each VM a dedicated disk delivering more than half the performance of a SATA SSD. Again, the benefits of using NVMe drives show up even when running a small number of virtual machines.
Results show that using NVMe local drives provides a significant increase in disk bandwidth in the context of virtualization, allowing better use of the available cores and memory on single hypervisors. The bandwidth offered by NVMe drives is greater than that of the 10Gbps network connection often used to access remote storage, contributing significantly to improving the density and efficiency of virtualization systems. The numbers, also from previous reports, indicate that this will help push hyperconverged systems with software defined storage to better exploit the improved capability of SSDs on the PCIe interface using the NVMe protocol.
Bibliography
1. http://www.nvmexpress.org/
2. http://ark.intel.com/products/82934/Intel-SSD-DC-S3610-Series-1_6TB-2_5in-SATA-6Gbs-20nm-MLC
3. http://ark.intel.com/products/80992/Intel-SSD-DC-P3600-Series-1_6TB-12-Height-PCIe-3_0-20nm-MLC
4. http://ark.intel.com/products/81055/Intel-Xeon-Processor-E5-2683-v3-35M-Cache-2_00-GHz
5. http://www.itc.unipi.it/index.php/2016/02/23/comparison-of-solid-state-drives-ssds-on-different-bus-interfaces/
6. http://www.itc.unipi.it/index.php/2016/05/01/solid-state-software-defined-storage-nvme-vs-sata/
7. https://github.com/Microsoft/diskspd
8. https://downloadcenter.intel.com/it/download/25771/-Driver-RAID-Intel-Rapid-Storage-Technology-Enterprise-NVMe-Intel-RSTe-NVMe-
9. http://social.technet.microsoft.com/wiki/contents/articles/15198.storage-spaces-overview.aspx
Appendix

Hardware Configuration
Tests have been performed on a Dell* R630 with the following configuration:
- Dell R630 with support for up to four NVMe drives
- Perc* H730 Mini controller for the boot drive
- 2 Intel® Xeon® E5-2683 v3 2GHz CPUs
- 2 Intel® SSD DC S3710 Series (SATA boot drive)
- 4 Intel® SSD DC P3600 Series (NVMe PCIe)
- 4 Intel® SSD DC S3610 Series (SATA)
- 128GB (8x16GB) DDR4 RAM
System Setup
The system has been installed with Windows Server 2012 R2 Standard edition, with the drivers provided by Dell for the R630 and the Intel® Solid State Drive Data Center Family for NVMe drivers. We then installed all the latest updates from Microsoft. The SATA RAID hardware controller for the Intel SSD DC S3610 Series has been configured in pass-through mode. The system has joined the Active Directory domain of our network. The versions of software used for testing are diskspd 2.0.15 and Intel RSTe 4.5.0.2072.
Test scripts
For completeness we include the full source of the scripts used for our tests.
VM creation script

$basedir = "C:\Users\cisterni\desktop"
$nvme0 = Get-PhysicalDisk -FriendlyName PhysicalDisk5
$nvme1 = Get-PhysicalDisk -FriendlyName PhysicalDisk6
$nvme2 = Get-PhysicalDisk -FriendlyName PhysicalDisk7
$nvme3 = Get-PhysicalDisk -FriendlyName PhysicalDisk8
$sata0 = Get-PhysicalDisk -FriendlyName PhysicalDisk0
$sata1 = Get-PhysicalDisk -FriendlyName PhysicalDisk1
$sata2 = Get-PhysicalDisk -FriendlyName PhysicalDisk2
$sata3 = Get-PhysicalDisk -FriendlyName PhysicalDisk3

function createVolume ([string] $poolName, [string] $resiliency, [char] $letter, [string] $filesystem = "NTFS") {
    $name = $poolName + "_vd"
    $drive = New-VirtualDisk -StoragePoolFriendlyName $poolName -ResiliencySettingName $resiliency -ProvisioningType Fixed -UseMaximumSize -FriendlyName $name
    Initialize-Disk -VirtualDisk $drive
    $vol = New-Partition -DiskId $drive.UniqueId -UseMaximumSize -DriveLetter $letter
    $vol | Format-Volume -Confirm:$false -FileSystem $filesystem -NewFileSystemLabel $poolName
    return $drive
}

$nvmes = @($nvme0, $nvme1, $nvme2, $nvme3)
$satas = @($sata0, $sata1, $sata2, $sata3)
$nvmepool = New-StoragePool -FriendlyName NVMe-VM -PhysicalDisks $nvmes -StorageSubSystemFriendlyName "Storage Spaces on intelssd"
$satapool = New-StoragePool -FriendlyName SATA-VM -PhysicalDisks $satas -StorageSubSystemFriendlyName "Storage Spaces on intelssd"
createVolume -poolName NVMe-VM -resiliency Simple -letter N
createVolume -poolName SATA-VM -resiliency Simple -letter D
mkdir N:\VM
mkdir D:\VM
function CreateVM([int] $number, [char] $letter) {
    echo "Creating vdi-$number..."
    $vmname = "$letter-vdi-$number"
    $vmdest = $letter + ":\VM"
    $diskdest = $letter + ":\VM\$vmname.vhdx"
    copy $basedir\Win10EntGold.vhdx $diskdest
    $vm = New-VM -Name $vmname -Path $vmdest -Generation 2
    $vm | Add-VMHardDiskDrive -Path $diskdest
    $vm.NetworkAdapters[0] | Remove-VMNetworkAdapter
    $vm | Add-VMNetworkAdapter -SwitchName InternalNet
    $vm | Set-VMMemory -StartupBytes 536870912 -MinimumBytes 536870912 -MaximumBytes 2147483648 -DynamicMemoryEnabled $true
    $vm | Set-VMProcessor -Count 2
}

function CreateVMs([int] $num, [char] $letter, [int] $from = 0) {
    for ($i = $from; $i -lt $num; $i++) {
        CreateVM -number $i -letter $letter
    }
}

CreateVMs -num 50 -letter N
CreateVMs -num 50 -letter D
VM coordination script

$collectors = New-Object -COM Pla.DataCollectorSetCollection
$sharedir = "C:\Users\cisterni\Desktop\BootStormLogs\Coord"
$resdir = "C:\Users\cisterni\Desktop\BootstormExp"

function GetBootstormCollector() {
    $collectors.GetDataCollectorSets($null, "Service\*")
    $bootstormCollector = $null
    $collectors._NewEnum | % { if ($_.Name -eq "Bootstorm") { $bootstormCollector = $_; echo $_ } }
}

function StartVMs ([int] $num, [char] $letter) {
    $vms = @()
    for ($i = 0; $i -lt $num; $i++) { $vms += Get-VM "$letter-vdi-$i" }
    Start-VM -VM $vms
}

function StopVMs ([int] $num, [char] $letter) {
    $vms = @()
    for ($i = 0; $i -lt $num; $i++) { $vms += Get-VM "$letter-vdi-$i" }
    Stop-VM -VM $vms
}

function WaitWorkloads ([string] $basedir = $sharedir, [string] $filepattern = "*.done", [int] $count) {
    Do {
        sleep -s 1
        $m = Get-ChildItem "$basedir\$filepattern" | measure
        Write-Progress -Activity "Wait for tests" -Status ("Waiting for " + ($count - $m.Count)) -PercentComplete ($m.Count / $count)
    } While ( $m.Count -lt $count )
    Write-Progress -Activity "Wait for tests" -Completed -Status "Tests completed"
}

function makeTest ([int] $count, [char] $letter) {
    Write-Progress -Activity "Test $count VMs on $letter"
    $bootstormCollector = GetBootstormCollector
    mkdir ("$resdir\exp-VM$count-Drv$letter")
    Get-Date -Format "yyyy-MM-dd HH:mm:ss" > "$resdir\exp-VM$count-Drv$letter\exp.begin"
    $bootstormCollector.start($true)
    StartVMs -num $count -letter $letter
    WaitWorkloads -count $count
    $bootstormCollector.Stop($true)
    StopVMs -num $count -letter $letter
    Write-Progress -Activity "Test $count VMs on $letter" -Status "Moving results"
    # refresh the data associated with the collector
    $bootstormCollector = GetBootstormCollector
    $loc = $bootstormCollector.LatestOutputLocation
    icacls $loc /grant cisterni:F
    mv -Force ("$sharedir\*") $loc
    mv -Force $loc ("$resdir\exp-VM$count-Drv$letter")
    Write-Progress -Activity "Test $count VMs on $letter" -Completed -Status "Done."
}

function makeTestRound ([int] $count) {
    makeTest -count $count -letter 'D'
    makeTest -count $count -letter 'N'
}

function testSuite() {
    @(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50) | ForEach-Object { makeTestRound -count $_ }
}

testSuite
VM diskspd test script

$dest = "\\192.168.10.2\Coord"

function GetIP([string] $base) {
    $ip = $null
    Get-NetIPAddress | where { $_.IPv4Address -match $base } | ForEach-Object { $ip = $_.IPv4Address }
    $ip
}

$ip = GetIP -base "192.168.10."
Get-Date -Format "yyyy-MM-dd HH:mm:ss" > "$dest\$ip.start"
C:\users\admin\Desktop\Diskspd-v2.0.15\amd64fre\diskspd.exe -b128K -d10 -o56 -t1 -W -h -w0 C:\Users\admin\Desktop\testfile.dat > "$dest\$ip-seq128Kr.txt"
C:\users\admin\Desktop\Diskspd-v2.0.15\amd64fre\diskspd.exe -b128K -d10 -o56 -t1 -W -h -w100 C:\Users\admin\Desktop\testfile.dat > "$dest\$ip-seq128Kw.txt"
C:\users\admin\Desktop\Diskspd-v2.0.15\amd64fre\diskspd.exe -b4K -d10 -o56 -t1 -r -W -h -w0 C:\Users\admin\Desktop\testfile.dat > "$dest\$ip-rand4Kr.txt"
C:\users\admin\Desktop\Diskspd-v2.0.15\amd64fre\diskspd.exe -b4K -d10 -o56 -t1 -r -W -h -w100 C:\Users\admin\Desktop\testfile.dat > "$dest\$ip-rand4Kw.txt"
Get-Date -Format "yyyy-MM-dd HH:mm:ss" > "$dest\$ip.done"
F# script for parsing performance counter .blg files and diskspd outputs

#I __SOURCE_DIRECTORY__
#r @"FSharp.Management.0.4.1\lib\net40\System.Management.Automation.dll"
#r @"FSharp.Management.0.4.1\lib\net40\FSharp.Management.PowerShell.dll"
#r @"C:\WINDOWS\Microsoft.Net\assembly\GAC_MSIL\Microsoft.PowerShell.Commands.Diagnostics\v4.0_3.0.0.0__31bf3856ad364e35\Microsoft.PowerShell.Commands.Diagnostics.dll"
open FSharp.Management
open System.IO
open System.Text.RegularExpressions
open System.Collections.Generic
// Counter processing
type CounterLabels =
    | Idle | Interrupt | User | Processor | Privileged
    | DiskReadBps | CurrentDiskQueueLength | DiskBps | DiskWriteBps | DiskTransfers
    | PageFaultsSec | AvailableMB | Power
let counterColumns =
dict
[
(Idle, 0)
(Interrupt, 1)
(User, 2)
(Processor, 3)
(Privileged, 4)
(DiskReadBps, 0)
(CurrentDiskQueueLength, 1)
(DiskBps, 2)
(DiskWriteBps, 3)
(DiskTransfers, 4)
(PageFaultsSec, 0)
(AvailableMB, 1)
(Power, 0)
]
let counterColumnsLabels =
dict
[
(Idle, "CPU Idle %")
(Interrupt, "CPU Interrupt %")
(User, "CPU User %")
(Processor, "CPU %")
(Privileged, "CPU Privileged %")
(DiskReadBps, "Disk Read (B/s)")
(CurrentDiskQueueLength, "Queue Length")
(DiskBps, "Disk (B/s)")
(DiskWriteBps, "Disk Write (B/s)")
(DiskTransfers, "Disk Transfers /sec")
(PageFaultsSec, "Page faults /sec")
(AvailableMB, "Available memory (MB)")
(Power, "Power")
]
let getCounterNames (set:Microsoft.PowerShell.Commands.GetCounter.PerformanceCounterSampleSet) =
    set.CounterSamples |> Seq.map (fun v -> v.Path)

let getColumn (set:seq<Microsoft.PowerShell.Commands.GetCounter.PerformanceCounterSampleSet>) (idx:int) =
    set |> Seq.map (fun v -> v.CounterSamples.[idx].CookedValue)

let getTimestampedColumn (set:seq<Microsoft.PowerShell.Commands.GetCounter.PerformanceCounterSampleSet>) (idx:int) =
    set |> Seq.map (fun v -> (v.Timestamp, v.CounterSamples.[idx].CookedValue))
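// Time-weighted average of a timestamped series: each sample is weighted
// by the time elapsed since the previous sample.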
let avgTimestampedColumn (d:seq<System.DateTime*float>) =
let t0, _ = d |> Seq.head
let mutable T = t0
let mutable acc = 0.
d |> Seq.skip 1 |> Seq.iter (fun (t, v) ->
let dt = (t - T).TotalMilliseconds
acc <- acc + dt * v
T <- t
)
acc / (T - t0).TotalMilliseconds
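// Welford's online algorithm: single-pass, numerically stable computation of
// mean and sample variance; returns None when there are fewer than two samples.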
let variance (data:seq<float>) =
let mutable n = 0
let mutable mean = 0.
let mutable M2 = 0.
for x in data do
n <- n + 1
let delta = x - mean
mean <- mean + delta/float(n)
M2 <- M2 + delta*(x - mean)
if n < 2 then
None
else
Some(mean, M2 / float(n - 1))
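// Surface the PowerShell Import-Counter cmdlet through the FSharp.Management
// type provider to read perfmon .blg binary logs.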
type PS = PowerShellProvider<"Microsoft.PowerShell.Management;Microsoft.PowerShell.Core">

let ImportCounter (fn:string) =
    match PS.``Import-Counter``([| fn |], summary=false) with
    | Success v ->
        v.ImmediateBaseObject :?> System.Collections.ObjectModel.Collection<System.Management.Automation.PSObject>
        |> Seq.map (fun v -> v.ImmediateBaseObject :?> Microsoft.PowerShell.Commands.GetCounter.PerformanceCounterSampleSet)
        |> Seq.toArray
    | _ -> failwith "Error"
[<Measure>] type s
[<Measure>] type B
[<Measure>] type KB
[<Measure>] type MB
[<Measure>] type IO
[<Measure>] type perc
type CPUUsage = { Usage : float<perc>; User : float<perc>; Kernel : float<perc>; Idle : float<perc> }
type DriveType = NVMe | SATA
type DiskAccessType = Sequential | Random
type DiskAccessMode = Read | Write | ReadWrite
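// Parse the per-CPU usage table printed by diskspd into CPUUsage records
// (core 0, core 1, and the average row).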
let matchCPU teststring =
    let cpud = Regex.Match(teststring, @"CPU \| Usage \| User \| Kernel \| Idle\r\n-------------------------------------------\r\n.*?0\|.*?([\d\.]+)%\|.*?([\d\.]+)%\|.*?([\d\.]+)%\|.*?([\d\.]+)%\r\n.*?1\|.*?([\d\.]+)%\|.*?([\d\.]+)%\|.*?([\d\.]+)%\|.*?([\d\.]+)%\r\n-------------------------------------------\r\navg\.\|.*?([\d\.]+)%\|.*?([\d\.]+)%\|.*?([\d\.]+)%\|.*?([\d\.]+)%")
    let pP (gn:int) = (cpud.Groups.[gn].Value |> float) * 1.<perc>
    let t0 = { Usage = pP 1; User = pP 2; Kernel = pP 3; Idle = pP 4 }
    let t1 = { Usage = pP 5; User = pP 6; Kernel = pP 7; Idle = pP 8 }
    let tavg = { Usage = pP 9; User = pP 10; Kernel = pP 11; Idle = pP 12 }
    (t0, t1, tavg)
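// Parse the test metadata (duration, block size, access type, queue depth,
// thread and processor counts) from the diskspd output header.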
type DiskSpdTestMeta = { Duration : float<s>; BlockSize : float<B>; AccessType : DiskAccessType; Queues : int; EffectiveDuration : float<s>; ThreadCount : int; CPUCount : int }

let matchMeta teststring =
    let testd = Regex.Match(teststring, "duration:.*?(\d+)s.*?block size:.*?(\d+).*?using (random|sequential) I/O.*?number of outstanding I/O operations:.*?(\d+).*?actual test time:.*?([\d\.]+)s.*?thread count:.*?(\d+).*?proc count:.*?(\d+)", RegexOptions.Singleline)
    let pS (gn:int) = (testd.Groups.[gn].Value |> float) * 1.<s>
    let pB (gn:int) = (testd.Groups.[gn].Value |> float) * 1.<B>
    let pI (gn:int) = (testd.Groups.[gn].Value |> int)
    let dT (gn:int) = if testd.Groups.[gn].Value = "random" then Random else Sequential
    { Duration = pS 1; BlockSize = pB 2; AccessType = dT 3; Queues = pI 4; EffectiveDuration = pS 5; ThreadCount = pI 6; CPUCount = pI 7 }
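// Parse the Total/Read/Write IO summary tables of the diskspd output into
// per-thread throughput records.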
type DiskSpdThreadPerf = { Mode : DiskAccessMode; BytesCount : int64<B>; IOs : int64<IO>; MBps : float<MB/s>; IOps : float<IO/s> }

let matchThroughput teststring =
    let iomtot = @"Total IO\r\nthread \| bytes \| I/Os \| MB/s \| I/O per s \| file\r\n------------------------------------------------------------------------------\r\n 0 \|.*?(\d+) \|.*?(\d+) \|.*?([\d\.]+) \|.*?([\d\.]+) \| .*?\r\n------------------------------------------------------------------------------\r\ntotal:.*?\d+ \|.*?\d+ \|.*?[\d\.]+ \|.*?[\d\.]+"
    let iomr = @"Read IO\r\nthread \| bytes \| I/Os \| MB/s \| I/O per s \| file\r\n------------------------------------------------------------------------------\r\n 0 \|.*?(\d+) \|.*?(\d+) \|.*?([\d\.]+) \|.*?([\d\.]+) \| .*?\r\n------------------------------------------------------------------------------\r\ntotal:.*?\d+ \|.*?\d+ \|.*?[\d\.]+ \|.*?[\d\.]+"
    let iomw = @"Write IO\r\nthread \| bytes \| I/Os \| MB/s \| I/O per s \| file\r\n------------------------------------------------------------------------------\r\n 0 \|.*?(\d+) \|.*?(\d+) \|.*?([\d\.]+) \|.*?([\d\.]+) \| .*?\r\n------------------------------------------------------------------------------\r\ntotal:.*?\d+ \|.*?\d+ \|.*?[\d\.]+ \|.*?[\d\.]+"
    let iostot = Regex.Match(teststring, iomtot, RegexOptions.Singleline)
    let iosr = Regex.Match(teststring, iomr, RegexOptions.Singleline)
    let iosw = Regex.Match(teststring, iomw, RegexOptions.Singleline)
    let pB (gc:Match) (gn:int) = (gc.Groups.[gn].Value |> int64) * 1L<B>
    let pI (gc:Match) (gn:int) = (gc.Groups.[gn].Value |> int64) * 1L<IO>
    let pM (gc:Match) (gn:int) = (gc.Groups.[gn].Value |> float) * 1.<MB/s>
    let pIs (gc:Match) (gn:int) = (gc.Groups.[gn].Value |> float) * 1.<IO/s>
    (
        { Mode = ReadWrite; BytesCount = pB iostot 1; IOs = pI iostot 2; MBps = pM iostot 3; IOps = pIs iostot 4 },
        { Mode = Read; BytesCount = pB iosr 1; IOs = pI iosr 2; MBps = pM iosr 3; IOps = pIs iosr 4 },
        { Mode = Write; BytesCount = pB iosw 1; IOs = pI iosw 2; MBps = pM iosw 3; IOps = pIs iosw 4 }
    )
type DiskSpdExp =
{
BlockSize : float<KB>;
AccessType : DiskAccessType;
AccessMode : DiskAccessMode;
Meta : DiskSpdTestMeta;
CPU : CPUUsage*CPUUsage*CPUUsage;
Throughput : DiskSpdThreadPerf*DiskSpdThreadPerf*DiskSpdThreadPerf
}
type VMExp(name) =
let mutable vmbegin = System.DateTime.Now
let mutable vmend = System.DateTime.Now
let experiments = System.Collections.Generic.List<DiskSpdExp>()
member this.Name = name
member this.Begin with get() = vmbegin and set (v) = vmbegin <- v
member this.End with get() = vmend and set (v) = vmend <- v
member this.Experiments = experiments
let cpucols = [ Idle; Interrupt; User; Processor; Privileged ]
let diskcols = [ DiskReadBps; CurrentDiskQueueLength; DiskBps; DiskWriteBps; DiskTransfers ]
let memcols = [ PageFaultsSec; AvailableMB ]
let powcols = [ Power ]
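// Process a single experiment directory: import the perfmon logs, parse every
// VM's diskspd outputs and timestamps, and emit one CSV row on out.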
let processExp (d, out:StreamWriter) =
printfn "Processing %s" d
let expm = Regex.Match(d, @"\\exp\-VM(\d+)\-Drv([DN])\\")
let vmcount = expm.Groups.[1].Value |> int
let drive = if expm.Groups.[2].Value = "D" then SATA else NVMe
    let begexp = File.ReadAllLines(d + @"\..\exp.begin").[0] |> System.DateTime.Parse
    let data = new System.Collections.Generic.Dictionary<string, VMExp>()
    let counters = new System.Collections.Generic.Dictionary<string, Microsoft.PowerShell.Commands.GetCounter.PerformanceCounterSampleSet array>()
Directory.GetFiles(d) |> Array.iter (fun f ->
let fi = FileInfo(f)
match fi.Extension with
| ".blg" ->
let name = fi.Name.Substring(0, fi.Name.Length - fi.Extension.Length)
counters.Add(name, ImportCounter f)
| ".txt" ->
      let m = Regex.Match(fi.Name, "(.*?)\\-(rand|seq)(4|128)K(r|w)\\.txt")
      let name = m.Groups.[1].Value
      let accessType = if m.Groups.[2].Value = "rand" then Random else Sequential
      let blksz = if m.Groups.[3].Value = "4" then 4.<KB> else 128.<KB>
      let mode = if m.Groups.[4].Value = "r" then Read else Write
      if not(data.ContainsKey(name)) then data.Add(name, new VMExp(name))
printfn "Processing %s" f
let content = File.ReadAllText(f)
let meta = matchMeta content
let cpus = matchCPU content
let throughput = matchThroughput content
      data.[name].Experiments.Add({ BlockSize = blksz; AccessType = accessType; AccessMode = mode; Meta = meta; CPU = cpus; Throughput = throughput })
| ".done" ->
let name = Regex.Match(fi.Name, "(.*)\\.done").Groups.[1].Value
let endt = File.ReadAllLines(f).[0] |> System.DateTime.Parse
if not(data.ContainsKey(name)) then data.Add(name, new VMExp(name))
data.[name].End <- endt
| ".start" ->
let name = Regex.Match(fi.Name, "(.*)\\.start").Groups.[1].Value
let start = File.ReadAllLines(f).[0] |> System.DateTime.Parse
if not(data.ContainsKey(name)) then data.Add(name, new VMExp(name))
data.[name].Begin <- start
| _ -> ()
)
let endboot = data |> Seq.map (fun v -> v.Value.Begin ) |> Seq.max
let endtest = data |> Seq.map (fun v -> v.Value.End) |> Seq.max
let splitCounters name =
let c = counters.[name]
let cb = c |> Seq.filter (fun v -> v.Timestamp <= endboot)
let ct = c |> Seq.filter (fun v -> v.Timestamp > endboot)
(c, cb, ct)
    let avgColumn counters col =
        let c, cb, ct = counters
        let ac = counterColumns.[col] |> getTimestampedColumn c |> avgTimestampedColumn
        let acb = counterColumns.[col] |> getTimestampedColumn cb |> avgTimestampedColumn
        let act = counterColumns.[col] |> getTimestampedColumn ct |> avgTimestampedColumn
        (ac, acb, act)
let pdate (d:System.DateTime) = d.ToString("yyyy-MM-dd HH:mm:ss")
let chgcomma (s:string) = s.Replace(".", ",")
    let pvs (l:CounterLabels list) (d:IDictionary<CounterLabels, float*float*float>) (col:int) =
        l |> Seq.map (fun v -> d.[v]) |> Seq.iter (fun (ac, acb, act) ->
            match col with
            | 0 -> out.Write(sprintf "%f;" ac |> chgcomma)
            | 1 -> out.Write(sprintf "%f;" acb |> chgcomma)
            | 2 -> out.Write(sprintf "%f;" act |> chgcomma)
            | _ -> failwith "Invalid value"
        )
    out.Write(sprintf "%d;%A;%s;%s;%s;%s;" vmcount drive (pdate begexp) (pdate endboot) (pdate endtest) ((sprintf "%A" (endboot - begexp).TotalSeconds) |> chgcomma))
    let cpu = splitCounters "CPU"
    let cpuvals = cpucols |> Seq.map (fun v -> (v, avgColumn cpu v)) |> Seq.toList |> dict
    let diskC = splitCounters "DiskC"
    let diskCvals = diskcols |> Seq.map (fun v -> (v, avgColumn diskC v)) |> Seq.toList |> dict
    let diskD = splitCounters "DiskD_SATA"
    let diskDvals = diskcols |> Seq.map (fun v -> (v, avgColumn diskD v)) |> Seq.toList |> dict
    let diskN = splitCounters "DiskN_NVMe"
    let diskNvals = diskcols |> Seq.map (fun v -> (v, avgColumn diskN v)) |> Seq.toList |> dict
    let mem = splitCounters "Memory"
    let memvals = memcols |> Seq.map (fun v -> (v, avgColumn mem v)) |> Seq.toList |> dict
    let power = splitCounters "Power"
    let powervals = powcols |> Seq.map (fun v -> (v, avgColumn power v)) |> Seq.toList |> dict
[0; 1; 2] |> Seq.iter (pvs cpucols cpuvals)
[0; 1; 2] |> Seq.iter (pvs diskcols diskCvals)
[0; 1; 2] |> Seq.iter (pvs diskcols diskDvals)
[0; 1; 2] |> Seq.iter (pvs diskcols diskNvals)
[0; 1; 2] |> Seq.iter (pvs memcols memvals)
[0; 1; 2] |> Seq.iter (pvs powcols powervals)
    let avg (l:seq<float>) =
        let mean, var =
            if vmcount < 3 then (l |> Seq.average, 0.)
            else match variance l with Some v -> v | None -> failwith "Invalid list"
        (mean, sqrt(var))
    let outExpStat accessType accessMode =
        let exp = data |> Seq.map (fun vm -> vm.Value.Experiments |> Seq.filter (fun exp -> exp.AccessType = accessType && exp.AccessMode = accessMode) |> Seq.head)
        let expb, expb_var = exp |> Seq.map (fun exp -> let p, _, _ = exp.Throughput in p.BytesCount |> float) |> avg
        let expio, expio_var = exp |> Seq.map (fun exp -> let p, _, _ = exp.Throughput in p.IOs |> float) |> avg
        let expmbps, expmbps_var = exp |> Seq.map (fun exp -> let p, _, _ = exp.Throughput in p.MBps |> float) |> avg
        let expiops, expiops_var = exp |> Seq.map (fun exp -> let p, _, _ = exp.Throughput in p.IOps |> float) |> avg
        out.Write((sprintf "%f;%f;%f;%f;%f;%f;%f;%f;" expb expb_var expio expio_var expmbps expmbps_var expiops expiops_var).Replace(".", ","))
outExpStat DiskAccessType.Random DiskAccessMode.Read
outExpStat DiskAccessType.Random DiskAccessMode.Write
outExpStat DiskAccessType.Sequential DiskAccessMode.Read
outExpStat DiskAccessType.Sequential DiskAccessMode.Write
out.WriteLine()
let processExpDir (out:StreamWriter) =
    let printHeads cols pfx sfx =
        out.Write(sprintf "%s;" (System.String.Join(";", cols |> Seq.map (fun v -> pfx + counterColumnsLabels.[v] + sfx) |> Seq.toArray)))
    out.Write("VMCount;DriveKind;Begin;End boot;End test;Boot duration;")
    [
        (cpucols, "", ""); (cpucols, "", " - Boot"); (cpucols, "", " - Test")
        (diskcols, "DiskC", ""); (diskcols, "DiskC", " - Boot"); (diskcols, "DiskC", " - Test")
        (diskcols, "DiskD (SATA)", ""); (diskcols, "DiskD (SATA)", " - Boot"); (diskcols, "DiskD (SATA)", " - Test")
        (diskcols, "DiskN (NVMe)", ""); (diskcols, "DiskN (NVMe)", " - Boot"); (diskcols, "DiskN (NVMe)", " - Test")
        (memcols, "", ""); (memcols, "", " - Boot"); (memcols, "", " - Test")
        (powcols, "", ""); (powcols, "", " - Boot"); (powcols, "", " - Test")
    ] |> Seq.iter (fun (c, p, s) -> printHeads c p s)
out.Write("Rand4KR Avg. Bytes;Rand4KR Var. Bytes;Rand4KR Avg. IOs;Rand4KR Var.
IOs;Rand4KR Avg. MB/s;Rand4KR Var. MB/s;Rand4KR Avg. IOps;Rand4KR Var. IOps;")
IT Center Università di Pisa P.I.: 00286820501 C.F.: 80003670504 Largo Bruno Pontecorvo, 3, I-56127 Pisa Italy
*Other names and brands may be claimed as the property of others.
out.Write("Rand4KW Avg. Bytes;Rand4KW Var. Bytes;Rand4KW Avg. IOs;Rand4KW Var.
IOs;Rand4KW Avg. MB/s;Rand4KW Var. MB/s;Rand4KW Avg. IOps;Rand4KW Var. IOps;")
out.Write("Seq128KR Avg. Bytes;Seq128KR Var. Bytes;Seq128KR Avg. IOs;Seq128KR
Var. IOs;Seq128KR Avg. MB/s;Seq128KR Var. MB/s;Seq128KR Avg. IOps;Seq128KR Var.
IOps;")
out.Write("Seq128KW Avg. Bytes;Seq128KW Var. Bytes;Seq128KW Avg. IOs;Seq128KW
Var. IOs;Seq128KW Avg. MB/s;Seq128KW Var. MB/s;Seq128KW Avg. IOps;Seq128KW Var.
IOps")
out.WriteLine();
Directory.EnumerateDirectories(__SOURCE_DIRECTORY__)
|> Seq.filter (fun d -> Regex.IsMatch(d, @"exp\-VM\d+\-Drv[DN]"))
|> Seq.map (fun d -> Directory.GetDirectories(d) |> Seq.head)
|> Seq.iter (fun d -> processExp(d, out))
let outn = __SOURCE_DIRECTORY__ + @"\summary.csv"
if File.Exists(outn) then File.Delete(outn)
let out = File.CreateText(outn)
processExpDir(out)
out.Close()
out.Dispose()
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You
should visit the referenced web site and confirm whether referenced data are accurate.
Intel, the Intel logo, Intel® Xeon®, Intel® SSD DC S3610, Intel® SSD DC S3710, and Intel® SSD DC P3600
are trademarks of Intel Corporation in the U.S. and/or other countries.