{"id":8683,"date":"2016-07-29T07:54:55","date_gmt":"2016-07-29T05:54:55","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/exadata-x-5-bare-metal-vs-ovm\/"},"modified":"2016-07-29T07:54:55","modified_gmt":"2016-07-29T05:54:55","slug":"exadata-x-5-bare-metal-vs-ovm","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/exadata-x-5-bare-metal-vs-ovm\/","title":{"rendered":"Exadata X-5 Bare Metal vs. OVM performance"},"content":{"rendered":"<h2>By Franck Pachot<\/h2>\n<p>.<br \/>\nThe Exadata X5 can be installed Bare Metal or Virtualized. The latter one, Oracle VM allows to create multiple clusters on one machine, is more complex for installation and for storage capacity planning. But it allows a small flexibility on options licencing. Those are the real challenges behind the choice. However, when we talk about virtualization, most of questions are about the overhead. Last week, we did some tests on same machine with different configuration, thanks to <a href=\"http:\/\/www.arrowecs.ch\/en\/\" target=\"_blank\" rel=\"noopener noreferrer\">Arrow<\/a> Oracle Authorized Solution Center.<br \/>\n<!--more--><br \/>\nComparison is not easy. Bare Metal has all resources. Virtualized has to distribute resources. And this test is very specific: all I\/O hitting the &#8216;extreme&#8217; flash cache because I don&#8217;t expect any virtualization overhead to be in milliseconds. So, don&#8217;t expect some universal conclusions from those tests. And don&#8217;t hesitate to comment about my way to read those numbers.<\/p>\n<h3>CPU<\/h3>\n<p>Do not expect a benchmark that shows the maximum capacity of the machine here. I&#8217;m comparing a bare metal node with 36 cores with a VM with 4 vCPUS. 
So I&#8217;ll compare a one thread workload only: <a href=\"https:\/\/kevinclosson.net\/slob\/\" target=\"_blank\" rel=\"noopener noreferrer\">SLOB<\/a> with one session and SCALE=100M UPDATE_PCT=0 RUN_TIME=120 WORK_UNIT=64<\/p>\n<p>Bare Metal load profile:<\/p>\n<pre><code>\nLoad Profile                    Per Second   Per Transaction  Per Exec  Per Call\n~~~~~~~~~~~~~~~            ---------------   --------------- --------- ---------\n             DB Time(s):               1.0              30.5      0.00      2.91\n              DB CPU(s):               1.0              29.0      0.00      2.76\n      Background CPU(s):               0.0               0.2      0.00      0.00\n      Redo size (bytes):          14,172.4         432,594.0\n  Logical read (blocks):         810,244.4      24,731,696.3\n          Block changes:              41.7           1,271.3\n Physical read (blocks):             111.6           3,407.8\nPhysical write (blocks):               0.0               0.3\n       Read IO requests:             111.3           3,397.3\n      Write IO requests:               0.0               0.3\n           Read IO (MB):               0.9              26.6\n          Write IO (MB):               0.0               0.0\n         Executes (SQL):          12,285.1         374,988.5\n<\/code><\/pre>\n<p>Virtualized load profile:<\/p>\n<pre><code>\nLoad Profile                    Per Second   Per Transaction  Per Exec  Per Call\n~~~~~~~~~~~~~~~            ---------------   --------------- --------- ---------\n             DB Time(s):               1.0              30.6      0.00      4.37\n              DB CPU(s):               1.0              29.8      0.00      4.26\n      Background CPU(s):               0.0               0.2      0.00      0.00\n      Redo size (bytes):          13,316.5         410,404.0\n  Logical read (blocks):         848,095.1      26,137,653.8\n          Block changes:              41.1           1,266.3\n Physical read (blocks):        
     109.1           3,361.3\nPhysical write (blocks):               0.0               0.3\n       Read IO requests:             103.8           3,198.5\n      Write IO requests:               0.0               0.3\n           Read IO (MB):               0.9              26.3\n          Write IO (MB):               0.0               0.0\n         Executes (SQL):          13,051.2         402,228.0\n<\/code><\/pre>\n<p>We can say that CPU and RAM performance is similar.<\/p>\n<h3>I\/O<\/h3>\n<p>Now about IOPS on the storage cell flash cache.<br \/>\nI&#8217;ll compare <a href=\"https:\/\/kevinclosson.net\/slob\/\" target=\"_blank\" rel=\"noopener noreferrer\">SLOB<\/a> with one session and SCALE=100000M UPDATE_PCT=100 RUN_TIME=120 WORK_UNIT=64<\/p>\n<p>Bare Metal load profile:<\/p>\n<pre><code>\nLoad Profile                    Per Second   Per Transaction  Per Exec  Per Call\n~~~~~~~~~~~~~~~            ---------------   --------------- --------- ---------\n             DB Time(s):               1.0               0.0      0.02      4.06\n              DB CPU(s):               0.1               0.0      0.00      0.49\n      Background CPU(s):               0.1               0.0      0.00      0.00\n      Redo size (bytes):       1,652,624.9          51,700.6\n  Logical read (blocks):           2,582.2              80.8\n          Block changes:           4,214.5             131.9\n Physical read (blocks):           2,060.6              64.5\nPhysical write (blocks):           1,818.0              56.9\n       Read IO requests:           2,051.0              64.2\n      Write IO requests:           1,738.6              54.4\n           Read IO (MB):              16.1               0.5\n          Write IO (MB):              14.2               0.4\n         Executes (SQL):              66.3               2.1\n              Rollbacks:               0.0               0.0\n           Transactions:              32.0\n<\/code><\/pre>\n<p>Virtualized load 
profile:<\/p>\n<pre><code>\nLoad Profile                    Per Second   Per Transaction  Per Exec  Per Call\n~~~~~~~~~~~~~~~            ---------------   --------------- --------- ---------\n             DB Time(s):               1.0               0.0      0.01      3.49\n              DB CPU(s):               0.3               0.0      0.00      1.01\n      Background CPU(s):               0.2               0.0      0.00      0.00\n      Redo size (bytes):       2,796,963.3          51,713.3\n  Logical read (blocks):           4,226.0              78.1\n          Block changes:           7,107.0             131.4\n Physical read (blocks):           3,470.6              64.2\nPhysical write (blocks):           3,278.7              60.6\n       Read IO requests:           3,462.0              64.0\n      Write IO requests:           3,132.0              57.9\n           Read IO (MB):              27.1               0.5\n          Write IO (MB):              25.6               0.5\n         Executes (SQL):              86.9               1.6\n              Rollbacks:               0.0               0.0\n           Transactions:              54.1\n<\/code><\/pre>\n<p>In two minutes we did more work here. Timed events show statistics about the &#8216;cell single block reads&#8217; which are nothing else than &#8216;db file sequential read&#8217; renamed to look more &#8216;Exadata&#8217;. No SmartScan happens here as they go to buffer cache and we cannot do any filtering for blocks that will be shared with other sessions. 
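<\/p>\n<p>For reference, the SLOB parameters listed above map to a few lines of slob.conf. This is a sketch, with everything else left at the defaults of the SLOB distribution:<\/p>\n<pre><code>UPDATE_PCT=100\nRUN_TIME=120\nWORK_LOOP=0\nSCALE=100000M\nWORK_UNIT=64\n<\/code><\/pre>\n<p>The run itself was a single session (e.g. .\/runit.sh 1 in SLOB 2.3).<\/p>\n<p>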
<\/p>\n<p>Bare Metal:<\/p>\n<pre><code>                                           Total Wait       Wait   % DB Wait\nEvent                                Waits Time (sec)    Avg(ms)   time Class\n------------------------------ ----------- ---------- ---------- ------ --------\ncell single block physical rea     249,854      115.7       0.46   94.9 User I\/O\nDB CPU                                           14.6              12.0<\/code><\/pre>\n<p>Virtualized:<\/p>\n<pre><code>                                           Total Wait       Wait   % DB Wait\nEvent                                Waits Time (sec)    Avg(ms)   time Class\n------------------------------ ----------- ---------- ---------- ------ --------\ncell single block physical rea     425,071      109.3       0.26   89.4 User I\/O\nDB CPU                                           35.2              28.8<\/code><\/pre>\n<p>Lower latency on average, which explains why we did more work. But no conclusions before we know where this latency comes from.
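<\/p>\n<p>For the record, these averages can also be computed outside of the AWR report, from the cumulative counters in V$SYSTEM_EVENT. A sketch (the values are cumulative since instance startup, so deltas between two samples are more meaningful):<\/p>\n<pre><code>select event, total_waits,\n       round(time_waited_micro \/ total_waits \/ 1000, 3) avg_ms\n  from v$system_event\n where event = 'cell single block physical read';\n<\/code><\/pre>\n<p>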
Averages hide the details, and it&#8217;s the same with the &#8216;IO Profile&#8217; section:<\/p>\n<p>Bare Metal<\/p>\n<pre><code>IO Profile                  Read+Write\/Second     Read\/Second    Write\/Second\n~~~~~~~~~~                  ----------------- --------------- ---------------\n            Total Requests:           3,826.6         2,055.1         1,771.5\n         Database Requests:           3,789.5         2,051.0         1,738.6\n        Optimized Requests:           3,720.7         1,985.1         1,735.6\n             Redo Requests:              32.5             0.0            32.5\n                Total (MB):              32.0            16.2            15.9\n             Database (MB):              30.3            16.1            14.2\n      Optimized Total (MB):              29.3            15.6            13.7\n                 Redo (MB):               1.7             0.0             1.7\n         Database (blocks):           3,878.6         2,060.6         1,818.0\n Via Buffer Cache (blocks):           3,878.6         2,060.6         1,818.0\n           Direct (blocks):               0.0             0.0             0.0<\/code><\/pre>\n<p>Virtualized<\/p>\n<pre><code>IO Profile                  Read+Write\/Second     Read\/Second    Write\/Second\n~~~~~~~~~~                  ----------------- --------------- ---------------\n            Total Requests:           6,652.2         3,467.0         3,185.2\n         Database Requests:           6,594.0         3,462.0         3,132.0\n        Optimized Requests:           6,582.7         3,461.2         3,121.5\n             Redo Requests:              54.7             0.0            54.7\n                Total (MB):              55.6            27.2            28.4\n             Database (MB):              52.7            27.1            25.6\n      Optimized Total (MB):              51.8            27.1            24.6\n                 Redo (MB):               2.8             0.0             
2.8\n         Database (blocks):           6,749.3         3,470.6         3,278.7\n Via Buffer Cache (blocks):           6,749.3         3,470.6         3,278.7\n           Direct (blocks):               0.0             0.0             0.0<\/code><\/pre>\n<p>and for IO statistics.<br \/>\nBare Metal:<\/p>\n<pre><code>                 Reads:  Reqs    Data   Writes:  Reqs    Data    Waits:   Avg\nFunction Name      Data per sec per sec    Data per sec per sec   Count  Tm(ms)\n--------------- ------- ------- ------- ------- ------- ------- ------- -------\nBuffer Cache Re    1.9G  2050.9 16.093M      0M     0.0      0M  250.2K     0.5\nDBWR                 0M     0.0      0M    1.7G  1740.5 14.216M       0     N\/A\nLGWR                 0M     0.0      0M    201M    32.5  1.648M    3914     0.3\nOthers               8M     4.1   .066M      1M     0.5   .008M     560     0.0\nTOTAL:             1.9G  2055.0 16.159M    1.9G  1773.4 15.872M  254.6K     0.5<\/code><\/pre>\n<p>Virtualized:<\/p>\n<pre><code>                 Reads:  Reqs    Data   Writes:  Reqs    Data    Waits:   Avg\nFunction Name      Data per sec per sec    Data per sec per sec   Count  Tm(ms)\n--------------- ------- ------- ------- ------- ------- ------- ------- -------\nBuffer Cache Re    3.3G  3462.7  27.12M      0M     0.0      0M  425.6K     0.3\nDBWR                 0M     0.0      0M    3.1G  3133.9 25.639M       0     N\/A\nLGWR                 0M     0.0      0M    341M    54.7  2.775M    6665     0.3\nOthers              10M     5.0   .081M      1M     0.5   .008M     514     0.3\nTOTAL:             3.3G  3467.7 27.202M    3.4G  3189.0 28.422M  432.7K     0.3<\/code><\/pre>\n<p>I&#8217;ve put the physical read statistics side-by-side to compare:<\/p>\n<pre><code>\n                                 BARE METAL                        VIRTUALIZED\n&nbsp;\nStatistic                                     Total     per Trans              Total     per Trans\n-------------------------------- 
------------------ ------------- ------------------ -------------\ncell flash cache read hits                  242,142          62.1            425,365          64.0\ncell logical write IO requests                5,032           1.3              8,351           1.3\ncell overwrites in flash cache              200,897          51.5            937,973         141.1\ncell physical IO interconnect by      8,145,832,448   2,089,210.7     14,331,230,720   2,156,044.9\ncell writes to flash cache                  638,514         163.8          1,149,990         173.0\nphysical read IO requests                   250,168          64.2            425,473          64.0\nphysical read bytes                   2,059,042,816     528,095.1      3,494,084,608     525,663.4\nphysical read partial requests                    4           0.0                  0           0.0\nphysical read requests optimized            242,136          62.1            425,365          64.0\nphysical read total IO requests             250,671          64.3            426,089          64.1\nphysical read total bytes             2,067,243,008     530,198.3      3,504,136,192     527,175.6\nphysical read total bytes optimi      1,993,089,024     511,179.5      3,497,918,464     526,240.2\nphysical read total multi block                   0           0.0                  0           0.0\nphysical reads                              251,348          64.5            426,524          64.2\nphysical reads cache                        251,348          64.5            426,524          64.2\nphysical reads cache prefetch                 1,180           0.3              1,051           0.2\nphysical reads direct                             0           0.0                  0           0.0\nphysical reads direct (lob)                       0           0.0                  0           0.0\nphysical reads prefetch warmup                1,165           0.3              1,016           0.2\nphysical write IO requests       
           212,061          54.4            384,909          57.9\nphysical write bytes                  1,816,551,424     465,901.9      3,300,933,632     496,605.0\nphysical write requests optimize            211,699          54.3            383,624          57.7\nphysical write total IO requests            216,077          55.4            391,445          58.9\nphysical write total bytes            2,026,819,072     519,830.5      3,656,793,600     550,142.0\nphysical write total bytes optim      1,755,620,352     450,274.5      3,171,875,328     477,189.0\nphysical write total multi block                531           0.1                942           0.1\nphysical writes                             221,747          56.9            402,946          60.6\nphysical writes direct                            0           0.0                  0           0.0\nphysical writes direct (lob)                      0           0.0                  0           0.0\nphysical writes from cache                  221,747          56.9            402,946          60.6\nphysical writes non checkpoint              221,694          56.9            402,922          60.6\n<\/code><\/pre>\n<p>We already know that there was more work on the OVM run, but the &#8216;per transaction&#8217; statistics look similar, with a bit more &#8216;optimized&#8217; (flash cache) I\/O in the second run.<br \/>\nOf course, even if it&#8217;s the same machine, it has been re-imaged and the database re-created, with different volumes and capacity. So maybe I hit the cell flash cache more on the second run than on the first one, and more reads from spinning disks can explain the difference in single block read latency.<\/p>\n<p>We need to get beyond the averages with the wait event histograms.
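<\/p>\n<p>As a sketch, this is the kind of query behind the histograms shown below:<\/p>\n<pre><code>select event, wait_time_micro, wait_count, wait_time_format\n  from v$event_histogram_micro\n where event = 'cell single block physical read'\n order by wait_time_micro;\n<\/code><\/pre>\n<p>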
They don&#8217;t show lower than millisecond in the AWR report (I&#8217;ve opened an enhancement request for 12.2 about that) but I collected them from the V$EVENT_HISTOGRAM_MICRO<\/p>\n<p>Bare Metal:<\/p>\n<pre><code>EVENT                                    WAIT_TIME_MICRO WAIT_COUNT WAIT_TIME_FORMAT\n---------------------------------------- --------------- ---------- ------------------------------\ncell single block physical read                        1          0 1 microsecond\ncell single block physical read                        2          0 2 microseconds\ncell single block physical read                        4          0 4 microseconds\ncell single block physical read                        8          0 8 microseconds\ncell single block physical read                       16          0 16 microseconds\ncell single block physical read                       32          0 32 microseconds\ncell single block physical read                       64          0 64 microseconds\ncell single block physical read                      128        533 128 microseconds\ncell single block physical read                      256     240142 256 microseconds\ncell single block physical read                      512       7818 512 microseconds\ncell single block physical read                     1024        949 1 millisecond\ncell single block physical read                     2048        491 2 milliseconds\ncell single block physical read                     4096       1885 4 milliseconds\ncell single block physical read                     8192       3681 8 milliseconds\ncell single block physical read                    16384       2562 16 milliseconds\ncell single block physical read                    32768        257 32 milliseconds\ncell single block physical read                    65536         52 65 milliseconds\ncell single block physical read                   131072          3 131 milliseconds\ncell single block physical read                   262144          0 262 
milliseconds\ncell single block physical read                   524288          1 524 milliseconds<\/code><\/pre>\n<p>Virtualized:<\/p>\n<pre><code>EVENT                                    WAIT_TIME_MICRO WAIT_COUNT WAIT_TIME_FORMAT\n---------------------------------------- --------------- ---------- ------------------------------\ncell single block physical read                        1          0 1 microsecond\ncell single block physical read                        2          0 2 microseconds\ncell single block physical read                        4          0 4 microseconds\ncell single block physical read                        8          0 8 microseconds\ncell single block physical read                       16          0 16 microseconds\ncell single block physical read                       32          0 32 microseconds\ncell single block physical read                       64          0 64 microseconds\ncell single block physical read                      128          1 128 microseconds\ncell single block physical read                      256     322113 256 microseconds\ncell single block physical read                      512     105055 512 microseconds\ncell single block physical read                     1024       1822 1 millisecond\ncell single block physical read                     2048        813 2 milliseconds\ncell single block physical read                     4096        681 4 milliseconds\ncell single block physical read                     8192        283 8 milliseconds\ncell single block physical read                    16384        231 16 milliseconds\ncell single block physical read                    32768         64 32 milliseconds\ncell single block physical read                    65536         11 65 milliseconds\ncell single block physical read                   131072          3 131 milliseconds<\/code><\/pre>\n<p>In the first run we see more reads around 8ms which confirms the previous guess that we had more flash cache hit on the 
second run.<br \/>\nThe waits between 128 and 512 microseconds come from the cell flash storage, and this is where I want to see whether virtualization has an overhead.<br \/>\nI&#8217;ve put it in color below, where it&#8217;s easier to visualize that most of the reads are in the 128-256 microsecond range. Bare Metal in blue, OVM in orange.<\/p>\n<p><a href=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/CaptureX5BMVM.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/CaptureX5BMVM.png\" alt=\"CaptureX5BMVM\" width=\"990\" height=\"704\" class=\"alignnone size-full wp-image-10049\" \/><\/a><\/p>\n<p>In Bare Metal, most of the reads are faster than 256 microseconds. In Virtualized, a significant number of reads are above that. This may be caused by virtualization, but in any case it&#8217;s not a big difference. I don&#8217;t think that virtualization overhead is an important criterion when choosing how to install your Exadata. Storage capacity planning is the major criterion: consolidate all storage into two diskgroups (DATA and RECO) for all databases, or partition it per cluster. The choice is about manageability and agility in provisioning vs. licence optimization.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>By Franck Pachot. The Exadata X5 can be installed Bare Metal or Virtualized. The latter one, Oracle VM allows to create multiple clusters on one machine, is more complex for installation and for storage capacity planning. But it allows a small flexibility on options licencing. Those are the real challenges behind the choice. 
However, [&hellip;]<\/p>\n","protected":false},"author":27,"featured_media":8685,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[229],"tags":[103,96,412,624,894],"type_dbi":[],"class_list":["post-8683","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-database-administration-monitoring","tag-exadata","tag-oracle","tag-oracle-vm","tag-slob","tag-x5"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/8683","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/users\/27"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/comments?post=8683"}],"version-history":[{"count":0,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/8683\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media\/8685"}],"wp:attachment":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media?parent=8683"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/categories?post=8683"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/tags?post=8683"},{"taxonomy":"type","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/type_dbi?post=8683"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}