Oracle Gives Exadata Clusters Hardware Makeover
Customers who are in the middle of buying Exadata database clusters from Oracle had better put a hold on those deals and check out the new Exadata X4-2 machines that the company has just rolled out. And those who were looking at fat-node implementations of the database engine would be wise to wait and see what future machines based on Intel's forthcoming "Ivy Bridge-EX" Xeon E7 processors will offer.
With rivals Hewlett-Packard and Dell making lots of noise, Oracle had to do something, and so it trotted out the new Exadata iron, which has upgraded components up and down the stack, from the processors to the networking cards, to the flash and disk storage. The end result is a system with more compute and storage capacity as well as throughput, and with Oracle holding prices steady, this is translating into substantially better value for the dollar.
The X4-2 system is the fifth generation of Exadata machines and the fourth created by Oracle in the wake of its acquisition of Sun Microsystems nearly four years ago; the original Exadata machine was built in partnership with Hewlett-Packard and was based on its ProLiant servers.
The X4-2 is based on two-socket servers designed by Oracle around Intel's "Ivy Bridge-EP" Xeon E5-2600 v2 processors, which were announced back in September. These processors are socket compatible with the earlier "Sandy Bridge-EP" chips, so the processor upgrade is no big deal in terms of system engineering. But by moving to the top-bin E5-2697 v2, which has twelve cores running at 2.7 GHz, Oracle is able to increase the core count in a single rack of Exadata iron by 50 percent, to 192 cores across eight compute nodes. On top of that, Oracle has worked with Intel to tune Turbo Boost to run on both the compute and the storage server nodes in the updated Exadata machines. Juan Loaiza, senior vice president of systems technology at Oracle, said in a webcast going over the new iron that if only a few threads are active on a Xeon chip, the clock speed can jump to 3.5 GHz, boosting throughput on database workloads by 29 percent, and that on heavy workloads the clock rate can often be pushed up to 3 GHz, driving about 11 percent more transactions through the system. Turbo Boost is enabled by default in the new Exadata systems and will crank up the clocks on cores whenever the power draw on a processor is low enough to keep it below its set thermal limits.
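The math on those figures checks out; here is a quick back-of-the-envelope tally, with the X3-2's core count back-computed from the 50 percent increase Oracle cites:

```python
# Back-of-the-envelope tally of the compute-node figures quoted above;
# the inputs come from the article, the arithmetic is ours.
nodes_per_rack = 8        # compute nodes in a full Exadata X4-2 rack
sockets_per_node = 2      # two-socket servers
cores_per_socket = 12     # top-bin Xeon E5-2697 v2

total_cores = nodes_per_rack * sockets_per_node * cores_per_socket
print(total_cores)        # 192 cores per rack
print(total_cores / 1.5)  # 128.0 -- the X3-2 core count implied by the
                          # 50 percent increase Oracle cites

base, light_turbo, heavy_turbo = 2.7, 3.5, 3.0   # clock speeds in GHz
print(f"{light_turbo / base - 1:.0%}")  # 30% -- in line with the quoted 29 percent
print(f"{heavy_turbo / base - 1:.0%}")  # 11% -- matches the quoted figure
```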
Each compute node in the Exadata X4-2 has 256 GB of memory, just like the X3-2 nodes before it, but Oracle has now certified 32 GB memory sticks in the server, so you can pump it up to 512 GB per node if that helps with your database workloads. The compute nodes now have 600 GB SAS disks, twice the capacity used in the earlier nodes; these disks spin at a relatively slow 10K RPM. The two-port InfiniBand cards that Oracle gets from Mellanox Technologies now slide into a PCI-Express 3.0 slot and have both ports active, doubling the bandwidth between the nodes and their storage servers. Prior to this, Oracle was putting a PCI-Express 2.0 card into the slot and only had one port active. (It is all about balancing the performance of the network against the compute and the storage servers that feed the compute nodes.)
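To put that dual-port change in rough numbers – and assuming the 40 Gb/sec QDR InfiniBand ports that Exadata machines of this vintage used, a detail this article does not spell out – activating the second port looks like this:

```python
# Hypothetical illustration of the InfiniBand change; the 40 Gb/sec QDR
# port speed is our assumption, not a figure stated in the article.
port_gbps = 40                         # QDR InfiniBand port, assumed
x3_2_node_gbps = 1 * port_gbps         # one active port on the old card
x4_2_node_gbps = 2 * port_gbps         # both ports active on the X4-2
print(x3_2_node_gbps, x4_2_node_gbps)  # 40 vs 80 -- the doubling Oracle cites
```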
The X4-2 storage server is also based on the new Ivy Bridge Xeon E5 processors, in this case a six-core Xeon E5-2630 v2 spinning at 2.6 GHz, which is faster than the cores used in the earlier X3-2 storage servers. The scale-out storage that is the secret sauce of the Exadata design comprises fourteen of these two-socket servers in a full-rack setup, for 168 cores in total. The memory per storage server has been boosted from 64 GB to 96 GB to manage the data coming off the fatter 800 GB Sun Flash Accelerator F80 PCI-Express flash cards.
Each storage server node has four F80 flash cards, which add on-card hardware compression that the earlier F40 cards lacked. Across a full rack, the storage servers present 168 SAS drive slots, and Oracle lets customers pick either a dozen 1.2 TB disks per server in a 2.5-inch form factor that spin at 10K RPM – what it calls the high performance or HP variant – or a dozen 4 TB drives in a 3.5-inch form factor that spin at only 7.2K RPM – what it calls the high capacity or HC option. That works out to a maximum of 200 TB per Exadata rack using the 1.2 TB disks for the HP option, a factor of two increase in capacity, or a maximum of 672 TB per rack using the 4 TB disks for the HC option, a 33 percent increase over the X3-2 setup. The storage servers now have dual-port InfiniBand adapters with both ports active, just like the compute nodes.
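As a sanity check on those capacity claims, here is the rack-level math, with the X3-2 baselines back-computed from the ratios Oracle cites:

```python
# Rack-level storage math from the figures quoted above; the X3-2
# baselines are back-computed from the ratios Oracle cites.
storage_servers = 14
cores = storage_servers * 2 * 6            # two six-core E5-2630 v2 per server
print(cores)                               # 168 cores per rack

drives_per_server = 12
hp_tb = storage_servers * drives_per_server * 1.2  # 1.2 TB HP drives
hc_tb = storage_servers * drives_per_server * 4.0  # 4 TB HC drives
print(hp_tb, hc_tb)                        # 201.6 TB (the "200 TB") and 672.0 TB

# A factor-of-two HP increase implies 600 GB drives on the X3-2, and a
# 33 percent HC increase implies 3 TB drives there.
print(hp_tb / 2, hc_tb * 3 / 4)            # 100.8 TB and 504.0 TB per X3-2 rack
```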
The flash cards in the storage servers have double the capacity, which yields 44.8 TB per rack, and with hardware compression the logical capacity of the flash memory is more like 89.6 TB. This compression is done automatically and is transparent to the Oracle 11g or 12c database that runs on the cluster. Because the compression is done in hardware, there is no performance overhead on the compute or storage servers, and Oracle says that, depending on the data, compression ratios can range from a factor of 1.2 to 5. The important thing, Loaiza explained, is that most of the databases used in production environments today will fit inside the flash spread across the Exadata cluster. Flash cache hit rates often run from 95 to 98 percent, even when the database is ten times larger than the physical flash, because of all of the compression and caching tricks Oracle has come up with.
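The flash arithmetic is straightforward, and it shows that the 89.6 TB logical figure assumes a 2:1 compression ratio, comfortably inside the 1.2-to-5 range Oracle quotes:

```python
# Flash capacity per rack from the figures quoted above.
storage_servers = 14
cards_per_server = 4
card_tb = 0.8                     # 800 GB Sun Flash Accelerator F80

raw_tb = storage_servers * cards_per_server * card_tb
print(raw_tb)                     # 44.8 TB of physical flash per rack
print(raw_tb * 2)                 # 89.6 TB logical -- so the headline
                                  # figure assumes a 2:1 compression ratio
```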
Add it all up, and Oracle says that a rack of Exadata gear running the Oracle 12c database can do 2.66 million random reads and 1.96 million random writes per second in 8 KB chunks against the flash that front-ends the disk storage. That is 77 percent more I/O operations per second than the X3-2 could deliver. It is not clear how this will translate into database and data warehouse performance increases.
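For a sense of what those IOPS numbers imply, the bandwidth figures and the X3-2 baseline below are our arithmetic, not Oracle's:

```python
# What the quoted IOPS figures imply; arithmetic is ours.
read_iops = 2.66e6                 # 8 KB random reads per second
write_iops = 1.96e6                # 8 KB random writes per second
block_kb = 8

print(read_iops * block_kb / 1e6)  # ~21.3 GB/sec of flash read bandwidth
print(write_iops * block_kb / 1e6) # ~15.7 GB/sec of flash write bandwidth
print(read_iops / 1.77)            # ~1.5 million read IOPS implied for
                                   # the X3-2 by the 77 percent uplift
```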
Oracle has made a few other tweaks with the X4-2. The full-rack and half-rack configurations no longer have an InfiniBand spine switch, which is used to lash multiple Exadata racks together. You can still gang up multiple racks if you want to, but you will have to buy the switch to do so. Also, each rack now has one spare disk and one spare flash card that come with it for quick replacement in the field.
For fat-node Exadata machines, Oracle tipped its hand a bit and said that it would be supporting the forthcoming Ivy Bridge Xeon E7 processors in a machine that will, for some reason, continue to bear the X3-8 name instead of being called the X4-8. The disk and network configurations in this Xeon E7-enabled X3-8 system will be the same, so there will not be a doubling of local disk capacity or InfiniBand throughput on the compute nodes. But Oracle did say that customers will be able to upgrade to the faster X4-2 storage servers inside this X3-8, so they will get all the benefits of fatter flash, flash compression, faster storage processors, and fatter disks there, just like customers buying the new Exadata X4-2 systems.
On the software front, the new Exadata X4-2 system will be available with Oracle Linux 5.9, which has the UEK2 kernel from Oracle, or Solaris 11 Update 1 SRU 9. Both operating systems have been tweaked to support the hardware-based flash compression and to do a better job of flash caching.
As you can see from the Engineered Systems price list, a full rack of the Exadata X4-2 costs $1.1 million; the price is the same whether the Exadata storage servers in that configuration use the high capacity or the high performance disks. You can buy an eighth, quarter, or half rack configuration as well, with the eighth rack entry price being $220,000. These prices include the cost of Linux or Solaris, but not the cost of the Oracle database software or of Real Application Clusters (RAC) and the other add-ons commonly used by customers. That software can be very expensive – as in many millions of dollars per rack more – depending on whether a customer has user-based, processor-based, or enterprise-wide licensing with Oracle and on which options it picks.
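For a rough sense of the hardware economics – leaving out the software licensing that dominates the real bill – the list prices work out like this:

```python
# Rough hardware price math from the quoted list prices; database and
# RAC licenses, which dominate the real bill, are excluded here.
full_rack = 1_100_000
eighth_rack = 220_000

print(eighth_rack / full_rack)     # 0.2 -- an eighth of the capacity
                                   # costs a fifth of the full-rack price
print(full_rack / 192)             # ~$5,729 per compute core, iron only
```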