Left-to-right: Cascade Lake Xeon AP, Cascade Lake Xeon SP, Broadwell Xeon D-1600, and up front, Optane DC Persistent Memory.
Intel today launched a barrage of new products for the data center, tackling almost every enterprise workload out there. The company’s diverse range of products highlights how today’s data center is more than just processors, with network controllers, customizable FPGAs, and edge device processors all part of the offering.
The star of the show is the new Cascade Lake Xeons. These were first announced last November, and at the time, a dual-die chip with 48 cores, 96 threads, and 12 DDR4-2933 memory channels was going to be the top-spec part. But Intel has gone even further than initially planned with the new Xeon Platinum 9200 range: the top-spec part, the Platinum 9282, pairs two 28-core dies for a total of 56 cores and 112 threads. It has a base frequency of 2.6GHz, a 3.8GHz turbo, 77MB of level 3 cache, 40 lanes of PCIe 3.0 expansion, and a 400W power draw.
The new dual-die chips are dubbed “Advanced Performance” (AP) and fit in above the Xeon SP (“Scalable Processor”) range. They’ll be supported in two-socket configurations for a total of four dies, 24 memory channels, and 112 cores/224 threads. Intel doesn’t plan to sell these as bare chips; instead, the company is going to sell motherboard-plus-processor packages to OEMs. The OEMs are then responsible for adding liquid or air cooling, deciding how densely they want to pack the motherboards, and so on. As such, there’s no price for these chips, though we imagine it’ll be somewhere north of “expensive.”
As well as these new AP parts, Intel is offering a full refresh of the Xeon SP line. The full Cascade Lake SP range includes some 60 different variations, offering different combinations of core count, frequency, level 3 cache, power dissipation, and socket count. At the top end are the Xeon Platinum 8280, 8280M, and 8280L. All three of these have the same basic parameters: 28 cores/56 threads, 2.7/4.0GHz base/turbo, 38.5MB L3, and 205W power. They differ in the amount of memory they support: the bare 8280 supports 1.5TB, the M bumps that up to 2TB, and the L goes up to 4.5TB. The base model comes in at $10,009, with the high-memory variants costing more still.
Across the full range, a number of other suffixes pop up too: N, V, and S are aimed at specific workloads (Networking, Virtualization, and Search, respectively), and T is designed for long-life/reduced-thermal loads. Finally, a few models have a Y suffix. This denotes that they have a feature called “Speed Select,” which allows applications to be pinned to the cores with the best thermal headroom and highest possible clock speeds.
Cascade Lake itself is an incremental revision of the Skylake SP architecture. The basic parameters—up to 28 cores/56 threads per die, 1MB of level 2 cache per core, up to 38.5MB of shared level 3 cache, up to 48 PCIe 3.0 lanes, six DDR4 memory channels, and AVX-512 support—remain the same, but the details show improvement. They support DDR4-2933, up from DDR4-2666, and the standard memory supported is now 1.5TB instead of 768GB. Their AVX-512 support has been extended to include an extension called VNNI (“vector neural network instructions”) aimed at accelerating machine-learning workloads. They also include (largely unspecified) fixes for many variants of the Spectre and Meltdown attacks.
The other big thing that Cascade Lake brings beyond Skylake is support for Optane memory. Most of the Xeon SP range (though oddly, not the Xeon AP processors) can use Optane DIMMs built to the DDR4-T standard. Optane (also known as 3D XPoint) is a non-volatile solid-state memory technology developed by Intel and Micron. Its promise is to offer density that’s comparable to flash, random access performance that’s within an order of magnitude or two of DDR RAM, and enough write endurance that it can be used in memory-type workloads without failing prematurely. It does all this at a price considerably lower than DDR4.
Intel has been talking about using Optane DIMMs for memory-like tasks for some time, but only today is it finally launching, as Optane DC Persistent Memory. Systems can’t use Optane exclusively—they’ll need some conventional DDR4 as well—but by using the combination, they can be readily equipped with huge quantities of memory, using 128, 256, or 512GB Optane DIMMs.
Intel Optane DC Persistent Memory.
Applications unaware of non-volatile memory can use the Optane and DDR4 as a single giant pool of memory. Behind the scenes, the DDR4 will cache the Optane, and the overall effect will simply be that a machine has an awful lot of memory that’s a little slower than regular memory. Alternatively, applications can be written to explicitly use non-volatile memory; these will have direct access to the Optane, using it as a kind of giant, randomly accessible, high-speed disk.
To allay any concerns about endurance, Intel is offering a three-year warranty for Optane DC Persistent Memory, even for parts that have been running at their peak write performance for the entire three years.
Intel also announced some refreshes to its Xeon D systems-on-chips. In 2015, Intel launched the Broadwell-based Xeon D 1500 line; last year, these were joined by the Skylake SP-based Xeon D 2100 line. The 2100 line offered a big increase in performance and memory capacity, but with much higher power draws, too.
Today comes the Xeon D 1600 line, direct replacements for the 1500 parts. Surprisingly, these new 1600 parts continue to use the same Broadwell architecture as their predecessors; they’re aimed at the same kinds of storage and networking workloads, with two to eight cores/16 threads, up to 128GB RAM, and power draws between 27 and 65W.
As well as the processor cores, they include (depending on which exact model you look at) four 10GbE Ethernet controllers, Intel QuickAssist Technology acceleration of compression and encryption workloads, six SATA 3 channels, four each of USB 3.0 and 2.0 ports, 24 lanes of PCIe 3.0, and eight lanes of PCIe 2.0.
Intel 800-series Ethernet controller.
Announced today but coming in the third quarter is a new Intel Ethernet controller. The 800 series, codenamed Columbiaville, will support 100Gb Ethernet. These controllers are rather more programmable than your typical Ethernet controller, with customizable, software-controlled packet parsing occurring within the Ethernet controller itself. That means the chip can send a packet for further processing, reroute it to a different destination, or do whatever an application needs, all without the involvement of the host processor at all. The controllers also support application-defined queues and rate limits, so complex application-specific prioritization can be enforced.
For its final data center offering, Intel announced the Agilex FPGA (field-programmable gate array—a processor that can have its internal wiring reconfigured on the fly), built using the company’s 10nm process. These chips offer up to 40TFLOPS of number-crunching performance and enable developers to build a range of application-specific accelerators. The FPGAs will sport a range of optional capabilities, such as four embedded ARM Cortex-A53 cores; support for PCIe generation 4 or 5, DDR4, DDR5, and Optane DC Persistent Memory; an option for HBM high-bandwidth memory mounted on-chip; and cache-coherent interconnects to attach them to Xeon SP chips.
For machine-learning workloads, they’ll support a range of low-precision integer and floating point formats. Further customization will come from the ability to work with Intel and directly embed custom chiplets into the FPGAs.
Intel Agilex FPGA model.
Over the last couple of years, FPGAs have become increasingly popular, especially in the cloud data centers operated by the likes of Microsoft, Google, and Amazon, as they offer a useful midpoint between the enormous flexibility of software-based computation and the enormous performance of hardware-based acceleration. They offer flexible acceleration of things like networking, encryption, and machine-learning workloads, in a way that’s readily upgraded and altered to adapt to new algorithmic requirements and models.
Intel plans to have these available from the third quarter.