Dell Takes A Long View On Datacenters
One of the things that Dell committed to doing once it went private was to invest more in research and development. Last summer, while the privatization effort was still under way, the company quietly formed Dell Research and put Jai Menon, a top storage and systems researcher formerly of IBM, in charge of the division.
Menon spoke with EnterpriseTech recently to outline the goals for the division and how they map to changes that Dell sees in the datacenter over the next several years.
Menon is no stranger to prognostication and research. He was hired by Dell back in the fall of 2012 to be CTO of its Enterprise Group, and prior to that Menon spent three decades at IBM in various roles. Menon got his MS and PhD in computer science from Ohio State University and joined Big Blue in 1982, taking his first position at IBM Research. He holds 52 patents and has expertise in storage systems, particularly RAID controllers, but also the networking, caching, and virtualization technologies that go into modern storage systems. Menon eventually moved out of IBM Research to become CTO of IBM's Storage Division and later CTO for the entire Systems and Technology Group.
The Dell Research arm is not, as you might expect, a smaller replica of the very broad and deep research and development that IBM is noted for doing. You can afford that kind of scientific inquiry on a mainframe budget, but certainly not on a PC and X86 server budget. There are some similarities, such as Dell wanting to collaborate more closely with universities on its investigations. But Dell does not want to operate at such a rarefied level; it wants to work a little closer to where the customer – IT shops that are building datacenters and running applications – lives. The Dell Research organization is relatively small for now – Dell will not disclose its budget – and will leverage existing R&D teams within Dell's various divisions as well as hire its own experts from academia or startups.
The unit brings together researchers who worked for various Dell divisions, including a plethora of acquired software companies with specialization in systems management, virtualization, and other areas. The initial four areas of study are modern infrastructure, security, data insight, and mobility and the Internet of Things. About half of the researchers work from Dell's facilities outside of Austin, Texas, and the other half work at Dell's office in Silicon Valley, which coalesced from several software and hardware companies that Dell acquired over the past several years. The Dell Research division is part of the overall expansion of Dell's R&D efforts, which include a $300 million innovation fund that the company announced last December at its annual DellWorld customer and partner shindig. Dell was on track to spend around $1.3 billion on research, development, and engineering in calendar 2013 before it went private, about 28 percent higher than in the prior calendar year. Generally speaking, Menon says that Dell will spend even more on R&D going forward, but he is not at liberty to say how much, even though he can talk generally about what Dell will be exploring.
"The focus for us is to complement the work that is going on in each individual business unit," explains Menon. "We are focused on things that are pan-Dell, that are not necessarily straight line – what is the next PC, what is the next server – but disruptive and long term. The other research efforts tend to be shorter term and more business-unit focused, and they do a fantastic job of that."
The research in modern infrastructure is most relevant to enterprise IT shops, and includes cloud computing, software-defined datacenters, and similar technologies. One big area that Menon's team is looking into is flash memory and all of the possible follow-ons that could be commercialized after flash technology runs out of gas.
"As phase change memory and memristors come to market – and I expect PCM to become real within the next three years in a server context – we want to explore what we can do if we have a server with memory that is like flash, but it is 50 to 100 times faster, almost the speed of DRAM itself," explains Menon. "What kind of new analytics will be enable?"
The idea is to create a curve for analytics work that takes into account the rate of improvement in algorithms, changes in memory technology, and Moore's Law improvements for processors and memory. The goal is to predict how much analytics work can be performed three years from now given all of these trends and the adoption of new memory technologies in servers and storage.
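Such a projection is, at bottom, a compounding exercise. Here is a minimal sketch of the arithmetic; every rate in it is a hypothetical placeholder, not a figure from Dell Research:

```c
/* Back-of-the-envelope compound-trend projection of the sort Menon
   describes. All rates below are invented placeholders. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double algo_rate = 1.30;   /* assumed yearly algorithmic improvement */
    double cpu_rate  = 1.40;   /* assumed yearly Moore's Law gain */
    double mem_rate  = 1.25;   /* assumed yearly memory bandwidth gain */
    double pcm_boost = 50.0;   /* assumed one-time jump if PCM-class
                                  memory replaces flash for hot data */
    int years = 3;

    /* Independent yearly trends compound multiplicatively. */
    double multiplier = pow(algo_rate * cpu_rate * mem_rate, years) * pcm_boost;
    printf("Projected analytics capacity in %d years: %.0fx today's\n",
           years, multiplier);
    return 0;
}
```

The hard research question, of course, is pinning down the real rates and establishing whether the trends actually compound independently.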
Another infrastructure research project involves the slew of gear deployed by telecommunications companies. What Dell learns with telcos will apply to other industry sectors and to datacenters in general, says Menon.
"If you go to a telco operator, they have a large number of very specialized boxes that really drive their infrastructure," explains Menon. "And it is high in both capital and operating expenses, and they have an alphabet soup of names. At the end of the day, it is hardware, it is proprietary, and there are only a few sources. This is a $160 billion business, and what the operators would like is to move all of that stuff to run on standard servers and storage. These applications are a little different from what you see in a regular datacenter, and the research is into creating a layer of software that can drive a huge number of packets per second. This is possible and it is going to be a journey of many years with these customers. But the power of software is going to disrupt this industry, and we are going to help with that."
When it is all done, telcos will be able to run their systems on cloudy infrastructure and relatively easily scale from hundreds of thousands to millions of users without having to buy tons of specialized gear.
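The core software trick behind pushing millions of packets per second through a standard server is batched, poll-mode packet handling, the general design popularized by frameworks such as Intel's DPDK. The sketch below shows only the shape of such a loop; nic_rx_burst() and handle_packet() are hypothetical stubs, not a real driver API:

```c
/* Skeleton of a batched poll-mode receive loop. The NIC functions are
   hypothetical stand-ins for a user-space driver that would read the
   NIC's RX descriptor ring directly. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define BURST 32  /* packets pulled per poll */

struct packet {
    uint8_t *data;
    size_t   len;
};

/* Hypothetical stub: always reports an empty ring in this demo. */
static size_t nic_rx_burst(struct packet *pkts, size_t max_pkts) {
    (void)pkts; (void)max_pkts;
    return 0;
}

static void handle_packet(const struct packet *pkt) {
    printf("packet of %zu bytes\n", pkt->len);
}

int main(void) {
    struct packet pkts[BURST];
    /* A real poll loop runs forever with a core pinned to it; this demo
       polls a fixed number of times. Busy-polling in batches, instead
       of taking one interrupt per packet, amortizes per-call overhead
       across up to BURST packets, which is what makes multi-million
       packet-per-second rates feasible on commodity processors. */
    for (int iter = 0; iter < 1000; iter++) {
        size_t n = nic_rx_burst(pkts, BURST);
        for (size_t i = 0; i < n; i++)
            handle_packet(&pkts[i]);
    }
    return 0;
}
```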
Given the fact that datacenters have, by and large, standardized on X86 processors and either Windows or Linux for a lot of their work, you might be wondering why this has not happened already. The reason, says Menon, is that telcos have focused on performance. As things now stand, the telco stack has a lot of different ASICs performing different functions, with processors sitting next to them running code. The ASICs don't all have to go away, but their functions can be consolidated down to a single server. The Freedom Server-Switch from Pluribus Networks, announced two weeks ago, is a perfect example; it integrates a switch ASIC and a server platform in a single machine.
"Our philosophy is that the standard stuff will be able to do more and more," says Menon, speaking very generally about datacenters, not just telco gear. "There will always be edge cases, of course."
Dell's Data Center Solutions group has already done ground-breaking work on minimalist hyperscale server designs and on modular and containerized datacenters, so Dell Research is not going to repeat that work, says Menon. But he is thinking about starting up a research project on cable-less datacenters. Such a wireless datacenter might be possible in five to ten years – what he called a "Holy Grail" timeframe – and it would eliminate a lot of the weight, cabling errors, and cost in datacenters.