Google Rethinks The Datacenter
With its vastly profitable search engine and advertising empire, it might seem that Google no longer has to worry about controlling capital expenditures. This is not the case. Google obsesses about infrastructure costs. But it does have the luxury of stepping back to rethink how best to utilize its sprawling infrastructure. In Google's case, the datacenter, and with it the cloud, is steadily evolving to become the computer.
Speaking at the GigaOm Structure conference, Urs Hoelzle, Google's senior vice president for technical infrastructure, described how the search giant is rethinking the datacenter. For starters, "We've really learned how to scale" as more computing is done on devices, particularly Android devices.
Google has been engineering its own hardware for a decade and its software from day one to get a better handle on managing workloads generated by searches, Gmail, video, and a raft of other applications. Hence, its use case has steadily diverged from what server vendors were offering.
The result, said Hoelzle, is Google began "thinking of the datacenter as a computer rather than as a single box."
"With cloud, these things in the five or ten year [time] frame actually converge again because 'normal' workloads, not just Google-scale workloads, are going to run on the same infrastructure and therefore can use the same hardware," he said.
As the datacenter becomes one big virtual computer, Google is looking for ways to make it more transparent for programmers so they can develop better tools for managing workloads. That means you need a "scheduler" to manage the workloads and "put them inside the cluster," he said.
"One of the key pieces is the cluster management that takes a stream of workloads from multiple users and actually places it on these servers and networks and clusters."
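The cluster manager Hoelzle describes takes a stream of incoming workloads and places each one on a machine with spare capacity. A minimal sketch of that placement step might look like the following; the server names, capacity numbers, and job names are invented for illustration, and a real scheduler would also weigh memory, network, priority, and failure domains.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    capacity: int              # e.g. CPU cores available
    used: int = 0
    tasks: list = field(default_factory=list)

def schedule(workloads, servers):
    """Place each workload on the least-loaded server that can fit it."""
    placements = {}
    for name, demand in workloads:
        candidates = [s for s in servers if s.capacity - s.used >= demand]
        if not candidates:
            placements[name] = None        # no capacity anywhere in the cluster
            continue
        target = min(candidates, key=lambda s: s.used)
        target.used += demand
        target.tasks.append(name)
        placements[name] = target.name
    return placements

servers = [Server("node-a", 8), Server("node-b", 8)]
jobs = [("search-frontend", 4), ("gmail-indexer", 3), ("batch-transcode", 5)]
print(schedule(jobs, servers))
```

The point of the sketch is the division of labor: users submit demands, and the cluster manager, not the user, decides which physical box runs what.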
With that in mind, Google recently moved to open source a simplified cluster manager for Docker container images. "You have a set of VMs, and you put one or more of these containers inside the VM in order to run them," Hoelzle explained. "That lets you think about your application rather than thinking about your infrastructure. In the long term, that's the right way of thinking about datacenters."
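The shift Hoelzle describes is from imperative to declarative: the developer states what the application needs (an image and a replica count), and the cluster manager works out which VMs run the containers. A toy sketch of that idea, with hypothetical VM names and a made-up two-slot capacity per VM:

```python
def deploy(spec, vms, slots_per_vm=2):
    """Expand an app spec into container placements, filling the least-loaded VM first."""
    placements = []
    load = {vm: 0 for vm in vms}
    for i in range(spec["replicas"]):
        vm = min(vms, key=lambda v: load[v])       # pick the emptiest VM
        if load[vm] >= slots_per_vm:
            raise RuntimeError("cluster out of capacity")
        load[vm] += 1
        placements.append({"container": f'{spec["app"]}-{i}',
                           "image": spec["image"],
                           "vm": vm})
    return placements

# The developer writes only this spec; placement is the system's problem.
spec = {"app": "web", "image": "example/web:1.0", "replicas": 3}
print(deploy(spec, ["vm-1", "vm-2"]))
```

Nothing in the spec names a machine, which is what "think about your application rather than your infrastructure" amounts to in practice.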
Meanwhile, the need to scale is forcing organizational change.
"You can't scale your organization by 50 percent every year and keep doing things the way you did," he stressed. "You need to have a sort of efficiency of scale," which means more automation and fewer administrators handling more management tasks.
Google has concluded that workloads are moving to the cloud. "It's obviously the right thing from an economic perspective, from a functionality perspective. And it will be the obvious thing from a security and compliance perspective," the Google executive claimed.
Private cloud proponents might take issue with Hoelzle's views on cloud security, but he argued that merely keeping data on-premises is increasingly overrated as a security measure.
"Physical location is not what makes your security," he argued.
What does matter, and what is hard, is getting every security element in place, from the user all the way to encrypted storage. "If we can have more cookie-cutter infrastructure and services that actually get these individual pieces to link up together without making accidental mistakes, that will increase your de facto security," Hoelzle argued.
George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).