Charlton Barreto is an accomplished and visionary cloud, Web, and distributed computing expert. He is currently a Technology Strategist in cloud technologies at Intel, where he has worked since 2008. His previous role was with Adobe as a Technology Evangelist, prior to which he was with webMethods and Borland. Charlton is a luminary who actively blogs, tweets, and speaks on cloud technologies and initiatives. He also has an active role on the boards of advisors for a number of companies, including BTC Logic and Intuit.

In this interview we discuss:

  • How thinking about the cloud has matured among customers
  • Whether country-specific data laws work for the cloud
  • Technologies that underpin the cloud
  • How companies are using cloud computing outside the knowledge of IT and leadership
  • Perception of clouds running on open source
  • The Open Cloud Computing Interface

Robert: You’ve recently spoken at a number of cloud computing conferences, such as Cloud Camp Hamburg, the Cloud Computing Expo, and DreamForce. What are the major observations you took away from those speaking engagements about cloud computing and the attendees’ impressions of where things stand today?

Charlton: My major impressions have been that there is still a body of attendees who see cloud as a single-dimensional offering and something that’s principally technological. That’s something I was hoping to chip away at during these events; I have hoped to turn their perception toward cloud as a usage model, rather than as a set of technologies.

The other, more remarkable impression I took away is that security and privacy are extremely important to consumers of clouds. They have brought very thoughtful insight to how current and emerging providers are taking measures to guarantee certain levels of data security and execution security in the cloud, and they have begun to develop sophisticated expectations around service agreements concerning privacy.

My main takeaway has been that the thinking about cloud has matured to the extent that existing and potential consumers are starting to ask the right questions of their providers and their vendors. And these providers and vendors, to a fair degree, are beginning to respond. That is definitely a great positive to take away from these events.

Robert: There are laws that vary by country that restrict data. Do you think that some of those laws need to be rethought in light of the cloud?

Charlton: I think that laws in some regions, and particularly the Patriot Act in the U.S., need to be rethought in terms of the rights of a government to intrusively access data without regard for the processes of the data’s country of origin.

For example, if a European entity utilizes a cloud service that happens to have some of its processing and data storage in a U.S. site, should that fact give the U.S. government the right to subpoena that information? I think that issue has to be deeply reconsidered.

If a cloud consumer somewhere in the world objects to the data-access stipulations of the Patriot Act, how can they ensure that their information is isolated and secure from access by the U.S. government? They would need to be able to be sure that processing won’t take place in a U.S. data center, but rather in other data centers not bound by the Patriot Act regulations.
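
As a rough illustration of the placement constraint Charlton describes, a consumer (or a provider acting on the consumer's behalf) can filter candidate data centers by the jurisdiction whose law governs them before placing any workload. The sketch below is hypothetical throughout, including the site names and jurisdiction codes:

    # Minimal sketch of jurisdiction-aware placement. The data centers,
    # jurisdiction codes, and exclusion policy are hypothetical examples.
    from dataclasses import dataclass

    @dataclass
    class DataCenter:
        name: str
        jurisdiction: str  # country whose law governs the site

    # Hypothetical inventory of provider sites.
    SITES = [
        DataCenter("us-east-1", "US"),
        DataCenter("eu-frankfurt-1", "DE"),
        DataCenter("eu-dublin-1", "IE"),
    ]

    def eligible_sites(excluded: set[str]) -> list[DataCenter]:
        """Return only sites whose governing law the consumer accepts."""
        return [dc for dc in SITES if dc.jurisdiction not in excluded]

    # A consumer who objects to Patriot Act exposure excludes U.S.-governed
    # sites before any processing or storage is placed.
    print([dc.name for dc in eligible_sites({"US"})])
    # -> ['eu-frankfurt-1', 'eu-dublin-1']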

For reasons such as that, I think location is growing in importance. Effectively, I think that laws such as the Patriot Act have to be completely reconsidered, given that data storage and processing are transcending political borders more and more as the cloud continues to evolve.

Robert: Just a couple of days ago, I was talking to a financial solutions provider. He mentioned to me that the Swiss banking system requires all its data to stay in Switzerland. Not only that, but only Swiss citizens can view the data.

Charlton: That aptly highlights the importance of location services. Location services have previously been considered with regard to the qualities of the location. In other words, does the location of my processing allow me to have either greater quality of experience or access to different types of services? Or can I have freer access to a corporate network because my processing and data happen to reside in a location that has direct access to that network?

But I think that as this borderless processing continues to grow, location services become an increasingly important part of policy decisions and administration.

As a Swiss bank, if I am placing my information into a cloud, my provider has to ensure that my information stays within data centers located in Switzerland, in addition to being able to apply the correct policies to ensure that only those who have appropriate authentication can access that information.

Those considerations set the stage for scenarios in which a Swiss citizen with a device located within Switzerland can use user credentials, device credentials, and location services on that device, intersecting with policies in a Swiss data center, to gain authorized access to that information.

Now consider the case where the same Swiss citizen happens to be outside of Switzerland. Even though that person has an authenticated device and is authenticated in terms of their user credentials, they may be denied access based upon their location.
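
The Swiss bank scenario can be read as a policy decision that intersects user credentials, device credentials, and reported location. A minimal sketch, with hypothetical names and a hypothetical policy throughout:

    # Sketch of a location-aware access decision, per the Swiss bank
    # example. All names and the policy itself are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user_authenticated: bool  # user credentials verified
        device_attested: bool     # device credentials verified
        device_country: str       # reported by the device's location services

    def authorize(req: AccessRequest, required_country: str = "CH") -> bool:
        """Grant access only when user, device, and location all check out."""
        return (req.user_authenticated
                and req.device_attested
                and req.device_country == required_country)

    # An authenticated Swiss citizen, on a trusted device, inside Switzerland:
    print(authorize(AccessRequest(True, True, "CH")))  # True
    # The same user and device, travelling outside Switzerland:
    print(authorize(AccessRequest(True, True, "FR")))  # False

The last case is exactly the denial Charlton describes: the identity and device checks pass, but the location policy does not.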

Robert: To shift gears a little, what Intel technologies do you feel impact cloud computing environments? Obviously, Intel’s VT technology comes into play here, but what other technologies?

Charlton: Trusted Execution Technology plays a critical role in enabling what is being called a trusted cloud. There are not many practical means to determine and report on a cloud service’s security profile, or to verify, let’s say, service conformance or compliance with a governance standard. The current processes are rather labor intensive and not necessarily consistent.

Trusted Execution Technology provides a way to address a lot of these issues and concerns around security. One is providing for secure virtual machines. In other words, you can measure and validate software prior to launch so that, with the execution controls, you can ensure that only the software that you trust as a user or service provider would be launched in your data center.

This provides what we are calling a chain of trust that is rooted in a secure platform. You are protecting the platform from the firmware up through the hypervisor, verifying that the hypervisor is trustworthy before launching it. From there, you can also ensure that the VMs being launched and provisioned can be attested.
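
Conceptually, a measured launch compares a cryptographic measurement of each component against a known-good value before allowing the next stage to run. Real Trusted Execution Technology anchors these measurements in hardware (a TPM’s platform configuration registers); the application-level sketch below only illustrates the chain-of-trust idea:

    # Illustrative chain of trust: each stage is measured (hashed) and
    # compared against a known-good value before it is allowed to launch.
    import hashlib

    def measure(image: bytes) -> str:
        """A 'measurement' here is simply a SHA-256 digest of the image."""
        return hashlib.sha256(image).hexdigest()

    # Hypothetical trusted images, and a whitelist derived from them.
    TRUSTED_FIRMWARE = b"firmware-build-1.2.3"
    TRUSTED_HYPERVISOR = b"hypervisor-build-4.5"
    KNOWN_GOOD = {
        "firmware": measure(TRUSTED_FIRMWARE),
        "hypervisor": measure(TRUSTED_HYPERVISOR),
    }

    def verified_launch(stage: str, image: bytes) -> bool:
        """Launch a stage only if its measurement matches the whitelist."""
        return measure(image) == KNOWN_GOOD[stage]

    # The hypervisor is verified before launch; a tampered image is refused.
    print(verified_launch("hypervisor", TRUSTED_HYPERVISOR))      # True
    print(verified_launch("hypervisor", b"hypervisor-tampered"))  # False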

These are launch-time assurances, and Intel looks to its partners that provide services concerned with issues such as runtime security to be able to extend them into the runtime environment. Let’s say you are a user of a public cloud service. How can you understand exactly what sort of exposure and posture you actually have within your infrastructure?

To date, the efforts to build on Trusted Execution Technology have included partners such as VMware and RSA. They provide capabilities to integrate governance, risk, and compliance monitors, configuration managers, and security information and event managers to report on the configuration of the virtual infrastructure.

Other notable features include Active Management Technology, which provides out-of-band, secure background updates to platforms. You can combine that with a trusted, measured launch, or even without it, provide greater levels of security and management.

Active Management Technology provides a way for both managed and non-managed clients to conform to requirements in terms of updates, patches, security management, policy management, and resource utilization. Other technologies include, for example, anti-theft technology with devices, which allows you to provide policy actions that can be taken in the case of lost or stolen devices.

Robert: Previously you’ve highlighted Intel CTO Justin Rattner’s comments that it can be challenging to “take an open platform and selectively close it, protect crucial parts of the code from attack”, etc. What are your thoughts on that comment when it comes to cloud platforms?

Charlton: Well, I think in the cloud it’s somewhat complicated by the fact that such a large proportion of these platforms are virtualized. I think that raises a lot of questions around security and isolation technologies and measures. In other words, how do you actually control something that can live anywhere, where you don’t necessarily know what resources are attached to it?

How can you correlate, let’s say, any of that processing with respect to what exists on the platforms on which they’re executed? That challenge can be mitigated through a number of different approaches.

Justin Rattner’s comment leads one to look at how to provide assurances not only in terms of which platforms you’re executing on, but in terms of the actual workloads you’re running. If you can attach to a runtime artifact such as a VM, a document, an application, or something else that happens to be that mutable and dynamic, you can then determine whether you trust that exact stack and whether you have enough control over it to show that it meets your requirements.

You can also determine whether you can trust that dynamic artifact, given that it is going to experience changes from time to time. One question that arises is how you can protect open resources in a cloud when you don’t necessarily understand their performance profile.

Consider the case where I have a given level of resources available in a typical data center or in a typical pool of data center resources, and I expect them to behave in this specific fashion. Given the fact that in the cloud, or among many service providers, the platform on which that code can run can vary greatly, how do I determine what the capabilities of the platforms are and then apply policies based on that information?
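
One way to read that question is as a capability-matching problem: describe what each platform offers, describe what the workload’s policy requires, and place the workload only where the requirements are met. A hypothetical sketch:

    # Hypothetical sketch of matching a workload policy against the
    # varying capabilities of platforms across providers.
    PLATFORMS = {
        "provider-a/host-type-1": {"trusted_launch": True,  "cores": 16},
        "provider-b/host-type-9": {"trusted_launch": False, "cores": 32},
    }

    def satisfies(caps: dict, policy: dict) -> bool:
        """True when every policy requirement is met by the platform."""
        return (caps["cores"] >= policy["min_cores"]
                and (caps["trusted_launch"] or not policy["require_trust"]))

    policy = {"min_cores": 8, "require_trust": True}
    print([name for name, caps in PLATFORMS.items() if satisfies(caps, policy)])
    # -> ['provider-a/host-type-1']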

The cloud adds a lot of complexities to these questions, and the technologies and solutions are emerging to alleviate some of that complexity.

Robert: You’ve had roles on the boards of several companies (Intuit, BTC Logic) over the years. What have you found particularly rewarding about that experience, and how have you seen cloud computing evolve as a discussion point with the companies you’ve been a board member for?

Charlton: What’s been most rewarding is the ability to help companies understand some of the options available to them to address their greatest obstacles: scaling, reaching new markets, or addressing customer requests in an economical and timely fashion.

I value being able to take the experience I’ve gained, from the early Web through to the cloud, and use it to help guide these companies. With each of them, I’ve been able to help them take a different look at cloud as a way to provide capabilities or resources they otherwise would not have. Intuit has been very fascinating, in the sense that they’ve looked to cloud as a way to deliver services.

Helping them understand, based on their requirements and desired deliverables to their customers, how they can apply cloud and how they can partner with other organizations appropriately has been a very rewarding and successful engagement.

I also help them understand specific emerging solutions and architectures that can help them address challenges in new and innovative ways.

Cloud has been very important in that regard, since it offers each of these and many more organizations a very high-level solution to the specific problem of how to deal with resource management given constantly changing levels of demand and constantly changing requests for services.

In terms of how to adapt to an ever-evolving market, cloud architectures have provided some innovative paths to help these organizations and others meet those demands.

Robert: You previously highlighted a study by the Ponemon Institute that showed that 50% of respondents were unaware of all the cloud services deployed in their enterprise. What do you think is driving this number to be as high as it is today?

Charlton: First of all, I think there are some gaps in the understanding of cloud. Even though this has improved in the last couple of years, there’s still a body of corporate leadership that is a bit confused by what cloud really is. A second aspect is that much of the innovation, at least in terms of what’s known or understood as the usage model of cloud, is not being driven at the leadership level within organizations.

If you look at the article that Bernard Golden released today, he very concisely makes the important point that, as developers and other professionals within an organization need access to resources and find it difficult to obtain them through a traditional IT infrastructure, they’re looking to cloud services to fill the gap.

That leads to resources such as a news media archive site being released on cloud services without leadership necessarily being aware that they are using cloud services. This is less a function of some sort of conflict or tension between leadership and those involved in execution than it is a matter of understanding what cloud is. As an extension of that, organizations also need to formally establish policies and develop their understanding of the strategic and tactical issues around embracing cloud.

There are many people within various industries who understand and are beginning to develop strategies around cloud. At the same time, there’s still a significant proportion of consumers who aren’t aware of what they actually have in a cloud and what they do not. I see it as important to address that lack of knowledge.

I think another level of the problem is the fact that cloud architecture abstracts the hardware away from the compute resources. Without a broadly available way to bridge that gap, we’re going to see more confusion at this level.

Robert: In a “perception versus reality” slide taken from your recent Cloud Camp presentation in Hamburg, you stated that a perception was that “clouds only use open source”, and you stated that the reality is that this was true with “a few minor exceptions.” Can you expand on this thought a bit?

Charlton: To a large degree, cloud is the utilization of a lot of different open source stacks and components within the architecture. There isn’t a cloud architecture in and of itself; clouds are not monolithic. In that sense, “cloud” really refers to the usage model and the business model, rather than the technology.

The technologies being utilized to provide cloud solutions leverage, to one degree or another, various open source products or products that build on open source. The tools for working with cloud solutions, for example, are to a great degree open source. A number of the managers, monitors, and plug-ins that provide integration between these stacks are also open source.

You do have a large degree of open source in cloud solutions, although they don’t use only open source. There are exceptions, but to a large degree, they utilize some level of open source stack technology simply because either the providers or those who are building these clouds are looking to bootstrap these services as quickly as possible.

At the same time, they have to allay worries and concerns, as well as barriers to entry. In other words, if I can provide you with tools or frameworks that are open source, even if some of the back-end or management technology is proprietary or licensed, I’m providing you with fewer barriers to adoption.

What difference does it really make to me as an end user whether my workloads happen to be running on an open source solution or a proprietary solution? What matters to me is that I can access services and that the processing, privacy, security, and management all comply with the relevant regulations.

At the same time, a lot of these providers are looking for ways to economize and to better enable integration between their systems. I think the greatest underlying factor toward this effort is that many providers don’t have a single monolithic approach to building their clouds. They’re having to piece things together as they go along to suit requirements as they continue to evolve.

Until cloud evolves to a level of maturity where reference architectures are well known and adopted, we’re going to see this continued dynamic environment. There will continue to be, per my experience, a large number of open source solutions that are at least a part of those delivered services.

Robert: In the same talk, you discussed some of the goals and initiatives of the Open Cloud Computing Interface group, which was launched by the Open Grid Forum back in April 2009. Can you tell me a bit more about that work?

Charlton: OCCI is looking to deliver an open interface to manage cloud resources, not so much what happens to live on the stack, but rather the qualities and the characteristics of those workloads and what policies need to be applied to them depending on where they run.

What’s very positive, and I think unexpected, about that work is that it has enough momentum to achieve a good level of adoption within the industry, or at the very least to answer the big open questions about how to openly manage resources across different types of platforms and environments.

It is answering a very important question: how do I assure that if I have to move or deploy different resources to different cloud providers, the barriers to doing so are minimal?

OCCI, I think, is taking a very aggressive approach, one faithful to its principles, to assure that we have that interoperability. The vision is that not only do these cloud systems work together, but that you can move your services between them and wire them up with legacy systems.
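
For a flavor of what that open interface looks like in practice, OCCI’s HTTP rendering describes resources with Category headers and key-value attributes. The sketch below uses the third-party requests library and a hypothetical endpoint URL; a real provider’s service root would differ:

    # Sketch of creating a compute resource through an OCCI endpoint,
    # using OCCI's HTTP rendering (Category and X-OCCI-Attribute headers).
    # The endpoint URL is hypothetical.
    import requests  # third-party: pip install requests

    OCCI_ENDPOINT = "https://cloud.example.com/compute/"

    headers = {
        "Category": ('compute; '
                     'scheme="http://schemas.ogf.org/occi/infrastructure#"; '
                     'class="kind"'),
        "X-OCCI-Attribute": "occi.compute.cores=2, occi.compute.memory=4.0",
        "Content-Type": "text/occi",
    }

    # Because the interface is standardized, the same request shape works
    # against any OCCI-compliant provider, which is what lowers the cost
    # of moving workloads between clouds.
    response = requests.post(OCCI_ENDPOINT, headers=headers)
    print(response.status_code, response.headers.get("Location"))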

Robert: Are there other aspects of cloud computing that you would like to discuss while we have the opportunity?

Charlton: Sure. There’s been some very interesting work with the DMTF that Intel has been engaged in, focusing on formats and actually addressing the infrastructure-as-a-service question: how can I ensure the greatest level of portability and interoperability of my workloads across different clouds?

Whereas OCCI takes more of a PaaS approach, DMTF tries to address it at the level where, let’s say, Azure might expose a greater proportion of platform details, or what you would see with an organization such as Rackspace or Amazon.

One aspect of that space that I find very compelling is the growing need to understand resourcing and how to best target workloads to appropriate resources as they become available. To help build that out, Intel is increasingly developing fine-grained resource management, such that you can more reliably report on what resources are available to workloads in the virtualized environment.

Traditional forms of virtualization do not provide an easy way to map those cycles to characteristics such as power consumption, CPU utilization, network utilization, or storage utilization on the device. That adds considerable challenges to being able to optimize the utilization of the environment and make the best use of that capacity.

At the same time, it makes it hard for providers to understand how well they are doing with respect not only to billing and metering their customers, but factors such as energy usage. Let’s say that you are a provider with an exclusive agreement with an energy delivery company. What happens when you begin to exceed the limits specified by your agreement? And how do you assure that you can best move your resources so as to comply with those thresholds?

Some of the work being done at Intel in the area of fine-grained resource management has greatly improved the ability to control and understand what resources individual virtualized workloads are consuming. That is a necessary precursor to being able to use policy management systems such as those from Microsoft, VMware, Citrix, and others to manage those resources.
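
As a rough sketch of what that per-workload visibility enables, consider metering each VM’s power draw against a contractual energy cap and suggesting which workloads to migrate when the cap is exceeded. The readings, names, and threshold are all hypothetical:

    # Hypothetical sketch: aggregate per-VM power readings and, when a
    # contractual energy cap is exceeded, suggest the largest consumers
    # to migrate elsewhere.
    VM_POWER_WATTS = {
        "vm-billing": 180.0,
        "vm-batch-analytics": 450.0,
        "vm-web-frontend": 120.0,
    }
    CONTRACT_CAP_WATTS = 600.0  # threshold from the hypothetical agreement

    def over_cap(readings: dict[str, float], cap: float) -> list[str]:
        """Suggest workloads to move, largest first, until under the cap."""
        total = sum(readings.values())
        if total <= cap:
            return []
        suggestions, freed = [], 0.0
        for vm, watts in sorted(readings.items(), key=lambda kv: -kv[1]):
            suggestions.append(vm)
            freed += watts
            if total - freed <= cap:
                break
        return suggestions

    print(over_cap(VM_POWER_WATTS, CONTRACT_CAP_WATTS))
    # -> ['vm-batch-analytics'] (750 W total drops to 300 W, under the cap)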

Robert: I know that Intel is an active contributor to the OpenStack community. Where do you see opportunity for Microsoft and Windows Azure to work with the OpenStack community?

Charlton: I think OpenStack provides a lot of opportunities for those who have either private cloud capabilities or are using other platforms that are themselves open source. I think OpenStack gives Microsoft the ability to bring some of these folks into Azure by providing integration capabilities that allow users to leverage OpenStack to target resources in the Azure back end.

Since early on, Azure has been looking at ways to provide more access to, and integration with, different language sets and software platforms. I think OpenStack provides a way for Azure to open the door even further to integration and interaction with other platforms, so that you can bring more workloads into the Azure cloud and move them around flexibly.

Robert: I had the chance at OSCON to meet Rick Clark, the community manager, and I have heard a lot of comments that the more we all take the same approach, the more rapidly customers can adopt the technology.

Charlton: I certainly agree that the more integration there is with an open path, the more providers will be able to compete on quality of service and capabilities, bringing more users into their particular cloud. If you are bringing more people into Azure, for example, and continuing to offer them value by utilizing the Azure cloud, it makes perfect sense that you will not only gain greater levels of engagement with these users, but also bring follow-on business into it.

Robert: One of our open source developers recently made what I thought was a very insightful remark: “Whoever’s cloud is the easiest to leave will win.”

Charlton: That’s a great line.

Robert: I agree. Well, we have run out of time, but thanks for taking the time to talk today.

Charlton: Thank you. 
