Google’s Continued Innovation and Technology Evolution

Google has the uncanny ability to introduce non-core disruptive innovations while simultaneously defending and expanding its core, and an analysis of the concepts and framework in Clayton Christensen’s book Seeing What’s Next offers insight into how.

Recently, Google introduced free GPS navigation on Android phones through a strategy that can be described as “sword and shield.” This latest disruptive innovation seeks to beat a current offering by serving the “overshot customers,” i.e. the ones who would stop paying for additional performance improvements that historically commanded a price premium. Google entered the GPS market to serve those overshot customers using a shield: asymmetric skills and motivation in the form of the Android OS, mapping data and a lack of direct revenue expectations. Subsequently, Google transformed its shield into a sword by disintermediating the map providers and using a revenue-share agreement to incentivize the carriers.

Gmail and Google’s core search technology are examples of “incremental to radical” sustaining innovations, to use Christensen’s terms, with which Google sought out the “undershot customers.” Frustrated with their current products’ limitations, these customers are willing to swap them for a better product, should it exist. Web-based email and search engines existed before Google’s offerings, but Google’s solved the problems that were frustrating users of other products. For example, users relished Gmail’s expansive email quota (compared to the limited quotas they faced before) and enjoyed the better indexing and relevancy algorithms of Google’s search engine. Although Microsoft is blatantly targeting Google with Bing, Google appears unruffled and continues to steadily, if somewhat slowly, invest in its sustaining innovations (such as Caffeine, the next-generation search platform, Gmail Labs, social search, profiles, etc.) to maintain the revenue stream from its core business.

By spending money on lower-end disruptive innovations and not “cramming” sustaining innovation, Google has managed to thrive where most companies are practically destined to fail. This strategy has even eased the tension between Google’s sustaining and disruptive innovations. According to insiders at Google, the Gmail team was not used to create Google Wave; in fact, the project was kept from the Gmail team entirely. Had Google added Wave-like functionality to Gmail, it would have been “cramming” sustaining innovation, whereas innovating outside of email can potentially serve a variety of both undershot and overshot customers.

So what does this mean for AT&T? Basically, AT&T needs to watch its back and keep an eye on Google. Smartphone revenue is predicted to surpass laptop revenue in 2012, after the number of smartphone units sold this year surpassed the number of laptops sold. The number of Comcast subscribers now exceeds 7 million (eight times what it used to be). Google pays a pricey phone bill for Google Voice, which has 1.4 million users (570,000 of whom use it seven days a week), but Google is dedicated to making Google Voice work. If it does, Google could serve a new breed of overshot customers who want to stay connected in real time but don’t need or want a landline.

Although some argue that Chrome OS is more disruptive, disruptive innovation theory suggests that Chrome OS is built for the breed of overshot customer frustrated with other solutions in the market at that level, not for the majority of customers. Should Google currently be scheming around Chrome OS, the business plan would be an expensive one, not to mention time-consuming and draining in its use of resources. For more information on Google’s continued innovation efforts, please visit Nubifer.com.

Addressing Concerns for Networking in the Cloud

Many concerns arise when moving applications between internal data centers and public clouds. The key networking considerations for applications once they are transferred to the cloud are addressed below.

In the sense that clouds have their own networking infrastructures supporting flexible and complex multi-tenant environments, clouds do not differ from the enterprise. Each enterprise has an individual network infrastructure used for accessing servers and allowing applications to communicate between their various components. That infrastructure includes address services (like DHCP/DNS), specific addressing (subnets), identity/directory services (like LDAP), and firewalls and routing rules.

It is important to remember that cloud providers have to control their networking in order to route traffic within their infrastructure. The cloud providers’ designs differ from enterprise networking in architecture, design and addressing. While this does not pose a problem for stand-alone workloads in the cloud (because the network structure doesn’t matter as long as it can be accessed over the Internet), these discontinuities must be addressed when extending existing networks or reusing existing applications.

In terms of addressing, the typical cloud provider assigns a block of addresses as part of the cloud account. Flexiscale and GoGrid, for example, give the user a block of addresses that can be attached to the servers created. In some cases these are external addresses (i.e. public addresses reachable from the Internet), in others internal. Either way, they are not assigned as part of the user’s own addressing, which means that even if the resources can be connected to the data center, new routes will need to be built and services will need to be altered to allow these “foreign” addresses into the system.

Amazon took a different approach, providing a dynamic system in which an address is assigned each time a server is started. This made it difficult to build multi-tier applications, because it required developers to create systems capable of passing changing address information between application components. The new VPC (Virtual Private Cloud) partially solves the problem of connecting to the Amazon cloud, although some key problems persist, and other cloud providers continue to look into similar networking capabilities.
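
As a rough illustration of how developers work around changing addresses, the sketch below uses the AWS SDK for Java to allocate an Elastic IP and pin it to a front-end instance so that other tiers can be configured against a stable address. The credentials and instance ID are placeholders, and the calls shown are one plausible approach rather than a prescribed pattern.

```java
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.AllocateAddressResult;
import com.amazonaws.services.ec2.model.AssociateAddressRequest;

public class StableAddressExample {
    public static void main(String[] args) {
        // Placeholder credentials; in practice these come from configuration.
        AmazonEC2 ec2 = new AmazonEC2Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // Allocate an Elastic IP that survives server restarts.
        AllocateAddressResult allocation = ec2.allocateAddress();
        String publicIp = allocation.getPublicIp();

        // Attach the address to a running front-end instance so other
        // application tiers can be configured against a fixed address.
        ec2.associateAddress(new AssociateAddressRequest()
                .withInstanceId("i-0123456789abcdef0")   // placeholder instance ID
                .withPublicIp(publicIp));

        System.out.println("Front end reachable at " + publicIp);
    }
}
```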

Data protection is another key issue for networking in the cloud. Within the data center sits a secure perimeter defined and maintained by the IT organization, composed of firewalls, rules and systems that create a protected environment for internal applications. This matters because most applications need to communicate over ports and services that are not safe for general Internet access. Moving applications into the cloud unmodified can be dangerous because they were developed for the protected environment of the data center. The application owner or developer usually has to build protection on a per-server basis and then enact corporate protection policies on top of it.
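
One common way to rebuild that per-server protection is with provider-level firewall rules such as EC2 security groups. The hedged sketch below, again using the AWS SDK for Java, opens a single database port to a corporate address range and leaves everything else closed; the group name, port and CIDR block are illustrative assumptions.

```java
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.AuthorizeSecurityGroupIngressRequest;
import com.amazonaws.services.ec2.model.CreateSecurityGroupRequest;

public class PerServerFirewallExample {
    public static void main(String[] args) {
        AmazonEC2 ec2 = new AmazonEC2Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // A security group standing in for part of the data-center perimeter.
        ec2.createSecurityGroup(new CreateSecurityGroupRequest()
                .withGroupName("app-tier")
                .withDescription("Protected ports for the application tier"));

        // Allow only the database port, and only from the corporate network range;
        // everything else stays closed, mirroring the internal firewall policy.
        ec2.authorizeSecurityGroupIngress(new AuthorizeSecurityGroupIngressRequest()
                .withGroupName("app-tier")
                .withIpProtocol("tcp")
                .withFromPort(1433)
                .withToPort(1433)
                .withCidrIp("203.0.113.0/24"));
    }
}
```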

An additional implication of the loss of infrastructure control referenced earlier is that in most clouds the physical interface level cannot be controlled. MAC addresses, like IP addresses, are assigned by the provider and can change each time a server is started, meaning that the identity of a server cannot be based on this otherwise common attribute.

Whenever enterprise applications require the support of data center infrastructure, networking issues such as identity and naming services and access to internal databases and other resources come into play. Cloud resources therefore need a way to connect back to the data center, and the easiest is a VPN (Virtual Private Network). In creating this solution, it is essential to design the routing to the cloud and provide a method for cloud applications to “reach back” to the applications and services running in the data center. Ideally this connection would provide Layer-2 connectivity, because a number of services require it to function properly.
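
For providers that expose VPN capabilities through their API, the connection back to the data center can be provisioned programmatically. The sketch below is a hedged example against Amazon's VPC-era VPN calls in the AWS SDK for Java; note that an IPsec tunnel of this kind provides routed (Layer-3) connectivity rather than the Layer-2 extension mentioned above, and the gateway IP and ASN are placeholders.

```java
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.CreateCustomerGatewayRequest;
import com.amazonaws.services.ec2.model.CreateVpnConnectionRequest;
import com.amazonaws.services.ec2.model.CreateVpnGatewayRequest;

public class CloudToDataCenterVpn {
    public static void main(String[] args) {
        AmazonEC2 ec2 = new AmazonEC2Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // Describe the on-premise VPN device (public IP and BGP ASN are placeholders).
        String customerGatewayId = ec2.createCustomerGateway(
                new CreateCustomerGatewayRequest()
                        .withType("ipsec.1")
                        .withPublicIp("198.51.100.10")
                        .withBgpAsn(65000))
                .getCustomerGateway().getCustomerGatewayId();

        // Create the cloud-side gateway, then the IPsec tunnel between the two.
        String vpnGatewayId = ec2.createVpnGateway(
                new CreateVpnGatewayRequest().withType("ipsec.1"))
                .getVpnGateway().getVpnGatewayId();

        ec2.createVpnConnection(new CreateVpnConnectionRequest()
                .withType("ipsec.1")
                .withCustomerGatewayId(customerGatewayId)
                .withVpnGatewayId(vpnGatewayId));
    }
}
```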

In conclusion, networking is a very important part of IT infrastructure, and the cloud adds several new variables to the design and operation of the data center environment. A well-constructed architecture and a solid understanding of the limitations imposed by the cloud are needed to integrate with the public cloud successfully. Currently, this can be a major barrier to cloud adoption because enterprises are understandably reluctant to re-architect their network environments or learn the complexities of each cloud provider’s underlying infrastructure. In designing a cloud strategy, it is essential to choose a migration path that addresses these issues and protects against expensive engineering projects as well as cloud risks. Please visit Nubifer.com for more information.

Amazon Offers Private Clouds

While Amazon initially resisted offering a private cloud, and there are many advocates of the public cloud, Amazon recently introduced a new Virtual Private Cloud, or VPC. While many bloggers question whether Amazon’s VPC is truly a “virtually” private cloud or a “virtual” private cloud, some believe that the VPC may be a way to break down the difficulties facing customers seeking to adopt cloud computing, such as security, ownership and virtualization. The following paragraphs address each of these issues and how Amazon’s VPC could alleviate them.

One of the key concerns facing customers adopting cloud computing is perceived security risk, and the VPC may act as a placebo that assuages it. The concern stems from customers’ past experiences; these customers expect that any connections made using Amazon’s VPN must be secure, even if they are connecting into a set of shared resources. Using Amazon’s private cloud, customers will deploy and consume applications in an environment that they feel is safe and secure.

Amazon’s VPC provides customers with a sense of ownership without their actually owning the computing. Customers may initially be skeptical about not owning the computing, so it is up to Amazon’s marketing engine to provide ample information to alleviate that worry.

As long as their business goals are fully realized with Amazon’s VPC, customers need neither understand nor care about the differences between virtualization and the cloud. With the VPC, customers can use VPN and network virtualization, the existing technology stack they are already comfortable with. In addition, the VPC allows partners to help customers bridge their on-premise systems to the cloud, creating a hybrid virtualization environment that spans several resources.

Whatever the merits of the public cloud, the customer should be able to first choose to enter into cloud computing and later choose how to leverage the cloud on their own. For more information about Private Clouds, please visit Nubifer.com.

Get Your Java with Google App Engine

Finally! Google’s App Engine service has embraced the Java programming language. The most requested feature for App Engine since its inception, Java support is currently in “testing mode,” although Google eventually plans to bring GAE’s Java tools up to speed with its current Python support.

As Google’s service for hosting scalable and flexible web applications, App Engine is synonymous with cloud computing for Google. Java is one of the most frequently used languages for coding applications on the web, and by adding Java, Google is filling a major gap in its cloud services plan. It is also catching up with one of its fiercest competitors in cloud computing, Amazon, whose Web Services platform has provided support for Java virtual machines for some time now.
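
To give a feel for what this looks like, here is a minimal sketch of an App Engine Java handler, assuming the standard servlet model the Java runtime is built on; the class name and output are purely illustrative.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A minimal App Engine Java handler: a plain servlet, mapped to a URL in web.xml.
public class HelloAppEngineServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from Google App Engine for Java");
    }
}
```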

In addition, Java support opens up the possibility of making App Engine a means of powering applications for Google’s Android mobile platform. Although no plans for Android apps backed by GAE have been outlined as of yet, it appears as if Google is preparing an effortless and quick way to develop for Android, as Java is available on the device as well as the server.
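
Since Java runs on both ends, a device-side client can call into an App Engine application with nothing more than the standard library. The snippet below is a hedged sketch; the appspot hostname and path are hypothetical placeholders.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class AppEngineClient {
    // Fetch a response from a hypothetical App Engine app over plain HTTP.
    public static String fetchGreeting() throws Exception {
        URL url = new URL("http://example-app.appspot.com/hello");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            return reader.readLine();
        } finally {
            conn.disconnect();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchGreeting());
    }
}
```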

With the addition of Java support to Google App Engine, other languages that run on the Java virtual machine, such as JavaScript, Ruby and perhaps Scala, become possible as well. The arrival of JRuby support, or support for other JVM languages, any time in the near future is unlikely, however, given Java’s experimental status.

Those wishing to play around with Google App Engine’s new Java support can add their name to the list on the sign up page; the first 10,000 developers will be rewarded with a spot in the testing group.

Along with Java support, the latest update to Google App Engine includes support for cron jobs, which lets programmers easily schedule recurring tasks such as weekly reports. The Secure Data Connector is another new feature, letting Google App Engine access data behind a firewall. Thirdly, there is a new database import tool, which makes it easier to move large amounts of data into App Engine.
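
As a hedged sketch of how a cron job might be wired up, the servlet below handles a recurring task; App Engine’s cron simply invokes an application URL on a schedule defined in a cron configuration file, and the path and schedule named in the comment are illustrative assumptions.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Handler for a scheduled task. App Engine cron hits a plain URL, so the
// recurring job is just a servlet; the schedule itself lives in a cron
// configuration file that maps, for example, "/tasks/weeklyreport" to
// "every monday 09:00".
public class WeeklyReportServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Build and store the weekly report here (placeholder logic).
        resp.setStatus(HttpServletResponse.SC_OK);
    }
}
```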

In summary, by embracing the programming language of Java, Google is filling a gap in its cloud services plan and catching up with competitors like Amazon.  For more information on Google Apps, please visit Nubifer.com.

Answers to Your Questions on Cloud Connectors for Leading Platforms like the Windows Azure Platform

Jeffrey Schwartz and Michael Desmond, both editors of Redmond Developer News, sat down with Robert Wahbe, corporate vice president of Microsoft’s Connected Systems Division, at the recent Microsoft Professional Developers Conference (PDC) to talk about Microsoft Azure and its potential impact on the developer ecosystem at Microsoft. Responsible for the engineering teams that deliver the company’s Web services and modeling platforms, Wahbe is a major advocate of the Azure Services Platform and offers insight into how to build applications that exist within the world of Software-as-a-Service, or as Microsoft calls it, Software plus Services (S+S).

When asked how much of Windows Azure is based on Hyper-V and how much is an entirely new set of technologies, Wahbe answered, “Windows Azure is a natural evolution of our platform. We think it’s going to have a long-term radical impact with customers, partners and developers, but it’s a natural evolution.” Wahbe continued to explain how Azure brings current technologies (i.e. the server, desktop, etc.) into the cloud and is fundamentally built out of Windows Server 2008 and .NET Framework.

Wahbe also referenced the PDC keynote of Microsoft’s chief software architect, Ray Ozzie, in which Ozzie discussed how most applications are not initially created with the idea of scale-out. Explained Wahbe, expanding upon Ozzie’s points, “The notion of stateless front-ends being able to scale out, both across the data center and across data centers requires that you make sure you have the right architectural base. Microsoft will be trying hard to make sure we have the patterns and practices available to developers to get those models [so that they] can be brought onto the premises.”

As an example, Wahbe described a hypothetical situation in which Visual Studio and the .NET Framework are used to build an ASP.NET app, which can then be deployed either locally or to Windows Azure. The only extra step when deploying to Windows Azure is to specify additional metadata, such as what kind of SLA you are looking for or how many instances you are going to run. As Wahbe explained, the metadata is an XML file and, as an example of an executable model, something Microsoft’s tooling can readily understand. “You can write those models in ‘Oslo’ using the DSL written in ‘M,’ targeting Windows Azure in those models,” concludes Wahbe.
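
For a sense of what such deployment metadata might look like, the hedged sketch below shows a service configuration of the kind Azure uses to declare an instance count; the service and role names are hypothetical, and the exact element names may differ from what the platform ultimately ships.

```xml
<?xml version="1.0"?>
<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <!-- Run three instances of the web front end behind the platform load balancer. -->
    <Instances count="3" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>
```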

Wahbe answered with a firm “yes” when asked whether there is a natural fit for applications developed in “Oslo,” saying that it works because Oslo is “about helping you write applications more productively,” and adding that you can write any kind of application, including cloud applications. Although new challenges undoubtedly face development shops, the basic process of writing and deploying code remains the same. According to Wahbe, Microsoft Azure simply provides a new deployment target at a basic level.

As for the differences, developers are going to need to learn a new set of services. An example used by Wahbe is two businesses connecting through a business-to-business messaging app; technology like Windows Communication Foundation can make this an easy process. With the integration of Microsoft Azure, questions about the pros and cons of using the Azure platform and the service bus (which is part of .NET Services) will have to be evaluated. Azure “provides you with an out-of-the-box, Internet-scale, pub-sub solution that traverses firewalls,” according to Wahbe. And what could be bad about that?

When asked if developers should expect new development interfaces or plug-ins to Visual Studio, Wahbe answered, “You’re going to see some very natural extensions of what’s in Visual Studio today. For example, you’ll see new project types. I wouldn’t call that a new tool … I’d call it a fairly natural extension to the existing tools.” Additionally, Wahbe expressed Microsoft’s desire to deliver tools to developers as soon as possible. “We want to get a CTP [community technology preview] out early and engage in that conversation. Now we can get this thing out broadly, get the feedback, and I think for me, that’s the most powerful way to develop a platform,” explained Wahbe of the importance of developers’ using and subsequently critiquing Azure.

When asked about the possibility of competitors like Amazon and Google gaining early share due to the ambiguous time frame of Azure, Wahbe responded serenely, “The place to start with Amazon is [that] they’re a partner. So they’ve licensed Windows, they’ve licensed SQL, and we have shared partners. What Amazon is doing, like traditional hosters, is they’re taking a lot of the complexity out for our mutual customers around hardware. The heavy lifting that a developer has to do to take that and then build a scale-out service in the cloud and across data centers—that’s left to the developer.” Wahbe detailed how Microsoft has base computing and base storage (the foundation of Windows Azure) as well as higher-level services such as the database in the cloud. According to Wahbe, developers no longer have to build an Internet-scale pub-sub system, find a new way to do social networking and contacts, or create reporting services themselves.

In discussing the impact that cloud connecting will have on the cost of development and the management of development processes, Wahbe said, “We think we’re removing complexities out of all layers of the stack by doing this in the cloud for you … we’ll automatically do all of the configuration so you can get load-balancing across all of your instances. We’ll make sure that the data is replicated both for efficiency and also for reliability, both across an individual data center and across multiple data centers. So we think that by doing that, you can now focus much more on what your app is and less on all that application infrastructure.” Wahbe predicts that it will become simpler for developers to build applications with the adoption of Microsoft Azure. For more information regarding Windows Azure, please visit Nubifer.com.

Welcome to Nubifer Cloud Computing blogs

In this location, we share blogs, research, tutorials and opinions about the ever changing and emerging arena of cloud computing, software-as-a-service, platform-as-a-service, hosting-as-a-service, and user-interface-as-a-service. We also share key concepts focused on interoperability while always maintaining an agnostic viewpoint of technologies and services offered by the top cloud platform providers. For more information, please visit Nubifer.com.