Archive for June, 2010

Cloud Computing in 2010

A recent research study by the Pew Internet & American Life Project released on June 11 found that most people expect to “access software applications online and share and access information through the use of remote server networks, rather than depending primarily on tools and information housed on their individual, personal computers” by 2020. This means that the term “cloud computing” will likely be referred to as simply “computing” ten years down the line.

The report points out that we are currently on that path when it comes to social networking, thanks to sites like Twitter and Facebook. We also communicate in the cloud using services like Yahoo Mail and Gmail, shop in the cloud on sites like Amazon and eBay, listen to music in the cloud on Pandora, share pictures in the cloud on Flickr and watch videos on cloud sites like Hulu and YouTube.

The more advanced among us are even using services like Google Docs and Scribd to create, share or store documents in the cloud. With that said, it will be some time before desktop computing falls away completely.

The report says: “Some respondents observed that putting all or most of their faith in remotely accessible tools and data puts a lot of trust in the humans and devices controlling the clouds and exercising gatekeeping functions over access to that data. They expressed concerns that cloud dominance by a small number of large firms may constrict the Internet’s openness and its capability to inspire innovation—that people are giving up some degree of choice and control in exchange for streamlined simplicity. A number of people said cloud computing presents difficult security problems and further exposes private information to governments, corporations, thieves, opportunists, and human and machine error.”

For more information on the current state of Cloud Computing, contact Nubifer today.


The Impact of Leveraging a Cloud Delivery Model

In a recent discussion about the positive shift in the Cloud Computing discourse toward actionable steps rather than philosophical debates over definitions, .NET Developer’s Journal issued a list of five mistakes to avoid. The first mistake on the list (which also included #2, assuming server virtualization is enough; #3, not understanding service dependencies; #4, leveraging traditional monitoring; #5, not understanding internal/external costs) was not understanding the business value. Failing to understand the business impact of leveraging a Cloud delivery model for a given application or service is a crucial mistake, but it can be avoided.

When evaluating a Cloud delivery option, it is important to first define the service. Consider: is it new to you, or are you considering porting an existing service? On one hand, if the service is new, there is a lower financial bar to justify a cloud model; on the downside, there is no historical perspective on consumption trends to aid in evaluating financial considerations or performance.
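For a new service with no consumption history, one practical approach is to compare an estimated usage range against a fixed on-premise cost. The sketch below is purely illustrative; all prices and usage figures are hypothetical assumptions, not vendor quotes.

```python
# Illustrative break-even sketch for evaluating a cloud delivery option.
# All rates and usage figures below are hypothetical assumptions.

def on_prem_monthly_cost(server_capex, amortization_months, ops_per_month):
    """Amortized hardware cost plus fixed monthly operations cost."""
    return server_capex / amortization_months + ops_per_month

def cloud_monthly_cost(hours_used, rate_per_hour):
    """Pure pay-per-use cost with no capital outlay."""
    return hours_used * rate_per_hour

# With no historical consumption trends, estimate a range rather than
# a single number.
low = cloud_monthly_cost(200, 0.10)    # light-usage estimate
high = cloud_monthly_cost(2000, 0.10)  # heavy-usage estimate
fixed = on_prem_monthly_cost(6000, 36, 150)

print(low, high, fixed)  # roughly 20, 200 and 317 per month
```

If even the heavy-usage estimate stays below the fixed on-premise cost, the lower financial bar for a new service in the cloud is easy to justify; if the range straddles it, performance and growth assumptions deserve closer scrutiny.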

Assuming you choose a new service, the next step is to address why you are looking at Cloud, which may require some to be honest about their reasons. Possible reasons for looking at cloud include: your business requires a highly scalable solution; your data center is out of capacity; you anticipate this to be a short-lived service; you need to collaborate with a business partner on neutral territory; your business has capital constraints.

All of the previously listed reasons are good reasons to consider a Cloud option. Yet if you are considering this option because it takes weeks, even months, to get a new server into production; your Operations team lacks credibility when it comes to maintaining a highly available service; or your internal cost allocation models are appalling, you may need to reconsider. In these cases, there may be some in-house improvements that need to be made before exploring a Cloud option.

An important lesson to consider is that just because you can do something doesn’t mean you necessarily should, and this is easily applicable in this situation. Many firms have had disastrous results in the past when they exposed legacy internal applications to the Internet. The following questions must be answered when thinking about moving applications/services to the Cloud:

·         Does the application consume or generate data with jurisdictional requirements?

·         Will your company face fines or a public relations scandal if there is a security breach/data loss?

·         What part of your business value chain is exposed if the service runs poorly? (And are there critical systems that rely on it?)

·         What if the application/service doesn’t run at all? (Will you be left stranded or are there alternatives that will allow the business to remain functioning?)

Embracing Cloud services—public or private—comes with tremendous benefits, yet a constant dialogue about the business value of the service in question is required to reap the rewards. To discuss the benefits of adopting a hybrid On-Prem/Cloud solution contact Nubifer today.

Asigra Introduces Cloud Backup Plan

Cloud backup and recovery software provider Asigra announced the launch of Cloud Backup v10 on June 8. Available through the Asigra partner network, the latest edition extends the scope and performance of the Asigra platform, including protection for laptops, desktops, servers, data centers and cloud computing environments with tiered recovery options to meet Recovery Time Objectives (RTOs). Organizations can select an Asigra service provider for offsite backup, choose to deploy the software directly onsite, or both. Pricing begins at $50 per month through cloud backup service providers.

V10 expands the tiers of backup and recovery (Local-Only Backup and Backup Lifecycle Manager (BLM), which enables cloud storage) and also allows the backup of laptops in the field and other environments, enabling businesses to back up and recover their data to and from physical servers, virtual servers, or both. Among the features are DS-Mobile support to back up laptops in the field, FIPS 140-2 NIST-certified security with encryption of data in-flight and at-rest, and new backup sets for comprehensive protection of enterprise applications, including MS Exchange, MS SharePoint, MS SQL, Windows Hyper-V, Oracle SBT, Sybase and Local-Only backup.

Senior analyst at the Enterprise Strategy Group Lauren Whitehouse said, “The local backup option is a powerful benefit for managed service providers (MSPs) as they can now offer more pricing granularity for customers on three levels—local, new and aging data. With more pricing flexibility, MSPs can offer a more reliable and affordable backup service package to attract more business customers and free them from the pain of tape backup.”

At least two-thirds of companies in North America and Europe have already implemented server virtualization, according to Forrester Research. Asigra added enhancements to the virtualization support in v10 as a response to the major server virtualization vendors embracing the cloud as the strategic deliverable of a virtualized infrastructure. The company has offered support for virtual machine backups at the host level, and Cloud Backup v10 can be deployed as a virtual appliance within virtual infrastructures. The company said that the current version now supports Hyper-V, VMware and XenServer.

“The availability of Asigra Cloud Backup v10 has reset the playing field for Asigra with end-to-end data protection from the laptop to the data center to the public cloud. With advanced features that differentiate Asigra both technologically and economically from comparable solutions, the platform can adapt to the changing nature of today’s IT environments, providing unmatched backup efficiency and security as well as the ability to respond to dynamic business challenges,” said executive vice president for Asigra Eran Farajun. To discover how a Cloud back-up system can benefit your enterprise, contact Nubifer Inc.

The Future of Enterprise Software in the Cloud

Although there is currently a lot of discussion regarding the impact that cloud computing and Software-as-a-Service will have on enterprise software, it comes mainly from a financial standpoint. It is now time to begin understanding how enterprise software as we know it will evolve across a federated set of private and public cloud services.

The strategic direction being taken by Epicor is a prime example of the direction that enterprise software is taking. A provider of ERP software for the mid-market, Epicor is taking a sophisticated approach by allowing customers to host some components of the Epicor suite on premise rather than focusing on hosting all software in the cloud. Other components are delivered as a service.

Epicor is a Microsoft software partner that subscribes to the Software Plus Services mantra and as such is moving to offer some elements of its software, like the Web server and SQL server components, as an optional service. Customers would be able to invoke this on the Microsoft Azure cloud computing platform.

Basically, Epicor is going to let customers deploy software components where they make the most sense, based on the needs of customers on an individual basis. This is in contrast to proclaiming that one model of software delivery is better than another model.

Eventually, every customer is going to require a mixed environment, even those that prefer on-premise software, because they will discover that hosting some applications locally and others in the cloud makes it easier to run a global operation 24 hours a day, 7 days a week.

Much of the argument over how software is delivered in the enterprise will melt away as customers begin to view the cloud as merely an extension of their internal IT operations. To learn more about how the future of Software in the Cloud can aid your enterprise, schedule a discussion time with a Nubifer Consultant today.

What Cloud APIs Reveal about the Budding Cloud Market

Although Cloud Computing remains hard to define, one of its essential characteristics is programmatic access to virtually unlimited network, compute and storage resources. The foundation of a cloud is a solid Application Programming Interface (API), despite the fact that many users access cloud computing through consoles and third-party applications.

CloudSwitch works with several cloud providers and thus is able to interact with a variety of cloud APIs (both active and about-to-be-released versions). CloudSwitch has come up with some impressions after working with both the APIs and those implementing them.

First, clouds remain different in spite of constant discussion about standards. Cloud APIs have to cover more than start/stop/delete a server, and once the API crosses into provisioning the infrastructure (network ranges, storage capacity, geography, accounts, etc.), it all starts to get interesting.

Second, a very strong infrastructure is required for a cloud to function as it should; for public clouds, the infrastructure must be good enough to sell to others. Key elements of the cloud API can tell you about the infrastructure, what tradeoffs the cloud provider has made and the impact on end users, if you know what to look for.

Third, APIs are evolving as fast as cloud capabilities. New API calls and expansions of existing functions are now a reality as cloud providers add new capabilities and features. We are already discussing on-the-horizon services with cloud providers and what form their APIs are poised to take. This is a perfect opportunity to leverage the experience and work of companies like CloudSwitch to integrate these new capabilities into a coherent data model.

When you look at the functions beyond simple virtual machine control, an API can give you an indication of what is happening in the cloud. Some like to take a peek at the network and storage APIs in order to understand how the cloud is built. Take Amazon, for example. In Amazon, the base network design is that each virtual server receives both a public and private IP address. These addresses are assigned from a pool based on the location of the machine within the infrastructure. Even though there are two IP addresses, however, the public one is just routed (or NAT’ed) to the private address. With Amazon, you have only a single network interface per server, which is a simple and scalable architecture for the cloud provider to support. This will cause problems for applications requiring at least two NICs, such as some cluster applications.
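Amazon's single-interface design can be sketched as a toy model. This is not the EC2 API; the class and table below are illustrative only, showing why a one-NIC-per-server architecture with a NAT'ed public address breaks applications that expect multiple interfaces.

```python
# Toy model of EC2-style addressing: each server gets exactly one private
# interface, and a public address is simply NAT'ed onto it. Illustrative
# only; not the actual EC2 API.

class CloudServer:
    def __init__(self, name, private_ip):
        self.name = name
        self.nics = [private_ip]  # exactly one interface per server

nat_table = {}  # public IP -> private IP (1:1 mapping)

def assign_public_ip(server, public_ip):
    """Route a public address to the server's single private interface."""
    nat_table[public_ip] = server.nics[0]

web = CloudServer("web-1", "10.0.0.5")
assign_public_ip(web, "203.0.113.10")

# Inbound traffic to the public address is translated to the private one.
assert nat_table["203.0.113.10"] == "10.0.0.5"

# A clustering application that requires two NICs fails this check:
assert len(web.nics) == 1  # only one interface is available
```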

Terremark’s cloud offering is in stark contrast to Amazon’s. IP addresses are defined by the provider so they can route traffic to your servers, like Amazon, but Terremark allocates a range for your use when you first sign up (while Amazon uses a generic pool of addresses). This can be seen as a positive because there is better control over the assignment of network addresses, but on the flip side are potential scaling issues, because you have only a limited number of addresses to work with. Additionally, you can assign up to four NICs to each server in Terremark’s Enterprise cloud, which allows you to create more complex network topologies and support applications requiring multiple networks for proper operation.

One important thing to consider is that with the Terremark model, servers only have internal addresses. There is no default public NAT address for each server, as with Amazon. Instead, Terremark has created a front-end load balancer that can be used to connect a public IP address to a specified set of servers by protocol and port. You must first create an “Internal Service” (in Terremark’s terminology) that defines a public IP/Port/Protocol combination. Next, assign a server and port to the Service, which creates the connection. Since this is a load balancer, you can add more than one server to each public IP/Port/Protocol group. Amazon offers a load balancer function as well; although it isn’t required to connect public addresses to your cloud servers, it does support connecting multiple servers to a single public IP address.
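The Internal Service pattern can also be sketched as a toy model. The class names and round-robin behavior here are illustrative assumptions, not Terremark's actual API; the point is that a public (IP, port, protocol) triple fronts one or more servers that have only internal addresses.

```python
# Toy sketch of a Terremark-style "Internal Service" front-end balancer:
# a public (IP, port, protocol) triple maps to one or more internal
# servers. Names and round-robin behavior are illustrative assumptions.

from itertools import cycle

class InternalService:
    def __init__(self, public_ip, port, protocol):
        self.key = (public_ip, port, protocol)
        self.backends = []   # internal (server_ip, port) pairs
        self._rr = None

    def add_server(self, private_ip, port):
        """Connect an internal server/port to the public endpoint."""
        self.backends.append((private_ip, port))
        self._rr = cycle(self.backends)  # rebuild the rotation

    def route(self):
        """Pick the next backend, round-robin, as a load balancer would."""
        return next(self._rr)

svc = InternalService("198.51.100.7", 80, "tcp")
svc.add_server("10.1.0.11", 8080)  # servers have only internal addresses
svc.add_server("10.1.0.12", 8080)

first, second = svc.route(), svc.route()  # traffic alternates between them
```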

When it comes down to it, the APIs and the feature sets they define tell a lot about the capabilities and design of a cloud infrastructure. The end user features, flexibility and scalability of the whole service will be impacted by decisions made at the infrastructure level (such as network address allocation, virtual device support and load balancers). It is important to look down to the API level when considering what cloud environment you want because it helps you to better understand how the cloud providers’ infrastructure decisions will impact your deployments.

Although building a cloud is complicated, it can provide a powerful resource when implemented correctly. Clouds with different “sweet spots” emerge when cloud providers choose key components and a base architecture for their service. You can span these different clouds and put the right application in the right environment with CloudSwitch. To schedule a time to discuss how Cloud Computing can help your enterprise, contact Nubifer today.

App Engine and VMware Plans Show Google’s Enterprise Focus

Google opened its Google I/O developer conference in San Francisco on May 19 with the announcement of its new version of the Google App Engine, Google App Engine for Business. This was a strategic announcement, as it shows Google is focused on demonstrating its enterprise chops. Google also highlighted its partnership with VMware to bring enterprise Java developers to the cloud.

Vic Gundotra, vice president of engineering at Google said via a blog post: “… we’re announcing Google App Engine for Business, which offers new features that enable companies to build internal applications on the same reliable, scalable and secure infrastructure that we at Google use for our own apps. For greater cloud portability, we’re also teaming up with VMware to make it easier for companies to build rich web apps and deploy them to the cloud of their choice or on-premise. In just one click, users of the new versions of SpringSource Tool Suite and Google Web Toolkit can deploy their application to Google App Engine for Business, a VMware environment or other infrastructure, such as Amazon EC2.”

Enterprise organizations can build and maintain their own applications on the same scalable infrastructure that powers Google Applications with Google App Engine for Business. Additionally, Google App Engine for Business has added management and support features that are tailored for each unique enterprise. New capabilities with this platform include: the ability to manage all the apps in an organization in one place; premium developer support; simple pricing based on users and applications; a 99.9 percent uptime service-level agreement (SLA); and access to premium features such as cloud-based SQL and SSL (coming later this year).
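The kind of application that runs on this platform is, at bottom, an ordinary Python web handler. The sketch below is a generic WSGI app, not Google-specific code; the handler name and greeting are illustrative, and the real platform wraps handlers like this behind its scalable infrastructure.

```python
# Minimal WSGI handler of the sort a Python app-hosting platform runs.
# Generic sketch; not Google-specific code.

def application(environ, start_response):
    """Respond with a plain-text greeting to any GET request."""
    body = b"Hello from the cloud"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

def call(app, path="/"):
    """Exercise the handler locally the way a WSGI server would."""
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
    result = b"".join(app({"PATH_INFO": path, "REQUEST_METHOD": "GET"},
                          start_response))
    return captured["status"], result
```

Deploying such a handler to the platform, rather than a self-managed server, is what buys the dynamic scaling and uptime SLA described above.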

Kevin Gibbs, technical lead and manager of the Google App Engine project said during the May 18 Google I/O keynote that “managing all the apps at your company” is a prevalent issue for enterprise Web developers. Google sought to address this concern through its Google App Engine hosting platform but discovered it needed to shore it up to support enterprises. Said Gibbs, “Google App Engine for Business is built from the ground up around solving the problems that enterprises face.”

Product management director for developer technology at Google Eric Tholome told eWEEK that Google App Engine for Business allows developers to use standards-based technology (like Java, the Eclipse IDE, Google Web Toolkit (GWT) and Python) to create applications that run on the platform. Google App Engine for Business also delivers dynamic scaling, flat-rate pricing and consistent availability to users.

Gibbs revealed that Google will be doling out the features in Google App Engine for Business throughout the rest of 2010, with Google’s May 19 announcement acting as a preview of the platform. The platform includes an Enterprise Administration Console, a company-based console which allows users to see, manage and set security policies for all applications in their domain. The company’s road map states that features like support, the SLA, billing, hosted SQL and custom domain SSL will come at a later date.

Gibbs said that pricing for Google App Engine for Business will be $8 per month per user for each application with the maximum being $1,000 per application per month.
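The announced pricing reduces to a simple capped formula, which the short sketch below spells out (the function name is ours, not Google's):

```python
# Sketch of the announced pricing: $8 per user per application per month,
# capped at $1,000 per application per month. Function name is illustrative.

def monthly_price_per_app(users):
    """Monthly cost in dollars for one application."""
    return min(8 * users, 1000)

# 50 users costs $400; from 125 users on, the $1,000 cap applies,
# so a 10,000-user deployment costs no more than a 125-user one.
```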

Google also announced a series of technology collaborations with VMware, aimed at delivering solutions that make enterprise software developers more efficient at building, deploying and managing applications within all types of cloud environments.

President and CEO of VMware Paul Maritz said, “Companies are actively looking to move toward cloud computing. They are certainly attracted by the economic advantages associated with cloud, but increasingly are focused on the business agility and innovation promised by cloud computing. VMware and Google are aligning to reassure our mutual customers that choice and portability are important to both companies. We will work to ensure that modern applications can run smoothly within the firewalls of a company’s data center or out in the public cloud environment.”

Google is essentially trying to pick up speed in the enterprise, with Java developers using the popular Spring Framework (stemming from VMware’s SpringSource division). Recently, VMware announced a similar partnership with Salesforce.com.

Maritz continued to say to the audience at Google I/O, “More than half of the new lines of Java code written are written in the context of Spring. We’re providing the back-end to add to what Google provides on the front end. We have integrated the Spring Framework with Google Web Toolkit to offer an end-to-end environment.”

Google and VMware are teaming up in multiple ways to make cloud applications more productive, portable and flexible. These collaborations will enable Java developers to build rich Web applications, use Google and VMware performance tools on cloud apps and subsequently deploy Spring Java applications on Google App Engine.

Google’s Gundotra explained, “Developers are looking for faster ways to build and run great Web applications, and businesses want platforms that are open and flexible. By working with VMware to bring cloud portability to the enterprise, we are making it easy for developers to deploy rich Java applications in the environments of their choice.”

Google’s support for Spring Java apps on Google App Engine is part of a shared vision to make building, running and managing applications for the cloud easier, and in a way that renders the applications portable across clouds. Developers can build applications using the Eclipse-based SpringSource Tool Suite and have the flexibility to deploy them in their current private VMware vSphere environment, in VMware vCloud partner clouds or directly to Google App Engine.

Google and VMware are also collaborating to combine the speed of development of Spring Roo (a next-generation rapid application development tool) with the power of the Google Web Toolkit to create rich browser apps. These GWT-powered applications can create a compelling end-user experience on computers and smartphones by leveraging modern browser technologies like HTML5 and AJAX.

With the goal of enabling end-to-end performance visibility of cloud applications built using Spring and Google Web Toolkit, the companies are collaborating to more tightly integrate VMware’s Spring Insight performance tracing technology (part of the SpringSource tc Server application server) with Google’s Speed Tracer technology.

Speaking about the Google/VMware partnership, vice president at Nucleus Research Rebecca Wettemann told eWEEK, “In short, this is a necessary step for Google to stay relevant in the enterprise cloud space. One concern we have heard from those who have been slow to adopt the cloud is being ‘trapped on a proprietary platform.’ This enables developers to use existing skills to build and deploy cloud apps and then take advantage of the economies of the cloud. Obviously, this is similar to Salesforce.com’s recent announcement about its partnership with VMware—we’ll be watching to see how enterprises adopt both. To date, Salesforce.com has been better at getting enterprise developers to develop business apps for its cloud platform.”

For his part, Frank Gillett, an analyst with Forrester Research, describes the Google/VMware partnership as more “revolutionary” and the partnership to create VMforce as “evolutionary.”

“Java developers now have a full Platform-as-a-Service [PaaS] place to go rather than have to provide that platform for themselves,” said Gillett of the new Google/VMware partnership. He added, however, “What’s interesting is that IBM, Oracle and SAP have not come out with their own Java cloud platforms. I think we’ll see VMware make another deal or two with other service providers. And we’ll see more enterprises application-focused offerings from Oracle, SAP and IBM.”

Google’s recent enterprise moves show that the company is set on gaining more of the enterprise market by enabling enterprise organizations to buy applications from others through the Google Apps Marketplace (and the recently announced Chrome Web Store), buy from Google with Google Apps for Business or build their own enterprise applications with Google App Engine for Business. Nubifer Inc. is a leading Research and Consulting firm specializing in Cloud Computing and Software as a Service.

Cloud Computing Business Models on the Horizon

Everyone is wondering what will follow SaaS, PaaS and IaaS, so here is a tutorial on some of the emerging cloud computing business models on the horizon.

Computing arbitrage:

Companies like Peekfon are buying bandwidth at a wholesale rate and reselling it to customers to meet their specific needs. Peekfon began buying data bandwidth in bulk and slicing it up to sell to its customers as a way to solve the problem of expensive roaming for customers in Europe. The company was able to negotiate with the operators to buy bandwidth in bulk because it intentionally decided to steer away from voice plans. It also used heavy compression on its devices to optimize the bandwidth.
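The arbitrage itself is simple arithmetic: buy capacity in bulk at a wholesale rate, resell it in small slices at retail, and pocket the spread. The figures in this sketch are hypothetical, chosen only to show the shape of the calculation.

```python
# Toy computing-arbitrage model: buy capacity wholesale in bulk and
# resell it in per-unit slices at retail. All figures are hypothetical.

def arbitrage_margin(bulk_units, bulk_price, retail_price_per_unit):
    """Gross margin from slicing a bulk purchase into retail sales."""
    cost_per_unit = bulk_price / bulk_units
    return (retail_price_per_unit - cost_per_unit) * bulk_units

# Buy 10,000 GB of data for $5,000 wholesale ($0.50/GB) and resell it
# at $2.00/GB: the gross margin before overhead is $15,000.
margin = arbitrage_margin(10000, 5000, 2.00)
```

The same spread logic applies whether the resold resource is mobile bandwidth, compute hours or storage; the intermediary's value is in the bulk negotiation and the slicing.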

While elastic computing is an integral part of cloud computing, not all companies that want to leverage the cloud necessarily need it. Companies with unique cloud computing needs—like fixed long-term computing that grows at a relatively fixed low rate, or seasonal peaks—have a problem that can easily be solved via intermediaries. Since it requires high cap-ex, there will be fewer and fewer cloud providers. Being a “cloud VAR” could be a good value proposition for vendors that are “cloud SIs” or have a portfolio of cloud management offerings.

App-driven and content-driven clouds:

Now that the competition between private and public clouds is nearly over, it is time to think about vertical clouds. The need to compute depends on what is being computed: the applications’ specific compute requirements, the nature and volume of the data being processed and the kind of content being delivered. Vendors are optimizing the cloud to match their application and content needs in the current SaaS world, and some are predicting that a few companies will help ISVs by delivering app-centric and content-centric clouds.

For advocates of net neutrality, the current cloud neutrality, which is application-agnostic, is positive, but innovation on top of raw clouds is still needed. Developers need fine-grained knobs for CPU, I/O, main-memory computing and the other varying needs of their applications. Today’s extensions are specific to a programming stack, like Heroku for Ruby, but the opportunity is here to provide custom vertical extensions for an existing cloud, or to build a cloud that is purpose-built for a specific class of applications with a range of stack options underneath, making it easy for developers to leverage the cloud natively. Nubifer Inc. provides Cloud and SaaS Consulting services to enterprise companies.