Posts Tagged ‘Platform-as-a-Service’

Cloud Computing in 2012 (continued) – Shared Resources in the Cloud

A primary characteristic of cloud computing is that the platform leverages pooled or shared assets. These computing resources can be purchased, controlled externally, and applied to public or private use. Looking further into these shared computing resources, one can easily see that they are an integral component of any public or private cloud platform.

Take, for example, a business website. Several standard options are commonly available in today’s market. Shared hosting is one of the choices companies have had for quite some time now; the shared approach frees them from managing their own data center and lets them leverage a third party instead. Most of the time, managed hosting services lease their customers a dedicated server which is not shared with other users.

Based solely on this, cloud computing looks a lot like a shared hosting model of managed services, because the cloud platform provider is the third party that owns, operates and manages the physical computing hardware and software resources, which are distributed and shared. This, however, is where the similarities between shared or dedicated hosting and cloud computing end.

Setting cloud computing aside for a moment, the move away from IT departments using self-hosted resources and toward outsourced IT services has been evolving for years. This change has substantial economic impacts, the two main areas being CAPEX and OPEX. There is also the potential to reduce the OPEX associated with operating the hardware and software infrastructure, and the shift from CAPEX toward OPEX lowers the barrier to entry when starting a new project.

With self hosting, companies are required to allocate funding up front for licenses and hardware purchases; these fixed costs are an out-of-pocket expense at the beginning of the project. When leveraging an outsourced offering (a.k.a. managed hosting), the upfront fees are typically equal to one month of operational cost, possibly plus a set-up fee. Analyzed from a financial perspective, the annual cost is close to the same as, or just a little lower than, the CAPEX expense for an equivalent self-hosted project. Additionally, this can be offset by the reduction in OPEX required to manage and care for the infrastructure.

In stark contrast, the cloud model typically involves no up-front fees at all. A subscriber to cloud services can register, purchase, and be leveraging the services in less time than it takes to read this blog.

The dramatic difference in financial expenditure between these hosting models and the cloud model exists because the cost structures of cloud infrastructures are far more attractive than the earlier models offered to IT. On further investigation, it’s clear the economies of scale are multi-faceted and driven by the economics of volume: the largest cloud platform providers can offer IT consumers a better price point because they purchase in bulk and can therefore deliver better goods and services, which in this paradigm are capacity, power, data storage, and compute processing power.

And so continues our 2012 blog series dedicated to understanding the core layers of cloud computing. Our next blog will focus on elasticity in cloud computing. Please check back often, or subscribe to our blog to stay up-to-date on the latest posts, perspectives and news about cloud computing. For more information about Nubifer Cloud Computing, visit www.NUBIFER.com.

Fujitsu to Deliver First Windows Azure Appliance This Summer

The “private cloud” Windows Azure appliances that Microsoft announced a year ago are nearly here, with the first of them slated to ship in August 2011.

Fujitsu, one of three OEMs that announced initial support for the Azure Appliance concept, is going to deliver its first Azure Appliance in August 2011, Fujitsu and Microsoft announced on June 7. Fujitsu’s offering is known as the Fujitsu Global Cloud Platform (FGCP/A5) and will run in Fujitsu’s datacenter in Japan. Fujitsu has been running a trial of the service since April 21, 2011, with 20 companies, according to the press release.

Microsoft officials had no further updates on the whereabouts of appliances from Dell or Hewlett-Packard. Originally, Microsoft told customers to expect Azure Appliances to be in production and available for sale by the end of 2010.

Windows Azure Appliances, as initially described, were designed to be pre-configured containers holding hundreds to thousands of servers running the Windows Azure platform. These containers will be housed, at first, in Dell’s, HP’s and Fujitsu’s datacenters, with Microsoft providing the Azure infrastructure and services for these containers.

In the longer term, Microsoft officials said they expected some large enterprises, like eBay, to house the containers in their own data-centers on site — in other words, to run their own “customer-hosted clouds.” Over time, smaller service providers also will be authorized to make Azure Appliances available to their customers as well.

Fujitsu’s goal with the new Azure-based offering is to sign up 400 enterprise companies, plus 5,000 small/medium enterprise customers and ISVs, in the five-year period following launch, a recent Fujitsu press release noted.

For more information regarding the Azure Appliances, and how they can provide you with a turn-key private cloud solution, visit Nubifer.com/azure.

Start Me Up… Cloud Tools Help Companies Accelerate the Adoption of Cloud Computing

Article reposted from HPC in the Cloud Online Magazine. Originally posted on Nov. 29, 2010:

For decision makers looking to maximize their impact on the business, cloud computing offers a myriad of benefits. At a time when cloud computing is still being defined, companies are actively researching how to take advantage of these new technology innovations for business automation, infrastructure reduction, and strategic utility based software solutions.

When leveraging “the cloud”, organizations can have on-demand access to a pool of computing resources that can instantly scale as demands change. This means IT — or even business users — can start new projects with minimal effort or interaction and only pay for the amount of IT resources they end up using.

The most basic division in cloud computing is between private and public clouds. Private clouds operate either within an organization’s DMZ or as managed compute resources operated for the client’s sole use by a third-party platform provider. Public clouds let multiple users segment resources from a collection of data-centers in order to satisfy their business needs. Resources readily available from the Cloud include:

● Software-as-a-Service (SaaS): Provides users with business applications run off-site by an application provider. Security patches, upgrades and performance enhancements are the application provider’s responsibility.

● Platform-as-a-Service (PaaS): Platform providers offer a development environment with tools to aid programmers in creating new or updated applications, without having to own the software or servers.

● Infrastructure-as-a-Service (IaaS): Offers processing power, storage and bandwidth as utility services, similar to an electric utility model. The advantage is greater flexibility, scalability and interoperability with an organization’s legacy systems.

Many Platforms and Services to Choose From:

Cloud computing is still in its infancy, with a host of platform and application providers serving up a plethora of Internet-based services ranging from scalable on-demand applications to data storage to spam filtering. In the current IT environment, organizations often have to operate these cloud-based services individually, but cloud integration specialists and ISVs (independent software vendors) are becoming more prevalent and readily available to build on top of these emerging and powerful platforms.

Mashing together services provided by the world’s largest and best-funded companies (Microsoft, Google, Salesforce.com, Rackspace, Oracle, IBM, HP and many others) gives companies an opportunity to innovate and build a competitive, cost-saving cloud of their own on the backs of these software giants’ evolving cloud platforms.

Cloud computing comes into focus only when you think about what IT always needs: a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing and maintaining new software. Cloud computing encompasses any subscription-based or pay-for-what-you-use service that extends your IT environment’s existing capabilities.

Before deciding whether an application is destined for the cloud, analyze your current cost of ownership. Examine more than just the original license costs; factor in ongoing expenses for maintenance, power, personnel and facilities. To start, many organizations build an internal private cloud for application development and testing, and decide from there whether it is cost-effective to scale fully into a public cloud environment.

“Bridging the Whitespace” between Cloud Applications

One company, Nubifer.com (whose name in Latin translates to ‘bringing the clouds’), simplifies the move to the Cloud for its enterprise clients by leveraging a proprietary set of Cloud tools named Nubifer Cloud:Portal, Cloud:Connector and Cloud:Link. Nubifer’s approach with Cloud:Portal enables the rapid development of “enterprise cloud mash-ups”, providing rich dashboards for authentication, single sign-on and identity management. This functionality offers simple administration of accounts spanning multiple SaaS systems, and the ability to augment and quickly integrate popular cloud applications. Cloud:Connector seamlessly integrates data management and data sync services, and enables highly available data interchange between platforms and applications. And Cloud:Link provides rich dashboards for analytics and monitoring metrics, improving system governance and audit trails against various SLAs (Service Level Agreements).

As a Cloud computing accelerator, Nubifer focuses on aiding enterprise companies in the adoption of emerging SaaS and PaaS platforms. Our recommended approach to an initial Cloud migration is to institute a “pilot program” tailored around your platform(s) of choice in order to fully iron out any integration issues that may arise prior to a complete roll-out.

Nubifer’s set of Cloud Tools can be hosted on Windows Azure, Amazon EC2 or Google App Engine. The scalability offered by these Cloud platforms promotes greater interoperability and availability, and a significantly lower financial barrier to entry than traditional on-prem application platforms.

Cloud computing’s many flavors of services and offerings can be daunting at first review, but if you take a close look at the top providers’ offerings, you will see an ever-increasing road map for on-boarding your existing or new applications to “the cloud”. Taking the first step is easy, and companies like Nubifer, which provide the platform services and partner networks to aid your goals, are well resourced and eager to support your efforts.

Emerging Trends in Cloud Computing

Due to its reputation as a game-changing technology set, Cloud Computing is a hot topic when discussing emerging technology trends. Cloud Computing is defined by the National Institute of Standards and Technology (NIST) as “a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

IT optimization has largely been the reason for the early adoption of Cloud Computing in “Global 2000” enterprises, with the early drivers being cost savings and faster infrastructure provisioning. A December 2009 Forrester Report indicated that over 70% of the typical IT budget is spent on maintaining current IT infrastructure rather than adding new capabilities. Because of this, organizations are seeking to adopt a Cloud Computing model for their enterprise applications in order to better utilize their infrastructure investments.

Several such organizations currently have data center consolidation and virtualization initiatives underway and look to Cloud Computing as a natural progression of those initiatives. Enterprise private cloud solutions add capabilities such as self-service, automation and charge-back on top of the virtualized infrastructure, making infrastructure provisioning quicker and helping to improve overall utilization. Additionally, some of these organizations have begun to try public cloud solutions as a new infrastructure sourcing option.

IT spending at “Global 2000” enterprises makes up less than 5% of their revenues, so optimizing IT alone isn’t going to dramatically impact their top or bottom line. In the current economic climate, IT optimization is a good reason for these large enterprises to begin looking at Cloud Computing, but what is its true “disruptive” potential? It lies in the way it will aid these large enterprises in reinventing themselves and their business models in order to rise to the challenge of an evolving business landscape.

Social Networking Clouds and e-Commerce

Worldwide e-Commerce transactions will be worth over $16 trillion by 2013, and by 2012 over 50% of all adult Internet users in the U.S. will be using social networks. Currently, 49% of web users make a purchase based on a recommendation gleaned from social media. This increased adoption of social media makes it easier for consumers to remain connected and get opinions on products and services. Basically, the consumer has often already made up their mind about a product before even getting to the website or store. This is causing major changes in consumer marketing and B2C business models. The relationship used to be between the enterprise and the consumer, but it has now changed to a deeper relationship that encompasses the consumer’s community.

Large enterprises can no longer afford to rely on “websites” or “brick-and-mortar stores” alone if they want to remain relevant and ensure customer loyalty—they need to provide cloud-hosted online platforms that constantly engage consumers along with their social community, so that the enterprise’s business services become part of consumers’ day-to-day lives. When Gen Y consumers reach the market, for example, “community driven” social commerce may well replace traditional “website based” e-commerce. Enterprises need to begin building such next-generation, industry-specific service platforms for the domains they operate in, in anticipation of this shift.

Computing’s Pervasiveness

One half of the world’s population—roughly 3.3 billion people—have active mobile devices, and the increased use of these handheld devices is altering consumers’ expectations regarding the availability of services. Consumers expect products and services to be available whenever they need them, wherever they are, through innovative applications, the kinds of applications that are better delivered through the cloud model.

The number of smart devices is expected to reach one trillion by 2011, due to increasing adoption of technologies like wireless sensors, wearable computing, RFIDs and more. This will lead to significant changes in the way consumers use technology, as future consumers will be accustomed to (and will expect) more intelligent products and services, such as intelligent buildings that conserve energy and intelligent transportation systems that make decisions based on real-time traffic information. An entirely new set of innovative products and services based on such pervasive computing will need to be created for the coming generation.

Service providers will look to increase customer loyalty by providing more offerings, better services and deeper relationships as products and services become commoditized. Several industry leaders are increasingly adopting open innovation models, thereby creating business clouds supported by an ecosystem of partners, in order to increase their portfolio of offerings and innovate faster. A new generation of applications must be created as Cloud Computing becomes more pervasive with the increased adoption of smart devices.

To gain a competitive edge, reduce CAPEX on infrastructure and maintenance, and take advantage of powerful SaaS technologies offered in the Cloud, companies need to build their next-generation business cloud platforms in order to better manage the scale of information.

To learn more about Cloud Computing and how companies can adopt and interoperate with the cloud, visit Nubifer.com.

Microsoft Announces Office 365

On October 19th, 2010, Microsoft announced the launch of Office 365, the software giant’s next cloud productivity offering, uniting Microsoft Office, SharePoint Online, Exchange Online and Lync Online in an “always-on” software and platform-as-a-service. Office 365 makes it simpler for organizations to get and use Microsoft’s highly acclaimed business productivity solutions via the cloud.

With the Office 365 cloud offering, users can work from anywhere on any device with Internet connectivity, collaborating with others inside and outside their enterprise in a secure and interoperable fashion. As part of today’s launch announcement, the Redmond-based software company is opening a pilot beta program for Office 365 in 13 countries and regions.

Microsoft relied on years of experience when architecting Office 365, delivering industry-acclaimed enterprise cloud services ranging from the first browser-based e-mail to today’s Business Productivity Online Suite, Microsoft Office Live Small Business and Live@edu. Adopting the Office 365 cloud platform means Microsoft users don’t have to alter the way they work, because Office 365 works with the most prevalent browsers, smartphone handsets and desktop applications people use today.

Office 365 developers worked in close association with existing customers to develop this cloud offering, resulting in a platform that is designed to meet a wide array of user needs:

“Office 365 is the best of everything we know about productivity, all in a single cloud service,” said Kurt DelBene, president of the Office Division at Microsoft. “With Office 365, your local bakery can get enterprise-caliber software and services for the first time, while a multinational pharmaceutical company can reduce costs and more easily stay current with the latest innovations. People can focus on their business, while we and our partners take care of the technology.”

With Office 365 for small businesses, professionals and small companies with fewer than 25 employees can be up and running with Office Web Apps, Exchange Online, SharePoint Online, Lync Online and an external website in just 15 minutes, for $6 per user, per month.

Microsoft Office 365 for the enterprise introduces a wide range of choices for midsize and large organizations, as well as for governmental entities, starting at $2 per user, per month for basic e-mail. Office 365 for the enterprise also includes the option to receive Microsoft Office Professional Plus on a pay-as-you-use basis. For less than $25 per user, per month, organizations can get Office Professional Plus along with webmail, voicemail, business social networking, instant messaging, Web portals, extranets, voice-conferencing, video-conferencing, web-conferencing, 24×7 phone support, on-premises licenses, and more.

Office 365 is creating new growth opportunities for Microsoft and its partners by reaching more customers and types of users and meeting more IT needs — all while reducing the financial burden for its customers.

Product Availability

Office 365 will be available worldwide in 2011. Starting today, Microsoft will begin testing Office 365 with a few thousand organizations in 13 countries and regions, with the beta expanding to include more organizations as the platform matures. Office 365 will be generally available in over 40 countries and regions next year.

Towards the end of next year, Microsoft Office 365 will offer Dynamics CRM Online in order to provide a complete business productivity experience to organizations of all varieties and scales. Additionally, Office 365 for education will debut later next year, giving students, faculty and school employees powerful technology tailored specifically to their needs.

October 19th at Noon PDT, Microsoft will launch http://www.Office365.com. Customers and partners can sign up for the Office 365 beta and learn more at that site, or follow Office 365 on Twitter (@Office365), Facebook (Office 365), or the new Office 365 blog at http://community.office365.com to get the latest information.

Nubifer is a Microsoft Registered Partner with expertise in Office, Windows 7, BPOS and Windows Azure. Contact a representative today to learn how the Office 365 cloud platform can streamline your business processes, or visit www.nubifer.com and fill out our online questionnaire.

Protecting Data in the Cloud

When it comes to cloud computing, one of the major concerns is protecting the data being stored in the cloud. IT departments often lack the knowledge necessary to make informed decisions regarding the identification of sensitive data—which can cost an enterprise millions of dollars in legal costs and lost revenue.

The battle between encryption and tokenization was explored in a recent technology report, and the merits of both are being considered as securing data in the cloud becomes more and more important. Although the debate over which solution is best continues, it is ultimately good news that protection in cloud computing is available in the first place.

In the current business climate, it is essential that data is secure both in storage and in transit (both inherent in cloud computing); this protection is necessary whether dealing with retail processing, accessing personal medical records or managing government information and financial activity. The correct security measures must be implemented to protect sensitive information.

So what is tokenization? Tokenization is the process in which sensitive data is segmented into one or more pieces and replaced with non-sensitive values, or tokens, and the original data is stored encrypted elsewhere. When clients need access to the sensitive data, they typically provide the token along with authentication credentials to a service that then validates the credentials, decrypts the secure data, and provides it back to the client. Even though encryption is used, the client is never involved in either the encryption or decryption process, so encryption keys are never exchanged outside the token service. Tokens protect information like medical records, social security numbers and financial transactions from unauthorized access.

Encryption, on the other hand, is the process of transforming information using an algorithm so that it is unreadable to anyone except those who possess a key or special knowledge. The military and government have been using this method for some time to make sure that their sensitive information remains in the hands of the right people and organizations.
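To make the distinction concrete, here is a minimal, illustrative sketch of the tokenization flow described above, with the vaulted values protected by symmetric encryption. The class and helper names are hypothetical rather than any particular vendor’s API, and a production token service would keep its keys in a key-management system rather than in memory.

    import secrets
    from cryptography.fernet import Fernet  # third-party 'cryptography' package

    class TokenVault:
        """Maps opaque tokens to encrypted copies of the original sensitive values."""

        def __init__(self):
            # In production the key would live in a key-management system, not in memory.
            self._cipher = Fernet(Fernet.generate_key())
            self._store = {}  # token -> ciphertext

        def tokenize(self, sensitive_value: str) -> str:
            token = secrets.token_urlsafe(16)  # non-sensitive surrogate value
            self._store[token] = self._cipher.encrypt(sensitive_value.encode())
            return token  # safe to keep in ordinary application databases

        def detokenize(self, token: str, caller_is_authorized: bool) -> str:
            # A real service would validate credentials; a boolean stands in for that here.
            if not caller_is_authorized:
                raise PermissionError("caller not authorized to detokenize")
            return self._cipher.decrypt(self._store[token]).decode()

    vault = TokenVault()
    token = vault.tokenize("4111-1111-1111-1111")  # e.g. a payment card number
    print(token)                                   # only the token is stored by the application
    print(vault.detokenize(token, caller_is_authorized=True))

The application only ever handles the opaque token; the encryption key and the clear-text value never leave the token service, mirroring the separation described above.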

Both tokenization and encryption can be applied to protect information used in the cloud. For organizations seeking to determine which method is a better fit, it is necessary to ask questions about the security of each method and whether one has more advantages than the other, and to clearly define the objectives of the business process as well.

A clear method of protecting information is essential if cloud computing is to deliver benefits for the enterprise; conversely, the lack of one can be an obstacle to launching a cloud computing strategy. Gartner reports that 85 percent of survey participants cited security as a key factor that could prevent them from launching cloud-based apps.

In conclusion, there is no clear winner in the debate over tokenization versus encryption. Rather, it depends on the goals of the business and how the company plans to manage the security of their sensitive information. The data needs to be protected in a way that is easily manageable when launching a cloud computing strategy—and it is only at this point that cloud computing can be both successful and secure. For more information regarding securing data in the cloud via tokenization, contact a Nubifer representative today.

Google Apps Receives Federal Certification for Cloud Computing

On July 26, Google released a version of its hosted suite of applications that meets the primary federal IT security certification, making a major leap forward in its push to drive cloud computing in the government. Nearly one year in the making, the new edition of Google Apps is, according to Google, the first portfolio of cloud applications to receive certification under the Federal Information Security Management Act (FISMA).

The government version of Google Apps has the same pricing and services as the premier edition, including Gmail, the Docs productivity site and the Talk instant-messaging application.

Google Business Development Executive David Mihalchik said to reporters, “We see the FISMA certification in the federal government environment as really the green light for federal agencies to move forward with the adoption of cloud computing for Google Apps.”

Federal CIO Vivek Kundra announced a broad initiative to embrace the cloud across the federal government last September, as a way to reduce both the costs and inefficiencies of redundant and underused IT deployments. The launch of that campaign was accompanied by the launch of Apps.gov, an online storefront where vendors can showcase their cloud-based services for federal IT managers; it was revealed at an event at NASA’s Ames Research Center attended by Google co-founder Sergey Brin. At the same time, Google announced plans to develop a version of its popular cloud-based services that would meet the federal government’s security requirements.

Mike Bradshaw, director of Google’s Federal Division, said, “We’re excited about this announcement and the benefits that cloud computing can bring to this market.” Bradshaw continued to say that “the President’s budget has identified the adoption of cloud computing in the federal government as a way to more efficiently use the billions of dollars spent on IT annually.” Bradshaw added that the government spends $45 million in electrical costs alone to run its data-centers and servers.

Security concerns are consistently cited by proponents of modernizing the federal IT apparatus as the largest barrier to the adoption of cloud computing. Google is including extra security features to make federal IT buyers at agencies with more stringent security requirements feel more at ease. These extra security features are in addition to the 1,500 pages of documentation that came with Google’s FISMA certification.

Google will store government cloud accounts on dedicated servers within its data centers that will be segregated from its equipment that houses consumer and business data. Additionally, Google has committed to only use servers located in the continental U.S. for government cloud accounts. Google’s premier edition commercial customers have their data stored on servers in both the U.S. and European Union.

Mihalchik explained that security was the leading priority from the get-go in developing Google Apps for Government, saying, “We set out to send a signal to government customers that the cloud is ready for government.” He added, “Today we’ve done that with the FISMA certification, and also going beyond FISMA to meet some of the other specific security requirements of government customers.”

Thus far, Google has won government customers at the state and local levels, such as the cities of Los Angeles, California and Orlando, Florida. Mihalchik said that over one dozen federal agencies are in various stages of trialing or deploying elements of Google Apps. According to Mihalchik, several agencies are using Google anti-spam and anti-virus products to filter their email. Others, like the Department of Energy, are running pilot programs to evaluate the full suite of Google Apps in comparison with competitors’ offerings.

Find out more about cloud security and FISMA certification of Google Apps by talking to a Nubifer Consultant today.

Zoho Sheet 2.0 Launches on August 31st, 2010, with Support for Million-Cell Spreadsheets

Zoho, an industry leader in cloud-hosted officing software, announced today the launch of Zoho Sheet 2.0. Among its many new features is support for million-cell spreadsheets.

When a user logs in to Zoho Sheet 2.0, they will not notice much change visually, but there have been many performance improvements on the back end. Frequent users of Zoho’s increasingly popular spreadsheet app will notice the performance and interoperability improvements instantly: Zoho significantly enhanced the back-end engine, allowing users of Zoho Sheet 2.0 to load large and complex spreadsheets with near-instant response times.

Zoho Sheet’s One Million Cell Spreadsheet

At Nubifer Inc., we constantly work with extensive spreadsheets, and we are all too familiar with freezes and over-consumption of local compute resources. This is no longer an issue for our teams, as Zoho Sheet is completely online, with all the heavy lifting done on the server side, keeping the client side agile and nimble.

With Zoho’s latest product update, subscribers can now create a million-cell spreadsheet. Zoho Sheet 2.0 supports 65,536 rows and 256 columns per worksheet, with up to 1 million cells per spreadsheet project. Supporting a million cells is an important feature, but maintaining efficient load times with large spreadsheets was the primary goal of Zoho Sheet 2.0. Waiting as long as 5 minutes for very large spreadsheets to load is no longer an issue; they now open almost instantly within your web browser. We here at Nubifer encourage you to give it a test drive and witness for yourself how agile and efficient Zoho Sheet 2.0 feels.

Here is an example embedded spreadsheet with 25,000 rows. The performance on the return is quite impressive.


In addition to the improved performance metrics, here are some other great features designed to aid functionality and workflow.

Chrome & Safari Browser Support

Zoho Sheet now officially supports Chrome 4+, Safari 4+, Firefox 2+ and IE 6+.

Some Additional Impressive Improvements

  • Users can now directly input Chinese, Japanese & Korean characters without having to double-click on a cell.
  • Improved ‘Find’ functionality. Control+F will now bring up the ‘Find’ panel at the bottom of the spreadsheet with options to search within the row, column or sheet.
  • The ‘Undo’ and ‘Redo’ actions now work across the spreadsheet and are maintained on a per-user basis while collaborating with other users.
  • You can now set formats and styles on column, row, and sheet tiers.

Are you an existing user? If so, you probably won’t see many changes visually, but you will experience these enhancements when working with Zoho Sheet 2.0.

Zoho is tirelessly working on performance updates to their cloud-hosted officing applications. Some updates are cosmetic for look and feel, while others are performance based. The overwhelming majority of Zoho’s updates go under the hood. For these updates, users may not notice anything visually, but these updates are significant and lay the groundwork for things to come in the future.

For more information about Zoho Sheet, or other Zoho officing applications please visit Nubifer.com.

Understanding the Cloud with Nubifer Inc. CTO, Henry Chan

The overwhelming majority of cloud computing platforms consist of dependable services delivered from data centers and built on servers with varying tiers of virtualization capability. These services are available from anywhere with access to the network. Clouds often appear as a single point of access for all of a subscriber’s enterprise computing needs. Commercial cloud platform offerings are expected to meet customers’ quality of service (QoS) requirements and typically offer service level agreements. Open standards are crucial to the expansion and acceptance of cloud computing, and open source software has laid the groundwork for many cloud platform implementations.

The article that follows is Nubifer Inc. CTO Henry Chan’s recently summarized view of what cloud computing means, its benefits and where it’s heading in the future:

Cloud computing explained:

The “cloud” in cloud computing refers to your network’s Internet connection. Cloud computing is essentially using the Internet to perform tasks like email hosting, data storage and document sharing which were traditionally hosted on premise.

Understanding the benefits of cloud computing:

Cloud computing’s myriad benefits depend on your organization’s infrastructure needs. If your enterprise shares a large number of applications across multiple office locations, it would be beneficial to store those apps on a virtual server. Web-based application hosting can also save time for traveling staff who cannot connect back to the office, because they can access everything over a shared virtual private network (VPN).

Examples of cloud computing:

Hosted email (such as Gmail or Hotmail), online data back-up, online data storage, and any Software-as-a-Service (SaaS) application (such as a cloud-hosted CRM from vendors like Salesforce, Zoho or Microsoft Dynamics, or accounting applications) are examples of applications that can be hosted in the cloud. By hosting these applications in the cloud, your business can benefit from the interoperability and scalability that cloud computing and SaaS services offer.

Safety in the cloud:

Although there are some concerns over the safety of cloud computing, the reality is that data stored in the cloud can be just as secure as the vast majority of data stored on your internal servers. The key is to implement the necessary solutions to ensure that the proper level of encryption is applied to your data both while it travels to and from your cloud storage container and while it is stored. Designed properly, this can be as safe as any solution you could implement locally. The leading cloud vendors all currently maintain compliance with standards such as Sarbanes-Oxley, SAS 70, FISMA and HIPAA.

Cloud computing for your enterprise:

To determine which layer of cloud computing is optimally suited for your organization, it is important to thoroughly evaluate your organizational goals as it relates to your IT ecosystem. Examine how you currently use technology, current challenges with technology, how your organization will evolve technologically in the years to come, and what scalability and interoperability will be required going forward. After a careful gap analysis of these determinants, you can decide what types of cloud-based solutions will be optimally suited for your organizational architecture.

Cloud computing, a hybrid solution:

The overwhelming trend in 2010 and 2011 is to move non-sensitive data and applications into the cloud while keeping trade secrets behind your enterprise firewall, as many organizations are not comfortable hosting all their applications and hardware in the cloud. The trick to making cloud computing work for your business is to understand which applications should be kept local and which would benefit most from leveraging the scalability and interoperability of the cloud ecosystem.

Will data be shared with other companies if it is hosted in the cloud?

Short answer: NO! Reputable SaaS and cloud vendors will make sure that your data is properly segmented according to the requirements of your industry.

Costs of cloud computing:

Leading cloud-based solutions charge a monthly fee for application usage and data storage, but you may already be making comparable outlays, primarily in the form of hardware maintenance and software licensing fees—some of which could be eliminated by moving to the cloud.

Cloud computing makes it easy for your company’s Human Resources software, payroll and CRM to co-mingle with your existing financial data, supply chain management and operations systems, while simultaneously reducing your capital requirements for these systems. Contact a Nubifer representative today to discover how leveraging the power of cloud computing can help your business excel.

Confidence in Cloud Computing Expected to Surge Economic Growth

The dynamic and flexible nature of cloud computing, software-as-a-service and platform-as-a-service may help organizations in their recovery from the current economic downturn, according to more than two thirds of the IT decision makers who participated in a recent annual study by Vanson Bourne, an international research firm. Vanson Bourne surveyed over 600 IT and business decision makers across the United States, United Kingdom and Singapore. Of the countries sampled, Singapore is leading the shift to the cloud, with 76 percent of responding enterprises using some form of cloud computing. The U.S. follows with 66 percent, with the U.K. at 57 percent.

This two-year study of Cloud Computing reveals that IT decision makers are very confident in cloud computing’s ability to deliver within budget and offer CapEx savings. Commercial and public sector respondents also predict cloud use will help decrease overall IT budgets by an average of 15 percent, with some expecting savings of as much as 40 percent.

“Scalability, interoperability and pay-as-you-go elasticity are moving many of our clients toward cloud computing,” said Chad Collins, CEO at Nubifer Inc., a strategic Cloud and SaaS consulting firm. “However, it’s important, primarily for our enterprise clients, to work with a Cloud provider that not only delivers cost savings, but also effectively integrates technologies, applications and infrastructure on a global scale.”

A lack of access to IT capacity is clearly labeled as an obstacle to business progress, with 76 percent of business decision makers reporting they have been prevented from developing or piloting projects due to the cost or constraints within IT. For 55 percent of respondents, this remains an issue.

Confidence in cloud continues to trend upward — 96 percent of IT decision makers are as confident or more confident in cloud computing being enterprise ready now than they were in 2009. In addition, 70 percent of IT decision makers are using or plan to be using an enterprise-grade cloud solution within the next two years.

The ability to scale resources up and down in order to manage fluctuating business demand was the most cited benefit influencing cloud adoption in the U.S. (30 percent) and Singapore (42 percent). The top factor driving U.K. adoption is lower cost of total ownership (41 percent).

Security concerns remain a key barrier to cloud adoption, with 52 percent of respondents who do not leverage a cloud solution citing security of sensitive data as a concern. Yet 73 percent of all respondents want cloud providers to fully manage security or to fully manage security while allowing configuration change requests from the client.

Seventy-nine percent of IT decision makers see cloud as a straightforward way to integrate with corporate systems. For more information on how to leverage a cloud solution inside your environment, contact a Nubifer.com representative today.

Taking a Closer Look at the Power of Microsoft Windows Azure AppFabric

Microsoft’s Windows Azure runs Windows applications and hosts applications, services and data in the cloud. This baseline understanding of Windows Azure, coupled with the practicality of using computers in the cloud, makes leveraging the acres of Internet-accessible servers on offer today an obvious choice, especially when the alternative of buying data center space and maintaining the hardware deployed there can quickly become costly. For some applications, both code and data might live in the cloud, where the systems they use are managed and maintained by someone else. On-premises applications—which run inside an organization—might store data in the cloud or rely on other cloud infrastructure services. Ultimately, making use of the cloud’s capabilities provides a variety of advantages.

Windows Azure applications and on-premises applications can access the Windows Azure storage service using a REST-ful approach. The storage service allows storing binary large objects (blobs), provides queues for communication between the components of a Windows Azure application, and also offers a form of tables with a simple query language. The Windows Azure platform also provides SQL Azure for applications that need traditional relational storage. An application using the Windows Azure platform is free to use any combination of these storage options.
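As a rough illustration of that REST-ful approach, the sketch below uploads a file as a block blob with a plain HTTP PUT. The account name, container, file name and shared access signature are placeholders, and in practice most developers would reach for the storage SDK for their language; the exact headers and authentication details should be verified against the current storage documentation.

    import requests  # third-party HTTP client

    # Placeholders: account name, container, blob name and SAS token are all assumptions.
    SAS_TOKEN = "sv=...&sig=..."  # shared access signature issued by the account owner
    url = "https://myaccount.blob.core.windows.net/backups/report.csv?" + SAS_TOKEN

    with open("report.csv", "rb") as f:
        response = requests.put(
            url,
            data=f,
            headers={"x-ms-blob-type": "BlockBlob"},  # mark the upload as a block blob
        )

    response.raise_for_status()  # a 201 Created status indicates the blob was stored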

One obvious need between applications hosted in the cloud and hosted on-premise is communication between applications. Windows Azure AppFabric provides a Service Bus for bi-directional application connectivity and Access Control for federated claims-based access control.

Service Bus for Azure AppFabric

The primary feature of the Service Bus is message “relaying” to and from the Windows Azure cloud to your software running on-premise, bypassing any firewalls, network address translation (NAT) or other network obstacles. The Service Bus can also help negotiate direct connections between applications. Meanwhile, the Access Control feature provides a claims-based access control mechanism for applications, making federation easier to tackle and allowing your applications to trust identities provided by other systems.

A .NET developer SDK is available that simplifies integrating these services into your on-premises .NET applications. The SDK integrates seamlessly with Windows Communication Foundation (WCF) and other Microsoft technologies to build on pre-existing skill sets as much as possible. These SDKs have been designed to provide a first-class .NET developer experience, but it is important to point out that they each provide interfaces based on industry-standard protocols, making it possible for applications running on any platform to integrate with them through REST, SOAP and the WS-* protocols.

SDKs for Java and Ruby are currently available for download. Combining them with the underlying Windows Azure platform service produces a powerful, cloud-based environment for developers.

Access Control for the Azure AppFabric

Over the last decade, the industry has been moving toward an identity solution based on claims. A claims-based identity model allows the common features of authentication and authorization to be factored out of your code, at which point such logic can then be centralized into external services that are written and maintained by subject matter experts in security and identity. This is beneficial to all parties involved.

Access Control is a cloud-based service that does exactly that. Rather than writing their own custom user account and role database, customers can let AC orchestrate authentication and most of the user authorization. With a single code base in the application, customers can authorize access for both enterprise clients and simple clients: enterprise clients can leverage ADFS v2 to let users authenticate with their Active Directory logon credentials, while simple clients can establish a shared secret with AC and authenticate directly against it.

The extensibility of Access Control allows for easy integration of authentication and authorization through many identity providers without the need for refactoring code. As Access Control evolves, support for authentication against Facebook Connect, Google Accounts, and Windows Live ID can be quickly added to an application. To reiterate: over time, it will be easy to authorize access to more and more users without having to change the code base.

When using AC, the user must obtain a security token from AC in order to log in; this token is similar to a signed email message from AC to your service with a set of claims about the user’s identity. AC doesn’t issue a token unless the user first provides his or her identity by either authenticating with AC directly or by presenting a security token from another trusted issuer (such as ADFS) that has authenticated that user. So by the time the user presents a token to the service, assuming it is validated, it is safe to trust the claims in the token and begin processing the user’s request.
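The sketch below shows that flow in miniature: a trusted issuer signs a set of claims with a key it shares with the service, and the service refuses any token whose signature does not verify before trusting the claims inside. The token layout and function names are purely illustrative and are not the actual Access Control wire format.

    import base64
    import hashlib
    import hmac
    import json

    SHARED_KEY = b"secret-established-with-the-issuer"  # placeholder value

    def issue_token(claims: dict) -> str:
        """What the trusted issuer does: sign a set of claims about the authenticated user."""
        payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
        signature = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return payload + "." + signature

    def validate_token(token: str) -> dict:
        """What the service does: verify the signature, then trust the claims inside."""
        payload, _, signature = token.rpartition(".")
        expected = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, signature):
            raise ValueError("token was not signed by the trusted issuer")
        return json.loads(base64.urlsafe_b64decode(payload))

    token = issue_token({"name": "Alice", "role": "purchaser"})
    claims = validate_token(token)
    print(claims["role"])  # the service can now make authorization decisions from the claims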

Single sign-on is easier to achieve under this model, so a customer’s service is no longer responsible for:

• Authenticating users
• Storing user accounts and passwords
• Calling to enterprise directories to look up user identity details
• Integrating with identity systems from other platforms or companies
• Delegation of authentication (a.k.a. federation) with other security realms

Under this model, a customer’s service can make identity-related decisions based on claims about the user made by a trusted issuer like AC. This could be anything from simple service personalization with the user’s first name, to authorizing the user to access higher-valued features and resources in the customer’s service.

Standards

Due to the fact that single sign-on and claims-based identity have been evolving since 2000, there are a myriad of ways of doing it. There are competing standards for token formats as well as competing standards for the protocols used to request those tokens and send them to services. This fact is what makes AC so useful, because over time, as it evolves to support a broader range of these standards, your service will benefit from broader access to clients without having to know the details of these standards, much less worry about trying to implement them correctly.

Security Assertion Markup Language (SAML) was the first such standard. SAML specifies an XML format for tokens (SAML tokens) in addition to protocols for performing Web app/service single sign-on (the protocol suite is sometimes referred to inside Microsoft as SAMLP). WS-Federation and the related WS-* specifications also define a set of protocols for Web app/service single sign-on, but they do not restrict the token format to SAML, although that is in practice the most common format used today.

To Summarize

The Service Bus and Access Control constituents of the Windows Azure platform provide key building block services that are vital for building cloud-based or cloud-aware applications. Service Bus enables customers to connect existing on-premises applications with new investments being built for the cloud. Those cloud assets can easily communicate with on-premises services through the network traversal capabilities provided by the Service Bus relay.

Overall, the Windows Azure platform represents a comprehensive Microsoft strategy designed to make it easy for Microsoft developers to realize the opportunities inherent to cloud computing. The Service Bus and Access Control offer a key component of the platform strategy, designed specifically to aid .NET developers in making the transition to the cloud. These services provide cloud-centric building blocks and infrastructure in the areas of secure application connectivity and federated access control.

For more information on the Service Bus & Access Control, please contact a Nubifer representative or visit these Microsoft sponsored links:

• An Introduction to Windows Azure platform AppFabric for Developers
o http://go.microsoft.com/fwlink/?LinkID=150833

• A Developer’s Guide to Service Bus in Windows Azure platform AppFabric
o http://go.microsoft.com/fwlink/?LinkID=150834

• A Developer’s Guide to Access Control in Windows Azure platform AppFabric
o http://go.microsoft.com/fwlink/?LinkID=150835

• Windows Azure platform
o http://www.microsoft.com/windowsazure/

• Service Bus and Access Control portal
o http://netservices.azure.com/

Two Kinds of Cloud Agility

CIO.com’s Bernard Golden defines cloud agility and provides examples of how cloud computing fosters business agility in the following article.

Although agility is commonly cited as a key benefit of cloud computing, there are two distinct types of agility, both real, but one of them packs more of a punch.

First, however, it is important to define cloud agility. Cloud agility is tied to the rapid provisioning of computing resources. In typical IT shops, provisioning new compute instances or storage can take weeks (or even months!), but the same process takes just minutes in cloud environments.

Work can commence at a rapid pace thanks to the dramatic shortening of the provisioning timeframe. In a cloud environment, for example, there is no submitting a request for computing resources and waiting anxiously for a fulfillment response via email. Agility can be defined as “the power of moving quickly and easily; nimbleness,” and in this sense it is clear why rapid provisioning is commonly described as advancing agility.

It is at this point that the definition of agility becomes confusing, as people often conflate two different things under the term: engineering resource availability and business responsiveness to changing conditions or opportunities.

While both types of agility are useful, business response to changing conditions or opportunity will prove to be the more compelling type of agility. It will also come to be seen as the real agility associated with cloud computing.

The issue with the first type of agility, however, is that it is a local optimization: it makes a portion of internal IT processes more agile, but this doesn’t necessarily shorten the overall application supply chain, which extends from initial prototype to production rollout.

It is, in fact, very common for cloud agility to enable developers and QA to begin their work more quickly, but for the overall delivery time to stay the same, stretched by slow handover to operations, extended shakedown time in the new production environment and poor coordination with release to the business units.

Additionally, if cloud computing comes to be seen as an internal IT optimization, with little effect on the timeliness of compute capability rolling out into mainline business processes, IT may never receive the business unit support it requires to fund the shift to cloud computing. What may happen is that cloud computing will end up like virtualization, which in many organizations remains at 20 or 30 percent penetration, unable to gather the funding necessary to support wider implementation. The necessary funding will probably never materialize if the move to cloud computing is presented as “helps our programmers program faster.”

The second type of agility, which affects how quickly business units can roll out new offerings, does not suffer the same problems as the first. Funding will not be an issue if business units can see a direct correlation between cloud computing and stealing a march on the competition; funding is never an issue when the business benefit is clear.

The following three examples show the kind of business agility fostered by cloud computing in the world of journalism:

1. The Daily Telegraph broke a story about a scandal over Members of Parliament’s expenses, a huge cause celebre featuring examples of MPs seeking reimbursement for building a duck house and other equally outrageous claims. As can be imagined, the number of expense forms was huge and overtaxed the resources the Telegraph had available to review and analyze them. The Telegraph loaded the documents into Google Docs and allowed readers to browse them at will. Toby Wright, CIO of the Telegraph Media Group, used this example during a presentation at the Cloud Computing World Forum and pointed out how interesting it was to see several hundred people clicking through the spreadsheets at once.

2. The Daily Telegraph’s competitor, the Guardian, of course featured its own response to the expenses scandal. The Guardian quickly wrote an application to let people examine individual claims and identify ones that should be examined more closely. As a result, more questionable claims surfaced more quickly and allowed the situation to heat up. Simon Willison of the Guardian said of the agility that cloud computing offers, “I am working at the Guardian because I am interested in the opportunity to build rapid prototypes that go live: apps that live for two or three days.” Essentially, the agility of cloud computing enables quick rollout of short-lived applications to support the Guardian’s core business: delivery of news and insight.

3. Now for an example from the United States. The Washington Post took static PDF files of former First Lady Hillary Clinton’s schedule and used Amazon Web Services to transform them into a searchable document format. The Post then placed the documents into a database and put a simple graphical interface in place to allow members of the public to click through them as well–once again, crowdsourcing the analysis of documents to accelerate it.

It can be argued that these examples don’t prove the overall point of how cloud computing improves business agility–they are media businesses, after all, not “real” businesses that deal with physical objects and can’t be satisfied with a centralized publication site. This point doesn’t take into account that modern economies are shifting to become more IT-infused and digital data is becoming a key part of every business offering. The ability to turn out applications associated with the foundation business offering will be a critical differentiator in the future economy.

Customers get more value and the vendor gets competitive advantage due to this ability to surround a physical product or service with supporting applications. In order to win in the future, it is important to know how to take advantage of cloud computing to speed delivery of complementary applications into the marketplace. As companies battle it out in the marketplace, they will be at a disadvantage if they fail to optimize the application delivery supply chain.

It is a mistake to view cloud computing merely as a technology that helps IT do its job quicker; internal IT agility is necessary but not sufficient for the future. It will be more important to link the application of cloud computing to business agility, speeding business innovation to the marketplace. In summary, both types of agility are good, but the latter should be the aim of cloud computing efforts.

Rackspace Announces Plans to Collaborate with NASA and Other Industry Leaders on OpenStack Project

On July 19, Rackspace Hosting, a specialist in the hosting and cloud computing industry, announced the launch of OpenStack™, an open-source cloud platform designed to advance the emergence of technology standards and cloud interoperability. Rackspace is donating the code that fuels its Cloud Files and Cloud Servers public-cloud offerings to the OpenStack project, which will additionally incorporate technology that powers the NASA Nebula Cloud Platform. NASA and Rackspace plan to collaborate on joint technology development and to leverage the efforts of open-source software developers on a global scale.

NASA’s Chief Technology Officer for IT Chris C. Kemp said of the announcement, “Modern scientific computation requires ever increasing storage and processing power delivered on-demand. To serve this demand, we built Nebula, an infrastructure cloud platform designed to meet the needs of our scientific and engineering community. NASA and Rackspace are uniquely positioned to drive this initiative based on our experience in building large scale cloud platforms and our desire to embrace open source.”

OpenStack is poised to feature several cloud infrastructure components, including a fully distributed object store based on Rackspace Cloud Files (currently available at OpenStack.org). A scalable compute-provisioning engine based on NASA Nebula and Rackspace Cloud Servers technology is the next component planned for release, anticipated sometime in late 2010. Organizations using these components will be able to turn physical hardware into scalable and extensible cloud environments using the same code currently in production serving large government projects and tens of thousands of customers.

“We are founding the OpenStack initiative to help drive industry standards, prevent vendor lock-in and generally increase the velocity of innovation in cloud technologies. We are proud to have NASA’s support in this effort. Its Nebula Cloud Platform is a tremendous boost to the OpenStack community. We expect ongoing collaboration with NASA and the rest of the community to drive more-rapid cloud adoption and innovation, in the private and public spheres,” Lew Moorman, President and CSO at Rackspace, said at the time of the announcement.

Both organizations have committed to use OpenStack to power their cloud platforms, while Rackspace will dedicate open-source developers and resources to support adoption of OpenStack among service providers and enterprises. Rackspace hosted an OpenStack Design Summit in Austin, Texas from July 13 to 16, in which over 100 technical advisors, developers and founding members teamed up to validate the code and ratify the project roadmap. Among the more than 25 companies represented at the Design Summit were Autonomic Resources, AMD, Cloud.com, Citrix, Dell, FathomDB, Intel, Limelight, Zuora, Zenoss, Riptano and Spiceworks.

“OpenStack provides a solid foundation for promoting the emergence of cloud standards and interoperability. As a longtime technology partner with Rackspace, Citrix will collaborate closely with the community to provide full support for the XenServer platform and our other cloud-enabling products,” said Peter Levine, SVP and GM, Datacenter and Cloud Division, Citrix Systems.

Forrest Norrod, Vice President and General Manager of Server Platforms, Dell, added, “We believe in offering customers choice in cloud computing that helps them improve efficiency. OpenStack on Dell is a great option to create open source enterprise cloud solutions.”

Updated User Policy Management for Google Apps

Google has released a series of new features granting administrators more controls to manage Google Apps within their organizations, including new data migration tools, SSL enforcement capabilities, multi-domain support and the ability to tailor Google Apps with over 100 applications from the recently-introduced Google Apps Marketplace. On July 20 Google announced one of the most-requested features from administrators: User Policy Management.

With User Policy Management, administrators can segment their users into organizational units and control which applications are enabled or disabled for each group. Take a manufacturing firm, for example: the company might want to give its office workers access to Google Talk, but not its production-line employees, and this is possible with User Policy Management.

Additionally, organizations can use this functionality to test applications with pilot users before making them available on a larger scale. Associate Vice President for Computer Services at Temple University Sheri Stahler says, “Using the new User Policy Management feature in Google Apps, we’re able to test out new applications like Google Wave with a subset of users to decide how we should roll out new functionality more broadly.”

User Policy Management also eases the transition to Google Apps from on-premises environments, since it grants administrators the ability to toggle services on or off for groups of users. A business can, for example, enable just the collaboration tools like Google Docs and Google Sites for users who have yet to move off old on-premises messaging solutions.

These settings can be managed by administrators on the ‘Organizations & Users’ tab in the ‘Next Generation’ control panel. Alternatively, organizations can mirror their existing LDAP organizational schema using Google Apps Directory Sync or programmatically assign users to organizational units using the Google Apps Provisioning API.
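
To make the policy model concrete, here is a minimal, purely illustrative sketch in Python of the idea described above: users are segmented into organizational units, and each unit toggles services on or off. The class, unit and service names are hypothetical, and this is not the Google Apps Provisioning API itself, just the shape of the policy it lets administrators express.

```python
# A minimal sketch of the User Policy Management idea: users belong to
# organizational units, and each unit enables or disables services.
# All names here are illustrative only -- not the Google Apps API.
class OrgUnit:
    def __init__(self, name, enabled_services):
        self.name = name
        self.enabled_services = set(enabled_services)

    def is_enabled(self, service):
        return service in self.enabled_services

org_units = {
    "office-workers": OrgUnit("office-workers", {"mail", "docs", "talk"}),
    "production-line": OrgUnit("production-line", {"mail", "docs"}),  # no Talk
    "wave-pilot": OrgUnit("wave-pilot", {"mail", "docs", "talk", "wave"}),
}

def can_use(user_org_unit, service):
    """Return True if the user's organizational unit enables the service."""
    return org_units[user_org_unit].is_enabled(service)

print(can_use("office-workers", "talk"))   # True
print(can_use("production-line", "talk"))  # False
```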

Premier and Educational edition users can begin using User Policy Management for Google Apps at no additional charge.

Dell and Microsoft Partner Up with the Windows Azure Platform Appliance

At Microsoft’s Worldwide Partner Conference on July 12, Dell and Microsoft announced a strategic partnership in which Dell will adopt the Windows Azure platform appliance as part of its Dell Services Cloud to develop and deliver next-generation cloud services. With the Windows Azure platform, Dell will be able to deliver private and public cloud services for its enterprise, public, small and medium-sized business customers. Additionally, Dell will develop a Dell-powered Windows Azure platform appliance for enterprise organizations to run in their own data centers.

So what does this mean exactly? By implementing the limited production release of the Windows Azure platform appliance to host public and private clouds for its customers, Dell will leverage its vertical industry expertise to offer solutions for the speedy delivery of flexible application hosting and IT operations. In addition, Dell Services will provide application migration, advisory, integration and implementation services.

Microsoft and Dell will work together to develop a Windows Azure platform appliance for large enterprise, public and hosting customers to deploy to their own data centers. The resulting appliance will leverage infrastructure from Dell combined with the Windows Azure platform.

This partnership shows that both Dell and Microsoft recognize that more organizations can reap the benefits of the flexibility and efficiency of the Windows Azure platform. Both companies understand that cloud computing allows IT to increase responsiveness to business needs and also delivers significant efficiencies in infrastructure costs. The result will be an appliance to power a Dell Platform-as-a-Service (PaaS) Cloud.

The announcement with Dell occurred on the same day that Microsoft announced the limited production release of the Windows Azure platform appliance, a turnkey cloud platform for large service providers and enterprises to run in their own data centers. Initial partners (like Dell) and customers using the appliance in their data centers will have the scale-out application platform and data center efficiency of Windows Azure and SQL Azure that Microsoft currently provides.

Since the launch of the Windows Azure platform, Dell Data Center Solutions (DCS) has been working with Microsoft to build out and power the platform. Dell will use the insight gained as a primary infrastructure partner for the Windows Azure platform to make certain that the Dell-powered Windows Azure platform appliance is optimized for power and space, saving ongoing operating costs while maintaining the performance of large-scale cloud services.

A top provider of cloud computing infrastructure, Dell counts among its clients 20 of the 25 most heavily trafficked Internet sites and four of the top global search engines. The company has been custom-designing infrastructure solutions for the top global cloud service providers and hyperscale data center operations for the past three years, and in that time has developed expertise in the specific needs of organizations in hosting, HPC, Web 2.0, gaming, energy, social networking and SaaS, as well as public and private cloud builders.

Speaking about the partnership with Microsoft, president of Dell Services Peter Altabef said, “Organizations are looking for innovative ways to use IT to increase their responsiveness to business needs and drive greater efficiency. With the Microsoft partnership and the Windows Azure platform appliance, Dell is expanding its cloud services capabilities to help customers reduce their total costs and increase their ability to succeed. The addition of the Dell-powered Windows Azure platform appliance marks an important expansion of Dell’s leadership as a top provider of cloud computing infrastructure.”

Dell Services delivers vertically-focused cloud solutions with the combined experience of Dell and Perot Systems. Currently, Dell Services delivers managed and Software-as-a-Service support to over 10,000 customers across the globe. Additionally, Dell boasts a comprehensive suite of services designed to help customers leverage public and private cloud models. With the new Dell PaaS powered by the Windows Azure platform appliance, Dell will be able to offer customers an expanded suite of services including transformational services to help organizations move applications into the cloud and cloud-based hosting.

Summarizing the goal of the partnership with Dell, Bob Muglia, president of Microsoft Server and Tools said at the Microsoft Windows Partner Conference on July 12, “Microsoft and Dell have been building, implementing and operating massive cloud operations for years. Now we are extending our longstanding partnership to help usher in the new era of cloud computing, by giving customers and partners the ability to deploy Windows Azure platform in their datacenters.”


Four Key Categories for Cloud Computing

When it comes to cloud computing, concerns about control and security have dominated recent discussions. While it was once assumed that all computing resources could be obtained from outside, the conversation is now moving toward a vision of the data center transformed for easy connections to internal and external IT resources.

According to IDC’s Cloud Services Overview report, sales of cloud-related technology are growing at 26 percent per year, six times the rate of IT spending as a whole, although they comprised only about 5 percent of total IT revenue this year. While the report points out that defining what constitutes cloud-related spending is complicated, it estimates that global spending of $17.5 billion on cloud technologies in 2009 will grow to $44.2 billion by 2013. IDC predicts that hybrid or internal clouds will be the norm; even in 2013, only an estimated 10 percent of that spending will go specifically to public clouds.
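
As a quick sanity check of the figures quoted above, the jump from $17.5 billion in 2009 to $44.2 billion in 2013 does indeed work out to roughly the 26 percent annual growth rate IDC cites:

```python
# Sanity check of the IDC figures: $17.5B (2009) growing to $44.2B (2013)
# implies roughly the 26% annual growth rate quoted in the report.
start, end = 17.5, 44.2        # global cloud spending, billions of USD
years = 2013 - 2009

cagr = (end / start) ** (1.0 / years) - 1
print("Implied annual growth rate: {:.1%}".format(cagr))  # ~26.1%
```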

According to Chris Wolf, analyst at The Burton Group, hybrid cloud infrastructure isn’t that different from existing data-center best practices. The difference is that all of the pieces are meant to fit together using Internet-age interoperability standards rather than homegrown kludges.

The following are four items to put on your “shopping list” when preparing your IT budget for the use of private or public cloud services:

1. Application Integration

Software integration isn’t the first thing most companies consider when building a cloud, although Bernard Golden, CEO at cloud consulting firm HyperStratus, and CIO.com blogger, says it is the most important one.

Tom Fisher, vice president of cloud computing at SuccessFactors.com, a business-application SaaS provider in San Mateo, California, says that integration is much more than the mainframe-era practice of batch-processing chunks of data traded between applications once or twice per day.

Fisher goes on to explain that it is critical for companies to be able to provision and manage user identities from a single location across a range of applications, especially for companies that are new to the software-providing business and do not view their IT as a primary product.

“What you’re looking for is to take your schema and map it to PeopleSoft or another application so you can get more functional integration. You’re passing messages back and forth to each other with proper error-handling agreement so you can be more responsive. It’s still not real time integration, but in most cases you don’t really need that,” says Fisher.

2. Security

The ability to federate—securely connect without completely merging—two networks is a critical factor in building a useful cloud, according to Golden.

According to Nick Popp, VP of product development at Verisign (VRSN), that requires layers of security, including multifactor authentication, identity brokers, access management and sometimes an external service provider who can deliver that high a level of administrative control. Verisign is considering adding a cloud-based security service.

Wolf states that it requires technology that doesn’t yet exist: an Information Authority that can act as a central repository for security data and control of applications, data and platforms within the cloud. It is possible to assemble part of that function today from some of the pieces Popp mentions, yet Wolf maintains that no single technology spans all the platforms necessary to provide real control of even an internally hosted cloud environment.

3. Virtual I/O

One IT manager at a large digital mapping firm states that if you have to squeeze data for a dozen VMs through a few NICs, the scaling of your VM cluster to cloud proportions will be inhibited.

“When you’re in the dev/test stage, having eight or 10 [Gigabit Ethernet] cables per box is an incredible labeling issue; beyond that, forget it. Moving to virtual I/O is a concept shift—you can’t touch most of the connections anymore—but you’re moving stuff across a high-bandwidth backplane and you can reconfigure the SAN connections or the LANs without having to change cables,” says the IT manager.

Virtual I/O servers (like the Xsigo I/O Director systems used by the IT manager’s company) can run 20Gbit/sec through a single cord and support as many as 64 cords to a single server, connecting to a backplane with a total of 1,560Gbit/sec of bandwidth. The IT manager states that concentrating such a large amount of bandwidth in one device saves space, power and cabling, keeps network performance high and saves money on network gear in the long run.

Speaking about the Xsigo servers, which start at approximately $28,000 through resellers like Dell (DELL), the manager says, “It becomes cost effective pretty quickly. You end up getting three, four times the bandwidth at a quarter the price.”

4. Storage

Storage remains the weak point of the virtualization and cloud-computing worlds, and the place where the most money is spent.

“Storage is going to continue to be one of the big costs of virtualization. Even if you turn 90 percent of your servers into images, you still have to store them somewhere,” says Golden in summary. Visit Nubifer.com for more information.

Zuora Releases Z-Commerce

The first external service (SaaS) that actually understands the complex billing models of cloud providers (which account for monthly subscription fees as well as automated metering, pricing and billing for products, bundles and highly individualized configurations) arrived in mid-June in the form of Zuora’s Z-Commerce. An upgrade to Zuora’s billing and payment service built for cloud providers, Z-Commerce is a major development. With Z-Commerce, a storage-as-a-service provider can charge for terabytes of storage used, IP address usage, or data transfer. Cloud providers can also structure per-CPU-instance or per-application-use charges, and the platform can take complexities like peak usage into account. Zuora provides 20 pre-configured templates for the billing and payment models that cloud providers use.
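
To illustrate the kind of metered, usage-based billing Z-Commerce is built to automate, here is a minimal sketch of a monthly invoice calculation. All of the rates, fields and the peak-usage surcharge below are hypothetical and stand in for whatever a provider’s pre-configured template would define; this is not Zuora’s API.

```python
# A minimal sketch of metered, usage-based billing: a monthly subscription
# fee plus charges per terabyte, per CPU-instance hour and per GB transferred,
# with a surcharge for heavy peak usage. All rates are hypothetical.
def monthly_invoice(usage, plan):
    total = plan["subscription_fee"]
    total += usage["storage_tb"] * plan["per_tb"]
    total += usage["cpu_instance_hours"] * plan["per_cpu_hour"]
    total += usage["data_transfer_gb"] * plan["per_gb_transfer"]
    if usage.get("peak_hours", 0) > plan["peak_allowance_hours"]:
        total *= plan["peak_multiplier"]      # surcharge for heavy peak use
    return round(total, 2)

plan = {
    "subscription_fee": 99.00,
    "per_tb": 120.00,
    "per_cpu_hour": 0.08,
    "per_gb_transfer": 0.10,
    "peak_allowance_hours": 100,
    "peak_multiplier": 1.15,
}
usage = {"storage_tb": 3, "cpu_instance_hours": 1500,
         "data_transfer_gb": 800, "peak_hours": 160}

print(monthly_invoice(usage, plan))   # 757.85 with these hypothetical numbers
```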

What makes this development so interesting is that Zuora points to what it calls the “subscription economy” as the underlying rationale for its success: 125 customers, 75 employees and profitability.

Tien Tzuo, the CEO of Zuora (and former Chief Strategy Officer of Salesforce.com), described the subscription economy as follows:

“The business model of the 21st century is a fundamentally different business model.

The 21st century world needs a whole new set of operational systems — ones that match the customer centric business model that is now necessary to succeed.

The business model of the 20th century was built around manufacturing.  You built products at the lowest possible cost, and you find buyers for that product.

The key metrics were all around inventory, cost of goods sold, product life cycles, etc. But over the last 30 years, we’ve been moving away from a manufacturing economy to a services economy. Away from an economy based on tangible goods, to an economy based on intangible ideas and experiences.

What is important now is the customer — of understanding customer needs, and building services & experiences that fulfill those customer needs.  Hence the rise of CRM.

But our financial and operational systems have not yet evolved!  What we need today are operational systems built around the customer, and around the services you offer to your customers.

You need systems that allow you to design different services, offered under different price plans that customers can choose from based on their specific needs.  So the phone companies have 450 minute plans, prepaid plans, unlimited plans, family plans, and more.  Salesforce has Professional Edition, and Enterprise Edition, and Group Edition, and PRM Edition, and more.  Amazon has Amazon Prime.  ZipCar has their Occasional Driving Plan and their Extra Value Plans.

You need systems that track customer lifecycles — things such as monthly customer value, customer lifetime value, customer churn, customer share of wallet, conversion rates, up sell rates, adoption levels.

You need systems that measure how much of your service your customers are consuming.  By the minute?  By the gigabyte?  By the mile?  By the user?  By the view?  And you need to establish an ongoing, recurring billing relationship with your customers, that maps to your ongoing service relationship, that allows you to monetize your customer interactions based on the relationship that the customer opted into.

The 21st century world needs a whole new set of operational systems — ones that match the customer centric business model that is now necessary to succeed.”

To summarize, he is saying that the model for future business isn’t the one-time purchase of goods and services, but rather a price a customer pays for an ongoing relationship with the company. Under this model, the customer is able to structure the relationship in a way that provides them with what they need to accomplish the job(s) the company can help them with (which can be a variety of services, products, tools and structured experiences).

This is also interesting because your business is measuring the customer’s commitments to you, and vice versa, in operational terms, even as the business model shifts toward more interactions than ever before. If you look at traditional CRM metrics like CLV, churn, share of wallet and adoption rates as they apply to a business model that continues to evolve away from pure transactions, Tien is saying that payment and billing are, to him, the financial infrastructure for this new customer-centered economic model (i.e. the subscription model).

Denis Pombriant of Beagle Research Group, LLC commented on this on his blog recently, pointing out that a subscription model does not guarantee a business will be successful. What does have significant bearing on the success or failure of a business is how well the business manages the model itself or has it managed for them (i.e. by Zuora).

This applies directly to the subscription economy. Zuora is highlighting what it predicted: that companies are increasingly moving their business models to subscription-based pricing. This is the same model that supports free software and hardware by charging customers monthly. How the model is managed is another can of worms, but for now Zuora has done a service by recognizing that customer-driven companies are realizing their customers are willing to pay for the aggregate capabilities of the company in an ongoing way, as long as the company continues to support the customer’s needs in solving problems that arise. To learn more about cloud computing and the subscription model, contact a Nubifer.com representative.

Microsoft Releases Security Guidelines for Windows Azure

Industry analysts have praised Microsoft for doing a respectable job at ensuring the security of its Business Productivity Online Services, Windows and SQL Azure. With that said, deploying applications to the cloud requires additional considerations to ensure that data remains in the correct hands.

In early June, as a result of these concerns, Microsoft released new guidance based on its Security Development Lifecycle. The Security Development Lifecycle, a statement of best practices for those building Windows and .NET applications, has been updated over the years to ensure the security of those apps; the new guidance focuses on how to build security into Windows Azure applications.

Principal security program manager of Microsoft’s Security Development Lifecycle team Michael Howard warns that those practices were not, however, designed for the cloud. Speaking in a pre-recorded video statement embedded in a blog entry, Howard says, “Many corporations want to move their applications to the cloud but that changes the threats, the threat scenarios change substantially.”

Titled “Security Best Practices for Developing Windows Azure Applications,” the 26-page white paper is divided into three sections. The first describes the security technologies that are part of Windows Azure, including the Windows Identity Foundation, the Windows Azure App Fabric Access Control Service and Active Directory Federation Services 2.0 (a core component for providing common logins to Windows Server and Azure). The second explains how developers can apply the various SDL practices to build more secure Windows Azure applications, outlining threats like namespace configuration issues and recommending data security practices such as how to generate shared-access signatures and the use of HTTPS in request URLs. The third is a matrix that identifies various threats and how to address them.
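
The shared-access-signature recommendation boils down to a familiar technique: grant time-limited access to a resource by signing the URL with a secret key and always serving it over HTTPS. The sketch below illustrates that general idea with a plain HMAC; it is not the exact Windows Azure signature format, and the host name and key are hypothetical.

```python
# Generic illustration of a time-limited, HMAC-signed URL (the idea behind
# shared-access signatures). NOT the exact Windows Azure signature format.
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

SECRET_KEY = b"account-secret-key"   # hypothetical shared secret

def signed_url(resource_path, valid_for_seconds=3600):
    expiry = int(time.time()) + valid_for_seconds
    string_to_sign = "{0}\n{1}".format(resource_path, expiry).encode("utf-8")
    digest = hmac.new(SECRET_KEY, string_to_sign, hashlib.sha256).digest()
    signature = quote(base64.b64encode(digest).decode("ascii"))
    # Always hand out the link over HTTPS, as the white paper recommends.
    return "https://storage.example.com{0}?se={1}&sig={2}".format(
        resource_path, expiry, signature)

print(signed_url("/container/report.pdf"))
```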

Says Howard, “Some of those threat mitigations can be technologies you use from Windows Azure and some of them are threat mitigations that you must be aware of and build into your application.”

Security is a major concern, and Microsoft has addressed many key issues concerning security in the cloud. Phil Lieberman, president of Lieberman Software Corp., a Microsoft Gold Certified Partner specializing in enterprise security, says, “By Microsoft providing extensive training and guidance on how to properly and securely use its cloud platform, it can overcome customer resistance at all levels and achieve revenue growth as well as dominance in this new area. This strategy can ultimately provide significant growth for Microsoft.”

Agreeing with Lieberman, Scott Matsumoto, a principal consultant with the Washington, D.C.-based consultancy firm Cigital Inc., which specializes in security, says, “I especially like the fact that they discuss what the platform does and what’s still the responsibility of the application developer. I think that it could be [wrongly] dismissed as a rehash of other information or incomplete—that would be unfair.” To find more research on Cloud Security, please visit Nubifer.com.

Microsoft Makes Strides for a More Secure and Trustworthy Cloud

Cloud computing currently holds court in the IT industry with vendors, service providers, press, analysts and customers all evaluating and discussing the opportunities presented by the cloud.

Security is a very important piece of the puzzle, and nearly every day a new press article or analyst report indicates that cloud security and privacy are top concerns for customers as the benefits of cloud computing continue to unfold. For example, a recent Microsoft survey revealed that although 86% of senior business leaders are thrilled about cloud computing, over 75% remain concerned about the security, access and privacy of data in the cloud.

Customers are correct in asking how cloud vendors are working to ensure the security of cloud applications, the privacy of individuals and protection of data. In March, Microsoft CEO Steve Ballmer told an audience at the University of Washington that, “This is a dimension of the cloud, and it’s a dimension of the cloud that needs all of our best work.”

Microsoft is seeking to address security-related concerns and help customers understand which questions they need to ask as part of its Trustworthy Computing efforts. The company is trying to be more transparent than its competitors about how it helps enable an increasingly secure cloud.

Server and Tools Business president Bob Muglia approached the issue in his recent keynote at Microsoft’s TechEd North America conference, saying, “The data that you have in your organization is yours. We’re not confused about that, and it’s incumbent on us to help you protect that information for you. Microsoft’s strategy is to deliver software, services and tools that enable customers to realize the benefits of a cloud-based model with the reliability and security of on-premise software.”

The Microsoft Global Foundations Services (GFS) site is a resource for users to learn about Microsoft’s cloud security efforts, with the white papers “Securing Microsoft’s Cloud Infrastructure” and “Microsoft’s Compliance Framework for Online Services” being very informative.

GFS drives a comprehensive, centralized Information Security Program for all Microsoft cloud data centers and the 200+ consumer and commercial services they deliver, all built using the Microsoft Security Development Lifecycle. The program covers everything from physical security to compliance, including a risk management process (incident response and work with law enforcement); defense-in-depth security controls across the physical, network, identity and access, host, application and data layers; a comprehensive compliance framework to address standards and regulations such as PCI, SOX, HIPAA and the Media Ratings Council; and third-party auditing, validation and certification (ISO 27001, SAS 70).

Muglia also pointed out Microsoft’s focus on identity, saying, “As you move to cloud services you will have a number of vendors, and you will need a common identity system.” In general, identity is the cornerstone of security, especially cloud security. Microsoft currently provides technologies with Windows Server and cloud offerings which customers can use to extend existing investments in identity infrastructure (like Active Directory) for easier and more secure access to cloud services.

Microsoft is not alone in working on cloud security, as noted by Microsoft’s chief privacy strategist Peter Cullen. “These truly are issues that no one company, industry or sector can tackle in isolation. So it is important to start these dialogs in earnest and include a diverse range of stakeholders from every corner of the globe,” Cullen said in his keynote at the Computers, Freedom and Privacy (CFP) conference. Microsoft is working with customers, governments, law enforcement, partners and industry organizations (like the Cloud Security Alliance) to ensure more secure and trustworthy cloud computing through strategies and technologies. To receive additional information on cloud security, contact a Nubifer.com representative today.

Don’t Underestimate a Small Start in Cloud Computing

Although many predict that cloud computing will forever alter the economics and strategic direction of corporate IT, it is likely that the impact of the cloud will continue to come largely from small projects. Some users and analysts say that these small projects, rather than complex, enterprise-class computing-on-demand services, are what to look out for.

David Tapper, outsourcing and offshoring analyst for IDC says, “What we’re seeing is a lot of companies using Google (GOOG) Apps, Salesforce and other SaaS apps, and sometimes platform-as-a-service providers, to support specific applications. A lot of those services are aimed at consumers, but they’re just as relevant in business environments, and they’re starting to make it obvious that a lot of IT functions are generic enough that you don’t need to build them yourself.” New enterprise offerings from Microsoft, such as Microsoft BPOS, have also shown up on the scene with powerful SaaS features to offer businesses.

According to Tapper, the largest representation of mini-cloud computing is small- and mid-sized businesses using commercial versions of Google Mail, Google Apps and similar ad hoc or low-cost cloud-based applications. With that said, larger companies are doing the exact same thing. “Large companies will have users whose data are confidential or who need certain functions, but for most of them, Google Apps is secure enough. We do hear about some very large cloud contracts, so there is serious work going on. They’re not the rule though,” says Tapper.

First Steps into the Cloud

A poll conducted by the Pew Research Center’s Internet & American Life Project found that 71 percent of the “technology stakeholders and critics” surveyed believe that most people will do their work from a range of computing devices using Internet-based applications as their primary tools by 2020.

Respondents were picked from technology and analyst companies for their technical savvy and as a whole believe cloud computing will dominate information transactions by the end of the decade. The June report states that cloud computing will be adopted because of its ability to provide new functions quickly, cheaply and from anywhere the user wishes to work.

Chris Wolf, analyst at Gartner, Inc.’s Burton Group, thinks that while this isn’t unreasonable, it may be a little too optimistic. Wolf says that even fairly large companies sometimes use commercial versions of Google Mail or instant messaging, but it is a different story when it comes to applications requiring more fine tuning, porting, communications middleware or other heavy work to run on public clouds, or data that has to be protected and documented.

Says Wolf, “We see a lot of things going to clouds that aren’t particularly sensitive–training workloads, dev and test environments, SaaS apps; we’re starting to hear complaints about things that fall outside of IT completely, like rogue projects on cloud services. Until there are some standards for security and compliance, most enterprises will continue to move pretty slowly putting critical workloads in those environments. Right now all the security providers are rolling their own and it’s up to the security auditors to say if you’re in compliance with whatever rules govern that data.”

Small, focused projects using cloud technologies are becoming more common, in addition to the use of commercial cloud-based services, says Tapper.

For example, Beth Israel Deaconess Hospital in Boston elevated a set of VMware (VMW) physical and virtual servers into a cloud-like environment to create an interface to its patient-records and accounting systems, enabling hundreds of IT-starved physician offices to link up with the use of just one browser.

New York’s Museum of Modern Art started using workgroup-on-demand computing systems from CloudSoft Corp. last year. This allowed the museum to create online workspaces for short-term projects that would otherwise have required real or virtual servers and storage on-site.

In about a decade, cloud computing will make it clear to both IT and business management that some IT functions are just as generic when they’re homegrown as when they’re rented. Says Tapper, “Productivity apps are the same for the people at the top as the people at the bottom. Why buy it and make IT spend 80 percent of its time maintaining essentially generic technology?” Contact Nubifer.com to learn more.

Nubifer Cloud:Link Mobile and Why Windows Phone 7 is Worth the Wait

Sure, Android devices become more cutting-edge with each near-monthly release and Apple recently unveiled its new iPhone, but some industry experts suggest that Windows Phone 7 is worth the wait. Additionally, businesses may benefit from waiting until Windows Phone 7 arrives to properly compare the benefits and drawbacks of all three platforms before making a decision.

Everyone is buzzing about the next-generation iPhone and smartphones like the HTC Incredible and HTC EVO 4G, but iPhone and Android aren’t even the top smartphone platforms. With more market share than second-place Apple and third-place Microsoft combined, RIM remains the number one smartphone platform. Despite significant gains since its launch, Android is in fourth place, with only 60 percent as much market share as Microsoft.

So what gives? In two words: the business market. While iPhone was revolutionary for merging the line between consumer gadget and business tool, RIM has established itself as synonymous with mobile business communications. Apple and Google don’t provide infrastructure integration or management tools comparable to those available with the Blackberry Enterprise Server (BES).

The continued divide between consumer and business is highlighted by the fact that Microsoft is still in third place with 15 percent market share. Apple and Google continue to leapfrog one another while RIM and Microsoft are waiting to make their move.

The long delay in new smartphone technology from Microsoft is the result of leadership shakeups and the fact that Microsoft completely reinvented its mobile strategy, starting from scratch. Windows Phone 7 isn’t merely an incremental evolution of Windows Mobile 6.5. Rather, Microsoft went back to the drawing board to create an entirely new OS platform that recognizes the difference between a desktop PC and a smartphone as opposed to assuming that the smartphone is a scaled-down Windows PC.

Slated to arrive later this year, Windows Phone 7 smartphones promise an attractive combination of the intuitive touch interface and experience found in the iPhone and Android, as well as the integration and native apps to tie in with the Microsoft server infrastructure that comprises the backbone of most customers’ network and communications architecture.

With that said, the Windows Phone 7 platform won’t be without its own set of issues. Like Apple’s iPhone, Windows Phone 7 is expected to lack true multitasking and the copy and paste functionality from the get-go. Additionally, Microsoft is also locking down the environment with hardware and software restrictions that limit how smartphone manufacturers can customize the devices, and doing away with all backward compatibility with existing Windows Mobile hardware and apps.

Cloud computing today touches many devices and end points, from application servers to desktops to the burgeoning ecosystem of smartphones. When studying the landscape of mobile operating systems and the technology capabilities of today’s smartphones, you start to see a whole new and exciting layer of technology for consumers and business people alike.

Given the rich capabilities of Windows Phone 7, including Silverlight and XNA technology, we at Nubifer have been compelled to engineer upgrades to our cloud services to inter-operate with the powerful new technologies Windows Phone 7 will offer. At Nubifer, we plan to support many popular smartphones and handset devices by linking them to our Nubifer Cloud:Link technology, with extended functionality delivered by Nubifer Cloud:Connector and Cloud:Portal. These enable enterprise companies to gain a deeper view into the analytics and human-computer interaction of end users and subscribers of the various owned and leased software systems hosted entirely in the cloud or via the hybrid model.

It makes sense for companies that don’t need to replace their smartphones immediately to wait for Windows Phone 7 to arrive, at which point all three platforms can be compared and contrasted. May the best smartphone win!

Cloud Computing in 2010

A recent research study by the Pew Internet & American Life Project released on June 11 found that most people expect to “access software applications online and share and access information through the use of remote server networks, rather than depending primarily on tools and information housed on their individual, personal computers” by 2020. This means that the term “cloud computing” will likely be referred to as simply “computing” ten years down the line.

The report points out that we are currently on that path when it comes to social networking, thanks to sites like Twitter and Facebook. We also communicate in the cloud using services like Yahoo Mail and Gmail, shop in the cloud on sites like Amazon and eBay, listen to music in the cloud on Pandora, share pictures in the cloud on Flickr and watch videos on cloud sites like Hulu and YouTube.

The more advanced among us are even using services like Google Docs, Scribd or Docs.com to create, share or store documents in the cloud. With that said, it will be some time before desktop computing falls away completely.

The report says: “Some respondents observed that putting all or most of their faith in remotely accessible tools and data puts a lot of trust in the humans and devices controlling the clouds and exercising gatekeeping functions over access to that data. They expressed concerns that cloud dominance by a small number of large firms may constrict the Internet’s openness and its capability to inspire innovation—that people are giving up some degree of choice and control in exchange for streamlined simplicity. A number of people said cloud computing presents difficult security problems and further exposes private information to governments, corporations, thieves, opportunists, and human and machine error.”

For more information on the current state of Cloud Computing, contact Nubifer today.

The Impact of Leveraging a Cloud Delivery Model

In a recent discussion about the positive shift in the cloud computing discourse toward actionable steps rather than philosophical rants about definitions, .NET Developer’s Journal issued a list of five things not to do. The first mistake on the list of five (which also included #2 assuming server virtualization is enough, #3 not understanding service dependencies, #4 leveraging traditional monitoring and #5 not understanding internal/external costs) was not understanding the business value. Failing to understand the business impact of leveraging a cloud delivery model for a given application or service is a crucial mistake, but it can be avoided.

When evaluating a cloud delivery option, it is important to first define the service. Consider: is it new to you, or are you considering porting an existing service? If it is new, there is a lower financial bar to justify a cloud model, but the downside is a lack of historical perspective on consumption trends to aid in evaluating financial considerations or performance.

Assuming you choose a new service, the next step is to address why you are looking at Cloud, which may require some to be honest about their reasons. Possible reasons for looking at cloud include: your business requires a highly scalable solution; your data center is out of capacity; you anticipate this to be a short-lived service; you need to collaborate with a business partner on neutral territory; your business has capital constraints.

All of the previously listed reasons are good reasons to consider a cloud option. Yet if you are considering this option because it takes weeks, or even months, to get a new server into production; because your operations team lacks credibility when it comes to maintaining a highly available service; or because your internal cost allocation models are appalling, you may need to reconsider. In these cases, there may be some in-house improvements that need to be made before exploring a cloud option.

An important lesson to consider is that just because you can do something doesn’t mean you necessarily should, and this is easily applicable in this situation. Many firms have had disastrous results in the past when they exposed legacy internal applications to the Internet. The following questions must be answered when thinking about moving applications/services to the Cloud:

· Does the application consume or generate data with jurisdictional requirements?

· Will your company face fines or a public relations scandal if there is a security breach or data loss?

· What part of your business value chain is exposed if the service runs poorly? (And are there critical systems that rely on it?)

· What if the application/service doesn’t run at all? (Will you be left stranded, or are there alternatives that will allow the business to remain functioning?)

Embracing Cloud services—public or private—comes with tremendous benefits, yet a constant dialogue about the business value of the service in question is required to reap the rewards. To discuss the benefits of adopting a hybrid On-Prem/Cloud solution contact Nubifer today.

Asigra Introduces Cloud Backup Plan

Cloud backup and recovery software provider Asigra announced the launch of Cloud Backup v10 on June 8. Available through the Asigra partner network, the latest edition extends the scope and performance of the Asigra platform, including protection for laptops, desktops, servers, data centers and cloud computing environments with tiered recovery options to meet Recovery Time Objectives (RTOs). Organizations can select an Asigra service provider for offsite backup, choose to deploy the software directly onsite, or both. Pricing begins at $50 per month through cloud backup service providers.

V10 expands the tiers of backup and recovery (Local-Only Backup, plus Backup Lifecycle Manager (BLM), which enables cloud storage) and also allows the backup of laptops in the field and other environments, enabling businesses to back up and recover their data to and from physical servers, virtual servers or both. Among the features are DS-Mobile support to back up laptops in the field, FIPS 140-2 NIST-certified security and encryption of data in-flight and at-rest, and new backup sets for comprehensive protection of enterprise applications, including MS Exchange, MS SharePoint, MS SQL, Windows Hyper-V, Oracle SBT, Sybase and Local-Only backup.

Senior analyst at the Enterprise Strategy Group Lauren Whitehouse said, “The local backup option is a powerful benefit for managed service providers (MSPs) as they can now offer more pricing granularity for customers on three levels—local, new and aging data. With more pricing flexibility, they can offer a more reliable and affordable backup service package to attract more business customers and free them from the pain of tape backup.”

At least two-thirds of companies in North America and Europe have already implemented server virtualization, according to Forrester Research. Asigra added enhancements to the virtualization support in v10 as a response to the major server virtualization vendors embracing the cloud as the strategic deliverable of a virtualized infrastructure. The company has offered support for virtual machine backups at the host level; Cloud Backup v10 is able to be deployed as a virtual appliance with virtual infrastructures. The company said that the current version now supports Hyper-V, VMware and XenServer.

“The availability of Asigra Cloud Backup v10 has reset the playing field for Asigra with end-to-end data protection from the laptop to the data center to the public cloud. With advanced features that differentiate Asigra both technologically and economically from comparable solutions, the platform can adapt to the changing nature of today’s IT environments, providing unmatched backup efficiency and security as well as the ability to respond to dynamic business challenges,” said executive vice president for Asigra Eran Farajun. To discover how a cloud back-up system can benefit your enterprise, contact Nubifer Inc.

The Future of Enterprise Software in the Cloud

Although there is currently a lot of discussion regarding the impact that cloud computing and Software-as-a-Service will have on enterprise software, it comes mainly from a financial standpoint. It is now time to begin understanding how enterprise software as we know it will evolve across a federated set of private and public cloud services.

The strategic direction being taken by Epicor is a prime example of the direction enterprise software is heading. A provider of ERP software for the mid-market, Epicor is taking a sophisticated approach by allowing customers to host some components of the Epicor suite on-premise rather than focusing solely on hosting software in the cloud. Other components are delivered as a service.

Epicor is a Microsoft software partner that subscribes to the Software Plus Services mantra and as such is moving to offer some elements of its software, like the Web server and SQL server components, as an optional service. Customers would be able to invoke this on the Microsoft Azure cloud computing platform.

Basically, Epicor is going to let customers deploy software components where they make the most sense, based on the needs of customers on an individual basis. This is in contrast to proclaiming that one model of software delivery is better than another model.

Eventually, every customer is going to require a mixed environment, even those that prefer on-premise software, because they will discover that hosting some applications locally and in the cloud simultaneously will allow them to run a global operation 24 hours a day, 7 days a week more easily.

Much of the argument over how software is delivered in the enterprise will melt away as customers begin to view the cloud as merely an extension of their internal IT operations. To learn more about how the future of software in the cloud can aid your enterprise, schedule a discussion time with a Nubifer consultant today.

What Cloud APIs Reveal about the Budding Cloud Market

Although cloud computing remains hard to define, one of its essential characteristics is programmatic access to virtually unlimited network, compute and storage resources. The foundation of a cloud is a solid Application Programming Interface (API), despite the fact that many users access cloud computing through consoles and third-party applications.

CloudSwitch works with several cloud providers and thus is able to interact with a variety of cloud APIs (both active and about-to-be-released versions). CloudSwitch has come up with some impressions after working with both the APIs and those implementing them.

First, clouds remain different in spite of constant discussion about standards. Cloud APIs have to cover more than start/stop/delete a server, and once the API crosses into provisioning the infrastructure (network ranges, storage capacity, geography, accounts, etc.), it all starts to get interesting.

Second, a very strong infrastructure is required for a cloud to function as it should. For public clouds, the infrastructure must be good enough to sell to others. Key elements of the cloud API can tell you about the infrastructure, what tradeoffs the cloud provider has made and the impact on end users, if you know what to look for.

Third, APIs are evolving fast, just like cloud capabilities. New API calls and expansions of existing functions are now a reality as cloud providers add new capabilities and features. We are also discussing on-the-horizon services with cloud providers and what form their APIs are poised to take. This is a perfect opportunity to leverage the experience and work of companies like CloudSwitch as a means to integrate these new capabilities into a coherent data model.

When you look at the functions beyond simple virtual machine control, an API can give you an indication of what is happening in the cloud. Some like to take a peek at the network and storage APIs in order to understand how the cloud is built. Take Amazon, for example. In Amazon, the base network design is that each virtual server receives both a public and a private IP address. These addresses are assigned from a pool based on the location of the machine within the infrastructure. Even though there are two IP addresses, the public one is simply routed (NAT’ed) to the private address. With Amazon, you only have a single network interface on your server, which is a simple and scalable architecture for the cloud provider to support. This will cause problems for applications requiring at least two NICs, such as some cluster applications.
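
For readers who want to see the Amazon model from the API side, the short sketch below uses the boto library of that era to list each instance’s public and private addresses. It assumes boto is installed and AWS credentials are available in the environment; attribute names may differ in later SDKs.

```python
# List each EC2 instance's public and private addresses, illustrating the
# "one public address NAT'ed to one private address" model described above.
# Assumes boto is installed and AWS credentials are configured.
import boto

conn = boto.connect_ec2()                    # reads AWS keys from env/config
for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print(instance.id,
              "public:", instance.ip_address,           # NAT'ed public address
              "private:", instance.private_ip_address)  # the single NIC's address
```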

Terremark’s cloud offering is in stark contrast to Amazon’s. IP addresses are defined by the provider so traffic can be routed to your servers, as with Amazon, but Terremark allocates a range for your use when you first sign up (while Amazon uses a generic pool of addresses). This can be seen as a positive because there is better control over the assignment of network addresses, but the flip side is potential scaling issues because you only have a limited number of addresses to work with. Additionally, you can assign up to four NICs to each server in Terremark’s Enterprise cloud, which allows you to create more complex network topologies and support applications requiring multiple networks for proper operation.

One important thing to consider is that with the Terremark model, servers only have internal addresses. There is no default public NAT address for each server, as with Amazon. Instead, Terremark has created a front-end load balancer that can be used to connect a public IP address to a specified set of servers by protocol and port. You must first create an “Internal Service” (in Terremark’s terminology) that defines a public IP/port/protocol combination. Next, assign a server and port to the Service, which creates a connection. Since this is a load balancer, you can add more than one server to each public IP/port/protocol group. Amazon has a load balancer function as well; although it isn’t required to connect public addresses to your cloud servers, it does support connecting multiple servers to a single public IP address.
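
A conceptual sketch of that Internal Service model, with hypothetical addresses and server names (these are not actual Terremark API calls), might look like this:

```python
# Conceptual model of the Terremark "Internal Service" described above:
# a public IP/port/protocol group mapped to one or more internal servers
# behind the front-end load balancer. Names and addresses are hypothetical.
internal_services = {
    # (public IP, port, protocol) -> internal servers handling that traffic
    ("203.0.113.10", 443, "TCP"): ["web-01:443", "web-02:443"],
    ("203.0.113.10", 25,  "TCP"): ["mail-01:25"],
}

def backends_for(public_ip, port, protocol):
    """Return the internal servers the load balancer would route this traffic to."""
    return internal_services.get((public_ip, port, protocol), [])

print(backends_for("203.0.113.10", 443, "TCP"))   # ['web-01:443', 'web-02:443']
```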

When it comes down to it, the APIs and the feature sets they define tell a lot about the capabilities and design of a cloud infrastructure. The end user features, flexibility and scalability of the whole service will be impacted by decisions made at the infrastructure level (such as network address allocation, virtual device support and load balancers). It is important to look down to the API level when considering what cloud environment you want because it helps you to better understand how the cloud providers’ infrastructure decisions will impact your deployments.

Although building a cloud is complicated, it can provide a powerful resource when implemented correctly. Clouds with different “sweet spots” emerge as cloud providers choose key components and a base architecture for their service. You can span these different clouds and put the right application in the right environment with CloudSwitch. To schedule a time to discuss how cloud computing can help your enterprise, contact Nubifer today.

App Engine and VMware Plans Show Google’s Enterprise Focus

Google opened its Google I/O developer conference in San Francisco on May 19 with the announcement of its new version of the Google App Engine, Google App Engine for Business. This was a strategic announcement, as it shows Google is focused on demonstrating its enterprise chops. Google also highlighted its partnership with VMware to bring enterprise Java developers to the cloud.

Vic Gundotra, vice president of engineering at Google said via a blog post: “… we’re announcing Google App Engine for Business, which offers new features that enable companies to build internal applications on the same reliable, scalable and secure infrastructure that we at Google use for our own apps. For greater cloud portability, we’re also teaming up with VMware to make it easier for companies to build rich web apps and deploy them to the cloud of their choice or on-premise. In just one click, users of the new versions of SpringSource Tool Suite and Google Web Toolkit can deploy their application to Google App Engine for Business, a VMware environment or other infrastructure, such as Amazon EC2.”

Enterprise organizations can build and maintain their own applications on the same scalable infrastructure that powers Google applications with Google App Engine for Business. Additionally, Google App Engine for Business adds management and support features tailored for the enterprise. New capabilities with this platform include: the ability to manage all the apps in an organization in one place; premium developer support; simple pricing based on users and applications; a 99.9 percent uptime service-level agreement (SLA); and access to premium features such as cloud-based SQL and SSL (coming later this year).

Kevin Gibbs, technical lead and manager of the Google App Engine project said during the May 18 Google I/O keynote that “managing all the apps at your company” is a prevalent issue for enterprise Web developers. Google sought to address this concern through its Google App Engine hosting platform but discovered it needed to shore it up to support enterprises. Said Gibbs, “Google App Engine for Business is built from the ground up around solving the problems that enterprises face.”

Product management director for developer technology at Google Eric Tholome told eWEEK that Google App Engine for Business allows developers to use standards-based technology, such as Java, the Eclipse IDE, Google Web Toolkit (GWT) and Python, to create applications that run on the platform. Google App Engine for Business also delivers dynamic scaling, flat-rate pricing and consistent availability to users.

Gibbs revealed that Google will be doling out the features in Google App Engine for Business throughout the rest of 2010, with Google’s May 19 announcement acting as a preview of the platform. The platform includes an Enterprise Administration Console, a company-based console which allows users to see, manage and set security policies for all applications in their domain. The company’s road map states that features like support, the SLA, billing, hosted SQL and custom domain SSL will come at a later date.

Gibbs said that pricing for Google App Engine for Business will be $8 per month per user for each application with the maximum being $1,000 per application per month.
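
A quick worked example of that pricing, as quoted, shows where the cap takes effect (at 125 users per application):

```python
# App Engine for Business pricing as quoted above: $8 per user per month
# for each application, capped at $1,000 per application per month.
def monthly_app_cost(users, per_user=8, cap=1000):
    return min(users * per_user, cap)

for users in (50, 125, 500):
    print(users, "users ->", monthly_app_cost(users), "USD/month")
# 50 users -> 400, 125 users -> 1000, 500 users -> 1000 (the cap applies)
```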

Google also announced a series of technology collaborations with VMware. The goal of these is to deliver solutions that make enterprise software developers more efficient at building, deploying and managing applications within all types of cloud environments.

President and CEO of VMware Paul Maritz said, “Companies are actively looking to move toward cloud computing. They are certainly attracted by the economic advantages associated with cloud, but increasingly are focused on the business agility and innovation promised by cloud computing. VMware and Google are aligning to reassure our mutual customers that cloud portability is important to both companies. We will work to ensure that modern applications can run smoothly within the firewalls of a company’s data center or out in the public cloud environment.”

Google is essentially trying to pick up speed in the enterprise, with Java developers using the popular Spring Framework (stemming from VMware’s SpringSource division). Recently, VMware did a similar partnership with Salesforce.com.

Maritz continued to say to the audience at Google I/O, “More than half of the new lines of Java code written are written in the context of Spring. We’re providing the back-end to add to what Google provides on the front end. We have integrated the Spring Framework with Google Web Toolkit to offer an end-to-end environment.”

Google and VMware are teaming up in multiple ways to make cloud applications more productive, portable and flexible. These collaborations will enable Java developers to build rich Web applications, use Google and VMware performance tools on cloud apps and subsequently deploy Spring Java applications on Google App Engine.

Google’s Gundotra explained, “Developers are looking for faster ways to build and run great Web applications, and businesses want platforms that are open and flexible. By working with VMware to bring cloud portability to the enterprise, we are making it easy for developers to deploy rich Java applications in the environments of their choice.”

Google’s support for Spring Java apps on Google App Engine is part of a shared vision to make building, running and managing cloud applications easier, and to do so in a way that keeps those applications portable across clouds. Developers can build applications using the Eclipse-based SpringSource Tool Suite and retain the flexibility to deploy them in their current private VMware vSphere environment, in VMware vCloud partner clouds or directly to Google App Engine.

Google and VMware are also collaborating to combine the development speed of Spring Roo, a next-generation rapid application development tool, with the power of the Google Web Toolkit to create rich browser apps. These GWT-powered applications deliver a compelling end-user experience on computers and smartphones by leveraging modern browser technologies like HTML5 and AJAX.

With the goal of enabling end-to-end performance visibility for cloud applications built using Spring and the Google Web Toolkit, the companies are collaborating to more tightly integrate VMware’s Spring Insight performance tracing technology, which ships in the SpringSource tc Server application server, with Google’s Speed Tracer technology.

Speaking about the Google/VMware partnership, vice president at Nucleus Research Rebecca Wettemann told eWEEK, “In short, this is a necessary step for Google to stay relevant in the enterprise cloud space. One concern we have heard from those who have been slow to adopt the cloud is being ‘trapped on a proprietary platform.’ This enables developers to use existing skills to build and deploy cloud apps and then take advantage of the economies of the cloud. Obviously, this is similar to Salesforce.com’s recent announcement about its partnership with VMware–we’ll be watching to see how enterprises adopt both. To date, Salesforce.com has been better at getting enterprise developers to develop business apps for its cloud platform.”

For his part, Frank Gillett, an analyst with Forrester Research, describes the Google/VMware partnership as more “revolutionary” and the Salesforce.com/VMware partnership to create VMforce as “evolutionary.”

“Java developers now have a full Platform-as-a-Service [PaaS] place to go rather than have to provide that platform for themselves,” said Gillett of the new Google/VMware partnership. He added, however, “What’s interesting is that IBM, Oracle and SAP have not come out with their own Java cloud platforms. I think we’ll see VMware make another deal or two with other service providers. And we’ll see more enterprise-application-focused offerings from Oracle, SAP and IBM.”

Google’s recent enterprise moves show that the company is set on gaining more of the enterprise market by enabling organizations to buy applications from others through the Google Apps Marketplace (and the recently announced Chrome Web Store), buy from Google with Google Apps for Business, or build their own enterprise applications with Google App Engine for Business. Nubifer Inc. is a leading research and consulting firm specializing in Cloud Computing and Software-as-a-Service.

Cloud Computing Business Models on the Horizon

Everyone is wondering what will follow SaaS, PaaS and IaaS, so here is a tutorial on some of the emerging cloud computing business models on the horizon.

Computing arbitrage:

Companies like broadband.com are buying bandwidth at a wholesale rate and reselling it to companies to meet their specific needs. Peekfon, for example, began buying data bandwidth in bulk and slicing it up to sell to its customers as a way to solve the problem of expensive roaming for customers in Europe. The company was able to negotiate with the operators to buy bandwidth in bulk because it intentionally decided to steer away from voice plans, and it also used heavy compression on its devices to optimize the bandwidth.

While elastic computing is an integral part of cloud computing, not all companies that want to leverage the cloud necessarily like it. Companies with unique cloud computing needs, such as fixed long-term computing that grows at a relatively fixed low rate plus seasonal peaks, have a problem that can easily be solved via intermediaries. Since building a cloud requires high capital expenditure, there will be fewer and fewer cloud providers, and being a “cloud VAR” could be a good value proposition for vendors that act as cloud systems integrators (“cloud SIs”) or have a portfolio of cloud management offerings.

App-driven and content-driven clouds:

Now that the competition between private and public clouds is nearly settled, it is time to think about vertical clouds. Compute needs depend on what is being computed: the application’s specific requirements, the nature and volume of the data being processed, and the kind of content being delivered. In the current SaaS world, vendors optimize their clouds to match their application and content needs, and some predict that a few companies will help ISVs by delivering app-centric and content-centric clouds.

For advocates of net neutrality, the current application-agnostic cloud neutrality is positive, but innovation on top of raw clouds is still needed. Developers need fine-grained knobs for CPU, I/O, main memory and the other varying needs of their applications. Today’s extensions are specific to a programming stack, like Heroku for Ruby, but the opportunity now exists to provide custom vertical extensions for an existing cloud, or to build a cloud that is purpose-built for a specific class of applications and offers a range of stack options underneath, making it easy for developers to leverage the cloud natively. Nubifer Inc. provides Cloud and SaaS Consulting services to enterprise companies.

U.S. Government Moves to the Cloud

The U.S. Recovery Accountability and Transparency Board recently announced the move of its Recovery.gov site to a cloud computing infrastructure. That infrastructure is powered by Amazon.com’s Elastic Compute Cloud (EC2) and will give the board more efficient computer operation, reduced costs and improved security.

Amazon Web Services’ (AWS) cloud technology was selected as the foundation for the move by Smartronix, which acted as the prime contractor on the migration made by the U.S. Recovery Accountability and Transparency Board. In the May 13 announcement, the board also said Recovery.gov is now the first government-wide system to move into the cloud.

The U.S. government’s official Website that provides easy access to data related to Recovery Act spending, Recovery.gov allows for the reporting of potential fraud, waste and abuse. The American Recovery and Reinvestment Act of 2009 created the Recovery Accountability and Transparency Board with two goals in mind: to provide transparency related to the use of Recovery-related funds, and to prevent and detect fraud, waste and mismanagement.

CEO of Smartronix John Parris said of the announcement, “Smartronix is honored to have supported the Recovery Board’s historic achievement in taking Recovery.gov, the standard for open government, to the Amazon Elastic Compute Cloud (EC2). This is the first federal Website infrastructure to operate on the Amazon EC2 and was achieved due to the transparent and collaborative working relationship between Team Smartronix and our outstanding government client.”

The board anticipates that the move will save approximately $750,000 during its current budget cycle and result in long-term savings as well. For fiscal years 2010 and 2011, direct cost savings to the Recovery Board will be $334,800 and $420,000, respectively.

Aside from savings, the move to the cloud will free up resources and enable the board’s staff to focus on its core mission of providing Recovery.gov’s users with rich content, without worrying about managing the Website’s underlying data center and related computer equipment.

In a statement released in conjunction with the announcement, vice president of Amazon Web Services Adam Selipsky said, “Recovery.gov is demonstrating how government agencies are leveraging the Amazon Web Services cloud computing platform to run their technology infrastructure at a fraction of the cost of owning and managing it themselves. Building on AWS enables Recovery.gov to reap the benefits of the cloud–including the ability to add or shed resources as needed, paying only for resources used and freeing up scarce engineering resources from running technology infrastructure–all without sacrificing operational performance, reliability, or security.”

The Board’s Chairman, Earl Devaney, said, “Cloud computing strikes me as a perfect tool to help achieve greater transparency and accountability. Moving to the cloud allows us to provide better service at lower costs. I hope this development will inspire other government entities to accelerate their own efforts. The American taxpayers would be the winners.”

Board officials also said that greater protection against network attacks and real time detection of system tampering are some of the security improvements from the move. Amazon’s computer security platform has been essentially added to the Board’s own security system (which will continue to be maintained and operated by the Board’s staff).

President of Environmental Systems Research Institute (ESRI) Jack Dangermond also released a statement after the announcement was made. “Recovery.gov broke new ground in citizen participation in government and is now a pioneer in moving to the cloud. Opening government and sharing data through GIS are strengthening democratic processes of the nation,” said Dangermond. “The Recovery Board had the foresight to see the added value of empowering citizens to look at stimulus spending on a map, to explore their own neighborhoods, and overlay spending information with other information. This is much more revealing than simply presenting lists and charts and raises the bar for other federal agencies.” For more information please visit Nubifer.com.

EMC CEO Joe Tucci Predicts Many Clouds in the Future

EMC isn’t alone in focusing on cloud computing during the EMC World 2010 show, as IT vendors, analysts and the like are buzzing about the cloud. But according to EMC CEO Joe Tucci, the storage giant has a new prediction for the future of cloud computing. During his keynote speech on May 10, and a subsequent discussion with reporters and analysts, Tucci said that EMC’s vision of the future varies from others because it sees many private clouds. This exists in stark contrast with the vision of only a few vendors—like Google, Amazon and Microsoft—offering massive public clouds.

“There won’t be four, five or six giant cloud providers. At the end of the day, you’ll have tens of thousands of private clouds and hundreds of public clouds,” said Tucci.

EMC plans to take on the role of helping businesses move to private cloud environments, where IT administrators have the ability to view multiple data centers as a single pool of resources. These enterprises, with their private clouds, will also work with public cloud environments, according to Tucci.

The increased complexity and costs of current data centers serve as a catalyst for the demand for cloud computing models. Tucci says that the explosion of data—which comes from multiple sources, including the growth of mobile device users, medical imaging advancements, and increased access to broadband and smart devices—is poised to grow further. “Obviously, we need a new approach, because … infrastructures are too complex and too costly. Enter the cloud. This is the new approach,” Tucci said.

According to Tucci, clouds will be based mainly on x86 architectures, feature converged networks and federated resources and will be dynamic, secure, flexible, cost efficient and reliable. These clouds will also be accessible via multiple devices, a growing need due to the ever-increasing use of mobile devices.

EMC’s May 10 announcements were focused on the push for the private cloud, including the introduction of the VPlex appliances and an expanded networking strategy. Said Tucci, “Our mission is to be your guide and to help you on this journey to the private cloud.”

Tucci said that because of the high level of performance in x86 processors from Intel and Advanced Micro Devices, he isn’t predicting a long-term future for other architectures in cloud computing. As an example, Tucci pointed to Intel’s eight-core Xeon 7500 “Nehalem EX” processors, which can support up to 1 terabyte of memory, with systems OEMs prepping to unveil servers with as many as eight processors.

Speaking about the overall growth of x86 processor shipments and revenues, Tucci said that RISC architectures and mainframes will continue to slip: “What I’m saying is, we’re convinced, and everything that EMC does, and everything Cisco does, will be x86-based. Yes, we’re placing a bet on x86, and we’re going to an all-x86 world.” EMC is currently in the midst of a three-year process of migrating to a private cloud environment. This will include abandoning platforms like Solaris and moving to an all-x86 environment. For more information, please visit Nubifer.com.

Cloud-Optimized Infrastructure and New Services on the Horizon for Dell

Over the past three years, Dell has gained experience in the cloud through its Data Center Solutions group, which has designed customized offerings for cloud and hyperscale IT environments. The company is now putting that experience to use, releasing several new hardware, software and service offerings optimized for cloud computing environments. Dell officials launched the new offerings—which include a new partner program, new servers optimized for cloud computing and new services designed to help businesses migrate to the cloud—at a San Francisco event on March 24.

Based on work the Dell Data Center Solutions group has completed over the past three years, the new offerings were outlined by Valeria Knafo, senior manager of business development and business marketing for the DCS unit. According to Knafo, DCS has built customized computing infrastructures for large cloud service providers and hyperscale data centers and is now trying to make those solutions available to enterprises. Said Knafo, “We’ve taken that experience and brought it to a new set of users.”

Dell officials revealed that they have been working with Microsoft on its Windows Azure cloud platform and that the software giant will work with Dell to create joint cloud-based solutions. Dell and Microsoft will continue to collaborate around Windows Azure (including offering services) and Microsoft will continue buying Dell hardware for its Azure platform as well. Turnkey cloud solutions—including pre-tested and pre-assembled hardware, software and services packages that businesses can use to deploy and run their cloud infrastructures quickly—are among the new offerings.

A cloud solution for Web applications will be the first Platform-as-a-Service offering made available. It will combine Dell servers and services with Web application software from Joyent to address the challenges, caution Dell officials, of unpredictable traffic and of migrating apps from development to production. Dell is also offering a new Cloud Partner Program, which, according to officials, will broaden options for customers seeking to move into private or public clouds. Dell announced three new software companies as partners as well: Aster Data, Greenplum and Canonical.

Also on the horizon for Dell are its PowerEdge C-Series servers, which are designed to be energy efficient and offer features vital to hyperscaled environments (HPC, or high-performance computing, social networking, gaming, cloud computing and Web 2.0 workloads), such as large memory capacity and high performance. The C1100 (designed for clustered computing environments), the C2100 (for data analytics, cloud computing and cloud storage) and the C6100 (a four-node cloud and cluster system which offers a shared infrastructure) are the three servers that make up the family.

In unveiling the PowerEdge C-Series, Dell is joining the growing industry trend of offering new systems optimized for cloud computing. For example, on March 17 Fujitsu unveiled the Primergy CX1000, a rack server created to deliver the high performance such environments need while lowering costs and power consumption. The Primergy CX1000 can also save data center space through a design that pushes hot air from the system through the top of the enclosure rather than out the back.

Last, but certainly not least, are Dell’s Integrated Solution Services. They offer complete cloud lifecycle management and include workshops to assess a company’s readiness to move to the cloud. Knafo said that the services combine what Dell gained with the acquisition of Perot Systems and what it already had. “There’s a great interest in the cloud, and a lot of questions on how to get to the cloud. They want a path and a roadmap identifying what the cloud can bring,” said Knafo.

Mike Wilmington, a planner and strategist for Dell’s DCS group, claimed the services will decrease confusion many enterprises may have about the cloud. Said Wilmington, “Clouds are what the customer wants them to be,” meaning that while cloud computing may offer essentially the same benefits to all enterprises (cost reductions, flexibility, improved management and greater energy efficiency) it will look different for every enterprise. For more information please visit Nubifer.com.

Cisco, Verizon and Novell Make Announcements about Plans to Secure the Cloud

Cisco Systems, Verizon Business and Novell have announced plans to launch offerings designed to heighten security in the cloud.

On April 28, Cisco announced security services based around email and the Internet that are part of the company’s cloud protection push and its Secure Borderless Network architecture, which seeks to give users secure access to their corporate resources on any device, anywhere, at any time.

Cisco’s IronPort Email Data Loss Prevention and Encryption, and ScanSafe Web Intelligence Reporting are designed to work with Cisco’s other web security solutions to grant companies more flexibility when it comes to their security offerings while streamlining management requirements, increasing visibility and lowering costs.

Verizon and Novell announced on April 28 their plans to collaborate on an on-demand identity and access management service called Secure Access Services from Verizon. The service is designed to let enterprises decide and manage who is granted access to cloud-based resources. According to the companies, the identity-as-a-service solution is the first of what will be a host of joint offerings between Verizon and Novell.

According to eWeek, studies continuously indicate that businesses are likely to continue trending toward a cloud-computing environment. With that said, issues concerning security and access control remain key concerns. Officials from Cisco, Verizon and Novell say that the new services will allow businesses to feel more at ease while planning their cloud computing strategies.

“The cloud is a critical component of Cisco’s architectural approach, including its Secure Borderless Network architecture,” said vice president and general manager of Cisco’s Security technology business unit Tom Gillis in a statement. “Securing the cloud is highly challenging. But it is one of the top challenges that the industry must rise to meet as enterprises increasingly demand the flexibility, accessibility and ease of management that cloud-based applications offer for their mobile and distributed workforces.”

Cisco purchased ScanSafe in December 2009, and the result is Cisco’s ScanSafe Web Intelligence Reporting platform. The platform is designed to give users a better idea of how their Internet resources are being used, with the objective of ensuring that business-critical workloads aren’t being encumbered by non-business-related traffic. Cisco’s ScanSafe Web Intelligence Reporting platform can report on user-level data and information on Web communications activity within seconds, and offers over 80 predefined reports.

Designed to protect outbound email in the cloud, the IronPort email protection solution is aimed at enterprises that don’t want to manage their own email infrastructure. Cisco officials say that it provides hosted mailboxes (while letting customers keep control of email policies) and also offers the option of integrated encryption.

Officials say Cisco operates over 30 data centers around the globe and that its security offerings handle large quantities of activity each day—including 2.8 billion reputation look-ups, 2.5 billion web requests and the detection of more than 250 billion spam messages—and these services are the latest in the company’s expanding portfolio of cloud security offerings.

Verizon and Novell’s collaboration, the Secure Access Services, is designed to let enterprises move away from the cost and complexity associated with using traditional premises-based identity and access management software for securing applications. The new services offer centralized management of web access to applications and networks, in addition to identity federation and web single sign-on.

Novell CEO Ron Hovsepian released a statement saying, “Security and identity management are critical to accelerating cloud computing adoption and by teaming with Verizon we can deliver these important solutions.” While Verizon brings the security expertise, infrastructure, management capabilities and portal to the service, Novell provides the identity and security software. For more information contact a Nubifer representative today.

Cloud Interoperability Brought to Earth by Microsoft

Executives at Microsoft say that an interoperable cloud could help companies trying to lower costs and governments trying to connect constituents. Cloud services are increasingly seen as a way for businesses and governments to scale IT systems for the future, consolidate IT infrastructure, and enable innovative services not possible until now.

Technology vendors are seeking to identify and solve the issues created by operating in mixed IT environments in order to help organizations fully realize the benefits of cloud services. Additionally, vendors are collaborating to make sure that their products work well together. The industry may still be in the early stages of collaborating on cloud interoperability, but it has already made great strides.

So what exactly is cloud interoperability and how can it benefit companies now? Cloud interoperability specifically concerns one cloud solution working with other platforms and applications—not just other clouds. Customers want to be able to run applications locally or in the cloud, or even on a combination of both. Currently, Microsoft is collaborating with others in the industry and is working to make sure that the premise of cloud interoperability becomes an actuality.

Microsoft’s general managers Craig Shank and Jean Paoli are spearheading Microsoft’s interoperability efforts. Shank helms the company’s interoperability work on public policy and global standards and Paoli collaborates with the company’s product teams to cater product strategies to the needs of customers. According to Shank, one of the main attractions of the cloud is the amount of flexibility and control it gives customers. “There’s a tremendous level of creative energy around cloud services right now—and the industry is exploring new ideas and scenarios all the time. Our goal is to preserve that flexibility through an open approach to cloud interoperability,” says Shank.

Paoli chimes in to say, “This means continuing to create software that’s more open from the ground up, building products that support technologies such as PHP and Java, and ensuring that our existing products work with the cloud.” Both Shank and Paoli are confident that welcoming competition and choice will allow Microsoft to become more successful down the road. “This may seem surprising,” says Paoli, “but it creates more opportunities for its customers, partners and developers.”

Shank reveals that due to the buzz about the cloud, some forget the ultimate goal: “To be clear, cloud computing has enormous potential to stimulate economic growth and enable governments to reduce costs and expand services to citizens.” One example of the real-world benefits of cloud interoperability is the public sector, where Microsoft is currently showing results via solutions like its Eye on Earth project. Microsoft is helping the European Environment Agency simplify the collection and processing of environmental information for use by the general public and government officials. Eye on Earth obtains data from 22,000 water monitoring points and 1,000 stations that monitor air quality by employing Microsoft Windows Azure, Microsoft SQL Azure and existing Linux technologies. Eye on Earth then helps synthesize the information and makes it accessible to people in 24 different languages in real time.

Product developments like this emerged out of feedback channels which the company developed with its partners, customers and other vendors. In 2006, for example, Microsoft created the Interoperability Executive Customer (IEC) Council, which is comprised of 35 chief technology officers and chief information officers from a variety of organizations across the globe. The group meets twice a year in Redmond to discuss issues concerning interoperability and to provide feedback to Microsoft executives.

Additionally, Microsoft recently published a progress report which—for the first time—revealed operational details and results achieved by the Council across six work streams (or priority areas). The Council recently commissioned the creation of a seventh work stream for cloud interoperability geared towards developing standards related to the cloud which addressed topics like data portability, privacy, security and service policies.

Developers are an important part of cloud interoperability, and Microsoft is part of an effort the company co-founded with Zend Technologies, IBM and Rackspace called Simple Cloud. Simple Cloud was created to help developers write basic cloud applications that work on all major cloud platforms.

Microsoft is further engaging in the collaborative work of building technical “bridges” between the company and non-Microsoft technologies, like the recently released Microsoft Windows Azure Software Development Kits (SDKs) for PHP and Java, tools for the Windows Azure platform AppFabric SDKs for Java, PHP and Ruby (Eclipse version 1.0), the SQL CRUD Application Wizard for PHP and the Bing 404 Web Page Error Toolkit for PHP. These examples show the dedication of Microsoft’s interoperability team.

Despite the infancy of the industry’s collaboration on cloud interoperability issues, much progress has already been made. This progress has had a major positive impact on the way even average users work and live, even if they don’t realize it yet. A wide perspective and a creative and collaborative approach to problem-solving are required for cloud interoperability. In the future, Microsoft will continue to support more conversation within the industry in order to define cloud principles and make sure all points of view are incorporated. For more information please contact a Nubifer representative today.

Amazon Sets the Record Straight About the Top Five Myths Surrounding Cloud Computing

On April 19, the 5th International Cloud Computing Conference & Expo (Cloud Expo) opened in New York City, and Amazon Web Services (AWS) used the event as a platform to address some of what the company sees as the lingering myths about cloud computing.

AWS officials said that the company continues to grapple with questions about features of the cloud, ranging from reliability and security to cost and elasticity, despite being one of the first companies to successfully and profitably implement cloud computing solutions. Adam Selipsky, vice president of AWS, recently spoke about the persisting myths of cloud computing from Amazon’s Seattle headquarters, specifically addressing five that linger in the face of increased industry adoption of the cloud and continued successful cloud deployments. “We’ve seen a lot of misperceptions about what cloud computing is,” said Selipsky before debunking five common myths.

Myth 1: The Cloud Isn’t Reliable

Chief information officers (CIOs) in enterprise organizations have difficult jobs and are usually responsible for thousands of applications, explains Selipsky in his opening argument, adding that they feel like they are responsible for the performance and security of these applications. When problems with the applications arise, CIOs are used to approaching their own people for answers and take some comfort that there is a way to take control of the situation.

Selipsky says that customers need to consider a few things when adopting the cloud, one of which is that AWS’ operational performance is good. Selipsky reminded users that they own the data, they choose which location to store the data in (and it doesn’t move unless the customer decides to move it) and that, regardless of whether customers choose to encrypt, AWS never looks at the data.

“We have very strong data durability—we’ve designed Amazon S3 (Simple Storage Service) for eleven 9’s of durability. We store multiple copies of each object across multiple locations,” said Selipsky. He added that AWS has a “Versioning” feature which allows customers to revert to the last version of any object they somehow lose due to application failure or an unintentional deletion. Customers can also ensure additional fault-tolerant applications by deploying their applications in various Availability zones or using AWS’ Load Balancing and Auto Scaling features.
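
As a rough illustration of the “Versioning” feature Selipsky mentions, the sketch below enables versioning on a bucket using the AWS SDK for Python (boto3), which postdates this announcement; the bucket name is hypothetical and credentials are assumed to be configured in the environment.

```python
import boto3

s3 = boto3.client('s3')

# Enable the S3 Versioning feature described above, so earlier versions of an
# object can be restored after an accidental overwrite or deletion.
# The bucket name is hypothetical.
s3.put_bucket_versioning(
    Bucket='example-recovery-bucket',
    VersioningConfiguration={'Status': 'Enabled'},
)

# Listing object versions shows every retained copy of each key.
versions = s3.list_object_versions(Bucket='example-recovery-bucket')
for v in versions.get('Versions', []):
    print(v['Key'], v['VersionId'], v['IsLatest'])
```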

“And, all that comes with no capex [capital expenditures] for companies, a low per-unit cost where you only pay for what you consume, and the ability to focus your engineers on unique incremental value for your business,” said Selipsky, before adding that the reliability claims stem merely from an illusion of control, not actual control. “People think if they can control it they have more say in how things go. It’s like being in a car versus an airplane, but you’re much safer in a plane,” he explained.

Myth 2: The Cloud Provides Inadequate Security and Privacy

When it comes to security, Selipsky notes that it is an end-to-end process and thus companies need to build security at every level of the stack. Taking a look at Amazon’s cloud, it is easy to note that the same security isolations are employed as with a traditional data center—including physical data center security, separation of the network, isolation of the server hardware and isolation of storage. Data centers had already become a frequently-shared infrastructure on the physical data center side before Amazon launched its cloud services. Selipsky added that companies realized that they could benefit by renting space in a data facility as opposed to building it.

When speaking about security fundamentals, Selipsky noted that security can be maintained by providing badge-controlled access, guard stations, monitored security cameras, alarms, separate cages and strictly audited procedures and processes. Not only is Amazon Web Services’ data center security identical to the best practices employed in private data facilities, there is an added physical security advantage in the fact that customers don’t need access to the servers and networking gear inside. Access to the data center is thus controlled more strictly than in traditional rented facilities. Selipsky also added that, at the physical level, the Amazon cloud has equal or better isolation than could be expected from dedicated infrastructure.

In his argument, Selipsky pointed out that networks ceased to be isolated physical islands a long time ago because, as companies increasingly began to need to connect to other companies—and then the Internet—their networks became connected with public infrastructure. Firewalls and switch configurations and other special network functionality were used to prevent bad network traffic from getting in, or conversely from leaking out. Companies began using additional isolation techniques as their network traffic increasingly passed over public infrastructure to make sure that the security of every packet on (or leaving) their network remained secure. These techniques include Multi-protocol Label Switching (MPLS) and encryption.

Amazon used a similar approach to networking in its cloud by maintaining packet-level isolation of network traffic and supporting industry-standard encryption. Amazon Web Services’ Virtual Private Cloud allows a customer to establish their own IP address space, so customers can use the same tools and software infrastructure they are already familiar with to monitor and control their cloud networks. Amazon’s scale also allows for more investment in security policing and countermeasures than nearly any large corporation could afford. Maintains Selipsky, “Our security is strong and dug in at the DNA level.”

Amazon Web Services also invests significantly in testing and validating the security of its virtual server and storage environment. When discussing the investments made on the hardware side, Selipsky lists the following:

After customers release these resources, the server and storage are wiped clean so no important data can be left behind.

Intrusion from other running instances is prevented because each instance has its own customer firewall.

Those in need of more network isolation can use Amazon VPC, which allows you to carry your own IP address space with you into the cloud; your instances are accessible only through those IP addresses, which only you know.

Those desiring to run on their own boxes—where no other instances are running—can purchase extra large instances where only that XL instance runs on that server.

According to Selipsky, Amazon’s scale allows for more investment in security policing and countermeasures: “In fact, we often find that we can improve companies’ security posture when they use AWS. Take the example lots of CIOs worry about—the rogue server under a developer’s desk running something destructive or that the CIO doesn’t want running. Today, it’s really hard (if not impossible) for CIOs to know how many orphans there are and where they might be. With AWS, CIOs can make a single API call and see every system running in their VPC [Virtual Private Cloud]. No more hidden servers under the desk or anonymously placed servers in a rack and plugged into the corporate network. Finally, AWS is SAS-70 certified; ISO 27001 and NIST are in process.”
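
The “single API call” Selipsky describes corresponds to an instance-listing request scoped to a VPC. Below is a minimal sketch using boto3, a later SDK than the one available at the time of his remarks; the VPC ID is a placeholder.

```python
import boto3

ec2 = boto3.client('ec2')

# One call lists every instance running inside a given VPC -- the visibility
# that makes "hidden servers under the desk" hard to sustain.
# The VPC ID below is a placeholder.
response = ec2.describe_instances(
    Filters=[{'Name': 'vpc-id', 'Values': ['vpc-0123456789abcdef0']}]
)

for reservation in response['Reservations']:
    for instance in reservation['Instances']:
        print(instance['InstanceId'],
              instance['State']['Name'],
              instance.get('PrivateIpAddress'))
```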

Myth 3: Creating My Own In-House Cloud or Private Cloud Will Allow Me to Reap the Same Benefits of the Cloud

According to Selipsky, “There’s a lot of marketing going on about the concept of the ‘private cloud.’ We think there’s a bit of a misnomer here.” Selipsky continued to explain that, generally, “we often see companies struggling to accurately measure the cost of infrastructure. Scale and utilization are big advantages for AWS. In our opinion, a cloud has five key characteristics: it eliminates capex; allows you to pay for what you use; provides true elastic capacity to scale up and down; allows you to move very quickly and provision servers in minutes; and allows you to offload the undifferentiated heavy lifting of infrastructure so your engineers work on differentiating problems.”

Selipsky also pointed out the following drawbacks of private clouds: you still own the capex (and it is expensive), you do not pay only for what you use, you do not have true elasticity, and you still manage the undifferentiated heavy lifting. “With a private cloud you have to manage capacity very carefully … or you or your private cloud vendor will end up over-provisioning. So you’re going to have to either get very good at capacity management or you’re going to wind up overpaying,” said Selipsky, before challenging the elasticity of the private cloud: “The cloud is shapeless. But if it has a tight box around it, it no longer feels very cloud-like.”

One of AWS’ key offerings is Amazon’s ability to save customers money while also driving efficiency. “In virtually every case we’ve seen, we’ve been able to save people a significant amount of money,” said Selipsky. This is in part because AWS’ business has greatly expanded over the last four years and Amazon has achieved enough scale to secure very low costs. AWS has been able to aggregate hundreds of thousands of customers to have a high utilization of its infrastructure. Said Selipsky, “In our conversations with customers we see that really good enterprises are in the 20-30 percent range on utilization—and that’s when they’re good … many are not that strong. The cloud allows us to have several times that utilization. Finally, it’s worth looking at Amazon’s heritage and AWS’ history. We’re a company that works hard to lower its costs so that we can pass savings back to our customers. If you look at the history of AWS, that’s exactly what we’ve done (lowering price on EC2, S3, CloudFront, and AWS bandwidth multiple times already without any competitive pressure to do so).”

Myth 4: The Cloud Isn’t Ideal Because I Can’t Move Everything at Once

Selipsky debunks this myth by saying, “We believe this is nearly impossible and ill-advised. We recommend picking a few apps to gain experience and comfort then build a migration plan. This is what we most often see companies doing. Companies will be operating in hybrid environments for years to come. We see some companies putting some stuff on AWS and then keeping some stuff in-house. And I think that’s fine. It’s a perfectly prudent and legitimate way of proceeding.”

Myth 5: The Biggest Driver of Cloud Adoption is Cost

In busting the final myth, Selipsky said, “There is a big savings in capex and cost but what we find is that one of the main drivers of adoption is that time-to-market for ideas is much faster in the cloud because it lets you focus your engineering resources on what differentiates your business.”

Summary

Speaking about all of the myths surrounding the cloud, Selipsky concludes that “a lot of this revolves around psychology and fear of change, and human beings needing to gain comfort with new things. Years ago people swore they would never put their credit card information online. But that’s no longer the case. We’re seeing great momentum. We’re seeing, more and more, over time these barriers [to cloud adoption] are moving.” For additional debunked myths regarding Cloud Computing visit Nubifer.com.

IBM Elevates Its Cloud Offerings with Purchase of Cast Iron Systems

IBM Senior Vice President and Group Executive for IBM Software Group Steve Mills announced the acquisition of cloud integration specialist Cast Iron Systems at the IBM Impact 2010 conference in Las Vegas on May 3. The privately held Cast Iron is based in Mountain View, California and delivers cloud integration software, appliances and services, so the acquisition broadens the delivery of cloud computing services for IBM’s clients. IBM’s business process and integration software portfolio grew over 20 percent during the first quarter, and the company sees this deal as a way to expand it further. The financial terms of the acquisition were not disclosed, although Cast Iron Systems’ 75 employees will be integrated into IBM.

According to IBM officials, Big Blue anticipates the worldwide cloud computing market growing at a compounded annual rate of 28 percent, from $47 billion in 2008 to a projected $126 billion by 2012. The acquisition of Cast Iron Systems reflects IBM’s expansion of its software business around higher-value capabilities that help clients run their companies more effectively.

IBM has transformed its business model to focus on higher value, high-margin capabilities through organic and acquisitive growth in the past ten years–and the company’s software business has been a key catalyst in this shift. IBM’s software revenue grew at 11 percent year-to-year during the first quarter and the company generated $8 billion in software group profits in 2008 (up from $2.8 billion in 2000).

Since 2003, the IBM Software Group has acquired over 55 companies, and the acquisition of Cast Iron Systems is part of that. Cast Iron Systems’ clients include Allianz, Peet’s Coffee & Tea, NEC, Dow Jones, Schumacher Group, ShoreTel, Time Warner, Westmont University and Sports Authority and the cloud integration specialist has completed thousands of cloud integrations around the globe for retail organizations, financial institutions and media and entertainment companies.

IBM’s acquisition comes at a time when one of the major challenges facing businesses adopting cloud delivery models is integrating the disparate systems running in their data centers with new cloud-based applications, which used to be time-consuming work that drained resources. With the acquisition of Cast Iron Systems, IBM gains the ability to help businesses rapidly integrate their cloud-based applications and on-premises systems. Additionally, the acquisition advances IBM’s capabilities for a hybrid cloud model, which allows enterprises to blend data from on-premises applications with public and private cloud systems.

IBM, which is known for offering application integration capabilities for on-premises and business-to-business applications, will now be able to offer clients a complete platform to integrate cloud applications from providers like Amazon, Salesforce.com, NetSuite and ADP with on-premises applications like SAP and JD Edwards. Relationships between IBM and Amazon and Salesforce.com will essentially become friendlier due to this acquisition.

IBM said that it can use Cast Iron Systems’ hundreds of prebuilt templates and services expertise to eliminate expensive coding, thus allowing cloud integrations to be completed in mere days (rather than weeks, or even longer). These results can be achieved through using a physical appliance, a virtual appliance or a cloud service.

Craig Hayman, general manager for IBM WebSphere said in a statement, “The integration challenges Cast Iron Systems is tackling are crucial to clients who are looking to adopt alternative delivery models to manage their businesses. The combination of IBM and Cast Iron Systems will make it easy for clients to integrate business applications, no matter where those applications reside. This will give clients greater agility and, as a result, better business outcomes.”

IBM cited Cast Iron Systems’ work helping pharmaceutical distributor AmerisourceBergen Specialty Group connect Salesforce CRM with its on-premises corporate data warehouse as an example. The company has since been able to give its customer service associates access to the accurate, real-time information they need to deliver a positive customer experience while realizing $250,000 in annual cost savings.

Cast Iron Systems additionally helped a division of global corporate insurance leader Allianz integrate Salesforce CRM with its on-premises underwriting applications to offer real-time visibility into contract renewals for its sales team and key performance indicators for sales management. IBM said that Allianz beat its own 30-day integration project deadline by replacing labor-intensive custom code with Cast Iron Systems’ integration solution.

President and chief executive officer of Cast Iron Systems Ken Comee said, “Through IBM, we can bring Cast Iron Systems’ capabilities as the world’s leading provider of cloud integration software and services to a global customer set. Companies around the world will now gain access to our technologies through IBM’s global reach and its vast network of partners. As part of IBM, we will be able to offer clients a broader set of software, services and hardware to support their cloud and other IT initiatives.”

IBM will remain consistent with its software strategy by supporting and enhancing Cast Iron Systems’ technologies and clients while simultaneously allowing them to utilize the broader IBM portfolio. For more information, visit Nubifer.com.

Transforming Into a Service-Centric IT Organization By Using the Cloud

While IT executives typically approach cloud services from the perspective of how they are delivered, this model neglects what cloud services are and how they are consumed. These two facets can have a large impact on the overall IT organization, points out eWeek Knowledge Center contributor Keith Jahn. Jahn maintains that it is very important for IT executives to move away from the current delivery-only focus by creating a world-class supply chain for managing the supply and demand of cloud services.

Using the popular fable The Sky Is Falling, known lovingly as Chicken Little, Jahn explains a possible future scenario that IT organizations may face due to cloud computing. As the fable goes, Chicken Little embarks on a life-threatening journey to warn the king that the sky is falling and on this journey she gathers friends who join her on her quest. Eventually, the group encounters a sly fox who tricks them into thinking that he has a better path to help them reach the king. The tale can end one of two ways: the fox eats the gullible animals (thus communicating the lesson “Don’t believe everything you hear”) or the king’s hunting dogs can save the day (thus teaching a lesson about courage and perseverance).

So what does this have to do with cloud computing? Cloud computing has the capacity to bring on a scenario that will force IT organizations to change, or possibly be eliminated altogether. The entire technology supply chain will be severely impacted if IT organizations are wiped out. Traditionally, cloud is viewed as a technology disruption and is assessed from a delivery orientation, posing questions like: how can this new technology deliver solutions cheaper, better and faster? An equally important yet often ignored aspect of this equation is how cloud services are consumed. Cloud services are ready to run, self-sourced, available wherever you are, and pay-as-you-go or subscription based.

New capabilities will emerge as cloud services grow and mature and organizations must be able to solve new problems as they arise. Organizations will also be able to solve old problems cheaper, better and faster. New business models will be ushered in by cloud services and these new business models will force IT to reinvent itself in order to remain relevant. Essentially, IT must move away from its focus on the delivery and management of assets and move toward the creation of a world-class supply chain for managing supply and demand of business services.

Cloud services become a forcing function in this scenario because they are forcing IT to transform. CIOs that choose to ignore this and neglect to make transformative measures will likely see their role shift from innovation leader to CMO (Chief Maintenance Officer), in charge of maintaining legacy systems and services sourced by the business.

Analyzing the Cloud to Pinpoint Patterns

The cloud really began in what IT folks now refer to as the “Internet era,” when people were talking about what was being hosted “in the cloud.” This was the first generation of the cloud, Cloud 1.0 if you will—an enabler that originated in the enterprise. Supply Chain Management (SCM) processes were revolutionized by commercial use of the Internet as a trusted platform and eventually the IT architectural landscape was forever altered.

This model evolved and produced thousands of consumer-class services, which used next-generation Internet technologies on the front end and massive scale architectures on the back end to deliver low-cost services to economic buyers. Enter Cloud 2.0, a more advanced generation of the cloud.

Beyond Cloud 2.0

Cloud 2.0 is driven by the consumer experiences that emerged out of Cloud 1.0. A new economic model and new technologies have surfaced since then, due to Internet-based shopping, search and other services. Services can be self-sourced from anywhere and from any device—and delivered immediately—while infrastructure and applications can be sourced as services in an on-demand manner.

Currently, most of the attention when it comes to cloud services remains focused on the new techniques and sourcing alternatives for IT capabilities, aka IT-as-a-Service. IT can drive higher degrees of automation and consolidation using standardized, highly virtualized infrastructure and applications. This results in a reduction in the cost of maintaining existing solutions and delivering new solutions.

Many companies are struggling with the transition from Cloud 1.0 to Cloud 2.0 due to the technology transitions required to make the move. As this occurs, the volume of services in the commercial cloud marketplace is increasing, propagation of data into the cloud is taking place and Web 3.0/semantic Web technology is maturing. The next generation of the cloud, Cloud 3.0, is beginning to materialize because of these factors.

Cloud 3.0 is significantly different because it will enable access to information through services set in the context of the consumer experience. This means that processes can be broken into smaller pieces and subsequently automated through a collection of services, which are woven together with massive amounts of data able to be accessed. With Cloud 3.0, the need for large-scale, complex applications built around monolithic processes is eliminated. Changes will be able to be made by refactoring service models and integration achieved by subscribing to new data feeds. New connections, new capabilities and new innovations—all of which surpass the current model—will be created.

The Necessary Reinvention of IT

IT is typically organized around the various technology domains taking in new work via project requests and moving it through a Plan-Build-Run Cycle. Here lies the problem. This delivery-oriented, technology-centric approach has inherent latency built-in. This inherent latency has created increasing tension with the business it serves, which is why IT must reinvent itself.

IT must be reinvented so that it becomes the central service-sourcing control point for the enterprise, or it must accept that the business will source services on its own. By becoming the central service-sourcing control point for the enterprise, IT can maintain the required service levels and integrations. Changes to behavior, cultural norms and organizational models are required to achieve this.

IT Must Become Service-Centric in the Cloud

IT must evolve from a technology-centric organization into a service-centric organization in order to survive, as service-centric represents an advanced state of maturity for the IT function. Service-centric allows IT to operate as a business function—a service provider—created around a set of products which customers value and are in turn willing to pay for.

As part of the business strategy, these services are organized into a service portfolio. This model differs from the capability-centric model because the deliverable is the service that is procured as a unit through a catalog and for which the components—and sources of components—are irrelevant to the buyer. With the capability-centric model, the deliverables are usually a collection of technology assets which are often visible to the economic buyer and delivered through a project-oriented life cycle.

With the service-centric model, some existing roles within the IT organization will be eliminated and some new ones will be created. The result is a more agile IT organization which is able to rapidly respond to changing business needs and compete with commercial providers in the cloud service marketplace.

Cloud 3.0: A Business Enabler

Cloud 3.0 enables business users to source services that meet their needs quickly, cost-effectively and at a good service level—and on their own, without the help of an IT organization. Cloud 3.0 will usher in breakthroughs and innovations at an unforeseen pace and scope and will introduce new threats to existing markets for companies while opening new markets for others. In this way, it can be said that cloud is more of a business revolution than a technology one.

Rather than focusing on positioning themselves to adopt and implement cloud technology, a more effective strategy for IT organizations would be to focus on transforming the IT organization into a service-centric model that is able to source, integrate and manage services with high efficiency.

Back to the story and its two possible endings:

The first scenario suggests that IT will choose to ignore that its role is being threatened and continue to focus on the delivery aspects of the cloud. Under the second scenario, IT is rescued by transforming into the service-centric organization model and becoming the single sourcing control point for services in the enterprise. This will effectively place IT in control of fostering business innovation by embracing the next wave of cloud. For more information please visit Nubifer.com.

A Guide to Securing Sensitive Data in Cloud Environments

Due to the outsourced nature of the cloud and its innate loss of control, it is important to make sure that sensitive data is constantly and carefully monitored for protection. That task is easier said than done, which is why the following questions arise: How do you monitor a database server when its underlying hardware moves every day—sometimes even multiple times a day and sometimes without your knowledge? How do you ensure that your cloud computing vendor’s database administrators and system administrators are not copying or viewing confidential records inappropriately or abusing their privileges in another way?

When deploying a secure database platform in a cloud computing environment, these obstacles and many more are bound to arise, and an enterprise needs to be able to overcome them, as these barriers may be enough to keep some enterprises from moving beyond their on-premises approach. There are three critical architectural concerns to consider when transferring applications with sensitive data to the cloud.

Issue 1: Monitoring an Ever-changing Environment

Cloud computing grants you the ability to move servers and add or remove resources in order to maximize the use of your systems and reduce expense. This increased flexibility and efficiency often means that the database servers housing your sensitive data are constantly being provisioned and deprovisioned. Each of these scenarios represents a potential target for hackers, which is an important point to consider.

Monitoring data access becomes more difficult due to the dynamic nature of a cloud infrastructure. If the information in those applications is subject to regulations like the Payment Card Industry Data Security Standard (PCI DSS) or the Health Insurance Portability and Accountability Act (HIPAA), it is vital to make sure that it is secure.

When thinking about solutions to monitor activity on these dynamic database servers, it is essential to find a methodology that can be deployed on new database servers without management involvement. This requires a distributed model in which each instance in the cloud has a sensor or agent running locally, and this software must be able to be provisioned automatically along with the database software without requiring intrusive system management.

It won’t always be possible to reboot whenever it is necessary to install, upgrade or update the agents in a multitenancy environment such as this, and the cloud vendor may even place limitations on installing software that requires certain privileges. With the right architecture in place, you will be able to see where your databases are hosted at any point in time and will be able to centrally log all activity and flag suspicious events across all servers, wherever they are.
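
To make the distributed-sensor idea concrete, here is a deliberately simplified sketch of a locally evaluating agent. Real database activity monitoring products inspect far more than statement text; the table names, rule and sample statement below are hypothetical.

```python
# Simplified sketch of a locally evaluating database sensor. The policy ships
# with the agent, so a freshly provisioned instance can start flagging
# suspicious statements without calling home first. Table names, the rule and
# the sample statement are all hypothetical.
SENSITIVE_TABLES = {'cardholder_data', 'patient_records'}


def evaluate(statement, user):
    """Return an alert record if the statement touches sensitive tables."""
    text = statement.lower()
    touched = [table for table in SENSITIVE_TABLES if table in text]
    if touched:
        return {'user': user, 'tables': touched, 'statement': statement}
    return None


if __name__ == '__main__':
    # Example: flag a privileged user reading cardholder data.
    print(evaluate('SELECT * FROM cardholder_data', 'dba_admin'))
```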

Issue 2: Working in a WAN

Currently, database activity monitoring solutions utilize a network-sniffing model to identify malicious queries, but this approach isn’t feasible in the cloud environment because the network encompasses the entire Internet. Another method that doesn’t work in the cloud is adding a local agent which sends all traffic to a remote server.

The solution is something designed for distributed processing, where the local sensor is able to analyze traffic autonomously. Another thing to consider is that the cloud computing resources procured are likely to be on a WAN, where network bandwidth and latency will make off-host processing inefficient. With cloud computing, you are likely unable to colocate a server close to your databases. This means that the time and resources spent sending every transaction to a remote server for analysis will stunt network performance and also hinder timely interruption of malicious activity.

So when securing databases in cloud computing, a better approach is to utilize a distributed monitoring solution that is based on “smart” agents. That way, once a security policy for a monitored database is in place, that agent or sensor is able to implement protection and alerting locally and thus prevent the network from turning into the gating factor for performance.
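As a rough illustration of that idea, the sketch below shows a local sensor evaluating each statement against a cached policy and shipping only alerts back to a central console. The class name, rule patterns and alert channel are all hypothetical and are not taken from any particular monitoring product.

import java.util.List;
import java.util.regex.Pattern;

// Minimal sketch of a "smart" database activity sensor running next to the
// database instance. It applies a locally cached policy and only forwards
// alerts, so raw traffic never has to cross the WAN. Illustrative only.
public class LocalDbActivitySensor {

    // Locally cached policy: statements matching any pattern raise an alert.
    private final List<Pattern> suspiciousPatterns = List.of(
            Pattern.compile("(?i)select\\s+\\*\\s+from\\s+cardholder_data"),
            Pattern.compile("(?i)grant\\s+all\\s+privileges"));

    /** Returns true if the statement is allowed, false if it was flagged. */
    public boolean evaluate(String dbUser, String sqlStatement) {
        for (Pattern p : suspiciousPatterns) {
            if (p.matcher(sqlStatement).find()) {
                sendAlert(dbUser, sqlStatement); // alert only; no bulk traffic
                return false;                    // could also block locally
            }
        }
        return true;
    }

    private void sendAlert(String dbUser, String sqlStatement) {
        // In a real deployment this would travel over an encrypted channel
        // to the central management console discussed below.
        System.err.println("ALERT user=" + dbUser + " stmt=" + sqlStatement);
    }

    public static void main(String[] args) {
        LocalDbActivitySensor sensor = new LocalDbActivitySensor();
        sensor.evaluate("app_user", "SELECT name FROM products");     // allowed
        sensor.evaluate("dba_temp", "SELECT * FROM cardholder_data"); // flagged
    }
}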

It is also necessary to test the WAN capabilities of your chosen software for remote management of distributed data centers. It should be able to encrypt all traffic between the management console and sensors to restrict exposure of sensitive data. There are also various compression techniques that can enhance performance so that alerts and policy updates are transmitted efficiently.

Issue 3: Know Who Has Privileged Access to Your Data

The activity of privileged users is one of the most difficult elements to monitor in any database implementation. It is important to remember that DBAs and system administrators know how to stealthily access and copy sensitive information (and cover their tracks afterward). In cloud computing environments, there are unknown personnel at unknown sites with these access privileges. Additionally, you cannot personally conduct background checks on third parties the way you would for your own staff. Looking at all of these factors, it is easy to see why protecting against insider threats is important yet difficult to do.

So how do you resolve this issue? One way is to separate duties so that the activities of privileged third parties are monitored by your own staff, and so that the pieces of the solution on the cloud side of the network cannot be defeated without alerts going off. It is also necessary to closely monitor individual data assets regardless of the method used to access them.

Seek out a system that knows when data is being accessed in violation of policy, without relying on query analytics alone. Sophisticated users with privileges can create new views, insert stored procedures into a database or generate triggers that compromise information without the SQL commands arousing suspicion.

Summary

Some may conclude that the complexity of monitoring databases in a cloud architecture isn’t worth the change from dedicated systems, at least not yet. Most enterprises, however, will decide that deploying applications with sensitive data on one of these models is inevitable. Leading organizations have begun to make the change, and as a result tools are now meeting the requirements driven by the issues raised in this article.

Essentially, security should not prevent you from moving forward with deploying databases in the cloud if you think your enterprise would benefit from doing so. By looking before you leap–ensuring your security methodologies adequately address these unique cases–you can make the transition safely.  For more information please visit Nubifer.com.

New Cloud-Focused Linux Flavor: Peppermint

A new cloud-focused Linux flavor is in town: Peppermint. The Peppermint OS is currently in a small, private beta which will open up to more testers in early to late May. Aimed at the cloud, the Peppermint OS is described on its home page as: “Cloud/Web application-centric, sleek, user friendly and insanely fast! Peppermint was designed for enhanced mobility, efficiency and ease of use. While other operating systems are taking 10 minutes to load, you are already connected, communicating and getting things done. And, unlike other operating systems, Peppermint is ready to use out of the box.”

The Peppermint team announced the closed beta of the new operating system in a blog post on April 14, saying that the operating system is “designed specifically for mobility.” The description of the technology on Launchpad describes Peppermint as “a fork of Lubuntu with an emphasis on cloud apps, using many configuration files sourced from Linux Mint. Peppermint uses Mozilla Prism to create single site browsers for easily accessing many popular Web applications outside of the primary browser. Peppermint uses the LXDE desktop environment and focuses on being easy for new Linux users to find their way around in.”

Lubuntu is described by the Lubuntu project as a lighter, faster and energy-saving modification of Ubuntu using LXDE (the Lightweight X11 Desktop Environment). Kendall Weaver and Shane Remington, a pair of developers in North Carolina, make up the core Peppermint team. Weaver is the maintainer of the Linux Mint Fluxbox and LXDE editions as well as the lead software developer for Astral IX Media in Asheville, NC and the director of operations for Western Carolina Produce in Hendersonville, NC. Based in Asheville, NC, Remington is the project manager and lead Web developer for Astral IX Media and, according to the Peppermint site, “provides the Peppermint OS project support with Web development, marketing, social network integration and product development.” For more information please visit Nubifer.com.

Microsoft and Intuit Pair Up to Push Cloud Apps

Despite being competitors, Microsoft and Intuit announced plans to pair up to encourage small businesses to develop cloud apps for the Windows Azure platform in early January 2010.

Intuit is offering a free, beta software development kit (SDK) for Azure and citing Azure as a “preferred platform” for cloud app deployment on the Intuit Partner Platform as part of its collaboration with Microsoft. This marriage opens up the Microsoft partner network to Intuit’s platform and also grants developers on the Intuit cloud platform access to Azure and its tool kit.

As a result of this collaboration, developers will be encouraged to use Azure to make software applications that integrate with Intuit’s massively popular bookkeeping program, QuickBooks. The companies announced that the tools will be made available to Intuit partners via the Intuit App Center.

Microsoft will make parts of its Online Business Productivity Suite (such as Exchange Online, SharePoint Online, Office Live Meeting and Office Communications Online) available for purchase via the Intuit App Center as well.

The agreement occurred just weeks before Microsoft began monetizing the Windows Azure platform (on February 1)—when developers who had been using the Azure beta free of charge began paying for use of the platform.

According to a spokesperson for Microsoft, the Intuit beta Azure SDK will remain free, with the timing for stripping the beta tag “unclear.”

Designed to automatically manage and scale applications hosted on Microsoft’s public cloud, Azure is Microsoft’s latest Platform-as-a-Service. Azure will serve as a competitor for similar offerings like Force.com and Google App Engine. Contact a Nubifer representative to see how the Intuit – Microsoft partnership can work for your business.

A Guide to Windows® Azure Platform Billing

Understanding billing for Windows® Azure Platform can be a bit daunting, so here is a brief guide, including useful definitions and explanations.

The Microsoft ® Online Customer Service Portal (MOCP) limits each account to one Account Owner Windows Live ID (WLID), and the Account Owner has the ability to create and manage subscriptions, view billing and usage data and specify the Service Administrator for each subscription. While this is convenient for smaller companies, large corporations may need to create multiple subscriptions in order to design an effective account structure that will be able to support and also reflect their market strategy. Although the Service Administrator (Service Admin WLID) manages deployments, they cannot create subscriptions.

The Account Owner can create one or more subscriptions for each individual MOCP account, and for each subscription the Account Owner can specify a different WLID as the Service Administrator. It is also important to note that the Service Administrator WLID can be the same as or different from the Account Owner’s, and it belongs to the person actually using the Windows ® Azure Platform. Once a subscription is created in the MOCP, a Project appears in the Windows ® Azure portal.

The relationship between these components is straightforward: each MOCP account can contain multiple subscriptions, each subscription corresponds to a Project in the Windows ® Azure portal, and each Project contains the Services you deploy.

Projects:

Up to twenty Services can be allocated within one Project. Resources in the Project are shared among all of the Services created, and the resources are divided into Compute Instances/Cores and Storage accounts.

By default, the Project has 20 Small Compute Instances that you can utilize. These can be allocated as any combination of VM sizes, as long as the total number of Cores across all deployed Services within the Project doesn’t exceed 20.

To increase the number of Cores, simply contact Microsoft ® Online Services customer support to verify the billing account and request the additional Small Compute Instances/Cores (subject to a possible credit check). You also have the ability to decide how you want the Cores allocated, although by default the available resources are counted as a number of Small Compute Instances. See the Compute Instance conversion below:

Compute Instance Size | CPU | Memory | Instance Storage
Small | 1.6 GHz | 1.75 GB | 225 GB
Medium | 2 x 1.6 GHz | 3.5 GB | 490 GB
Large | 4 x 1.6 GHz | 7 GB | 1,000 GB
Extra Large | 8 x 1.6 GHz | 14 GB | 2,040 GB

Table 1: Compute Instances Comparison

The Compute Instances are shared between all the running Services in the Project—including the Production and Staging environments. This allows you to have multiple Services with different numbers of Compute Instances (up to the maximum available for that Project).

Five Storage accounts are available per Project, although you can request an increase to as many as 20 Storage accounts per Project by contacting Microsoft ® Online Services customer support. You will need to purchase a new subscription if you need more than 20 Storage accounts.

Services:

A total of 20 Services per project are permitted. Services are where applications are deployed; each Service provides two environments: Production and Staging. This is visible when you create a service in the Windows ® Azure portal.

A maximum of five roles per application is permitted within a Service; this can be any combination of different web and worker roles in the same configuration file, up to that maximum of five. Each role can have any number of VMs, as shown below:

In this example, the Service has two roles, each serving a specific tier: the Web Role (web tier) handles the Web interface, while the Worker Role (business tier) handles the business logic. Each role can have any number of VMs/Cores, up to the maximum available on the Project.

If this Service is deployed, the following resources will be used from the Azure ® resources perspective (the code sketch after the list works through the same arithmetic):

1 x Service

–       Web Role = 3 Small Compute Nodes (3 x Small VMs)

–       Worker Role = 4 Small Compute Nodes (2 x Medium VMs)

–       2 Roles used

Total resources left on the Project:

–       Services (20 -1) = 19

–       Small Compute Nodes (20 – 7) = 13 small compute instances

–       Storage accounts = 5
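As a minimal sketch of that accounting, the snippet below converts each role’s VM sizes into Small-instance equivalents and subtracts them from the Project’s default core budget. The class and method names are invented for illustration and are not part of any Azure SDK; the core multipliers follow the Compute Instance table above.

// Hypothetical helper illustrating the core accounting described above.
public class AzureCoreBudget {

    // Cores consumed by each VM size, expressed in Small-instance equivalents.
    enum VmSize {
        SMALL(1), MEDIUM(2), LARGE(4), EXTRA_LARGE(8);
        final int cores;
        VmSize(int cores) { this.cores = cores; }
    }

    public static void main(String[] args) {
        int projectCoreLimit = 20; // default per Project, per the text above

        // The example Service: a Web Role with 3 Small VMs
        // and a Worker Role with 2 Medium VMs.
        int webRoleCores    = 3 * VmSize.SMALL.cores;   // 3
        int workerRoleCores = 2 * VmSize.MEDIUM.cores;  // 4

        int used = webRoleCores + workerRoleCores;      // 7
        int remaining = projectCoreLimit - used;        // 13

        System.out.println("Cores used: " + used + ", remaining: " + remaining);
    }
}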

For more information regarding the Windows Azure pricing model, please contact a Nubifer representative.

Amazon’s Elastic Compute Cloud Platform EC2 Gets Windows Server Customers from Microsoft

Amazon has launched an initiative for Microsoft customers to bring their Windows Server licenses to Amazon’s EC2 (Elastic Compute Cloud) platform. This initiative is in tandem with a brand new Microsoft pilot program which allows Windows Server customers with an EA (Enterprise Agreement) with Microsoft to bring their licenses to Amazon EC2. Peter DeSantis, general manager of EC2 at Amazon, said in a recent interview with eWEEK that these customers will pay Amazon’s Linux On-Demand or Reserved Instance rates and thus save between 35 and 50 percent, depending on the type of customer and instance.

Also in his interview with eWEEK, DeSantis said that Amazon customers have sought support for Windows Server, and Amazon has delivered support for Windows Server 2003 and Windows Server 2008. Customers with EA agreements with Microsoft began to ask if those agreements could be applied to EC2 instances, hence the new pilot program. Amazon announced the new initiative on March 24 and began enrolling customers immediately. According to DeSantis, enrollment will continue through September 12, 2010.

Amazon sent out a notice announcing the program and stated the following criteria as requirements laid out by Microsoft to participate in the pilot: your company must be based or have a legal entity in the United States; your company must have an existing Microsoft Enterprise Agreement that doesn’t expire within 12 months of your entry into the pilot; you must already have purchased Software Assurance from Microsoft for your EA Windows Server licenses; and you must be an Enterprise customer (this does not include Academic or Government institutions).

eWEEK revealed some of the fine print for the program released by Amazon:

“Once enrolled, you can move your Enterprise Agreement Windows Server Standard, Windows Server Enterprise, or Windows Server Datacenter edition licenses to Amazon EC2 for 1 year. Each of your Windows Server Standard licenses will let you launch one EC2 instance. Each of your Windows Server Enterprise or Windows Server Datacenter licenses will let you launch up to four EC2 instances. In either case, you can use any of the EC2 instance types. The licenses you bring to EC2 can only be moved between EC2 and your on-premises machines every 90 days. You can use your licenses in the US East (Northern Virginia) or US West (Northern California) Regions. You will still be responsible for maintaining your Client Access Licenses and External Connector licenses appropriately.” To learn more about Microsoft’s and Amazon’s Cloud offerings visit Nubifer.com.

Microsoft Not Willing to Get Left in the Dust by the Cloud Services Business

Microsoft may be the largest software company on the globe, but that didn’t stop it from being left in the dust by other companies more than once, and eWEEK reports that when it comes to cloud services, Microsoft is not willing to make the same mistake.

Although Microsoft was initially wary of the cloud, the company is now singing a different tune and trying to move further into the data center. Microsoft had its first booth dedicated solely to business cloud services at SaaSCon 2010, held at the Santa Clara Convention Center April 6 and 7. Microsoft is positioning Exchange Online (email), SharePoint Online (collaboration), Dynamics CRM Online (business apps), SQL Azure (structured storage) and AD/Live ID (Active Directory access) as its lead services for business. All of these services are designed to run on Windows Server 2008 in the data center and sync up with the corresponding on-premises applications.

The services are designed to work hand-in-hand with standard Microsoft client software (including Windows 7, Windows Phone, Office and Office Mobile); the overarching strategy is set, and users will judge its cohesiveness over time. Microsoft is also offering its own data centers and its own version of Infrastructure-as-a-Service for hosting client enterprise apps and services. Microsoft is using Azure (a full online stack comprising Windows Azure, SQL Azure and additional Web services) as a Platform-as-a-Service for developers.

Featuring Business Productivity Online Suite, Exchange Hosted Services, Microsoft Dynamics CRM Online and MS Office Web Apps, Microsoft Online Services are up and running. In mid-March Microsoft launched a cloud backup service on the consumer side called SkyDrive, which is an online storage repository for files which users can access from anywhere via the Web. SkyDrive may be a very popular service, as it offers a neat (in both senses of the word) 25GB of online space for free (which is more than the 2GB offered as a motivator by other services).

SkyDrive simply requires a Windows Live account (also free) and shows that Microsoft really is taking the plunge. For more information on Microsoft’s Cloud offerings, please visit Nubifer.com.

CA Augments Cloud Business with Nimsoft Buy

CA has announced plans to purchase Nimsoft for $350 million, furthering its bolstering of cloud computing capabilities. CA’s series of cloud-related acquisitions already includes Cassatt, NetQoS, Oblicore and 3Tera.

On March 10, CA officials announced the $350 million, all-cash acquisition of Nimsoft, revealing that the deal is expected to close by the end of March. Nimsoft is the fifth cloud-centric company CA has purchased in the past year, showing CA’s continued aggressive move to build up its cloud computing capabilities.

With the acquisition of Nimsoft, CA gains IT performance and availability monitoring solutions for highly virtualized data centers and cloud computing environments as well as greater traction in key areas like midmarket companies and emerging global markets. CA refers to midmarket companies as emerging enterprises: companies with revenues between $300 million and $2 billion.

CA CEO Bill McCracken said in a conference with analysts and journalists that the deal is about Nimsoft’s technology and customers—of which the company has 800 scattered across more than 30 countries. “We want to reach new customers, and we want to reach them in a way we haven’t been able to do here at CA, even after a couple of tries,” said McCracken.

McCracken said that the emerging enterprise space will account for approximately a quarter of the software spending in CA’s market by 2010. Much of the cloud computing these businesses consume is provided by MSPs, and McCracken said that the cloud is poised to play a major role in emerging economies.

Executive vice president for CA’s Cloud Products and Solutions Business Line Chris O’Malley said via a conference call, “We are looking to build up that off-shore revenue.”

In addition to a variety of public cloud computing environments, Nimsoft’s monitoring and reporting products are used with on-demand offerings like Google Apps for Business, Amazon Web Services, Amazon EC2 (Elastic Compute Cloud), the Rackspace Cloud and Salesforce.com. CA also reports that Nimsoft’s monitoring and reporting products are used by customers for internal applications, databases, and physical and virtual data centers.

MSPs are granted high visibility into customers’ business applications in internal and external infrastructures with Nimsoft’s Unified Monitoring Solution. Nimsoft president and CEO Gary Read and McCracken said that Nimsoft’s technology is created with a high level of automation in order to make it easy to use for MSPs.

Read will become senior vice president and general manager of CA’s Nimsoft business unit when the deal is finalized. Read said that combining his company, which is 12 years old, with CA makes sense. Although Nimsoft had done well, Read worried that the company would struggle to keep up with changes in the market on its own. Once part of CA, Nimsoft will be able to continue innovating while scaling its products more easily. Most Nimsoft employees are expected to remain with the company once the deal with CA is complete.

CA has acquired Cassatt, NetQoS and Oblicore in less than a year and is in the midst of purchasing 3Tera. Each acquisition pushed CA further into the cloud, and Nimsoft will add to CA’s capabilities there. In McCracken’s words, acquisitions like the current purchase of Nimsoft serve to “accelerate CA’s market leadership.” To learn more about Cloud Computing, please visit Nubifer.com.

Apple iPad Tests the Limits of Google’s Chrome Running on Cloud Computing Devices

With the recent release of its iPad, Apple is poised to challenge Google in the current cloud computing crusade, say Gartner analysts. Apple’s iPad is expected to offer the most compelling mobile Internet experience to date, but later on in 2010 Google is predicted to introduce its own version for mobile Web consumption in the form of netbooks built on its Chrome Operating System.

If Apple’s tablet PC catches on like the company hopes it will, it could serve as a foil for Google’s cloud computing plans. Apple CEO Steve Jobs has already proclaimed that holding the iPad is like “holding the Internet in your hand.” The 9.7-inch IPS screen on the device displays high-def video and other content, like e-mail, e-books and games, to be consumed from the cloud.

Author Nicholas Carr, an avid follower of cloud happenings, explains the intentions of Apple in introducing the iPad by saying, “It wants to deliver the killer device to the cloud era, a machine that will define computing’s new age in the way that the Windows PC defined the old age. The iPad is, as Jobs said today, ‘something in the middle,’ a multipurpose gadget aimed at the sweet spot between the tiny smartphone and the traditional laptop. If it succeeds, we’ll all be using iPads to play iTunes, read iBooks, watch iShows, and engage in iChats. It will be an iWorld.”

An iWorld? Not if Google has its say! Later in 2010 Google is expected to unveil its very own version of the Internet able to be held in users’ hands: netbooks based on Chrome. Companies like Acer and Asustek Computer are also building a range of Android-based tablets and netbooks, while Dell CEO Michael Dell was recently seen showcasing the Android-based Dell Mini 5 tablet at the World Economic Forum in Davos, Switzerland. It sounds like Apple may have more competition than just Google!

The iPad will undoubtedly be a challenge to Google’s plans for cloud computing, which include making Google search and Google apps able to reach any device connected to the Web. According to Gartner analyst Ray Valdes, Apple and Google are bound to face off with similar machines. Said Valdes to eWEEK, “You could look and say that iPad is being targeted to the broad market of casual users rather than, say, the road warrior who needs to run Outlook and Excel and the people who are going to surf the Net on the couch. One could say that a netbook based on Chrome OS would have an identical use case.”

Consumers will eventually have to choose between shelling out around $499 for an iPad (that is just a base price, mind you) or a similar fee (or possibly lower) for a Chrome netbook. Valdes thinks that there are two types of users: a parent figure consuming Internet content on a Chrome OS netbook and a teenager playing games purchased on Apple’s App Store on an iPad. Stay tuned to see what happens when Apple and Google collide with similar machines later on in 2010.

The Effects of Platform-as-a-Service (PaaS) on ISVs

Over the past decade, the ascent of Software-as-a-Service (SaaS) has allowed Independent Software Vendors (ISVs) to develop new applications hosted and delivered on the Web. Until recently, however, any ISV creating a SaaS offering has been required to create its own hosting and service delivery infrastructure. With the rise of Platform-as-a-Service (PaaS) over the past two years, this has all changed. As the online equivalent of conventional computing platforms, PaaS provides an immediate infrastructure on which an ISV can quickly build and deliver a SaaS application.

Many ISVs are hesitant to bind their fate to an emerging platform provider, yet those that have taken a leap of faith and adopted PaaS early on have reaped the benefits, seeing dramatic reductions in development costs and timescales. PaaS supercharges SaaS by lowering barriers to entry and shortening time-to-market, thus quickening the pace of innovation and intensifying competition.

The advent of PaaS will forever alter the nature of ISVs: not only those who choose to introduce SaaS offerings, but also those who remain tethered to conventionally-licensed, customer-operated software products. PaaS alters the competitive landscape across a variety of parameters:

Dramatically quicker cycles of innovation

By implementing the iterative, continuous improvement upgrade model of SaaS, PaaS allows developers to monitor and subsequently respond to customer usage and feedback and quickly incorporate the latest functionality into their own applications.

Lowered price points

The shared, pay-as-you-go, elastic infrastructure of PaaS cuts developers’ costs across multiple dimensions, resulting in greatly reduced development and operations expenses.

Multiplicity of players from reduced barriers to entry

Large numbers of market entrants are attracted to the low cost of starting on a PaaS provider’s infrastructure. These entrants, which would not otherwise be able to fund their own infrastructure, significantly increase innovation and competition.

New business models, propositions, partner channels and routes to market

New ways of offering products and bringing them to market, many of them highly disruptive to established models, are created by the “as-a-service” model.

It is important for ISVs to understand that PaaS is different from other platforms in order for them to remain in control of their own destiny. PaaS is a new kind of platform whose dynamics differ from those of conventional software platforms. Developers need to be wary of assessing PaaS alternatives on the basis of criteria that are not valid when applied to PaaS. For more information on Platform as a Service please visit Nubifer.com.

The Arrival of Ubiquitous Computing

Among other things, one of the “ah ha” moments taken from this year’s CES (the world’s largest consumer technology tradeshow) was the arrival of ubiquitous computing. Formerly a purely academic concept, the convergence of data, voice, devices and displays is now more relevant than ever. The convergence of ubiquitous consumer technology with enterprise software is poised to impact those highly involved in the field of cloud computing, as well as the average consumer, in the near future.

Industry prognosticators are now predicting that consumers will begin to expect the ubiquitous experience in practically everything they use on a daily basis, from their car to small household items. Take those who grew up in the digital world and will soon be entering the workforce; they will expect instant gratification when it comes to work, play and everything in between. For example, Apple made the smartphone popular and a “must-have” item for non-enterprise consumers with its iPhone. The consumer-driven mobile phone revolution will likely seep into other areas as well, with consumers increasingly expecting a similar, iPhone-like experience from their software. Due to this trend, many enterprise software vendors are now making mobile a greater priority than before, and in turn staying ahead of the curve will mean anticipating more and more ubiquitous convergence.

What Does Ubiquitous Computing Mean for ISVs?

CES showcased a wide range of new interface and display technology, such as a multi-touch screen by 3M, a screen with haptic feedback, a pico projector and more. A cheap projector and a camera can combine to make virtually any surface into an interface or display, which will allow consumers to interact with software in innovative, unimagined and unanticipated ways, thus putting ISVs to the task of supporting these new interfaces and displays. This gives ISVs the opportunity to differentiate their offerings by leveraging, rather than submitting to, this new trend in technology.

The Combination of Location-based Apps and Geotagging

Both Google’s Favorite Places and Nokia’s Point and Find seek to organize and essentially own the information about places and objects using QR codes. QR codes are generally easy to generate and have a flexible, extensible structure for holding useful information, while the QR code readers are devices—such as a camera phone with a working data connection—that most of us own already. When geotagging is combined with the augmented reality that is already propelling innovation in location-based apps, there is the potential for ample innovation. Smarter supply chains, sustainable product life cycle management and efficient manufacturing are all possible outcomes of combining location-based applications and geotagging.

The Evolution of 3D

While 3D simply adds a certain “cool” factor to playing video games or watching movies, it is poised to make the transition from mere novelty into something useful. Simply replicating the 3D analog world in digital form won’t make software better, but adding a third dimension could aid tasks currently confined to 2D. One way 3D technology can be more effective is by using it in conjunction with complementary technology such as multi-touch interfaces, to provide 3D affordances, and with location-based and mapping technology to manage objects in the 3D analog world.

Rendering Technology to Outpace Non-Graphics Computation Technology

As shown by Toshiba’s TV with cell processors and ATI’s and nVidia’s graphics cards, investment in rendering hardware complements the innovation in display elements (LED, energy-efficient technology and so on). High-quality graphics at all form factors are being delivered via the combination of faster processors and sophisticated software. So far, enterprise software ISVs have been focusing on algorithmic computation of large volumes of data to design various solutions, and rendering computation technology lagged non-graphics data computation technology. Now rendering computation has caught up and will outpace non-graphics data computation in the near future. This will allow for the creation of software that can crunch large volumes of data and leverage high-quality graphics without any lag, delivering striking user experiences as well as real-time analytics and analysis. For more information, please visit www.nubifer.com.

Scaling Storage and Analysis of Data Using Distributed Data Grids

One of the most important new methods for overcoming performance bottlenecks for a large class of applications is data parallel programming on a distributed data grid. This method is predicted to have important applications in cloud computing over the next couple of years, and eWeek Knowledge Center contributor William L. Bain describes ways in which a distributed data grid can be used to implement powerful, Java-based applications for parallel data analysis.

In the current Information Age, companies must store and analyze large amounts of business data. Companies that can efficiently search their data for important patterns will have a competitive edge over others. An e-commerce Web site, for example, needs to be able to monitor online shopping carts in order to see which products are selling faster than others. Another example is a financial services company, which needs to hone its equity trading strategy as it optimizes its response to rapidly changing market conditions.

Businesses facing these challenges have turned to distributed data grids (also called distributed caches) in order to scale their ability to manage rapidly changing data and sort through data to identify patterns and trends that require a quick response. A few key advantages are offered by distributed data grids.

Distributed data grids store data in memory instead of on disk for quick access. Additionally, they run seamlessly across multiple servers to scale performance. Lastly, they provide a quick, easy-to-use platform for running “what if” analyses on the data they store. By breaking the sequential bottleneck, they can take performance to a level that stand-alone database servers cannot match.

Three simple steps for building a fast, scalable data storage and analysis solution:

1. Store rapidly changing business data directly in a distributed data grid rather than on a database server

Distributed data grids are designed to plug directly into the business logic of today’s enterprise application and services. They match the in-memory view of data already used by business logic by storing data as collections of objects rather than relational database tables. Because of this, distributed data grids are easy to integrate into existing applications using simple APIs (which are available for most modern languages like Java, C# and C++).

Distributed data grids run on server farms, thus their storage capacity and throughput scale just by adding more grid servers. A distributed data grid’s ability to store and quickly access large quantities of data can expand beyond a stand-alone database server when hosted on a large server farm or in the cloud.
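As a rough sketch of what that object-oriented API looks like from business logic, the snippet below stores shopping carts in a map-style cache keyed by cart ID. Real grid products expose a similar put/get interface, but the GridClient class here is a stand-in invented for illustration, not any vendor’s actual client library.

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of storing shopping carts as objects in a grid-like cache.
public class CartGridExample {

    static class ShoppingCart implements Serializable {
        final String customerId;
        final List<String> productCategories = new ArrayList<>();
        ShoppingCart(String customerId) { this.customerId = customerId; }
    }

    // Stand-in for a distributed cache client; a real grid would partition
    // and replicate this map across many servers.
    static class GridClient {
        private final Map<String, ShoppingCart> cache = new ConcurrentHashMap<>();
        void put(String key, ShoppingCart cart) { cache.put(key, cart); }
        ShoppingCart get(String key) { return cache.get(key); }
    }

    public static void main(String[] args) {
        GridClient grid = new GridClient();
        ShoppingCart cart = new ShoppingCart("customer-42");
        cart.productCategories.add("electronics");
        grid.put("cart-1001", cart); // business logic works with objects, not rows
        System.out.println(grid.get("cart-1001").customerId);
    }
}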

2. Integrate the distributed data grid with database servers in an overall storage strategy

Distributed data grids are used to complement, not replace, database servers, which remain the authoritative repositories for transactional data and long-term storage. With an e-commerce Web site, for example, a distributed data grid would hold shopping carts to efficiently manage a large workload of online shopping traffic, while a back-end database server would store completed transactions, inventory and customer records.

Carefully separating application code used for business logic from the code used for data access is an important factor in integrating a distributed data grid into an enterprise application’s overall storage strategy. Distributed data grids naturally fit into business logic, which manages data as objects. This code is where rapid access to data is required and also where distributed data grids provide the greatest benefit. The data access layer, in contrast, usually focuses on converting objects into a relational form for storage in database servers (or vice versa).

A distributed data grid can be integrated with a database server so that it can automatically access data from the database server if it is missing from the distributed data grid. This is incredibly useful for certain types of data such as product or customer information (stored in the database server and retrieved when needed by the application). Most types of rapidly changing, business logic data, however, can be stored solely in a distributed data grid without ever being written out to a database server.
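A minimal sketch of that read-through behavior, with a hypothetical loader standing in for real JDBC or ORM code, might look like this:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of read-through caching: on a miss, a loader fetches the object
// from the back-end database and the grid keeps the copy for later requests.
public class ReadThroughCacheSketch {

    static class Customer {
        final String id;
        Customer(String id) { this.id = id; }
    }

    private final Map<String, Customer> grid = new ConcurrentHashMap<>();
    private final Function<String, Customer> loader;

    ReadThroughCacheSketch(Function<String, Customer> loader) {
        this.loader = loader;
    }

    Customer get(String customerId) {
        // Loads from the database only when the object is missing from the grid.
        return grid.computeIfAbsent(customerId, loader);
    }

    public static void main(String[] args) {
        // Hypothetical loader standing in for a database query.
        ReadThroughCacheSketch cache = new ReadThroughCacheSketch(Customer::new);
        System.out.println(cache.get("cust-7").id); // first call "hits the database"
        System.out.println(cache.get("cust-7").id); // second call served from the grid
    }
}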

3. Analyze grid-based data by using simple analysis code and the MapReduce programming pattern

After a collection of objects, such as a Web site’s shopping carts, has been hosted in a distributed data grid, it is important to be able to scan this data for patterns and trends. Researchers have developed a two-step method called MapReduce for analyzing large volumes of data in parallel.

As the first step, each object in the collection is analyzed for a pattern of interest by writing and running a simple algorithm that assesses each object one at a time. This algorithm is run in parallel on all objects to analyze all of the data quickly. The results that were generated by running this algorithm are next combined to determine an overall result (which will hopefully identify an important trend).

Take an e-commerce developer, for example. The developer could write a simple code which analyzes each shopping cart to rate which product categories are generating the most interest. This code could be run on all shopping carts throughout the day in order to identify important shopping trends.
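To make the two steps concrete, here is a minimal, single-process sketch of that analysis written against an in-memory list of carts. A distributed data grid would run the same per-cart logic in parallel on every grid server and then merge the partial results; the cart layout below is hypothetical.

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal sketch of the two-step MapReduce pattern over shopping carts.
public class CartAnalysis {

    static class Cart {
        final List<String> productCategories;
        Cart(List<String> productCategories) { this.productCategories = productCategories; }
    }

    public static void main(String[] args) {
        List<Cart> carts = Arrays.asList(
                new Cart(Arrays.asList("electronics", "books")),
                new Cart(Arrays.asList("electronics")),
                new Cart(Arrays.asList("garden", "books")));

        // Map: examine each cart independently, emitting its categories.
        // Reduce: count how often each category appears across all carts.
        Map<String, Long> interestByCategory = carts.parallelStream()
                .flatMap(cart -> cart.productCategories.stream())
                .collect(Collectors.groupingBy(category -> category, Collectors.counting()));

        System.out.println(interestByCategory); // e.g. {books=2, electronics=2, garden=1}
    }
}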

Using this MapReduce programming pattern, distributed data grids offer an ideal platform for analyzing data. Distributed data grids store data as memory-based objects, and thus the analysis code is easy to write and debug as simple “in-memory” code. Programmers don’t need to learn parallel programming techniques or understand how the grid works. Distributed data grids also provide the infrastructure needed to automatically run this analysis code on all grid servers in parallel and then combine the results. The net result is that the application developer can easily and quickly harness the full scalability of the grid to discover data patterns and trends that are important to the success of an enterprise. For more information, please visit www.nubifer.com.

Answers to Your Questions on Cloud Connectors

Jeffrey Schwartz and Michael Desmond, both editors of Redmond Developer News, recently sat down with corporate vice president of Microsoft’s Connected Systems Division, Robert Wahbe, at the recent Microsoft Professional Developers Conference (PDC) to talk about Microsoft Azure and its potential impact on the developer ecosystem at Microsoft. Responsible for managing Microsoft’s engineering teams that deliver the company’s Web services and modeling platforms, Wahbe is a major advocate of the Azure Services Platform and offers insight into how to build applications that exist within the world of Software-as-a-Service, or as Microsoft calls it, Software plus Services (S + S).

When asked how much of Windows Azure is based on Hyper-V and how much is an entirely new set of technologies, Wahbe answered, “Windows Azure is a natural evolution of our platform. We think it’s going to have a long-term radical impact with customers, partners and developers, but it’s a natural evolution.” Wahbe continued to explain how Azure brings current technologies (i.e. the server, desktop, etc.) into the cloud and is fundamentally built out of Windows Server 2008 and .NET Framework.

Wahbe also referenced the PDC keynote of Microsoft’s chief software architect, Ray Ozzie, in which Ozzie discussed how most applications are not initially created with the idea of scale-out. Explained Wahbe, expanding upon Ozzie’s points, “The notion of stateless front-ends being able to scale out, both across the data center and across data centers requires that you make sure you have the right architectural base. Microsoft will be trying hard to make sure we have the patterns and practices available to developers to get those models [so that they] can be brought onto the premises.”

As an example, Wahbe created a hypothetical situation in which Visual Studio and .NET Framework can be used to build an ASP.NET app, which in turn can either be deployed locally or to Windows Azure. The only extra step taken when deploying to Windows Azure is to specify additional metadata, such as what kind of SLA you are looking for or how many instances you are going to run on. As explained by Wahbe, the Metadata is an .XML file and as an example of an executable model, Microsoft is easily able to understand that model. “You can write those models in ‘Oslo’ using the DSL written in ‘M,’ targeting Windows Azure in those models,” concludes Wahbe.

Wahbe answered a firm “yes” when asked if there is a natural fit for application developed in Oslo, saying that it works because Oslo is “about helping you write applications more productively,” also adding that you can write any kind of application—including cloud. Although new challenges undoubtedly face development shops, the basic process of writing and deploying code remains the same. According to Wahbe, Microsoft Azure simply provides a new deployment target at a basic level.

As for the differences, developers are going to need to learn a new set of services. An example used by Wahbe is two businesses connecting through a business-to-business messaging app; technology like Windows Communication Foundation can make this an easy process. With the integration of Microsoft Azure, questions about the pros and cons of using the Azure platform and the service bus (which is part of .NET Services) will have to be evaluated. Azure “provides you with an out-of-the-box, Internet-scale, pub-sub solution that traverses firewalls,” according to Wahbe. And what could be bad about that?

When asked if developers should expect new development interfaces or plug-ins to Visual Studio, Wahbe answered, “You’re going to see some very natural extensions of what’s in Visual Studio today. For example, you’ll see new project types. I wouldn’t call that a new tool … I’d call it a fairly natural extension to the existing tools.” Additionally, Wahbe expressed Microsoft’s desire to deliver tools to developers as soon as possible. “We want to get a CTP [community technology preview] out early and engage in that conversation. Now we can get this thing out broadly, get the feedback, and I think for me, that’s the most powerful way to develop a platform,” explained Wahbe of the importance of developers’ using and subsequently critiquing Azure.

When asked about the possibility of competitors like Amazon and Google gaining early share due to the ambiguous time frame of Azure, Wahbe responded serenely, “The place to start with Amazon is [that] they’re a partner. So they’ve licensed Windows, they’ve licensed SQL, and we have shared partners. What Amazon is doing, like traditional hosters, is they’re taking a lot of the complexity out for our mutual customers around hardware. The heavy lifting that a developer has to do to take that and then build a scale-out service in the cloud and across data centers—that’s left to the developer.” Wahbe detailed how Microsoft has base computing and base storage—the foundation of Windows Azure—as well as higher-level services such as the database in the cloud. According to Wahbe, developers no longer have to build an Internet-scale pub-sub system, find a new way to do social networking and contacts, or create reporting services themselves.

In discussing the impact that cloud connecting will have on the cost of development and the management of development processes, Wahbe said, “We think we’re removing complexities out of all layers of the stack by doing this in the cloud for you … we’ll automatically do all of the configuration so you can get load-balancing across all of your instances. We’ll make sure that the data is replicated both for efficiency and also for reliability, both across an individual data center and across multiple data centers. So we think that by doing that, you can now focus much more on what your app is and less on all that application infrastructure.” Wahbe predicts that it will be simpler for developers to build applications with the adoption of Microsoft Azure. For more information on Cloud Connectors, contact a Nubifer representative today.

Get Your Java with Google App Engine

Google’s App Engine service has embraced the Java programming language. The most requested feature for App Engine since its inception, Java support is currently in “testing mode,” although Google eventually plans to bring GAE’s Java tools up to speed with its current Python support.

As Google’s service for hosting scalable and flexible web applications, App Engine is synonymous with cloud computing for Google. Java is one of the most frequently used languages for coding applications on the web, and by adding Java, Google is filling a major gap in its cloud services plan. Also by adding Java, Google is catching up with one of its fiercest competitors in cloud computing, Amazon, whose Web Services platform has provided support for Java virtual machines for some time now.

In addition, Java support also opens up the possibility of making App Engine a means of running applications for Google’s Android mobile platform. Although no plans for Android apps on GAE have been outlined as of yet, it appears as if Google is preparing an effortless and quick way to develop for Android, as Java is available on the device as well as the server.

With the addition of Java support to Google App Engine, other programming languages such as JavaScript, Ruby and maybe Scala can run on Java virtual machines as well. The possibility of JRuby support or support for other JVM languages arriving any time in the near future, however, is unlikely due to the experimental status of Java.

Those wishing to play around with Google App Engine’s new Java support can add their name to the list on the sign up page; the first 10,000 developers will be rewarded with a spot in the testing group.

Along with Java support, the latest update for Google App Engine includes support for cron jobs, which enables programmers to easily schedule recurring tasks such as weekly reports. The Secure Data Connector is another new feature; it lets Google App Engine access data behind a firewall. Thirdly, there is a new database import tool, which makes it easier to transport large amounts of data into App Engine.
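As a rough sketch of what these features look like from the developer’s side: App Engine’s Java environment hosts standard servlets, and a cron job simply fetches an application URL on its schedule, so the handler is just another servlet. The class name, URL mapping and report logic below are hypothetical, and the schedule itself would be declared in the app’s cron configuration file rather than in code.

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical handler for a scheduled task on Google App Engine for Java.
// The cron configuration would point a schedule (for example, a weekly
// entry) at this servlet's URL; App Engine then issues an HTTP GET to it.
public class WeeklyReportServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // In a real application this would assemble and store or e-mail the report.
        resp.setContentType("text/plain");
        resp.getWriter().println("Weekly report generated");
    }
}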

In summary, by embracing the programming language of Java, Google is filling a gap in its cloud services plan and catching up with competitors like Amazon. For more information, please visit nubifer.com.

Thoughts on Google Chrome OS

As a leading cloud computing and SaaS provider, everyone at Nubifer is excited about Google’s new operating system, Chrome. Designed, in Google’s words, for “people who live on the web,” (like us!) Google’s Chrome browser launched in late 2008 and now an extension of Google Chrome—the Google Chrome Operating System—has arrived. Google demonstrated its open source PC operating system on Nov. 19 and revealed that its code will be open-sourced later this year, with netbooks running Google Chrome OS available for consumers as early as the second half of 2010.

Citing speed, simplicity and security as key features, Google Chrome OS is designed as a modified browser which allows netbooks to carry out everyday computing with web-based applications. Google Chrome OS basically urges consumers to abandon the computing experience that they are used to in favor of one that exists entirely in the cloud (albeit Google’s cloud), which, you have to admit, is a pretty enticing offer. The obvious benefits of the Google Chrome OS are saving money (cloud storage replaces pricey external hard-disc drives) and gaining security (thanks to Google’s monitoring for malware in Chrome OS apps).

While many comparisons have been made between Google Chrome OS and Android (admittedly they do overlap somewhat), Chrome OS is designed for those who spend the majority of their time on the web and is thus being created to power computers of varying sizes, while Android was designed to work across devices ranging from netbooks to cell phones. Google Chrome OS will run on x86 and ARM chips, and Google is currently teaming up with several OEMs to offer multiple netbooks in 2010. The foundation of Google Chrome OS is this: it runs within a new windowing system on top of a Linux kernel. The web is the platform for application developers, with new applications able to be written using already-in-place web technologies and existing web-based applications working automatically.

Five benefits of using Google Chrome OS are laid out by Wired.com: Cost, Speed, Compatibility, Portability and New Applications. While netbooks are inexpensive, users often fork out a sizable chunk of change for a Windows license, and using Google’s small, fast-booting platform allows that cost to be greatly reduced. Those with Linux versions of netbooks already know that they cost around $50 less on average, a difference that is effectively a Microsoft tax; because Chrome OS is based on Linux, it would most likely be free. As for speed, Chrome OS is created to run on low-powered Atom and ARM processors, with Google promising boot times measured in mere seconds.

Drivers have caused major problems for those using an OS other than Windows XP on a netbook, but there is a chance that Google may devise an OS able to be downloaded, loaded onto any machine and ready to use—all without being designed specifically for different netbook models. And now we come to portability, as Chrome allows for all of Google’s services, from Gmail and Google Docs to Picasa, to be built-in and available for offline access using Google Gears. Thus users won’t have to worry about not having data available when not connected to the Internet. As for new applications, it remains unclear whether Google will buy open-source options like the Firefox-based Songbird music player (which has the ability to sync with an iPod and currently runs on some Linux flavors) or if it will create its own.

Another company, Phoenix Technologies, is also offering an operating system, called HyperSpace. Instead of serving as a substitute for Windows, HyperSpace is an optional, complementary (notice it’s spelled with an “e,” not an “i”) mini OS which is already featured on some netbooks. Running parallel to Windows as an instant-on environment, HyperSpace allows netbooks to perform Internet-based functions, such as browsing, e-mail and multimedia playback, without booting into Windows. Phoenix Technologies’ idea is similar to Google’s, but Phoenix is a lesser-known company and is taking a different approach to offering the mini OS than Google is with its Chrome OS.

Google’s eventual goal is to produce an OS that mirrors the streamlined, quick and easy characteristics of its individual web products. Google is the first to admit that it has its work cut out for it, but that doesn’t make the possibility of doing away with hard drives once and for all any less exciting for all of us. For more information please visit Nubifer.com.

Evaluating Zoho CRM

Although Salesforce may be the name most commonly associated with SaaS CRM, Zoho CRM is picking up speed as an inexpensive option for small businesses or large companies with only a few people using the service. While much attention has been paid to Google Apps, Zoho has been quietly creating a portfolio of online applications that is worth recognition. Now many are wondering if Zoho CRM will have as large an impact on Salesforce as Salesforce did on SAP.

About Zoho

Part of AdventNet, Zoho has been producing SaaS Office-like applications since 2006. One of Zoho’s chief architects, Raju Vegesna, joined AdventNet upon graduating in 2000 and moving from India to the United States. Among Vegesna’s chief responsibilities is getting Zoho on the map.

Zoho initially offered spreadsheet and writing applications although the company, which targets smaller businesses with 10 to 100 employees, now has a complete range of productivity applications such as email, a database, project management, invoicing, HR, document management, planning and last but not least, CRM.

Zoho CRM

Aimed at businesses seeking to manage customer relations and transform leads into profitable relationships, Zoho CRM begins with lead generation. From there, there are tabs for lead conversion, account setup, contacts, potential mapping and campaigns. One of Zoho CRM’s best features is its layout: full reporting facilities with formatting, graphical layouts and dashboards, forecasting and other management tools are neatly displayed and optimized.

Zoho CRM is fully email-enabled, and updates can be sent to any user set up in the system, along with full contact administration. Timelines ensure that leads are never forgotten or campaigns allowed to slip. Like Zimbra and ProjectPlace, Zoho CRM offers brand alignment, which means users can change layout colors and add their own logo branding. Another key feature is Zoho’s comprehensive help section, which is constantly updated with comments and posts from other users online. Contact details can be imported into Zoho CRM from a standard comma-separated value (.CSV) file produced by a user’s email system or spreadsheet application (such as Excel, Star or Open Office). Users can also export CRM data in the same format.
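As a small illustration of that import path, the snippet below writes a contacts file in plain CSV form; the column names are hypothetical and would be mapped to the matching Zoho CRM fields during the import step.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Minimal sketch of preparing a contacts CSV for import into a CRM.
public class ContactsCsvExport {

    public static void main(String[] args) throws IOException {
        List<String> rows = List.of(
                "First Name,Last Name,Email,Phone",       // hypothetical column names
                "Ada,Lovelace,ada@example.com,555-0100",
                "Alan,Turing,alan@example.com,555-0101");

        Path out = Path.of("contacts.csv");
        Files.write(out, rows); // spreadsheet tools export the same format
        System.out.println("Wrote " + out.toAbsolutePath());
    }
}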

The cost of Zoho CRM is surprisingly low. Zoho CRM supports up to three users (1,500 records) for free, with a Professional version for $12 a month and an Enterprise version (20,000 records) for $25 a month. For more information about adopting Zoho’s CRM, contact a Nubifer representative today.

How Microsoft Windows 7 Changed the Game for Cloud Computing … and Signaled a Wave of Competition Between Microsoft, Google and Others.

On October 22 Microsoft released the successor to Windows Vista, Windows 7, and while excitement for the operating system mounted prior to its release, many are suggesting that its arrival signals the end of computing on personal computers and the beginning of computing solely in the cloud. Existing cloud services like social networking, online games and web-based email are accessible through smart-phones, browsers or other client services, and because of the availability of these services, Windows 7 is Microsoft’s first operating system to include fewer features than its predecessor.

Although Windows is not in danger of extinction, cloud computing makes its operating systems less important. Other companies are following in Microsoft’s footsteps by launching products with fewer features than even Windows 7. In September, Microsoft opened a pair of data centers containing half a million servers between them and subsequently issued a new version of Windows for smart-phones. Perpetually ahead of the curve, Microsoft also launched a platform for developers, the highly publicized Azure, which allows them to write and run cloud services.

In addition to changing the game for Microsoft, the growth of cloud computing also heightens competition within the computer industry. Thus far, advancements in technology have pushed computing power away from central hubs (as seen in the shift from mainframes to minicomputers to PCs), while power is now, in some ways, moving back toward the center, driven by less expensive and more powerful processors and faster networks. Essentially, the cloud’s data centers are outsized public mainframes. At the same time, the PC is being pushed aside by more compact, wireless devices like netbooks and smart-phones.

The lessened importance of the PC enables companies like Apple, Google and IBM to fill the gap left by Microsoft’s former monopoly. There are currently hundreds of firms offering cloud services, with more arriving by the day, but as The Economist points out, Microsoft, Google and Apple are in their own league. Each of the three companies has its own global network of data centers, plans to offer several services, and seeks to dominate the new field by developing new software or devices. The battle between Microsoft, Google and Apple sees each company trying to one-up the others. For example, Google’s free PC operating system, Chrome OS, shows Google’s attempt to catch up to Microsoft, while Microsoft’s recent operating system for smart-phones shows Microsoft’s attempt to catch up with the Apple iPhone as well as Google’s handset operating system, Android. Did you follow all of that?

Comparing Google, Microsoft and Apple

Professor Michael Cusumano of MIT’s Sloan School of Management recently told The Economist that while there are similarities between Google, Apple and Microsoft, they are each unique enough to carve out their own spot in the cloud because they approach the trend toward cloud computing in different ways.

Google is most well known for its search service as well as other web-based applications, and has recently begun diversifying, launching Android for phones and Chrome OS. In this way, it can be said that Google has been a prototype for a cloud computing company since its inception in 1998. Google’s main source of revenue is advertising, with the company controlling over 75% of search-related ads in the States (and even more on a global scale). Additionally, Google is seeking to make money from selling services to companies, announcing in October that all 35,000 employees at the pest-control-to-parcel-delivery group Rentokil Initial will be using Google’s services.

While Microsoft is commonly associated with Microsoft Office and Windows, the company’s relationship to cloud computing is not as distant as one might think. Microsoft’s new search engine, Bing, shows the company’s transition into the cloud, as does its web-based version of Office and the fact that Microsoft now offers much of its business software via online services. Microsoft smartly convinced Yahoo! to merge its search and a portion of its advertising business with Microsoft’s because consumers expect cloud services to be free, with everything paid for by ads.

As evidenced by the iPhone, the epitome of the have-to-have-it, innovative bundle of hardware and software, Apple is largely known for its services outside the cloud. Online offerings like the App Store, the iTunes store and MobileMe (a suite of online services), however, show that Apple’s appetite for a piece of the cloud computing pie is growing by the day. Apple is also currently building what many have suggested is the world’s largest data center (worth a whopping $1 billion) in North Carolina.

While Apple, IBM and Microsoft previously battled over the PC in the late 1980s and early 1990s, cloud computing is an entirely different game. Why? Well, for starters, much of the cloud is based on open standards, making it easier for users to switch providers. Antitrust authorities will play into the rivalry between the companies, and so will other possible contenders, such as Amazon and Facebook, the world’s leading online retailer and social network, respectively (not to mention Zoho and a host of others). An interesting fact thrown into the debate on who will emerge victorious is that all of the current major contenders in the cloud computing race are American, with Asian and European firms not yet showing up in any major way (although Nokia’s suite of online services, Ovi, is in its beginning stages). Visit Nubifer.com for more information.

Worldwide SaaS Revenue to Increase 18 Percent in 2009 According to Gartner

According to the folks over at Gartner, Inc., one of the leading information technology research and advisory companies, worldwide SaaS (Software as a Service) revenue is predicted to reach $7.5 billion in 2009. If Gartner’s forecast is correct, this would represent a 17.7 percent increase over 2008, when SaaS revenue totaled $6.4 billion. Gartner also reports that the market will display significant and steady growth through 2013, at which point revenue is anticipated to surpass $14 billion across the enterprise application markets.

Research director Sharon Mertz said of the projections, “The adoption of SaaS continues to grow and evolve within the enterprise application markets. The composition of the worldwide SaaS landscape is evolving as vendors continue to extend regionally, increase penetration within existing accounts and ‘greenfield’ opportunities, and offer more-vertical-specific solutions as part of their service portfolio or through partners.” Mertz continued to explain how the on-demand deployment model has flourished because of the broadening of on-demand vendors’ services through partner offerings, alliances and (recently) by offering and promoting user-application development through PaaS (Platform as a Service) capabilities. Added Mertz, “Although usage and adoption is still evolving, deployment of SaaS still varies between the enterprise application markets and within specific market segments because of buyer demand and applicability of the solution.”

Across market segments, the largest amount of SaaS revenue comes from the CCC (content, communications and collaboration) and CRM (customer relationship management) markets. Gartner reports that in 2009 the CCC market is generating $2.6 billion and the CRM market $2.3 billion, up from $2.14 billion and $1.9 billion in 2008, respectively. See Table 1 for figures.

[Table 1: Worldwide SaaS revenue by market segment, 2008–2009]

Growth in the CRM market continues to be driven by SaaS, a trend which began four years ago, as evidenced by the jump from less than $500 million and roughly 8 percent of the CRM market in 2005 to nearly $1.9 billion in revenue and a markedly larger share of the market in 2008. Gartner anticipates this trend to continue, with SaaS representing nearly 24 percent of the CRM market’s total software revenue in 2009. Says Gartner’s Mertz in conclusion, highlighting the need in the marketplace filled by SaaS, “The market landscape for on-demand CRM continues to evolve as the availability and usage of SaaS solutions becomes more pervasive. The rapid adoption of SaaS and the marketplace success of salesforce.com have compelled vendors without an on-demand solution to either acquire smaller niche SaaS providers or develop the solution internally in response to increasing buyer demand.” To receive more information contact Nubifer today.

Will Zoho Be the Surprise Winner in the Cloud Computing Race?

With all the talk of Microsoft, Google, Apple, IBM, Amazon and other major companies, it might be easy to forget about Zoho—but that would be a big mistake. The small, private company offers online email, spreadsheets and word processors, much like cloud computing giant Google, and is steadily showing it shouldn’t be discounted!

Based in Pleasanton, Calif., Zoho has never accepted bank loans or venture capital yet shows revenue of over $50 million a year. While Zoho has data center and networking management tools, its fastest-growing operation is its online productivity suite, according to Zoho’s chief executive, Sridhar Vembu. The company’s position suggests that there may be a spot for Zoho among online productivity application markets seemingly dominated by a few major companies. Vembu recently told the New York Times, “For now, the wholesale shift to the Web really creates opportunities for smaller companies like us.” And he may very well be right.

Zoho has 19 online productivity and collaboration applications (including invoicing, project management and customer relationship management), and only five of them overlap with Microsoft’s offerings. Zoho’s focus remains on the business market, with half of the company’s distribution coming through partners that integrate Zoho’s products into their own offerings. For example, Box.net, a service for storing, backing up and sharing documents, uses Zoho as an editing tool for uploaded documents. Most of Zoho’s partners are web-based services, showing that cheap, web-based software permits these business mash-ups—something traditional software would make nearly impossible. “Today, in the cloud model, this kind of integration is economical,” explains Vembu to the New York Times.

According to Vembu, most paying customers using Zoho’s hosted applications from its website (with prices ranging from free to just $25 per month, depending on features and services) are small businesses with anywhere from 40 to 200 employees. As evidence of the transition into the cloud, the chief executive of Zoho points to the Splashtop software created by DeviceVM, a start-up company. Dell, Asus and Hewlett-Packard reportedly plan on loading Splashtop, software that can be installed directly into a PC’s hardware (bypassing the operating system entirely), on some of their PCs. “It is tailor-made for us. You go right into the browser,” says Vembu, clearly pleased at the evidence that smaller companies like Zoho are making headway in the field of cloud computing.

Microsoft Azure Uncovered

Everyone is talking about Microsoft Azure, which could leave some people wondering what exactly Azure is, how much it costs and what it means for cloud computing and Microsoft as a whole. If you are among those with unanswered questions about Microsoft Azure, look no further: here is your guide to all things Azure.

The Basics

When cloud computing first emerged, everyone wondered if and how Microsoft would make the transition into the cloud—and Microsoft Azure is the answer. Windows Azure is a cloud operating system that is essentially Microsoft’s first big step into the cloud. Developers can build on Azure using .NET, Python, Java, Ruby on Rails and other languages. According to Windows Azure GM Doug Hauger, Microsoft plans on eventually offering an admin model, which will give developers access to the virtual machine (as with traditional Infrastructure-as-a-Service offerings like Amazon’s EC2, they will then have to manually allocate hardware resources). SQL Azure is Microsoft’s relational database in the cloud, while .NET Services is Microsoft’s Platform-as-a-Service built on the Azure OS.

The Cost

There are three different pricing models for Azure. The first is consumption-based, in which a customer pays for what they use. The second is subscription-based, in which those committing to six months of use receive discounts. Available as of July 2010, the third is volume licensing for enterprise customers desiring to take existing Microsoft licenses into the cloud.

Azure compute costs 12 cents per service hour, which is half a cent less than Amazon’s Windows-based cloud, while Azure’s storage service costs 15 cents per GB of data per month, with an additional cent for every 10,000 transactions (movements of data within the stored material). The .NET Services platform costs 15 cents for every 100,000 times an application built on .NET Services accesses a chunk of code or a tool. As for moving data, it costs 10 cents per GB of inbound data and 15 cents per GB of outbound data. SQL Azure costs $9.99 for up to a 1 GB relational database and $99.99 for up to a 10 GB relational database.
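To make the arithmetic concrete, here is a minimal sketch (in Python, purely for illustration) that adds up a monthly bill from the rates quoted above. The usage figures in the example are hypothetical, and the sketch ignores subscription or volume-licensing discounts as well as the per-message .NET Services charge.

    # Illustrative cost estimate built from the Azure rates quoted above.
    # The usage numbers are hypothetical; this is not an official pricing tool.
    COMPUTE_PER_HOUR = 0.12        # dollars per service hour
    STORAGE_PER_GB_MONTH = 0.15    # dollars per GB stored per month
    STORAGE_PER_10K_TXN = 0.01     # dollars per 10,000 storage transactions
    DATA_IN_PER_GB = 0.10          # dollars per GB of inbound data
    DATA_OUT_PER_GB = 0.15         # dollars per GB of outbound data
    SQL_AZURE_1GB = 9.99           # up to a 1 GB relational database
    SQL_AZURE_10GB = 99.99         # up to a 10 GB relational database

    def estimate_monthly_cost(instances, hours, stored_gb, transactions,
                              gb_in, gb_out, use_10gb_db=False):
        compute = instances * hours * COMPUTE_PER_HOUR
        storage = stored_gb * STORAGE_PER_GB_MONTH
        storage += (transactions / 10000.0) * STORAGE_PER_10K_TXN
        bandwidth = gb_in * DATA_IN_PER_GB + gb_out * DATA_OUT_PER_GB
        database = SQL_AZURE_10GB if use_10gb_db else SQL_AZURE_1GB
        return compute + storage + bandwidth + database

    # Two instances running a 720-hour month, 50 GB stored, 2 million
    # transactions, 20 GB in, 100 GB out, and a 1 GB SQL Azure database.
    print(round(estimate_monthly_cost(2, 720, 50, 2000000, 20, 100), 2))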

The Impact on Microsoft and Cloud Computing

Although Microsoft Windows Azure arrives a bit late to the burgeoning field of cloud computing and to the Platform-as-a-Service party, Microsoft remains ahead of the enterprises it hopes to attract as customers. In other words, by eyeing enterprises that remain skeptical of cloud computing, Microsoft may tap into customers not yet snatched up by more established cloud computing players. No enterprise data center runs solely on Microsoft software, which is likely why the company seems willing to support other programming languages and welcome heterogeneous environments in Azure. Additionally, the Azure platform has a service-level agreement that offers 99.9 percent uptime on the storage side and 99.95 percent uptime on the compute side.

As many have pointed out, Microsoft may be behind Amazon and others for the time being, but there is room for an open platform directed at enterprises, which is Azure’s niche. For more Azure related information visit Nubifer.com.

Assessing Risks in the Cloud

There is no denying that cloud computing is one of the most exciting alternatives to traditional IT functions, as cloud services—from Software-as-a-Service to Platform-as-a-Service—offer improved collaboration, scale, availability, agility and cost reductions. Cloud services can both simplify and accelerate compliance initiatives and offer greater security, but some have pointed out that outsourcing traditional business and IT functions to cloud service providers doesn’t guarantee that these benefits will be realized.

The risks of outsourcing such services—especially those involving highly regulated information like constituent data—must be actively managed by organizations, or those organizations may increase their business risks rather than transferring or mitigating them. Outsourcing the processing and storage of constituent information does not make it inherently more secure, which raises questions about the boundaries of cloud computing as they relate to privacy legislation.

By definition, the nature of cloud services lacks clear boundaries and raises valid concerns with privacy legislation. The requirement to protect your constituent information remains your responsibility regardless of what contractual obligations were negotiated with the provider and where the data is located, the cloud included. Some important questions to ask include: Does your service provider outsource any storage functions or data processing to third-parties? Do such third-parties have adequate security programs? Do you know if your service provider—and their service providers—have adequate security programs?

Independent security assessments, such as those performed as part of a SAS70 or PCI audit, are point-in-time evaluations—better than nothing, but still only a snapshot. Another thing to consider is that the scope of such assessments can be set at the provider’s discretion, which means they do not necessarily provide accurate insight into the provider’s ongoing security activities.

What all of this means is that many questions pertaining to Cloud Governance and Enterprise Risk still loom. For example, non-profit organizations looking to migrate fundraising activities and solutions to cloud services need to first look at their own practices, needs and restrictions to identify possible compliance requirements and legal barriers. Because security is a process rather than a product, the technical security of your constituent data is only as strong as your organization’s weakest process. The security of the cloud computing environment is not independent of your organization’s internal policies, standards, procedures, processes and guidelines.

When making the decision to put sensitive constituent information into the cloud, it is important to conduct comprehensive initial and ongoing due diligence audits of your business practices and your provider’s practices. For answers to your questions on Cloud Security visit Nubifer.com.

Google’s Continued Innovation of Technology Evolution

Google has the uncanny ability to introduce non-core disruptive innovations while simultaneously defending and expanding its core, and an analysis of the concepts and framework in Clayton Christensen’s book Seeing What’s Next offers insight into how.

Recently, Google introduced free GPS navigation on the Android phone through a strategy that can be described as “sword and shield.” This latest disruptive innovation seeks to beat a current offering serving the “overshot customers,” i.e. the ones who would stop paying for additional performance improvements that historically had commanded a price premium. Google essentially entered the GPS market to serve those overshot customers by using a shield: asymmetric skills and motivation in the form of the Android OS, mapping data and a lack of direct revenue expectations. Subsequently, Google transformed its “shield” into a “sword” by disintermediating the map providers and using a revenue-share agreement to incentivize the carriers.

GMail and Google’s core search technology are examples of “incremental to radical” sustaining innovations, to use Christensen’s terms, with which Google sought out the “undershot customers”: those frustrated with their current products’ limitations and willing to swap them for something better, should it exist. Web-based email solutions and search engines existed before Google’s, but Google’s versions solved problems that were frustrating users of other products. For example, users relished GMail’s expansive email quota (compared to the limited quotas they faced before) and enjoyed the better indexing and relevancy algorithms of the Google search engine. Although Microsoft is blatantly targeting Google with Bing, Google appears unruffled and continues to steadily, if somewhat slowly, invest in its sustaining innovations (such as Caffeine, the next-generation search platform, Gmail Labs, social search, profiles, etc.) to maintain the revenue stream from its core business.

By spending money on lower-end disruptive innovations and not “cramming” sustaining innovation, Google has managed to thrive where most companies are practically destined to fail. This strategy also handles the tension between Google’s sustaining and disruptive innovations. According to insiders at Google, Google Wave was built without involving the GMail team—reportedly without the GMail team even knowing about it. If Google had added wave-like functionality to Gmail, it would have been “cramming” sustaining innovation, while innovating outside of email can potentially serve a variety of both undershot and overshot customers.

So what does this mean for AT&T? Basically, AT&T needs to watch its back and keep an eye on Google! Smartphone revenue is predicted to surpass laptop revenue in 2012, after the number of smartphone units sold this year surpassed the number of laptops sold. Comcast’s subscriber count now exceeds 7 million (an eight-fold increase). While Google pays a pricey phone bill for Google Voice, which has 1.4 million users (570,000 of them using it seven days a week), Google is dedicated to making Google Voice work—and if it does, Google could potentially serve a new breed of overshot customers who want to stay connected in real time but don’t need or want a landline.

Although some argue that Chrome OS is more disruptive, disruptive innovation theory suggests that Chrome OS is built for the breed of overshot customer frustrated with other market solutions at the same level, not for the majority of customers. Were Google building its business plan around Chrome OS today, that plan would be an expensive one, not to mention time-consuming and draining in its use of resources. For more information on Google’s continued innovation efforts, please visit Nubifer.com.

Addressing Concerns for Networking in the Cloud

Many concerns arise when moving applications between internal data centers and public clouds. The key networking considerations once applications are transferred to the cloud are addressed below.

Clouds do not differ from the enterprise in the respect that they have unique networking infrastructures that support flexible and complex multi-tenant environments. Each enterprise has an individual network infrastructure used for accessing servers and allowing applications to communicate between varying components. That unique infrastructure includes address services (like DHCP/DNS), specific addressing (sub-nets), identity/directory services (like LDAP), and firewalls and routing rules.

It is important to remember that cloud providers have to control their networking in order to route traffic within their infrastructure. The cloud providers’ design differs from enterprise networking in architecture, design and addressing. While this does not pose a problem when doing something stand-alone in the cloud (because it doesn’t matter what the network structure is, as long as it can be accessed over the Internet), the discontinuities must be addressed when you want to extend existing networks and use existing applications.

In terms of addressing, the typical cloud provider will assign a block of addresses as part of the cloud account. Flexiscale and GoGrid, for example, give the user a block of addresses which can be attached to the servers they create. These are external addresses (i.e. public addresses reachable from the Internet) in some cases, and internal in others. Either way, they are not assigned as part of the user’s own addressing plan, which means that even if the resources can be connected to the data center, new routes will need to be built and services will need to be altered to allow these “foreign” addresses into the system.

Amazon took a different approach, providing a dynamic system in which an address is assigned each time a server is started. This makes it difficult to build multi-tier applications, because developers must create systems capable of passing changing address information between application components. The problem of connecting to the Amazon cloud is partially solved by the new VPC (Virtual Private Cloud), although some key problems persist, and other cloud providers continue to look into similar networking capabilities.
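As one concrete illustration of working around dynamic addressing, the sketch below uses Python and the boto3 library to allocate an Elastic IP and attach it to a running EC2 instance so that other application tiers have a stable address to point at; the region, instance ID and credentials are assumptions for the example, and other providers expose similar static-address features.

    # Illustrative sketch: give an EC2 instance a stable public address by
    # allocating an Elastic IP and associating it with the instance.
    # Assumes boto3 is installed and AWS credentials are configured;
    # the instance ID is a placeholder.
    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Allocate a new Elastic IP in this account.
    allocation = ec2.allocate_address(Domain='vpc')

    # Attach it to an existing (hypothetical) instance so other tiers can
    # reach this server at a fixed address across restarts.
    ec2.associate_address(
        InstanceId='i-0123456789abcdef0',        # placeholder instance ID
        AllocationId=allocation['AllocationId'],
    )

    print('Stable address:', allocation['PublicIp'])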

Data protection is another key issue concerning networking in the cloud. Within the data center sits a secure perimeter defined and developed by the IT organization—firewalls, rules and systems that create a protected environment for internal applications. This matters because most applications need to communicate over ports and services that are not safe for general Internet access. It can be dangerous to move applications into the cloud unmodified, because they were developed for the protected environment of the data center. The application owner or developer usually has to build protection on a per-server basis and then enforce corporate protection policies.
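One common way to rebuild that per-server protection in the cloud is with provider firewall rules such as EC2 security groups. The sketch below is a minimal Python/boto3 example under that assumption—the VPC ID and address range are placeholders—opening only the single port the application actually needs rather than exposing it to the open Internet.

    # Illustrative sketch: recreate a narrow perimeter for a cloud-hosted
    # application by allowing only HTTPS from a known corporate address range.
    # Assumes boto3 and configured credentials; IDs and CIDRs are placeholders.
    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    group = ec2.create_security_group(
        GroupName='app-tier',
        Description='Only HTTPS from the corporate network',
        VpcId='vpc-0123456789abcdef0',           # placeholder VPC ID
    )

    ec2.authorize_security_group_ingress(
        GroupId=group['GroupId'],
        IpPermissions=[{
            'IpProtocol': 'tcp',
            'FromPort': 443,
            'ToPort': 443,
            'IpRanges': [{'CidrIp': '203.0.113.0/24'}],  # corporate egress range
        }],
    )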

An additional implication of the loss of infrastructure control referenced earlier is that in most clouds the physical interface level cannot be controlled. MAC addresses are assigned in addition to IP addresses, and these can change each time a server is started, meaning that the identity of the server cannot be based on this otherwise common attribute.

Whenever enterprise applications require the support of data center infrastructure, networking issues such as identity and naming services and access to internal databases and other resources come into play. Cloud resources thus need a way to connect to the data center, and the easiest is a VPN (Virtual Private Network). In creating this solution, it is essential to design the routing to the cloud and provide a method for cloud applications to “reach back” to the applications and services running in the data center. Ideally this connection would allow Layer-2 connectivity, because a number of services require it to function properly.
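To make the “reach back” idea concrete, here is a minimal sketch using Python and boto3 against Amazon’s VPN primitives (one provider’s approach, chosen only as an example; the gateway address and IDs are placeholders). Note that an IPsec VPN of this kind gives routed Layer-3 connectivity; a true Layer-2 extension generally requires additional tunneling or provider-specific features.

    # Illustrative sketch: connect a VPC back to the corporate data center
    # over an IPsec VPN so cloud workloads can reach internal services.
    # Assumes boto3 and configured credentials; all IDs and IPs are placeholders.
    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # The on-premise VPN device, identified by its public IP.
    cgw = ec2.create_customer_gateway(
        BgpAsn=65000,
        PublicIp='198.51.100.10',                # data center router (placeholder)
        Type='ipsec.1',
    )

    # The cloud side of the tunnel, attached to an existing VPC.
    vgw = ec2.create_vpn_gateway(Type='ipsec.1')
    ec2.attach_vpn_gateway(
        VpcId='vpc-0123456789abcdef0',           # placeholder VPC ID
        VpnGatewayId=vgw['VpnGateway']['VpnGatewayId'],
    )

    # The VPN connection itself; the returned configuration is then loaded
    # onto the on-premise device.
    vpn = ec2.create_vpn_connection(
        CustomerGatewayId=cgw['CustomerGateway']['CustomerGatewayId'],
        VpnGatewayId=vgw['VpnGateway']['VpnGatewayId'],
        Type='ipsec.1',
        Options={'StaticRoutesOnly': True},
    )
    print(vpn['VpnConnection']['VpnConnectionId'])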

In conclusion, networking is a very important part of IT infrastructure, and the cloud adds several new variables to the design and operation of the data center environment. A well-constructed architecture and a solid understanding of the limitations imposed by the cloud are needed if you want to integrate with the public cloud successfully. Currently, this can be a major barrier to cloud adoption, because enterprises are understandably reluctant to re-architect their network environments or learn the complexities of each cloud provider’s underlying infrastructure. In designing a cloud strategy, it is essential to choose a migration path that addresses these issues and protects against expensive engineering projects as well as cloud risks. Please visit Nubifer.com for more information.

Amazon Offers Private Clouds

While Amazon initially resisted offering a private cloud, and there are many advocates of the public cloud, Amazon recently introduced a new Virtual Private Cloud, or VPC. Many bloggers question whether Amazon’s VPC is truly a “virtually private” cloud or a “virtual private cloud,” but some believe that the VPC may be a way to break down the difficulties facing customers seeking to adopt cloud computing, such as security, ownership and virtualization. The following paragraphs address each of these issues and how Amazon’s VPC can alleviate them.

One of the key concerns facing customers adopting cloud computing is the perceived security risk involved, and the VPC acts as something of a placebo that may assuage it. The perceived risk stems from customers’ past experiences; these customers believe that any connection made using Amazon’s VPN must be secure, even if it connects into a set of shared resources. Using Amazon’s private cloud, customers deploy and consume applications in an environment that they feel is safe and secure.

Amazon’s VPC provides a sense of ownership to customers without letting them actually own the computing. Customers may initially be skeptical about not owning the computing, so it is up to Amazon’s marketing engine to provide enough information to alleviate that worry.

As long as their business goals are fully realized with Amazon’s VPC, customers need not necessarily understand or care about the differences between virtualization and the cloud. In using the VPC, customers are able to use VPN and network virtualization—the existing technology stack they are already comfortable with. In addition, the VPC allows partners to help customers bridge the gap between their on-premise systems and the cloud, creating a hybrid virtualization environment that spans several resource pools.
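As a small illustration of what that bridging can look like in practice, the sketch below (Python with boto3; the address ranges are placeholders chosen to mesh with a hypothetical corporate addressing plan) carves out a VPC and a subnet whose addresses the on-premise network can be configured to route to.

    # Illustrative sketch: define a private address space in Amazon VPC that
    # fits a (hypothetical) corporate addressing plan, so on-premise systems
    # and cloud instances can be treated as one hybrid environment.
    # Assumes boto3 and configured credentials.
    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    vpc = ec2.create_vpc(CidrBlock='10.20.0.0/16')     # placeholder range
    subnet = ec2.create_subnet(
        VpcId=vpc['Vpc']['VpcId'],
        CidrBlock='10.20.1.0/24',                      # application subnet
    )

    print('VPC:', vpc['Vpc']['VpcId'], 'Subnet:', subnet['Subnet']['SubnetId'])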

Whether or not one favors the public cloud, the customer should first be able to choose to enter cloud computing and later choose how to leverage the cloud on their own. For more information about Private Clouds, please visit Nubifer.com.

Get Your Java with Google App Engine

Finally! Google’s App Engine service has embraced the Java programming language. The most requested feature for App Engine since its inception, Java support is currently in “testing mode,” although Google eventually plans on bringing GAE’s Java tools up to speed with its current Python support.

As Google’s service for hosting scalable and flexible web applications, App Engine is synonymous with cloud computing for Google. Java is one of the most frequently used languages for coding applications on the web, and by adding it Google is filling a major gap in its cloud services plan. Adding Java also helps Google catch up with one of its fiercest competitors in cloud computing, Amazon, whose Web Services platform has provided support for Java virtual machines for some time now.

In addition, Java support opens up the possibility of making App Engine a means of running applications for Google’s Android mobile platform. Although no plans for Android apps backed by GAE have been outlined as of yet, it appears that Google is preparing an effortless and quick way to develop for Android, as Java is available on the device as well as the server.

With the addition of Java support to Google App Engine, other programming languages that run on Java virtual machines—such as JavaScript, Ruby and perhaps Scala—can run as well. The arrival of JRuby support or support for other JVM languages any time in the near future, however, is unlikely given Java’s experimental status on the platform.

Those wishing to play around with Google App Engine’s new Java support can add their name to the list on the sign up page; the first 10,000 developers will be rewarded with a spot in the testing group.

Along with Java support, the latest update for Google App Engine includes support for cron jobs, which enables programmers to easily schedule recurring tasks such as weekly reports. Another new feature, the Secure Data Connector, lets Google App Engine access data behind a firewall. Finally, a new database import tool makes it easier to move large amounts of data into App Engine.
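For a sense of what the cron support looks like from a developer’s perspective, here is a minimal sketch using App Engine’s Python runtime (the Java runtime follows the same pattern with a servlet and cron.xml). The handler URL and report logic are hypothetical, and the weekly schedule itself would live in a separate cron.yaml entry pointing at this URL.

    # Illustrative sketch of an App Engine request handler that a scheduled
    # cron entry (defined separately in cron.yaml) would invoke once a week.
    # Uses the classic google.appengine webapp framework; the URL is hypothetical.
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class WeeklyReportHandler(webapp.RequestHandler):
        def get(self):
            # In a real application, generate and store the weekly report here.
            self.response.out.write('weekly report generated')

    application = webapp.WSGIApplication([('/tasks/weekly_report', WeeklyReportHandler)])

    def main():
        run_wsgi_app(application)

    if __name__ == '__main__':
        main()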

In summary, by embracing the programming language of Java, Google is filling a gap in its cloud services plan and catching up with competitors like Amazon.  For more information on Google Apps, please visit Nubifer.com.

Answers to Your Questions on Cloud Connectors for Leading Platforms like Windows Azure Platform

Jeffrey Schwartz and Michael Desmond, both editors of Redmond Developer News, sat down with Robert Wahbe, corporate vice president of Microsoft’s Connected Systems Division, at the recent Microsoft Professional Developers Conference (PDC) to talk about Microsoft Azure and its potential impact on the developer ecosystem at Microsoft. Responsible for managing Microsoft’s engineering teams that deliver the company’s Web services and modeling platforms, Wahbe is a major advocate of the Azure Services Platform and offers insight into how to build applications that exist within the world of Software-as-a-Service, or as Microsoft calls it, Software plus Services (S + S).

When asked how much of Windows Azure is based on Hyper-V and how much is an entirely new set of technologies, Wahbe answered, “Windows Azure is a natural evolution of our platform. We think it’s going to have a long-term radical impact with customers, partners and developers, but it’s a natural evolution.” Wahbe continued to explain how Azure brings current technologies (i.e. the server, desktop, etc.) into the cloud and is fundamentally built out of Windows Server 2008 and .NET Framework.

Wahbe also referenced the PDC keynote of Microsoft’s chief software architect, Ray Ozzie, in which Ozzie discussed how most applications are not initially created with the idea of scale-out. Explained Wahbe, expanding upon Ozzie’s points, “The notion of stateless front-ends being able to scale out, both across the data center and across data centers requires that you make sure you have the right architectural base. Microsoft will be trying hard to make sure we have the patterns and practices available to developers to get those models [so that they] can be brought onto the premises.”

As an example, Wahbe described a hypothetical situation in which Visual Studio and the .NET Framework are used to build an ASP.NET app, which in turn can either be deployed locally or to Windows Azure. The only extra step taken when deploying to Windows Azure is to specify additional metadata, such as what kind of SLA you are looking for or how many instances you are going to run on. As Wahbe explained, that metadata is an XML file and an example of an executable model, and Microsoft is easily able to understand such models. “You can write those models in ‘Oslo’ using the DSL written in ‘M,’ targeting Windows Azure in those models,” concludes Wahbe.

Wahbe answered a firm “yes” when asked if there is a natural fit for applications developed in Oslo, saying that it works because Oslo is “about helping you write applications more productively,” and adding that you can write any kind of application—including cloud applications. Although new challenges undoubtedly face development shops, the basic process of writing and deploying code remains the same. According to Wahbe, Microsoft Azure simply provides a new deployment target at a basic level.

As for the differences, developers are going to need to learn a new set of services. An example used by Wahbe is two businesses connecting through a business-to-business messaging app; technology like Windows Communication Foundation can make this an easy process. With the introduction of Microsoft Azure, questions about the pros and cons of using the Azure platform and the service bus (which is part of .NET Services) will have to be evaluated. Azure “provides you with an out-of-the-box, Internet-scale, pub-sub solution that traverses firewalls,” according to Wahbe. And what could be bad about that?

When asked if developers should expect new development interfaces or plug-ins to Visual Studio, Wahbe answered, “You’re going to see some very natural extensions of what’s in Visual Studio today. For example, you’ll see new project types. I wouldn’t call that a new tool … I’d call it a fairly natural extension to the existing tools.” Additionally, Wahbe expressed Microsoft’s desire to deliver tools to developers as soon as possible. “We want to get a CTP [community technology preview] out early and engage in that conversation. Now we can get this thing out broadly, get the feedback, and I think for me, that’s the most powerful way to develop a platform,” explained Wahbe of the importance of developers’ using and subsequently critiquing Azure.

When asked about the possibility of competitors like Amazon and Google gaining early share due to the ambiguous time frame of Azure, Wahbe responded serenely, “The place to start with Amazon is [that] they’re a partner. So they’ve licensed Windows, they’ve licensed SQL, and we have shared partners. What Amazon is doing, like traditional hosters, is they’re taking a lot of the complexity out for our mutual customers around hardware. The heavy lifting that a developer has to do to take that and then build a scale-out service in the cloud and across data centers—that’s left to the developer.” Wahbe detailed how Microsoft offers base computing and base storage—the foundation of Windows Azure—as well as higher-level services such as the database in the cloud. According to Wahbe, developers no longer have to build an Internet-scale pub-sub system, find a new way to do social networking and contacts, or create reporting services themselves.

In discussing the impact that cloud connecting will have on the cost of development and the management of development processes, Wahbe said, “We think we’re removing complexities out of all layers of the stack by doing this in the cloud for you … we’ll automatically do all of the configuration so you can get load-balancing across all of your instances. We’ll make sure that the data is replicated both for efficiency and also for reliability, both across an individual data center and across multiple data centers. So we think that by doing that, you can now focus much more on what your app is and less on all that application infrastructure.” Wahbe predicts that the adoption of Microsoft Azure will make it simpler for developers to build applications. For more information regarding Windows Azure, please visit Nubifer.com.