Posts Tagged ‘ Cloud Interop ’

Guidelines for Cloud Consumers and Providers

Business users are drawn to the cloud. That’s not surprising, considering they tend to see mostly benefits: self-service freedom, scalability, availability, flexibility, and the pleasure of avoiding various nasty hardware and software headaches. IT leaders, though, are a different story; they are not always as enthusiastic. They voice uneasiness about cloud security and have legitimate concerns that unauthorized users could get their hands on their applications and data. Moreover, retaining a level of influence and control is a must for them. Can both “sides” meet halfway? Is it possible to provide the freedom that users want while retaining the control that IT leaders need?
Simply put, yes. However, doing so will entail a collaborative effort. Both business users and IT leaders have to assume a few key responsibilities. In addition, you will have to make certain that your cloud provider is doing its part as well.


Your 5 Responsibilities

Here are a few things you need to be held accountable for:
1. Define the business need. Identify the root problem you want to solve with cloud technology. Is it a perpetually recurring concern, or one that happens irregularly? Did you need an answer “last week,” or do you have time to construct a solution?

Important note: Not all clouds are created equal. Some can run your applications unchanged, with instant access, while others require a little tweaking. Recognizing your needs and differentiating cloud technologies will help you determine the correct strategy for handling the particular business problem that needs attention.

2. Identify your application and process requirements. Once you have accurately defined your business needs, it is time to select the application best suited to meet those needs. Be clear and precise about the nature of the application, the development process you want to adopt, and the roles and access permissions for each user.

Your teams no longer have to struggle through traditional linear and slow development processes. Instead, the cloud can give them access to best practices that are fluid and agile. Many self-service solutions can even empower them to run copies of the same environment in parallel.

Simply put, the cloud can lead to breakthrough productivity when used properly. However, if used incorrectly, it can also lead to enormous amounts of wasted resources. With that in mind, take your time to do your research and choose wisely.

3. Determine your timetable. Contrary to popular belief, cloud projects are not short sprints. They are better described as long journeys over time. Plan accordingly.

Because cloud technology is transformative, Nubifer recommends defining your early experiments on a quarterly basis. Learn from the first quarter, take notes, make the necessary adjustments, and then move on to the next. The objective is to build a learning organization that increases control over time and progresses based on data and experience.

4. Establish success factors. Define what success is for you. Do you want to improve the agility of the development process? Maybe you want to increase the availability of your applications? Or perhaps you want to enhance remote collaboration? Define achievement, and have a tool to measure progress as well. Identifying metrics and establishing realistic goals will help you achieve the solution that meets not only your needs, but also your budget and payback time frame.

5. Define data and application security. Companies overlook this critical responsibility more often than they realize. Do your due diligence and carefully determine whom you can trust with your cloud applications, then empower them. The following questions need unambiguous answers: What specific roles will team members take in the cloud model? Does everyone fully comprehend the nature of the application and data they are planning to bring to the cloud? Does everyone know how to protect your data? Do they understand your password policies? Dealing with these security factors early on enables you to create a solid foundation for cloud success while giving you peace of mind about the issue.

Your Provider’s 5 Responsibilities

Meanwhile, make sure your cloud provider offers the following to attain better cloud control:
1. Self-service solutions. Time equals money, so waiting equals wasted time and money. Search for cloud applications that are ready from the get-go. Determine whether the solution you are considering can implement the applications and business processes you have in mind immediately, or whether the provider requires you to rewrite the application or change the process entirely.

You also need to determine whether users will require training, or whether they are already equipped to handle a self-service Web interface. Answers to these questions can determine whether adoption will be rapid and smooth, or slow and bumpy.

2. Scale and speed. A well-constructed cloud solution provides the unique combination of scale and speed. It gives you access to the resources at a scale that you need with on-demand responsiveness. This combination will empower your team to run several instances in parallel, snapshot, suspend/resume, publish, collaborate, and accelerate the business cycle.

3. Reliability and availability. As articulated in the Service Level Agreements (SLAs), it is the responsibility of the cloud provider to make the system reliable and available. The provider should set clear and precise operational expectations, such as 99.9 percent availability, with you, the consumer.
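As a quick back-of-the-envelope check on what an availability figure in an SLA actually permits, here is a minimal sketch (assuming a 365-day year and no maintenance exclusions, which real SLAs often carve out):

    def annual_downtime_hours(availability_pct, hours_per_year=365 * 24):
        """Maximum downtime per year allowed by a given availability percentage."""
        return hours_per_year * (1 - availability_pct / 100)

    # 99.9 percent availability still allows roughly 8.8 hours of downtime a year.
    print(round(annual_downtime_hours(99.9), 1))    # ~8.8
    print(round(annual_downtime_hours(99.99), 2))   # ~0.88

Ask your provider whether that budget is measured per month or per year, and what the remedy is when it is exceeded.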

4. Security. Ask for a comprehensive review of your cloud provider’s security technology and processes. Specifically, ask about the following:

  • Application and data transportability. Can your provider give you the ability to export existing applications, data and processes into the cloud with ease? And can you import them back just as easily?
  • Data center physical security. How does the provider protect its physical data centers? Are they SAS 70 Type II data centers? Are trained and skilled data center operators on site?
  • Access and operations security.
  • Virtual data center security. Your provider must be clear about how access to the underlying physical machines is controlled. How are these machines managed, and who is able to access them?
  • In terms of scale and speed, most cloud efficiency derives from how the cloud is architected. Be sure to understand how the individual pieces (compute nodes, network nodes, storage nodes, and so on) are architected, and how they are secured and integrated.

  • Application and data security. To implement your policies, the cloud solution must let you define groups and roles with granular role-based access control, enforce proper password policies, and encrypt data both in transit and at rest (a minimal sketch of role-based checks follows this list).
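Here is that sketch of granular role-based access control; the role names, permissions and helper function are hypothetical and not tied to any particular provider’s API:

    # Hypothetical role-to-permission mapping; a real deployment would load this
    # from the cloud provider's identity or directory service.
    ROLE_PERMISSIONS = {
        "admin":     {"read", "write", "manage_users"},
        "developer": {"read", "write"},
        "auditor":   {"read"},
    }

    def is_allowed(user_roles, required_permission):
        """Return True if any of the user's roles grants the required permission."""
        return any(required_permission in ROLE_PERMISSIONS.get(role, set())
                   for role in user_roles)

    print(is_allowed(["developer"], "write"))        # True
    print(is_allowed(["auditor"], "manage_users"))   # False

The same mapping exercise, deciding up front which roles exist and what each may touch, is what makes the password and encryption policies above enforceable.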

5. Cost efficiencies. Cloud solutions should deliver cost efficiencies without upfront commitments, letting your spending grow only as your usage does. Unlike a managed service or a hosting solution, a cloud solution uses technology to automate the back-end systems, and therefore can operate large resource pools without immense human costs. That translates into real cost savings for you.

Despite recognizing the benefits of cloud computing, more than a handful of business leaders still have questions about cloud security and control. That is understandable. However, by adopting a collaborative approach and aligning their responsibilities with those of the cloud provider, these leaders can find solutions that offer the best of both worlds: they get the visibility and control they want and need, while giving their teams access to the huge performance gains only the cloud can provide.

Contact Nubifer for a free, no-obligation Cloud Migration consultation.

Has Your Organization Adopted a Cloud Migration Strategy?

There has been an increasing amount of research lately indicating that many organizations will move to the cloud in the short term, but there isn’t a lot of information detailing who is using it now and what they are using it for.

A published study by CDW reported that a number of enterprises are actually unaware that they are already using cloud applications and have a limited cloud adoption strategy.

It must be noted, though, that this does not mean these enterprises have no intention of moving to the cloud. It just means that these enterprises have not yet approached cloud computing strategically and have not implemented an organization-wide adoption strategy.

Cloud Computing Strategies

Another interesting note from the CDW report is the percentage of companies claiming to have an enterprise policy on the adoption of cloud computing: only 38%. This comes as a surprise, as the report also concludes that 84% of organizations have already installed, at a minimum, one cloud application.

In March 2011, more than 1,200 IT professionals were surveyed for the CDW 2011 Cloud Computing Tracking Poll, which drew some interesting conclusions. It found that these enterprises are uneasy with using public clouds and would rather use private clouds.

Cloud Application Usage

However, it is necessary to examine these statistics with more caution. As mentioned above, 84% of these organizations claim to have, at a bare minimum, one cloud application, yet they still do not consider themselves cloud users.

The reason behind this discrepancy has yet to be determined. In other words, organizations are still unclear as to whether and how the cloud can integrate with their current enterprise architecture.

This is underscored by the fact that only 42% of those surveyed are convinced that their operations and services can run efficiently in the cloud. Statistics show that the applications operated in the cloud most frequently are the following:

  • Commodity applications such as email (50% of cloud users)
  • File storage (39%)
  • Web and video conferencing (36% and 32%)
  • Online learning (34%)

Developing a Cloud Strategy

Eight industries were surveyed as part of the CDW Cloud Computing Tracking Poll in March 2011: small businesses, medium businesses, large businesses, the Federal government, State and Local governments, healthcare, higher education and K-12 public schools. The poll drew conclusions specific to each of the eight industries. It included 150 individuals from each industry who identified themselves as familiar with the current uses and future plans for cloud applications within their respective organizations.

Although there are various hurdles to consider prior to adoption, they can primarily be divided into four segments:

1. Adoption Strategy

Despite a number as high as 84% of organizations using at least one cloud-based application, only 25% of them have an organization-wide adoption strategy and recognize themselves as cloud users. Just over a third has a formal plan for cloud adoption.

2. ROI Considerations

Approximately 75% reported cost reductions after migrating applications to a cloud platform.

3. Security

Security is one of the primary obstacles, if not the primary obstacle, holding back both current and potential users. However, quite a number of users, including those who are currently using cloud applications, have yet to realize the full potential of the security capabilities available.

4. Future spending

Organizations need to determine which future hardware and software acquisitions can instead be migrated into a cloud ecosystem.

Cloud Computing Now

A lot can happen in five years, and this is especially true for the cloud industry. The study does not discuss in depth the difference between cloud computing and SaaS. However, it is likely that SaaS could be included in the study, as it defined cloud computing as a “model for enabling convenient, on-demand access to a shared pool of configurable computing resources.”

With this in mind, along with the recent Forrester research on IT spending, it is highly likely that the data CDW has outlined will be significantly different five years from now.

According to Forrester, a record number of organizations will be investing in SaaS technologies, which, broadly speaking, are a subset of cloud computing. The data includes a finding that 25% of enterprises examined have adopted a new cloud technology this year, with 14% using IaaS, 8% using PaaS, and 6% using business-process-as-a-service.

Does Your Organization Have a Cloud Migration Strategy?

In the end, the research provides some thought-provoking data. It shows that many companies are already leveraging the cloud without even knowing it.

Regardless of the potential ROI and efficiency gains offered by cloud computing, a significant number of companies have yet to seize the opportunity to leverage the scalability and efficiency of modern cloud applications.

Aside from this, according to the research, many companies find themselves without a coherent company-wide strategy for cloud adoption. This is important to note because it is no secret that a lack of planning can lead to disastrous results, and results like these take significant financial and organizational effort to fix.

If your organization is one of those lacking a coherent and comprehensive cloud adoption strategy, contact the Cloud accelerator experts at Nubifer to help guide the way. Nubifer partners with the leading vendors in order to provide unbiased cloud application architecture diagrams, white papers, security and compliance risk analysis and migration consulting services.


Start Me Up… Cloud Tools Help Companies Accelerate the Adoption of Cloud Computing

Article reposted from HPC in the Cloud Online Magazine. Article originally posted on Nov. 29, 2010:

For decision makers looking to maximize their impact on the business, cloud computing offers a myriad of benefits. At a time when cloud computing is still being defined, companies are actively researching how to take advantage of these new technology innovations for business automation, infrastructure reduction, and strategic utility based software solutions.

When leveraging “the cloud”, organizations can have on-demand access to a pool of computing resources that can instantly scale as demands change. This means IT — or even business users — can start new projects with minimal effort or interaction and only pay for the amount of IT resources they end up using.

The most basic division in cloud computing is between private and public clouds. Private clouds operate either within an organization’s DMZ or as managed compute resources operated for the client’s sole use by a third-party platform provider. Public clouds let multiple users segment resources from a collection of data-centers in order to satisfy their business needs. Resources readily available from the Cloud include:

● Software-as-a-Service (SaaS): Provides users with business applications run off-site by an application provider. Security patches, upgrades and performance enhancements are the application provider’s responsibility.

● Platform-as-a-Service (PaaS): Platform providers offer a development environment with tools to aid programmers in creating new or updated applications, without having to own the software or servers.

● Infrastructure-as-a-Service (IaaS): Offers processing power, storage and bandwidth as utility services, similar to an electric utility model. The advantage is greater flexibility, scalability and interoperability with an organization’s legacy systems.

Many Platforms and Services to Choose From:

Cloud computing is still in its infancy, with a host of platform and application providers serving up a plethora of Internet-based services ranging from scalable on-demand applications to data storage services to spam filtering. In the current IT environment, organizations’ technology ecosystems often have to operate cloud-based services individually, but cloud integration specialists and ISVs (independent software vendors) are becoming more prevalent and readily available to build on top of these emerging and powerful platforms.

Mashing together services provided by the world’s largest and best-funded companies, like Microsoft, Google, Salesforce.com, Rackspace, Oracle, IBM, HP and many others, creates an opportunity for companies to take hold, innovate, and build a competitive, cost-saving cloud of their own on the backs of these software giants’ evolving view of the cloud.

Cloud computing comes into focus only when you think about what IT always needs: a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing and maintaining new software. Cloud computing involves any subscription-based or pay-for-what-you-use service that extends your IT environment’s existing capabilities.

Before deciding whether an application is destined for the cloud, analyze your current cost of ownership. Examine more than just the original licenses; factor in ongoing expenses for maintenance, power, personnel and facilities. To start, many organizations build an internal private cloud for application development and testing, and decide from there whether it is cost-effective to scale fully into a public cloud environment.

“Bridging the Whitespace” between Cloud Applications

One company, Nubifer.com (whose name, in Latin, translates to ‘bringing the clouds’), approaches simplifying the move to the Cloud for its enterprise clients by leveraging a proprietary set of Cloud tools named Nubifer Cloud:Portal, Cloud:Connector and Cloud:Link. Nubifer’s approach with Cloud:Portal enables the rapid development of “enterprise cloud mash-ups,” providing rich dashboards for authentication, single sign-on and identity management. This increased functionality offers simple administration of accounts spanning multiple SaaS systems, and the ability to augment and quickly integrate popular cloud applications. Cloud:Connector seamlessly integrates data management and data sync services, and enables highly available data interchange between platforms and applications. And Cloud:Link provides rich dashboards for analytics and monitoring metrics, improving system governance and the audit trails of various SLAs (Service Level Agreements).

As a Cloud computing accelerator, Nubifer focuses on aiding enterprise companies in the adoption of emerging SaaS and PaaS platforms. Our recommended approach to an initial Cloud migration is to institute a “pilot program” tailored around your platform(s) of choice in order to fully iron out any integration issues that may arise prior to a complete roll-out.

Nubifer’s set of Cloud Tools can be hosted on Windows Azure, Amazon EC2 or Google AppEngine. The scalability offered by these Cloud platforms promotes an increased level of interoperability and availability, and a significantly lower financial barrier to entry than traditional on-premise application platforms.

Cloud computing’s many flavors of services and offerings can be daunting at first review, but if you take a close look at the top providers’ offerings, you will see an ever-increasing road map for on-boarding your existing or new applications to “the cloud.” Taking the first step is easy, and companies like Nubifer, which provide the platform services and the partner networks to aid your goals, are resourced and eager to support your efforts.

Developing Cloud Applications: Pattern Usage and Workload Modeling

For enterprise companies today, the process of determining one or more common application usage profiles for use in cloud platform performance testing is known as ‘application workload modeling.’ Cloud application workload modeling can be accomplished in a myriad of ways, and is a critical part of properly planning, developing and implementing successful cloud solutions.
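To make the idea concrete, here is a minimal sketch of a workload model with two made-up usage scenarios; in practice the weights and request rates would come from production traffic logs or pilot measurements:

    # Hypothetical workload model: each scenario gets a share of traffic and an
    # average request rate per active user per minute.
    SCENARIOS = {
        "browse_catalog": {"weight": 0.7, "requests_per_user_min": 4},
        "checkout":       {"weight": 0.3, "requests_per_user_min": 10},
    }

    def expected_requests_per_minute(concurrent_users):
        """Estimate total request volume for a given number of concurrent users."""
        return sum(concurrent_users * s["weight"] * s["requests_per_user_min"]
                   for s in SCENARIOS.values())

    print(expected_requests_per_minute(500))   # 2900.0 requests/minute at 500 users

Feeding numbers like these into a load-testing tool is what turns the best practices below into measurable performance targets.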

Some General Best Practices When Developing Cloud Applications

  • Understand your application usage patterns. New business processes are prime candidates for building out such apps. Siloed departmental initiatives often evolve into organizational best practices that get adopted by the entire enterprise, and because most of these programs are developed organically from the ground up, they can leverage the interoperability of the cloud and be scaled depending on demand. This also allows the app to be discontinued with minimal cost if the initiative isn’t deemed efficient or necessary to the organization.

  • Develop and Deploy Your Application. Creating a plan and a sequence of key metric drivers helps you keep your cloud deployment efforts on track. Start small, grow fast is a common mantra of many start-ups (including ours), the overwhelming majority of which are intimidated by the significant cost of on-premise infrastructure.
  1. Define and identify the objectives
  2. Document and identify primary usage scenarios
  3. Develop and determine navigation paths for key scenarios
  4. Design and determine individual user data and variances
  5. Determine the likelihood of such scenarios
  6. Identify peak target load levels
  7. Prepare and deploy the new cloud solution
  • Monitor Spiked Usage Patterns for “Common Utility Apps”. Within every organization, large or small, there’s at least one program or application that receives spiked usage during a certain time of the year, quarter or month. One example of this pattern is related to corporate tax software, as this app is lightly used for many months, but becomes a highly leveraged application during the end of the fiscal year tax calculation process. Another example is Human Resource Information Systems (HRIS) and the periodic need for employees to subscribe to new company health plans, insurance plans, etc. Other examples include e-commerce websites like Ebay and Buy.com which experience this “peak load” requirement during holiday or special sales seasons.

The common thread across all of these types of “on-demand” cloud apps is that their usage rate is relatively standard or predictable most of the time, but they periodically become the most heavily demanded resources. Utilizing a scalable cloud solution in this manner enables greater cost savings and ensures high availability of your enterprise business systems.

Application Load and Scalability, and Dynamically Reacting to Peak Load

As is most often associated with consumer-facing web apps, unpredictable load occurs when an inordinate amount of traffic is directed toward your site and the app is subsequently unable to meet the demand, causing the entire website to return a load error. Nubifer has noticed sudden spikes in traffic when organizations launch fresh marketing campaigns or receive extensive back-linking from prominent authority sites. Apps and sites especially susceptible to these load spikes are ideal candidates for the cloud, and the most prominent advantage of this methodology is the auto-scale, or on-demand, capability.

Monitoring, a Primary Key to Any Successful Cloud Deployment

Your cloud platform monitors the patterns of Internet traffic and the utilization of the infrastructure, adding server resources if the traffic crosses your preset threshold. The extra servers can be safely deactivated once the traffic subsides and the environment isn’t so demanding. This creates an extremely cost-efficient use case for leveraging a cloud platform for app and site hosting.
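The threshold logic described above can be sketched in a few lines; the metric names, limits and server bounds here are placeholders for whatever your cloud platform actually exposes:

    # Hypothetical auto-scaling rule: add a server when load per server crosses a
    # preset ceiling, remove one when traffic subsides, within fixed bounds.
    MIN_SERVERS, MAX_SERVERS = 2, 20
    SCALE_OUT_RPS, SCALE_IN_RPS = 800, 300   # requests/second per server

    def desired_server_count(current_servers, requests_per_second):
        load_per_server = requests_per_second / current_servers
        if load_per_server > SCALE_OUT_RPS and current_servers < MAX_SERVERS:
            return current_servers + 1
        if load_per_server < SCALE_IN_RPS and current_servers > MIN_SERVERS:
            return current_servers - 1
        return current_servers

    print(desired_server_count(4, 4000))   # 5: scale out under heavy traffic
    print(desired_server_count(4, 800))    # 3: scale back in as traffic subsides

In production you would let the platform’s own monitoring service evaluate a rule like this on a schedule rather than running it by hand.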

In contrast to unpredictable load, e-commerce sites commonly experience predictable spikes in traffic. For instance, when Amazon launches pre-ordering for the next novel in Oprah’s book club, it prepares its infrastructure to handle the peak load. Organizations of this size typically have a ballpark budget figure for the infrastructure cost because of its inherent predictability. Many occurrences in the public sector experience predictable bursts as well, such as electoral results and the release of the latest census reports.

Understanding Application Usage Pattern Trends

Within your business, these patterns manifest during a virtual company meeting or the initiation of compulsory online training for all employees; the primary difference between this pattern of usage and the first is that there may not be a periodic recurrence of this particular spike in resource demand.

It’s paramount that your IT personnel remain cognizant of these peak load times, whether they are predictable or not, as this is a key element for effectively leveraging a cloud solution that offers support and business intelligence data regarding peak load and latency issues.

How We Have Evolved to Solve for Peak Load and Usage Monitoring

Nubifer has solved these business scenarios by developing a robust set of tools and monitoring applications for private and public clouds, named Nubifer Cloud:Link. To learn more about Cloud:Link and Nubifer’s approach to enterprise cloud monitoring, visit CloudLink.pro.

Google Apps Receives Federal Certification for Cloud Computing

On July 26, Google released a version of its hosted suite of applications that meets the primary federal IT security certification, making a major leap forward in its push to drive cloud computing in the government. Nearly one year in the making, the new edition of Google Apps is the first portfolio of cloud applications to receive certification under the Federal Information Security Management Act (FISMA).

The government version of Google Apps has the same pricing and services as the premier edition, including Gmail, the Docs productivity site and the Talk instant-messaging application.

Google Business Development Executive David Mihalchik said to reporters, “We see the FISMA certification in the federal government environment as really the green light for federal agencies to move forward with the adoption of cloud computing for Google Apps.”

Federal CIO Vivek Kundra announced a broad initiative last September to embrace the cloud across the federal government as a way to reduce both the costs and the inefficiencies of redundant and underused IT deployments. The launch of that campaign was accompanied by the launch of Apps.gov, an online storefront for vendors to showcase their cloud-based services to federal IT managers. Apps.gov was revealed at an event at NASA’s Ames Research Center attended by Google co-founder Sergey Brin. At the same time, Google announced plans to develop a version of its popular cloud-based services that would meet the federal government’s security requirements.

Mike Bradshaw, director of Google’s Federal Division, said, “We’re excited about this announcement and the benefits that cloud computing can bring to this market.” Bradshaw continued to say that “the President’s budget has identified the adoption of cloud computing in the federal government as a way to more efficiently use the billions of dollars spent on IT annually.” Bradshaw added that the government spends $45 million in electrical costs alone to run its data-centers and servers.

Security concerns are consistently cited by proponents of modernizing the federal IT apparatus as the largest barrier to the adoption of cloud computing. Google is including extra security features to make federal IT buyers at agencies with more stringent security requirements feel more at ease. These extra security features are in addition to the 1,500 pages of documentation that came with Google’s FISMA certification.

Google will store government cloud accounts on dedicated servers within its data centers that will be segregated from its equipment that houses consumer and business data. Additionally, Google has committed to only use servers located in the continental U.S. for government cloud accounts. Google’s premier edition commercial customers have their data stored on servers in both the U.S. and European Union.

Mihalchik explained that security was the leading priority from the get-go in developing Google Apps for Government, saying, “We set out to send a signal to government customers that the cloud is ready for government.” He added, “Today we’ve done that with the FISMA certification, and also going beyond FISMA to meet some of the other specific security requirements of government customers.”

Thus far, Google has won government customers at the state and local levels, such as the cities of Los Angeles, California and Orlando, Florida. Mihalchik said that over one dozen federal agencies are in various stages of trialing or deploying elements of Google Apps. Several agencies are using Google anti-spam and anti-virus products to filter their email. Others, like the Department of Energy, are running pilot programs to evaluate the full suite of Google Apps against competitors’ offerings.

Find out more about cloud security and FISMA certification of Google Apps by talking to a Nubifer Consultant today.

Understanding the Cloud with Nubifer Inc. CTO, Henry Chan

The overwhelming majority of cloud computing platforms consist of dependable services relayed via data centers and built on servers with varying tiers of virtualization capabilities. These services are available anywhere that allows access to the networking platform. Clouds often appear as single points of access for all subscribers’ enterprise computing needs. Commercial cloud platform offerings typically commit to customers’ quality of service (QoS) requirements and offer service level agreements. Open standards are crucial to the expansion and acceptance of cloud computing, and open source software has laid the groundwork for many cloud platform implementations.

In the article that follows, Nubifer Inc. CTO Henry Chan summarizes his view of what cloud computing means, its benefits and where it’s heading in the future:

Cloud computing explained:

The “cloud” in cloud computing refers to your network’s Internet connection. Cloud computing is essentially using the Internet to perform tasks like email hosting, data storage and document sharing which were traditionally hosted on premise.

Understanding the benefits of cloud computing:

Cloud computing’s myriad benefits depend on your organizational infrastructure needs. If your enterprise shares a large number of applications between a varying number of office locations, it would be beneficial to store the apps on a virtual server. Web-based application hosting can also save time for people traveling without the ability to connect back to the office, because they can access everything over a shared virtual private network (VPN).

Examples of cloud computing:

Hosted email (such as Gmail or Hotmail), online data back-up, online data storage, any Software-as-a-Service (SaaS) application (such as a cloud-hosted CRM from vendors like Salesforce, Zoho or Microsoft Dynamics) and accounting applications are examples of applications that can be hosted in the cloud. By hosting these applications in the cloud, your business can benefit from the interoperability and scalability that cloud computing and SaaS services offer.

Safety in the cloud:

Although there are some concerns over the safety of cloud computing, the reality is that data stored in the cloud can be just as secure as the vast majority of data stored on your internal servers. The key is to implement the necessary solutions to ensure that the proper level of encryption is applied to your data while it travels to and from your cloud storage container, as well as while it is being stored. Designed properly, this can be as safe as any other solution you could implement locally. The leading cloud vendors all currently maintain compliance with standards such as Sarbanes-Oxley, SAS 70, FISMA and HIPAA.
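To illustrate encrypting data on your side of the wire before it ever reaches a cloud storage container, here is a minimal sketch using the third-party `cryptography` Python package; key management and the actual upload call are out of scope and shown only as placeholders:

    from cryptography.fernet import Fernet

    # Generate a symmetric key, or better, load one from your key-management system.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record = b"customer-id=1001;notes=sensitive"   # placeholder payload
    encrypted = cipher.encrypt(record)             # this is what travels and is stored

    # ... upload `encrypted` to your cloud storage container of choice ...

    assert cipher.decrypt(encrypted) == record     # only key holders can read it back

Combined with the transport encryption (TLS) your provider already offers, this covers data both in transit and at rest.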

Cloud computing for your enterprise:

To determine which layer of cloud computing is optimally suited for your organization, it is important to thoroughly evaluate your organizational goals as it relates to your IT ecosystem. Examine how you currently use technology, current challenges with technology, how your organization will evolve technologically in the years to come, and what scalability and interoperability will be required going forward. After a careful gap analysis of these determinants, you can decide what types of cloud-based solutions will be optimally suited for your organizational architecture.

Cloud computing, a hybrid solution:

The overwhelming trend in 2010 and 2011 is to move non-sensitive data and applications into the cloud while keeping trade secrets behind your enterprise firewall, as many organizations are not comfortable hosting all their applications and hardware in the cloud. The trick to making cloud computing work for your business is to understand which applications should be kept local and which would benefit most from leveraging the scalability and interoperability of the cloud ecosystem.

Will data be shared with other companies if it is hosted in the cloud:

Short answer: NO! Reputable SaaS and cloud vendors will make sure that your data is properly segmented according to the requirements of your industry.

Costs of cloud computing:

Leading cloud-based solutions charge a monthly fee for application usage and data storage, but you may already be making a comparable outlay, primarily in the form of hardware maintenance and software fees, some of which could be eliminated by moving to the cloud.

Cloud computing makes it easy for your company’s Human Resource software, payroll and CRM to co-mingle with your existing financial data, supply chain management and operations installation, while simultaneously reducing your capital requirements for these systems. Contact a Nubifer representative today to discover how leveraging the power of cloud computing can help your business excel.

Confidence in Cloud Computing Expected to Spur Economic Growth

The dynamic and flexible nature of cloud computing, software-as-a-service and platform-as-a-service may help organizations in their recovery from the current economic downturn, according to more than two thirds of the IT and business decision makers who participated in a recent annual study by Vanson Bourne, an international research firm. Vanson Bourne surveyed over 600 IT and business decision makers across the United States, United Kingdom and Singapore. Of the countries sampled, Singapore is leading the shift to the cloud, with 76 percent of responding enterprises using some form of cloud computing. The U.S. follows with 66 percent, with the U.K. at 57 percent.

This two-year study of cloud computing reveals that IT decision makers are very confident in cloud computing’s ability to deliver within budget and offer CapEx savings. Commercial and public sector respondents also predict cloud use will help decrease overall IT budgets by an average of 15 percent, with others expecting savings of as much as 40 percent.

“Scalability, interoperability and pay-as-you-go elasticity are moving many of our clients toward cloud computing,” said Chad Collins, CEO at Nubifer Inc., a strategic Cloud and SaaS consulting firm. “However, it’s important, primarily for our enterprise clients, to work with a Cloud provider that not only delivers cost savings, but also effectively integrates technologies, applications and infrastructure on a global scale.”

A lack of access to IT capacity is clearly labeled as an obstacle to business progress, with 76 percent of business decision makers reporting they have been prevented from developing or piloting projects due to the cost or constraints within IT. For 55 percent of respondents, this remains an issue.

Confidence in cloud continues to trend upward — 96 percent of IT decision makers are as confident or more confident in cloud computing being enterprise ready now than they were in 2009. In addition, 70 percent of IT decision makers are using or plan to be using an enterprise-grade cloud solution within the next two years.

The ability to scale resources up and down in order to manage fluctuating business demand was the most cited benefit influencing cloud adoption in the U.S. (30 percent) and Singapore (42 percent). The top factor driving U.K. adoption is lower cost of total ownership (41 percent).

Security concerns remain a key barrier to cloud adoption, with 52 percent of respondents who do not leverage a cloud solution citing security of sensitive data as a concern. Yet 73 percent of all respondents want cloud providers to fully manage security or to fully manage security while allowing configuration change requests from the client.

Seventy-nine percent of IT decision makers see the cloud as a straightforward way to integrate with corporate systems. For more information on how to leverage a cloud solution inside your environment, contact a Nubifer.com representative today.

Taking a Closer Look at the Power of Microsoft Windows Azure AppFabric

Microsoft’s Windows Azure runs Windows applications and stores advanced applications, services and data in the cloud. This baseline understanding of Windows Azure, coupled with the practicality of using computers in the cloud, makes leveraging the acres of Internet-accessible servers on offer today an obvious choice, especially when the alternative of buying and maintaining your own data center space and hardware can quickly become costly. For some applications, both code and data might live in the cloud, where the systems they use are managed and maintained by someone else. On-premise applications, which run inside an organization, might store data in the cloud or rely on other cloud infrastructure services. Ultimately, making use of the cloud’s capabilities provides a variety of advantages.

Windows Azure applications and on-premises applications can access the Windows Azure storage service using a RESTful approach. The storage service allows storing binary large objects (blobs), provides queues for communication between components of a Windows Azure application, and also offers a form of tables with a simple query language. The Windows Azure platform also provides SQL Azure for applications that need traditional relational storage. An application using the Windows Azure platform is free to use any combination of these storage options.
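As a rough sketch of what that RESTful approach looks like from any language, here is an illustrative blob upload in Python; the account, container and shared access signature below are placeholders, and real requests must carry valid credentials:

    import requests

    ACCOUNT, CONTAINER, BLOB = "myaccount", "mycontainer", "report.csv"
    SAS_TOKEN = "?sv=...&sig=..."   # placeholder shared access signature with write rights

    url = f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}/{BLOB}{SAS_TOKEN}"
    with open("report.csv", "rb") as payload:
        response = requests.put(
            url,
            data=payload,
            headers={"x-ms-blob-type": "BlockBlob"},   # store the payload as a block blob
        )
    response.raise_for_status()   # expect 201 Created on success

Because it is plain HTTP, the same operation can be issued from .NET, Java, Ruby or a command-line tool without any platform-specific SDK.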

One obvious need is communication between applications hosted in the cloud and those hosted on-premise. Windows Azure AppFabric provides a Service Bus for bi-directional application connectivity and Access Control for federated, claims-based access control.

Service Bus for Azure AppFabric

The primary feature of the Service Bus is message “relaying” to and from the Windows Azure cloud to your software running on-premise, bypassing any firewalls, network address translation (NAT) or other network obstacles. The Service Bus can also help negotiate direct connections between applications. Meanwhile, the Access Control feature provides a claims-based access control mechanism for applications, making federation easier to tackle and allowing your applications to trust identities provided by other systems.

A .NET developer SDK is available that simplifies integrating these services into your on-premises .NET applications. The SDK integrates seamlessly with Windows Communication Foundation (WCF) and other Microsoft technologies to build on pre-existing skill sets as much as possible. These SDKs have been designed to provide a first-class .NET developer experience, but it is important to point out that they each provide interfaces based on industry-standard protocols. This makes it possible for applications running on any platform to integrate with them through REST, SOAP and WS-* protocols.

SDKs for Java and Ruby are currently available for download. Combining them with the underlying Windows Azure platform service produces a powerful, cloud-based environment for developers.

Access Control for the Azure AppFabric

Over the last decade, the industry has been moving toward an identity solution based on claims. A claims-based identity model allows the common features of authentication and authorization to be factored out of your code, at which point such logic can then be centralized into external services that are written and maintained by subject matter experts in security and identity. This is beneficial to all parties involved.

Access Control is a cloud-based service that does exactly that. Rather than writing your own custom user-account and role database, customers can let AC orchestrate the authentication and most of the user authorization. With a single code base in your application, customers can authorize access to both enterprise clients and simple clients. Enterprise clients can leverage ADFS V2 to allow users to authenticate using their Active Directory logon credentials, while simple clients can establish a shared secret with AC to authenticate directly with AC.

The extensibility of Access Control allows for easy integration of authentication and authorization through many identity providers without the need for refactoring code. As Access Control evolves, support for authentication against Facebook Connect, Google Accounts, and Windows Live ID can be quickly added to an application. To reiterate: over time, it will be easy to authorize access to more and more users without having to change the code base.

When using AC, the user must obtain a security token from AC in order to log in; this token is similar to a signed email message from AC to your service containing a set of claims about the user’s identity. AC doesn’t issue a token unless the user first proves his or her identity, either by authenticating with AC directly or by presenting a security token from another trusted issuer (such as ADFS) that has authenticated that user. So by the time the user presents a token to the service, assuming it is validated, it is safe to trust the claims in the token and begin processing the user’s request.
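The “signed set of claims” idea can be boiled down to a short sketch; a real AC token uses standardized formats and protocols (see the Standards section below), but the shared-secret signature and made-up claims here convey the core mechanic:

    import hashlib, hmac, json

    SHARED_SECRET = b"secret-shared-by-issuer-and-service"   # placeholder

    def issue_token(claims):
        """Issuer side: serialize the claims and sign them."""
        body = json.dumps(claims, sort_keys=True)
        signature = hmac.new(SHARED_SECRET, body.encode(), hashlib.sha256).hexdigest()
        return body + "." + signature

    def validate_token(token):
        """Service side: reject tampered tokens, then trust the claims."""
        body, signature = token.rsplit(".", 1)
        expected = hmac.new(SHARED_SECRET, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            raise ValueError("invalid signature; reject the request")
        return json.loads(body)

    token = issue_token({"name": "alice", "role": "purchaser"})
    print(validate_token(token))   # {'name': 'alice', 'role': 'purchaser'}

The service never sees a password; it only verifies the signature and reads the claims, which is exactly the division of labor AC provides at much larger scale.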

Single sign-on is easier to achieve under this model, so a customer’s service is no longer responsible for:

• Authenticating users
• Storing user accounts and passwords
• Calling to enterprise directories to look up user identity details
• Integrating with identity systems from other platforms or companies
• Delegation of authentication (a.k.a. federation) with other security realms

Under this model, a customer’s service can make identity-related decisions based on claims about the user made by a trusted issuer like AC. This could be anything from simple service personalization with the user’s first name, to authorizing the user to access higher-valued features and resources in the customer’s service.

Standards

Because single sign-on and claims-based identity have been evolving since 2000, there are myriad ways of doing it. There are competing standards for token formats as well as competing standards for the protocols used to request those tokens and send them to services. This is what makes AC so useful: over time, as it evolves to support a broader range of these standards, your service will benefit from broader access to clients without having to know the details of these standards, much less worry about trying to implement them correctly.

Security Assertion Markup Language (SAML) was the first standard. SAML specified an XML format for tokens (SAML tokens) in addition to protocols for performing Web App/Service single sign-on (these protocols are sometimes referred to inside Microsoft as SAMLP, for the SAML protocol suite). WS-Federation and related WS-* specifications also define a set of protocols for Web App/Service single sign-on, but they do not restrict the token format to SAML, although it is practically the most common format used today.

To Summarize

The Service Bus and Access Control constituents of the Windows Azure platform provide key building-block services that are vital for building cloud-based or cloud-aware applications. Service Bus enables customers to connect existing on-premises applications with new investments being built for the cloud. Those cloud assets will be able to easily communicate with on-premises services through the network traversal capabilities provided by the Service Bus relay.

Overall, the Windows Azure platform represents a comprehensive Microsoft strategy designed to make it easy for Microsoft developers to realize the opportunities inherent to cloud computing. The Service Bus and Access Control offer a key component of the platform strategy, designed specifically to aid .NET developers in making the transition to the cloud. These services provide cloud-centric building blocks and infrastructure in the areas of secure application connectivity and federated access control.

For more information on the Service Bus & Access Control, please contact a Nubifer representative or visit these Microsoft sponsored links:

• An Introduction to Windows Azure platform AppFabric for Developers
o http://go.microsoft.com/fwlink/?LinkID=150833

• A Developer’s Guide to Service Bus in Windows Azure platform AppFabric
o http://go.microsoft.com/fwlink/?LinkID=150834

• A Developer’s Guide to Access Control in Windows Azure platform AppFabric
o http://go.microsoft.com/fwlink/?LinkID=150835

• Windows Azure platform
o http://www.microsoft.com/windowsazure/

• Service Bus and Access Control portal
o http://netservices.azure.com/

A Guide to Choosing CRM Software

Customer Relationship Management (CRM) software lets you effectively manage your business, but choosing the right software is often a daunting process. This nubifer.com blog is aimed at alleviating some of the more challenging decision-making processes.

CRMs offer several levels of organization to help strengthen and deepen customer relationships, ranging from basic contact management to tracking and managing sales, or even tweets on Twitter. The Return on Investment (ROI) is usually an increase in sales, and should also translate to better customer service. The following guide will help you through the process, from pinpointing your customer relationship needs to ultimately selecting a CRM software application.

Choosing CRM Software: Why Invest in a CRM?

CRM is a term used to describe methodologies, software and Internet capabilities designed to help businesses effectively manage customer relationships. Traditionally, CRMs have been seen as an automated way to track and maintain client contact information, but the CRMs of today are faster, smarter and highlight the most current computing technologies available.

In this way, the CRM can be used as a tool to set and measure sales goals, devise, deliver and track email marketing campaigns up through and including interfacing with social media accounts. The importance of CRMs in the marketplace has grown as well, and with sales, marketing and customer service on the playing field, an enterprise can match customer needs with company offerings, thus becoming more efficient and profitable.

Raju Vegesna, Executive Evangelist for Zoho, an online CRM company based in Pleasanton, California, adds that beyond managing customer relations, “A CRM system comes in handy in such situations as it helps you aggregate all customer related information in a single place,” which is crucial for a small business owner trying to keep track of contracts, invoices and emails.

Vegesna added that if small business owners frequently personalize and email customers manually–or if they are unaware of the status of each customer in the pipeline–they will likely need a CRM system.

Chad Collins, CEO of Nubifer Inc., a Cloud, SaaS and CRM strategic advisory company based in San Diego, California, says that, essentially, CRMs offer “business functionality at your fingertips that will save a ton of time for front-line personnel by streamlining your varied sales processes.”

Collins suggests a top-down approach, in which management sets the example by using the tool, as a way to encourage employee buy-in. Collins also suggests having a designated go-to employee (someone who is not the boss) who really knows the ins and outs of the system, called the “CRM Evangelist.” He also suggests offering rewards and incentives to help employees approach the new system without fear.

Cost is the next major challenge to CRM success. According to Collins, it can cost anywhere from $300 to $2,000 per user per year to implement a CRM. “The CEO needs to understand the cost of CRM goes beyond simple licensing; rather, it encompasses the license, training, and whatever business process changes they need to make,” says Collins.

According to Chad Collins of Nubifer Inc., there are three main areas to consider when evaluating the pros and cons of a CRM: the platform, ease of implementation, and vendor strengths and weaknesses.

Platform

  • How much flexibility is there in the software/product so the company can create their own process?
  • How easy is it to configure the software or to get started with on-demand (Internet-based) solutions?
  • How easy is it to integrate data from other sources into the software or on-demand solution?
  • How scalable is the software or on-demand solution?
  • Will it deliver what you need it to deliver in terms of performance?
  • Will it offer portals or front end screens to help you and your colleagues to collaborate with one another?

Ease of Implementation

  • Are you looking for on-demand, SaaS, cloud, and Internet-based solutions?
  • Thin or thick clients: Will you have the software on your machine when you travel or do you need to dial up using a browser?
  • How much mobility do you want? Can it be done on a laptop or can it be done using mobile phones?

Vendor Strength and Weakness

  • How long has the company been around?
  • Where have they gone in terms of their vertical thrust? Do they specialize in just one sector?
  • What computing platform are they using to make sure it’s compatible with your system?
  • What’s their domain expertise in terms of your particular business area?
  • What professional services do they offer to help you get up and running?
  • What partnerships do they have with companies like Microsoft Outlook to work with your CRM?

It will be easier to determine what technology is the best fit for your company once these questions are answered.

Choosing CRM Software: Social CRMs

The latest trend to emerge in CRM is social networking, but industry executives are still trying to figure out whether or not small businesses need their CRM to track their social networking. Collins of Nubifer Inc. says that the advantages of social CRM, for those that are ready to embrace it, are three-fold:

  1. The ability to connect with people using free (or very cheap) means.
  2. The ability to find those that you want to do business with on social networks and learn what’s important to them using monitoring tools.
  3. The ability to create a message that responds directly to what customer challenges are right then and there.

Collins added, “What’s [also] really important today is leveraging the web and creating opportunities to engage people. Traditional CRMs weren’t built for that. Now with online social networks you can create content that works for you 24/7 and builds leads for you. People can find what you’re talking about and ask you questions. You can create more online relationships than you can face to face.”

Collins gives an example: “If you have a large group of people on Twitter talking about a specific problem they are trying to solve, you want to be able to grab those Tweets or Facebook posts and route them to the appropriate person in your company so the customer can get the answer they require directly from the source.”

When you are ready to take the leap, there is a CRM available to fit your needs, whether you need to simply organize contact information or require robust assistance in meeting and tracking your sales goal. For more information regarding choosing the right CRM for your business contact a Nubifer Consultant.

Jabber Now Supported on Zoho Chat

As of Wednesday, August 4, the ever-popular Jabber protocol is supported on Zoho Chat. This enables users to log in with their personal Zoho credentials and chat with colleagues and personnel if the enterprise network contains a Jabber client. This latest Zoho update interoperates with a multitude of Jabber clients, including desktop, web and mobile clients.

HIGHLIGHTS

  • Zoho Chat now supports Jabber. Users can connect to Zoho Chat from any desktop/web/mobile clients
  • Zoho Chat is a multi-protocol IM App that is integrated across all Zoho Apps
  • Zoho Chat can also be used for support when embedded on websites
  • Supports notifications on the desktop clients (for document sharing, missed messages)

In Zoho’s previous release, Jabber was supported on the client side, permitting users to connect to other Jabber networks from the Zoho Chat client. With this most recent update, Zoho Chat supports the Jabber protocol on the server side, allowing you to connect to Zoho from any chat client (encrypted connections only) and creating many interesting business use cases.

If your business environment is anything like ours here at Nubifer.com, you need to remain constantly connected to your partners, clients and colleagues. This newest release from Zoho allows users to log in on their mobile device and run the application in the background. From Jabber clients, Zoho Chat users can view the status of other connected members, view their profile photos, receive ‘typing’ notifications, set a user’s current status and much more. Users will also be notified whenever a connection tries to establish a chat (if the mobile app supports push notifications).

‘Idle Detection’ is also supported with this newest Zoho Chat release. A primary feature of the Zoho Chat Jabber support release is the ability to retrieve Zoho Groups (personal groups) from a user’s account and initiate a group chat from the subscriber’s preferred desktop client.

Site Support and Notifications

A highly sought-after feature, from us here at Nubifer as well as from other Zoho users, was the ability to handle customers’ chat requests from a desktop client. With this recent release, Zoho Chat can now be embedded on a subscriber’s website to receive support requests, and users can receive notifications from their website visitors in their preferred desktop client. Once an invitation to chat is received, a user can accept it and initiate a chat session with the website visitor.

Available on users’ desktop clients, Zoho Chat now contains a notification system that alerts a subscriber when a document is shared, when someone responds to a topic in Zoho Discussions, or when a chat is missed. Please contact a Nubifer.com representative to learn more about Zoho’s multitude of Cloud-hosted office applications.

Here is what you need to try Zoho Chat on your favorite chat client (a minimal scripted-client sketch follows the list):

  • Protocol: XMPP/Jabber
  • Username: Zoho username
  • Password: Your Zoho Password
  • Domain: zoho.com
  • Jabber ID: username@zoho.com
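Connecting from a scripted client boils down to supplying those same settings. Here is that sketch, using the open-source slixmpp Python library; the username and password are placeholders, and an encrypted connection is assumed as noted above:

    from slixmpp import ClientXMPP

    class ZohoChatClient(ClientXMPP):
        def __init__(self, jid, password):
            super().__init__(jid, password)
            self.add_event_handler("session_start", self.on_start)

        async def on_start(self, event):
            self.send_presence()      # appear online to your contacts
            await self.get_roster()   # fetch the contact list

    # Jabber ID is your Zoho username @ zoho.com; the password is your Zoho password.
    client = ZohoChatClient("username@zoho.com", "your-zoho-password")
    client.connect()
    client.process(forever=False)

Any other Jabber-capable desktop or mobile client can be configured the same way using the parameters listed above.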

For more information about Zoho Apps, please visit nubifer.com

Rackspace Announces Plans to Collaborate with NASA and Other Industry Leaders on OpenStack Project

On July 19, Rackspace Hosting, a specialist in the hosting and cloud computing industry, announced the launch of OpenStack™, an open-source cloud platform designed to advance the emergence of technology standards and cloud interoperability. Rackspace is donating the code that fuels its Cloud Files and Cloud Servers public-cloud offerings to the OpenStack project, which will additionally incorporate technology that powers the NASA Nebula Cloud Platform. NASA and Rackspace plan on collaborating on joint technology development and leveraging the efforts of open-source software developers on a global scale.

NASA’s Chief Technology Officer for IT Chris C. Kemp said of the announcement, “Modern scientific computation requires ever increasing storage and processing power delivered on-demand. To serve this demand, we built Nebula, an infrastructure cloud platform designed to meet the needs of our scientific and engineering community. NASA and Rackspace are uniquely positioned to drive this initiative based on our experience in building large scale cloud platforms and our desire to embrace open source.”

OpenStack is poised to feature several cloud infrastructure components, including a fully distributed object store based on Rackspace Cloud Files (currently available at OpenStack.org). A scalable compute-provisioning engine based on the NASA Nebula cloud technology and Rackspace Cloud Servers technology is the next component planned for release, anticipated to be available sometime in late 2010. Organizations using these components would be able to turn physical hardware into scalable and extensible cloud environments using the same code currently in production serving large government projects and tens of thousands of customers.

“We are founding the OpenStack initiative to help drive industry standards, prevent vendor lock-in and generally increase the velocity of innovation in cloud technologies. We are proud to have NASA’s support in this effort. Its Nebula Cloud Platform is a tremendous boost to the OpenStack community. We expect ongoing collaboration with NASA and the rest of the community to drive more-rapid cloud adoption and innovation, in the private and public spheres,” Lew Moorman, President and CSO at Rackspace, said at the time of the announcement.

Both organizations have committed to use OpenStack to power their cloud platforms, while Rackspace will dedicate open-source developers and resources to support adoption of OpenStack among service providers and enterprises. Rackspace hosted an OpenStack Design Summit in Austin, Texas from July 13 to 16, in which over 100 technical advisors, developers and founding members teamed up to validate the code and ratify the project roadmap. Among the more than 25 companies represented at the Design Summit were Autonomic Resources, AMD, Cloud.com, Citrix,  Dell, FathomDB, Intel, Limelight, Zuora, Zenoss, Riptano and Spiceworks.

“OpenStack provides a solid foundation for promoting the emergence of cloud standards and interoperability. As a longtime technology partner with Rackspace, Citrix will collaborate closely with the community to provide full support for the XenServer platform and our other cloud-enabling products,” said Peter Levine, SVP and GM, Datacenter and Cloud Division, Citrix Systems.

Forrest Norrod, Vice President and General Manager of Server Platforms at Dell, added, “We believe in offering customers choice in cloud computing that helps them improve efficiency. OpenStack on Dell is a great option to create open source enterprise cloud solutions.”

Dell and Microsoft Partner Up with the Windows Azure Platform Appliance

At Microsoft’s Worldwide Partner Conference on July 12, Dell and Microsoft announced a strategic partnership in which Dell will adopt the Windows Azure platform appliance as part of its Dell Services Cloud to develop and deliver next-generation cloud services. With the Windows Azure platform, Dell will be able to deliver private and public cloud services for its enterprise, public, small and medium-sized business customers. Additionally, Dell will develop a Dell-powered Windows Azure platform appliance for enterprise organizations to run in their data centers.

So what does this mean exactly? By implementing the limited production release of the Windows Azure platform appliance to host public and private clouds for its customers, Dell will leverage its vertical industry expertise to offer solutions for the speedy delivery of flexible application hosting and IT operations. In addition, Dell Services will provide application migration, advisory, integration and implementation services.

Microsoft and Dell will work together to develop a Windows Azure platform appliance for large enterprise, public and hosting customers to deploy to their own data centers. The resulting appliance will leverage infrastructure from Dell combined with the Windows Azure platform.

This partnership shows that both Dell and Microsoft recognize that more organizations can reap the benefits of the flexibility and efficiency of the Windows Azure platform. Both companies understand that cloud computing allows IT to increase responsiveness to business needs and also delivers significant efficiencies in infrastructure costs. The result will be an appliance to power a Dell Platform-as-a-Service (PaaS) Cloud.

The announcement with Dell occurred on the same day that Microsoft announced the limited production release of the Windows Azure platform appliance, a turnkey cloud platform for large service providers and enterprises to run in their own data centers. Initial partners (like Dell) and customers using the appliance in their data centers will have the scale-out application platform and data center efficiency of Windows Azure and SQL Azure that Microsoft currently provides.

Since the launch of the Windows Azure platform, Dell Data Center Solutions (DCS) has been working with Microsoft to build out and power the platform. Dell will use the insight gained as a primary infrastructure partner for the Windows Azure platform to make certain that the Dell-powered Windows Azure platform appliance is optimized for power and space, reducing ongoing operating costs while sustaining the performance of large-scale cloud services.

A top provider of cloud computing infrastructure, Dell boasts a client roster that includes 20 of the 25 most heavily-trafficked Internet sites and four of the top global search engines. The company has been custom-designing infrastructure solutions for the top global cloud service providers and hyperscale data center operations for the past three years, and in that time has developed expertise in the specific needs of organizations in hosting, HPC, Web 2.0, gaming, energy, social networking and SaaS, plus public and private cloud builders.

Speaking about the partnership with Microsoft, president of Dell Services Peter Altabef said, “Organizations are looking for innovative ways to use IT to increase their responsiveness to business needs and drive greater efficiency. With the Microsoft partnership and the Windows Azure platform appliance, Dell is expanding its cloud services capabilities to help customers reduce their total costs and increase their ability to succeed. The addition of the Dell-powered Windows Azure platform appliance marks an important expansion of Dell’s leadership as a top provider of cloud computing infrastructure.”

Dell Services delivers vertically-focused cloud solutions with the combined experience of Dell and Perot Systems. Currently, Dell Services delivers managed and Software-as-a-Service support to over 10,000 customers across the globe. Additionally, Dell boasts a comprehensive suite of services designed to help customers leverage public and private cloud models. With the new Dell PaaS powered by the Windows Azure platform appliance, Dell will be able to offer customers an expanded suite of services including transformational services to help organizations move applications into the cloud and cloud-based hosting.

Summarizing the goal of the partnership with Dell, Bob Muglia, president of Microsoft Server and Tools, said at the Microsoft Worldwide Partner Conference on July 12, “Microsoft and Dell have been building, implementing and operating massive cloud operations for years. Now we are extending our longstanding partnership to help usher in the new era of cloud computing, by giving customers and partners the ability to deploy Windows Azure platform in their datacenters.”


Microsoft Releases Security Guidelines for Windows Azure

Industry analysts have praised Microsoft for doing a respectable job at ensuring the security of its Business Productivity Online Services, Windows Azure and SQL Azure. That said, deploying applications to the cloud requires additional considerations to ensure that data remains in the correct hands.

As a result of these concerns, Microsoft released new Security Development Lifecycle guidance in early June. Microsoft’s Security Development Lifecycle, a statement of best practices for those building Windows and .NET applications, has been updated over the years to ensure the security of those apps; the new guidance focuses on how to build security into Windows Azure applications.

Principal security program manager of Microsoft’s Security Development Lifecycle team Michael Howard warns that those practices were not, however, designed for the cloud. Speaking in a pre-recorded video statement embedded in a blog entry, Howard says, “Many corporations want to move their applications to the cloud but that changes the threats, the threat scenarios change substantially.”

Titled “Security Best Practices for Developing Windows Azure Applications,” the 26-page white paper is divided into three sections: the first describes the security technologies that are part of Windows Azure (including the Windows Identity Foundation, Windows Azure AppFabric Access Control Service and Active Directory Federation Services 2.0—a core component for providing common logins to Windows Server and Azure); the second explains how developers can apply the various SDL practices to build more secure Windows Azure applications, outlining threats like namespace configuration issues and recommending data security practices such as generating shared-access signatures and using HTTPS in the request URL; and the third is a matrix that identifies various threats and how to address them.
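
To make the second section more concrete, the general pattern behind a shared-access signature is an HMAC-SHA256 computed over a canonical “string to sign,” base64-encoded and appended to an HTTPS request URL. The sketch below only illustrates that pattern; the field names, ordering and query parameters are simplified stand-ins, and the exact canonical format is defined in the white paper and the Windows Azure storage documentation.

    import base64
    import hashlib
    import hmac

    def make_shared_access_url(base_url, permissions, start, expiry, resource, account_key_b64):
        # Build a simplified "string to sign" from the grant's fields (illustrative ordering).
        string_to_sign = "\n".join([permissions, start, expiry, resource])
        key = base64.b64decode(account_key_b64)
        signature = base64.b64encode(
            hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
        ).decode("ascii")
        # Always issue the request over HTTPS so the signature and payload are protected in transit.
        return "%s?sp=%s&st=%s&se=%s&sig=%s" % (base_url, permissions, start, expiry, signature)

    url = make_shared_access_url(
        "https://myaccount.blob.core.windows.net/container/blob",
        "r", "2010-06-01T00:00:00Z", "2010-06-02T00:00:00Z",
        "/myaccount/container/blob",
        base64.b64encode(b"demo-storage-key").decode("ascii"))
    print(url)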

Says Howard, “Some of those threat mitigations can be technologies you use from Windows Azure and some of them are threat mitigations that you must be aware of and build into your application.”

Security is a major concern, and Microsoft has addressed many key issues concerning security in the cloud. Phil Lieberman, president of Lieberman Software Corp., a Microsoft Gold Certified Partner specializing in enterprise security, says, “By Microsoft providing extensive training and guidance on how to properly and securely use its cloud platform, it can overcome customer resistance at all levels and achieve revenue growth as well as dominance in this new area. This strategy can ultimately provide significant growth for Microsoft.”

Agreeing with Lieberman, Scott Matsumoto, a principal consultant with the Washington, D.C.-based consultancy firm Cigital Inc., which specializes in security, says, “I especially like the fact that they discuss what the platform does and what’s still the responsibility of the application developer. I think that it could be [wrongly] dismissed as a rehash of other information or incomplete—that would be unfair.” To find more research on Cloud Security, please visit Nubifer.com.

Don’t Underestimate a Small Start in Cloud Computing

Although many predict that cloud computing will forever alter the economics and strategic direction of corporate IT, it is likely that the impact of the cloud will continue to come largely from small projects. Some users and analysts say that these small projects, which do not involve complex, enterprise-class, computing-on-demand services, are what to look out for.

David Tapper, outsourcing and offshoring analyst for IDC says, “What we’re seeing is a lot of companies using Google (GOOG) Apps, Salesforce and other SaaS apps, and sometimes platform-as-a-service providers, to support specific applications. A lot of those services are aimed at consumers, but they’re just as relevant in business environments, and they’re starting to make it obvious that a lot of IT functions are generic enough that you don’t need to build them yourself.” New enterprise offerings from Microsoft, such as Microsoft BPOS, have also shown up on the scene with powerful SaaS features to offer businesses.

According to Tapper, the largest representation of mini-cloud computing is small- and mid-sized businesses using commercial versions of Google Mail, Google Apps and similar ad hoc or low-cost cloud-based applications. With that said, larger companies are doing the exact same thing. “Large companies will have users whose data are confidential or who need certain functions, but for most of them, Google Apps is secure enough. We do hear about some very large cloud contracts, so there is serious work going on. They’re not the rule though,” says Tapper.

First Steps into the Cloud

A poll conducted by the Pew Research Center’s Internet & American Life Project found that 71 percent of the “technology stakeholders and critics” believe that most people will do their work from a range of computing devices using Internet-based applications as their primary tools by 2020.

Respondents were picked from technology and analyst companies for their technical savvy and as a whole believe cloud computing will dominate information transactions by the end of the decade. The June report states that cloud computing will be adopted because of its ability to provide new functions quickly, cheaply and from anywhere the user wishes to work.

Chris Wolf, analyst at Gartner, Inc.’s Burton Group, thinks that while this isn’t unreasonable, it may be a little too optimistic. Wolf says that even fairly large companies sometimes use commercial versions of Google Mail or instant messaging, but it is a different story when it comes to applications requiring more fine tuning, porting, communications middleware or other heavy work to run on public clouds, or data that has to be protected and documented.

Says Wolf, “We see a lot of things going to clouds that aren’t particularly sensitive–training workloads, dev and test environments, SaaS apps; we’re starting to hear complaints about things that fall outside of IT completely, like rogue projects on cloud services. Until there are some standards for security and compliance, most enterprises will continue to move pretty slowly putting critical workloads in those environments. Right now all the security providers are rolling their own and it’s up to the security auditors to say if you’re in compliance with whatever rules govern that data.”

Small, focused projects using cloud technologies are becoming more common, in addition to the use of commercial cloud-based services, says Tapper.

For example, Beth Israel Deaconess Hospital in Boston elevated a set of VMware (VMW) physical and virtual servers into a cloud-like environment to create an interface to its patient-records and accounting systems, enabling hundreds of IT-starved physician offices to link up with the use of just one browser.

New York’s Museum of Modern Art started using workgroup-on-demand computing systems from CloudSoft Corp. last year. This allowed the museum to create online workspaces for short-term projects that would otherwise have required real or virtual servers and storage on-site.

In about a decade, cloud computing will make it clear to both IT and business management that some IT functions are just as generic when they are homegrown as when they are rented. Says Tapper, “Productivity apps are the same for the people at the top as the people at the bottom. Why buy it and make IT spend 80 percent of its time maintaining essentially generic technology?” Contact Nubifer.com to learn more…

Nubifer Cloud:Link Mobile and Why Windows Phone 7 is Worth the Wait

Sure, Android devices become more cutting-edge with each near-monthly release and Apple recently unveiled its new iPhone, but some industry experts suggest that Windows Phone 7 is worth the wait. Additionally, businesses may benefit from waiting until Windows Phone 7 arrives to properly compare the benefits and drawbacks of all three platforms before making a decision.

Everyone is buzzing about the next-generation iPhone and smartphones like the HTC Incredible and HTC EVO 4G, but iPhone and Android aren’t even the top smart phone platforms. With more market share than second place Apple and third place Microsoft combined, RIM remains the number one smartphone platform. Despite significant gains since its launch, Android is in fourth place, with only 60 percent as much market share as Microsoft.

So what gives? In two words: the business market. While iPhone was revolutionary for blurring the line between consumer gadget and business tool, RIM has established itself as synonymous with mobile business communications. Apple and Google don’t provide infrastructure integration or management tools comparable to those available with the Blackberry Enterprise Server (BES).

The continued divide between consumer and business is highlighted by the fact that Microsoft is still in third place with 15 percent market share. Apple and Google continue to leapfrog one another while RIM and Microsoft are waiting to make their move.

The long delay in new smartphone technology from Microsoft is the result of leadership shakeups and the fact that Microsoft completely reinvented its mobile strategy, starting from scratch. Windows Phone 7 isn’t merely an incremental evolution of Windows Mobile 6.5. Rather, Microsoft went back to the drawing board to create an entirely new OS platform that recognizes the difference between a desktop PC and a smartphone as opposed to assuming that the smartphone is a scaled-down Windows PC.

Slated to arrive later this year, Windows Phone 7 smartphones promise an attractive combination of the intuitive touch interface and experience found in the iPhone and Android, as well as the integration and native apps to tie in with the Microsoft server infrastructure that comprises the backbone of most customers’ network and communications architecture.

With that said, the Windows Phone 7 platform won’t be without its own set of issues. Like Apple’s iPhone, Windows Phone 7 is expected to lack true multitasking and copy-and-paste functionality from the get-go. Additionally, Microsoft is locking down the environment with hardware and software restrictions that limit how smartphone manufacturers can customize the devices, and doing away with all backward compatibility with existing Windows Mobile hardware and apps.

As a mobile computing platform, Cloud Computing today touches many devices and end points, from application servers to desktops and, of course, the burgeoning ecosystem of smartphone devices. When you study the landscape of mobile operating systems and the technology capabilities of today’s smartphones, you start to see a whole new and exciting layer of technology for consumers and business people alike.

Given the rich capabilities of Windows Phone 7, including Silverlight and XNA technology, we at Nubifer have been compelled to engineer upgrades to our cloud services to inter-operate with the powerful new technologies offered by Windows Phone 7. At Nubifer, we plan to support many popular smartphones and hand-set devices by linking them to our Nubifer Cloud:Link technology, with extended functionality delivered by Nubifer Cloud:Connector and Cloud:Portal. These enable enterprise companies to gain a deeper view into the analytics and human-computer interaction of end users and subscribers of various owned and leased software systems, hosted entirely in the cloud or by way of the hybrid model.

It makes sense for companies that don’t need to replace their smartphones at once to wait for Windows Phone 7 to arrive, at which point all three platforms can be compared and contrasted. May the best smartphone win!

Cloud Computing in 2010

A recent research study by the Pew Internet & American Life Project released on June 11 found that most people expect to “access software applications online and share and access information through the use of remote server networks, rather than depending primarily on tools and information housed on their individual, personal computers” by 2020. This means that the term “cloud computing” will likely be referred to as simply “computing” ten years down the line.

The report points out that we are currently on that path when it comes to social networking, thanks to sites like Twitter and Facebook. We also communicate in the cloud using services like Yahoo Mail and Gmail, shop in the cloud on sites like Amazon and eBay, listen to music in the cloud on Pandora, share pictures in the cloud on Flickr and watch videos on cloud sites like Hulu and YouTube.

The more advanced among us are even using services like Google Docs, Scribd or Docs.com to create, share or store documents in the cloud. With that said, it will be some time before desktop computing falls away completely.

The report says: “Some respondents observed that putting all or most of one’s faith in remotely accessible tools and data puts a lot of trust in the humans and devices controlling the clouds and exercising gatekeeping functions over access to that data. They expressed concerns that cloud dominance by a small number of large firms may constrict the Internet’s openness and its capability to inspire innovation—that people are giving up some degree of choice and control in exchange for streamlined simplicity. A number of people said cloud computing presents difficult security problems and further exposes private information to governments, corporations, thieves, opportunists, and human and machine error.”

For more information on the current state of Cloud Computing, contact Nubifer today.

The Impact of Leveraging a Cloud Delivery Model

In a recent discussion about the positive shift in the Cloud Computing discourse towards actionable steps as opposed to philosophical debates over definitions, .NET Developer’s Journal issued a list of five things not to do. The first mistake on the list of five (which also included #2 assuming server virtualization is enough; #3 not understanding service dependencies; #4 leveraging traditional monitoring; and #5 not understanding internal/external costs) was not understanding the business value. Failing to understand the business impact of leveraging a Cloud delivery model for a given application or service is a crucial mistake, but it can be avoided.

When evaluating a Cloud delivery option, it is important to first define the service. Consider: is it new to you, or are you considering porting an existing service? If it is new, there is a lower financial bar to justify a cloud model, but the downside is a lack of historical perspective on consumption trends to aid in evaluating financial considerations or performance.

Assuming you choose a new service, the next step is to address why you are looking at Cloud, which may require some to be honest about their reasons. Possible reasons for looking at cloud include: your business requires a highly scalable solution; your data center is out of capacity; you anticipate this to be a short-lived service; you need to collaborate with a business partner on neutral territory; your business has capital constraints.

All of the previously listed reasons are good reasons to consider a Cloud option, yet if you are considering this option because it takes weeks, months even, to get a new server in production; your Operation team is lacking credibility when it comes to maintaining a highly available service; or your internal cost allocation models are appalling—you may need to reconsider. In these cases, there may be some in-house improvements that need to be made before exploring a Cloud option.

An important lesson to consider is that just because you can do something doesn’t mean you necessarily should, and this is easily applicable in this situation. Many firms have had disastrous results in the past when they exposed legacy internal applications to the Internet. The following questions must be answered when thinking about moving applications/services to the Cloud:

  • Does the application consume or generate data with jurisdictional requirements?
  • Will your company face fines or a public relations scandal if there is a security breach/data loss?
  • What part of your business value chain is exposed if the service runs poorly? (And are there critical systems that rely on it?)
  • What if the application/service doesn’t run at all? (Will you be left stranded, or are there alternatives that will allow the business to remain functioning?)

Embracing Cloud services—public or private—comes with tremendous benefits, yet a constant dialogue about the business value of the service in question is required to reap the rewards. To discuss the benefits of adopting a hybrid On-Prem/Cloud solution contact Nubifer today.

Asigra Introduces Cloud Backup Plan

Cloud backup and recovery software provider Asigra announced the launch of Cloud Backup v10 on June 8. Available through the Asigra partner network, the latest edition extends the scope and performance of the Asigra platform, including protection for laptops, desktops, servers, data centers and cloud computing environments with tiered recovery options to meet Recovery Time Objectives (RTOs). Organizations can select an Asigra service provider for offsite backup, choose to deploy the software directly onsite, or both. Pricing begins at $50 per month through cloud backup service providers.

V10 expands the tiers of backup and recovery (Local-Only Backup, plus Backup Lifecycle Manager (BLM), which enables cloud storage) and also allows the backup of laptops in the field and other environments, enabling businesses to back up and recover their data to and from physical servers, virtual servers or both. Among the features are DS-Mobile support to back up laptops in the field, FIPS 140-2 NIST-certified security and encryption of data in-flight and at-rest, and new backup sets for comprehensive protection of enterprise applications, including MS Exchange, MS SharePoint, MS SQL, Windows Hyper-V, Oracle SBT, Sybase and Local-Only backup.

Senior analyst at the Enterprise Strategy Group Lauren Whitehouse said, “The local backup option is a powerful benefit for managed service providers (MSPs) as they can now offer more pricing granularity for customers on three levels—local, new and aging data. With more pricing flexibility, MSPs can offer a more reliable and affordable backup service package to attract more business customers and free them from the pain of tape backup.”

At least two-thirds of companies in North America and Europe have already implemented server virtualization, according to Forrester Research. Asigra added enhancements to the virtualization support in v10 as a response to the major server virtualization vendors embracing the cloud as the strategic deliverable of a virtualized infrastructure. The company has offered support for virtual machine backups at the host level; Cloud Backup v10 is able to be deployed as a virtual appliance with virtual infrastructures. The company said that the current version now supports Hyper-V, VMware and XenServer.

“The availability of Asigra Cloud Backup v10 has reset the playing field for Asigra with end-to-end data protection from the laptop to the data center to the public cloud. With advanced features that differentiate Asigra both technologically and economically from comparable solutions, the platform can adapt to the changing nature of today’s IT environments, providing unmatched backup efficiency and security as well as the ability to respond to dynamic business challenges,” said executive vice president for Asigra Eran Farajun. To discover how a Cloud back-up system can benefit your enterprise, contact Nubifer Inc.

The Future of Enterprise Software in the Cloud

Although there is currently a lot of discussion regarding the impact that cloud computing and Software-as-a-Service will have on enterprise software, it comes mainly from a financial standpoint. It is now time to begin understanding how enterprise software as we know it will evolve across a federated set of private and public cloud services.

The strategic direction being taken by Epicor is a prime example of the direction that enterprise software is taking. A provider of ERP software for the mid-market, Epicor is taking a sophisticated approach by allowing customers to host some components of the Epicor suite on premise rather than focusing on hosting software in the cloud. Other components are delivered as a service.

Epicor is a Microsoft software partner that subscribes to the Software Plus Services mantra and as such is moving to offer some elements of its software, like the Web server and SQL server components, as an optional service. Customers would be able to invoke this on the Microsoft Azure cloud computing platform.

Basically, Epicor is going to let customers deploy software components where they make the most sense, based on the needs of customers on an individual basis. This is in contrast to proclaiming that one model of software delivery is better than another model.

Eventually, every customer is going to require a mixed environment, even those that prefer on-premise software, because they will discover that hosting some applications locally and in the cloud simultaneously will allow them to run a global operation 24 hours a day, 7 days a week more easily.

Much of the argument over how software is delivered in the enterprise will melt away as customers begin to view the cloud as merely an extension of their internal IT operations. To learn more about how the future of Software in the Cloud can aid your enterprise, schedule a discussion time with a Nubifer Consultant today.

What Cloud APIs Reveal about the Budding Cloud Market

Although Cloud Computing remains hard to define, one of its essential characteristics is programmatic access to virtually unlimited network, compute and storage resources. The foundation of a cloud is a solid Application Programming Interface (API), despite the fact that many users access cloud computing through consoles and third-party applications.

CloudSwitch works with several cloud providers and thus is able to interact with a variety of cloud APIs (both active and about-to-be-released versions). CloudSwitch has come up with some impressions after working with both the APIs and those implementing them.

First, clouds remain different in spite of constant discussion about standards. Cloud APIs have to cover more than start/stop/delete a server, and once the API crosses into provisioning the infrastructure (network ranges, storage capacity, geography, accounts, etc.), it all starts to get interesting.

Second, a very strong infrastructure is required for a cloud to function as it should. The infrastructure must be good enough to sell to others when it comes to public clouds. Key elements of the cloud API can tell you about the infrastructure, the tradeoffs the cloud provider has made and the impact on end users, if you are attuned to what to look out for.

Third, APIs are evolving fast, like cloud capabilities themselves. New API calls and expansions of existing functions are now a reality as cloud providers add new capabilities and features. We are regularly discussing on-the-horizon services with cloud providers and what form their APIs are poised to take. This is a perfect opportunity to leverage the experience and work of companies like CloudSwitch as a means to integrate these new capabilities into a coherent data model.

When you look at the functions beyond simple virtual machine control, an API can give you an indication of what is happening in the cloud. Some like to take a peek at the network and storage APIs in order to understand how the cloud is built. Take Amazon, for example. In Amazon, the base network design is that each virtual server receives both a public and private IP address. These addresses are assigned from a pool based on the location of the machine within the infrastructure. Even though there are two IP addresses, however, the public one is simply routed (NAT’ed) to the private address. With Amazon you only have a single network interface on your server, which is a simple and scalable architecture for the cloud provider to support. This single-NIC design will cause problems for applications requiring at least two NICs, such as some cluster applications.
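
Amazon’s API exposes both addresses on each instance. The following is a minimal sketch that assumes the boto Python library, with credentials taken from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables; it simply lists the private address and the public, NAT’ed address for each server.

    import boto

    # Connect to EC2 using credentials from the environment.
    conn = boto.connect_ec2()

    # Each instance has one NIC: a private IP plus a public address NAT'ed to it.
    for reservation in conn.get_all_instances():
        for instance in reservation.instances:
            print("%s  private=%s  public=%s" % (
                instance.id, instance.private_ip_address, instance.ip_address))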

Terremark’s cloud offering is in stark contrast to Amazon’s. IP addresses are defined by the provider so they can route traffic to your servers, like Amazon, but Terremark allocates a range for your use when you first sign up (while Amazon uses a generic pool of addresses). This can be seen as a positive because there is better control over the assignment of network addresses, but the flip side is potential scaling issues because you only have a limited number of addresses to work with. Additionally, you can assign up to four NICs to each server in Terremark’s Enterprise cloud (which allows you to create more complex network topologies and support applications requiring multiple networks for proper operation).

One important thing to consider is that with the Terremark model, servers only have internal addresses. There is no default public NAT address for each server, as with Amazon. Instead, Terremark has created a front-end load balancer that can be used to connect a public IP address to a specified set of servers by protocol and port. You must first create an “Internet Service” (in the language of Terremark) that defines a public IP/Port/Protocol combination. Next, assign a server and port to the Service, which will create a connection. You can add more than one server to each public IP/Port/Protocol group since this is a load balancer. Amazon has a load balancer function as well, and although it isn’t required to connect public addresses to your cloud servers, it does support connecting multiple servers to a single public IP address.
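
Amazon’s load balancer can be driven through the same API. The sketch below, again assuming the boto library, creates a balancer that maps a public port to an instance port and registers two servers behind it; the balancer name, availability zone and instance IDs are hypothetical.

    import boto

    elb = boto.connect_elb()

    # Map public port 80 to port 8080 on the registered instances over HTTP.
    balancer = elb.create_load_balancer('web-lb', zones=['us-east-1a'],
                                        listeners=[(80, 8080, 'http')])

    # Multiple servers can sit behind the balancer's single public address.
    balancer.register_instances(['i-12345678', 'i-87654321'])
    print(balancer.dns_name)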

When it comes down to it, the APIs and the feature sets they define tell a lot about the capabilities and design of a cloud infrastructure. The end user features, flexibility and scalability of the whole service will be impacted by decisions made at the infrastructure level (such as network address allocation, virtual device support and load balancers). It is important to look down to the API level when considering what cloud environment you want because it helps you to better understand how the cloud providers’ infrastructure decisions will impact your deployments.

Although building a cloud is complicated, it can provide a powerful resource when implemented correctly. Clouds with different “sweet spots” emerge when cloud providers choose key components and a base architecture for their service. You can span these different clouds and put the right application in the right environment with CloudSwitch. To schedule a time to discuss how Cloud Computing can help your enterprise, contact Nubifer today.

App Engine and VMware Plans Show Google’s Enterprise Focus

Google opened its Google I/O developer conference in San Francisco on May 19 with the announcement of its new version of the Google App Engine, Google App Engine for Business. This was a strategic announcement, as it shows Google is focused on demonstrating its enterprise chops. Google also highlighted its partnership with VMware to bring enterprise Java developers to the cloud.

Vic Gundotra, vice president of engineering at Google said via a blog post: “… we’re announcing Google App Engine for Business, which offers new features that enable companies to build internal applications on the same reliable, scalable and secure infrastructure that we at Google use for our own apps. For greater cloud portability, we’re also teaming up with VMware to make it easier for companies to build rich web apps and deploy them to the cloud of their choice or on-premise. In just one click, users of the new versions of SpringSource Tool Suite and Google Web Toolkit can deploy their application to Google App Engine for Business, a VMware environment or other infrastructure, such as Amazon EC2.”

Enterprise organizations can build and maintain their own applications on the same scalable infrastructure that powers Google Applications with Google App Engine for Business. Additionally, Google App Engine for Business has added management and support features that are tailored for each unique enterprise. New capabilities with this platform include: the ability to manage all the apps in an organization in one place; premium developer support; simple pricing based on users and applications; a 99.9 percent uptime service-level agreement (SLA); and access to premium features such as cloud-based SQL and SSL (coming later this year).

Kevin Gibbs, technical lead and manager of the Google App Engine project said during the May 18 Google I/O keynote that “managing all the apps at your company” is a prevalent issue for enterprise Web developers. Google sought to address this concern through its Google App Engine hosting platform but discovered it needed to shore it up to support enterprises. Said Gibbs, “Google App Engine for Business is built from the ground up around solving the problems that enterprises face.”

Product management director for developer technology at Google Eric Tholome told eWEEK that Google App Engine for Business allows developers to use standards-based technology (like Java, the Eclipse IDE, Google Web Toolkit (GWT) and Python) to create applications that run on the platform. Google App Engine for Business also delivers dynamic scaling, flat-rate pricing and consistent availability to users.
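
As a point of reference, a standards-based App Engine application can be as small as a single Python handler. The sketch below uses the webapp framework bundled with the 2010-era Python SDK; the URL route and response text are illustrative, and the matching app.yaml configuration file is omitted.

    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class MainPage(webapp.RequestHandler):
        def get(self):
            # Return a plain-text response for requests to the root URL.
            self.response.headers['Content-Type'] = 'text/plain'
            self.response.out.write('Hello from Google App Engine')

    application = webapp.WSGIApplication([('/', MainPage)], debug=True)

    def main():
        run_wsgi_app(application)

    if __name__ == '__main__':
        main()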

Gibbs revealed that Google will be doling out the features in Google App Engine for Business throughout the rest of 2010, with Google’s May 19 announcement acting as a preview of the platform. The platform includes an Enterprise Administration Console, a company-based console which allows users to see, manage and set security policies for all applications in their domain. The company’s road map states that features like support, the SLA, billing, hosted SQL and custom domain SSL will come at a later date.

Gibbs said that pricing for Google App Engine for Business will be $8 per month per user for each application with the maximum being $1,000 per application per month.

Google also announced a series of technology collaborations with VMware. The goal of these is to deliver solutions that make enterprise software developers more efficient at building, deploying and managing applications within all types of cloud environments.

President and CEO of VMware Paul Maritz said, “Companies are actively looking to move toward cloud computing. They are certainly attracted by the economic advantages associated with cloud, but increasingly are focused on the business agility and innovation promised by cloud computing. VMware and Google are aligning to reassure our mutual customers that choice and portability are important to both companies. We will work to ensure that modern applications can run smoothly within the firewalls of a company’s data center or out in the public cloud environment.”

Google is essentially trying to pick up speed in the enterprise, with Java developers using the popular Spring Framework (stemming from VMware’s SpringSource division). Recently, VMware did a similar partnership with Salesforce.com.

Maritz continued to say to the audience at Google I/O, “More than half of the new lines of Java code written are written in the context of Spring. We’re providing the back-end to add to what Google provides on the front end. We have integrated the Spring Framework with Google Web Toolkit to offer an end-to-end environment.”

Google and VMware are teaming up in multiple ways to make cloud applications more productive, portable and flexible. These collaborations will enable Java developers to build rich Web applications, use Google and VMware performance tools on cloud apps and subsequently deploy Spring Java applications on Google App Engine.

Google’s Gundotra explained, “Developers are looking for faster ways to build and run great Web applications, and businesses want platforms that are open and flexible. By working with VMware to bring cloud portability to the enterprise, we are making it easy for developers to deploy rich Java applications in the environments of their choice.”

Google’s support for Spring Java apps on Google App Engine is part of a shared vision to make building, running and managing applications for the cloud easier, and in a way that renders the applications portable across clouds. Developers can build applications using the Eclipse-based SpringSource Tool Suite and have the flexibility to deploy them in their current private VMware vSphere environment, in VMware vCloud partner clouds or directly to Google App Engine.

Google and VMware are also collaborating to combine the speed of development of Spring Roo–a next-generation rapid application development tool–with the power of the Google Web Toolkit to create rich browser apps. These GWT-powered applications can create a compelling end-user experience on computers and smartphones by leveraging modern browser technologies like HTML5 and AJAX.

With the goal of enabling end-to-end performance visibility of cloud applications built using Spring and Google Web Toolkit, the companies are collaborating to more tightly integrate VMware’s Spring Insight performance tracing technology within the SpringSource tc Server application server with Google’s Speed Tracer technology.

Speaking about the Google/VMware partnership, vice president at Nucleus Research Rebecca Wettemann told eWEEK, “In short, this is a necessary step for Google to stay relevant in the enterprise cloud space. One concern we have heard from those who have been slow to adopt the cloud is being ‘trapped on a proprietary platform.’ This enables developers to use existing skills to build and deploy cloud apps and then take advantage of the economies of the cloud. Obviously, this is similar to Salesforce.com’s recent announcement about its partnership with VMware–we’ll be watching to see how enterprises adopt both. To date, Salesforce.com has been better at getting enterprise developers to develop business apps for its cloud platform.”

For his part, Frank Gillett, an analyst with Forrester Research, describes the Google/VMware partnership as more “revolutionary” and the Salesforce.com/VMware partnership to create VMforce as “evolutionary.”

“Java developers now have a full Platform-as-a-Service [PaaS] place to go rather than have to provide that platform for themselves,” said Gillett of the new Google/VMware partnership. He added, however, “What’s interesting is that IBM, Oracle and SAP have not come out with their own Java cloud platforms. I think we’ll see VMware make another deal or two with other service providers. And we’ll see more enterprises application-focused offerings from Oracle, SAP and IBM.”

Google’s recent enterprise moves show that the company is set on gaining more of the enterprise market by enabling enterprise organizations to buy applications from others through the Google Apps Marketplace (and the recently announced Chrome Web Store), buy from Google with Google Apps for Business or build their own enterprise applications with Google App Engine for Business. Nubifer Inc. is a leading Research and Consulting firm specializing in Cloud Computing and Software as a Service.

Cloud Computing Business Models on the Horizon

Everyone is wondering what will follow SaaS, PaaS and IaaS, so here is a tutorial on some of the emerging cloud computing business models on the horizon.

Computing arbitrage:

Companies like broadband.com are buying bandwidth at a wholesale rate and reselling it to companies to meet their specific needs. Peekfon began buying data bandwidth in bulk and slicing it up to sell to its customers as a way to solve the problem of expensive roaming for customers in Europe. The company was able to negotiate with the operators to buy bandwidth in bulk because it intentionally decided to steer away from voice plans. It also used heavy compression on its devices to optimize bandwidth.

While elastic computing is an integral part of cloud computing, not all companies that want to leverage the cloud necessarily like it. Companies with unique cloud computing needs—like fixed long-term computing that grows at a relatively low fixed rate, with seasonal peaks—have a problem that can easily be solved via intermediaries. Since the cloud business requires high cap-ex, there will be fewer and fewer cloud providers. Being a “cloud VAR” could be a good value proposition for vendors that are cloud systems integrators or have a portfolio of cloud management offerings.

App-driven and content-driven clouds:

Now that the competition between private and public clouds is nearly over, it is time to think about vertical clouds. Computing needs depend on what is being computed: the application’s specific requirements, the nature and volume of the data being processed and the kind of content being delivered. In the current SaaS world, vendors are optimizing the cloud to match their application and content needs, and some are predicting that a few companies will help ISVs by delivering app-centric and content-centric clouds.

For advocates of net neutrality, the current cloud-neutrality that is application-agnostic is positive, but innovation on top of raw clouds is still needed. Developers need fine-grained knobs for CPU, I/O and main-memory computing, and for the other varying needs of their applications. Extensions today are specific to a programming stack, like Heroku for Ruby, but the opportunity is here to provide custom vertical extensions for an existing cloud, or to build a cloud that is purpose-built for a specific class of applications and has a range of stack options underneath (making it easy for developers to leverage the cloud natively). Nubifer Inc. provides Cloud and SaaS Consulting services to enterprise companies.

U.S. Government Moves to the Cloud

The U.S. Recovery Accountability and Transparency Board recently announced the move of its Recovery.gov site to a cloud computing infrastructure. That infrastructure is powered by Amazon.com’s Elastic Compute Cloud (EC2) and will grant the board more efficient computer operation, reduced costs and improved security.

Amazon Web Services’ (AWS) cloud technology was selected as the foundation for the move by Smartronix, which acted as the prime contractor on the migration made by the U.S. Recovery Accountability and Transparency Board. Also in the May 13 announcement, the board said Recovery.gov is now the first government-wide system to make the move into the cloud.

The U.S. government’s official Website that provides easy access to data related to Recovery Act spending, Recovery.gov allows for the reporting of potential fraud, waste and abuse. The American Recovery and Reinvestment Act of 2009 created the Recovery Accountability and Transparency Board with two goals in mind: to provide transparency related to the use of Recovery-related funds, and to prevent and detect fraud, waste and mismanagement.

CEO of Smartronix John Parris said of the announcement, “Smartronix is honored to have supported the Recovery Board’s historic achievement in taking Recovery.gov, the standard for open government, to the Amazon Elastic Compute Cloud (EC2). This is the first federal Website infrastructure to operate on the Amazon EC2 and was achieved due to the transparent and collaborative working relationship between Team Smartronix and our outstanding government client.”

The board anticipates that the move will save approximately $750,000 during its current budget cycle and result in long-term savings as well. For fiscal year 2010 and 2011 direct cost savings to the Recovery Board will be $334,800 and $420,000 respectively.

Aside from savings, the move to the cloud will free up resources and enable the board’s staff to focus on its core mission of providing Recovery.gov’s users with rich content without worrying about management of the Website’s underlying data center and related computer equipment.

In a statement released in conjunction with the announcement, vice president of Amazon Web Services Adam Selipsky said, “Recovery.gov is demonstrating how government agencies are leveraging the Amazon Web Services cloud computing platform to run their technology infrastructure at a fraction of the cost of owning and managing it themselves. Building on AWS enables Recovery.gov to reap the benefits of the cloud–including the ability to add or shed the resources as needed, paying only for resources used and freeing up scarce engineering resources from running technology infrastructure–all without sacrificing operational performance, reliability, or security.”

The Board’s Chairman, Earl Devaney, said, “Cloud computing strikes me as a perfect tool to help achieve greater transparency and accountability. Moving to the cloud allows us to provide better service at lower costs. I hope this development will inspire other government entities to accelerate their own efforts. The American taxpayers would be the winners.”

Board officials also said that greater protection against network attacks and real time detection of system tampering are some of the security improvements from the move. Amazon’s computer security platform has been essentially added to the Board’s own security system (which will continue to be maintained and operated by the Board’s staff).

President of Environmental Systems Research Institute (ESRI) Jack Dangermond also released a statement after the announcement was made. “Recovery.gov broke new ground in citizen participation in government and is now a pioneer in moving to the cloud. Opening government and sharing data through GIS are strengthening democratic processes of the nation,” said Dangermond. “The Recovery Board had the foresight to see the added value of empowering citizens to look at stimulus spending on a map, to explore their own neighborhoods, and overlay spending information with other information. This is much more revealing than simply presenting lists and charts and raises the bar for other federal agencies.” For more information please visit Nubifer.com.

EMC CEO Joe Tucci Predicts Many Clouds in the Future

EMC isn’t alone in focusing on cloud computing during the EMC World 2010 show, as IT vendors, analysts and the like are buzzing about the cloud. But according to EMC CEO Joe Tucci, the storage giant has a new prediction for the future of cloud computing. During his keynote speech on May 10, and a subsequent discussion with reporters and analysts, Tucci said that EMC’s vision of the future varies from others because it sees many private clouds. This stands in stark contrast to the vision of only a few vendors—like Google, Amazon and Microsoft—offering massive public clouds.

“There won’t be four, five or six giant cloud providers. At the end of the day, you’ll have tens of thousands of private clouds and hundreds of public clouds,” said Tucci.

EMC plans on taking on the role of helping businesses move to private cloud environments, where IT administrators have the ability to view multiple data centers as a single pool of resources. These enterprises with their private clouds will also work with public cloud environments, according to Tucci.

The increased complexity and costs of current data centers serve as a catalyst for the demand for cloud computing models. Tucci says that this explosion of data—which comes from multiple sources, including the growth of mobile device users, medical imaging advancements, increased access to broadband and smart devices—is poised to grow further. “Obviously, we need a new approach, because … infrastructures are too complex and too costly. Enter the cloud. This is the new approach,” Tucci said.

According to Tucci, clouds will be based mainly on x86 architectures, feature converged networks and federated resources and will be dynamic, secure, flexible, cost efficient and reliable. These clouds will also be accessible via multiple devices, a growing need due to the ever-increasing use of mobile devices.

EMC’s May 10 announcements were focused on the push for the private cloud, including the introduction of the VPlex appliances and an expanded networking strategy. Said Tucci, “Our mission is to be your guide and to help you on this journey to the private cloud.”

Tucci said that because of the high level of performance in x86 processors from Intel and Advanced Micro Devices, he isn’t predicting a long-term future for other architectures in cloud computing. As an example, Tucci pointed to Intel’s eight-core Xeon 7500 “Nehalem EX” processors, which can support up to 1 terabyte of memory, with systems OEMs prepping to unveil servers with as many as eight processors.

Speaking about the overall growth of x86 processor shipments and revenues, Tucci said that RISC architectures and mainframes will continue to slip: “What I’m saying is, we’re convinced that everything EMC does and everything Cisco does will be x86-based. Yes, we’re placing a bet on x86, and we’re going to an all-x86 world.” EMC is currently in the midst of a three-year process of migrating to a private cloud environment. This will include abandoning platforms like Solaris and moving to an all-x86 environment. For more information, please visit Nubifer.com.

Cloud-Optimized Infrastructure and New Services on the Horizon for Dell

Over the past three years, Dell has gained experience in the Cloud through its Data Center Solutions group, which has designed customized offerings for cloud and hyperscale IT environments. The company is now putting that experience to use, releasing several new hardware, software and service offerings optimized for cloud computing environments. Dell officials launched the new offerings—which include a new partner program, new servers optimized for cloud computing and new services designed to help businesses migrate to the cloud—at a San Francisco event on March 24.

Based on work the Dell Data Center Solutions group has completed over the past three years, the new offerings were outlined by Valeria Knafo, senior manager of business development and business marketing for the DCS unit. According to Knafo, DCS has built customized computing infrastructures for large cloud service providers and hyperscale data centers and is now trying to make their solutions available to enterprises. Said Knafo, “We’ve taken that experience and brought it to a new set of users.”

Dell officials revealed that they have been working with Microsoft on its Windows Azure cloud platform and that the software giant will work with Dell to create joint cloud-based solutions. Dell and Microsoft will continue to collaborate around Windows Azure (including offering services) and Microsoft will continue buying Dell hardware for its Azure platform as well. Turnkey cloud solutions—including pre-tested and pre-assembled hardware, software and services packages that businesses can use to deploy and run their cloud infrastructures quickly—are among the new offerings.

A cloud solution for Web applications will be the first Platform-as-a-Service made available. The offering will combine Dell servers and services with Web application software from Joyent; such applications, Dell officials caution, come with challenges like unpredictable traffic and the migration of apps from development to production. Dell is also offering a new Cloud Partner Program. According to officials, it will broaden options for customers seeking to move into private or public clouds. Dell announced three new software companies as partners as well: Aster Data, Greenplum and Canonical.

Also on the horizon for Dell is its PowerEdge C-series servers, which are designed to be energy efficient and offer features that are vital to hyperscaled environments—HPC (high-performance computing), social networking, gaming, cloud computing, Web 2.0 functions—like memory capacity and high performance. The C1100 (designed for clustered computing environments), the C2100 (for data analytics, cloud computing and cloud storage) and the C6100 (a four-node cloud and cluster system which offers a shared infrastructure) are the three servers that make up the family.

In unveiling the PowerEdge C-Series, Dell is joining the growing industry trend of offering new systems optimized for cloud computing. For example, on March 17 Fujitsu unveiled the Primergy CX1000, a rack server created to offer the high performance that cloud environments need while lowering costs and power consumption. The Primergy CX1000 can also save on data center space through a design which pushes hot air from the system through the top of the enclosure as opposed to the back.

Last, but certainly not least, are Dell’s Integrated Solution Services. They offer complete cloud lifecycle management and include workshops to assess a company’s readiness to move to the cloud. Knafo said that the services are a combination of what Dell gained with the acquisition of Perot Systems and what it had already. “There’s a great interest in the cloud, and a lot of questions on how to get to the cloud. They want a path and a roadmap identifying what the cloud can bring,” said Knafo.

Mike Wilmington, a planner and strategist for Dell’s DCS group, claimed the services will decrease confusion many enterprises may have about the cloud. Said Wilmington, “Clouds are what the customer wants them to be,” meaning that while cloud computing may offer essentially the same benefits to all enterprises (cost reductions, flexibility, improved management and greater energy efficiency) it will look different for every enterprise. For more information please visit Nubifer.com.

Cisco, Verizon and Novell Make Announcements about Plans to Secure the Cloud

Cisco Systems, Verizon Business and Novell have announced plans to launch offerings designed to heighten security in the cloud.

On April 28, Cisco announced security services based around email and the Internet that are part of the company’s cloud protection push and its Secure Borderless Network architecture; the Secure Borderless Network architecture seeks to give users secure access to their corporate resources on any device, anywhere, at any time.

Cisco’s IronPort Email Data Loss Prevention and Encryption, and ScanSafe Web Intelligence Reporting are designed to work with Cisco’s other web security solutions to grant companies more flexibility when it comes to their security offerings while streamlining management requirements, increasing visibility and lowering costs.

Verizon and Novell made an announcement on April 28 about their plans to collaborate to create an on-demand identity and access management service called Secure Access Services from Verizon. Secure Access Services from Verizon is designed to enable enterprises to decide and manage who is granted access to cloud-based resources. According to the companies, the identity-as-a-service solution is the first of what will be a host of joint offerings between Verizon and Novell.

According to eWeek, studies continuously indicate that businesses are likely to continue trending toward a cloud-computing environment. With that said, issues concerning security and access control remain key concerns. Officials from Cisco, Verizon and Novell say that the new services will allow businesses to feel more at ease while planning their cloud computing strategies.

“The cloud is a critical component of Cisco’s architectural approach, including its Secure Borderless Network architecture,” said vice president and general manager of Cisco’s Security technology business unit Tom Gillis in a statement. “Securing the cloud is highly challenging. But it is one of the top challenges that the industry must rise to meet as enterprises increasingly demand the flexibility, accessibility and ease of management that cloud-based applications offer for their mobile and distributed workforces.”

Cisco purchased ScanSafe in December 2009, and Cisco’s ScanSafe Web Intelligence Reporting platform is one result. The platform is designed to give users a better idea of how their Internet resources are being used, and the objective is to ensure that business-critical workloads aren’t being encumbered by non-business-related traffic. Cisco’s ScanSafe Web Intelligence Reporting platform can report on user-level data and information on Web communications activities within seconds, and offers over 80 predefined reports.

Designed to protect outbound email in the cloud, the IronPort email protection solution is perfect for enterprises that don’t want to manage their email. Cisco officials say that it provides hosted mailboxes (while keeping control of email policies) and also offers the option of integrated encryption.

Officials say Cisco operates over 30 data centers around the globe and that its security offerings handle large quantities of activity each day—including 2.8 billion reputation look-ups, 2.5 billion web requests and the detection of more than 250 billion spam messages. These services are the latest in the company’s expanding portfolio of cloud security offerings.

Verizon and Novell’s collaboration—the Secure Access Services—is designed to enable enterprises to move away from the cost and complexity associated with using traditional premises-based identity and access management software for securing applications. These new services offer centralized management of web access to applications and networks in addition to identity federation and web single sign-on.

Novell CEO Ron Hovsepian released a statement saying, “Security and identity management are critical to accelerating cloud computing adoption and by teaming with Verizon we can deliver these important solutions.” While Verizon brings the security expertise, infrastructure, management capabilities and portal to the service, Novell provides the identity and security software. For more information contact a Nubifer representative today.

Cloud Interoperability Brought to Earth by Microsoft

Executives at Microsoft say that an interoperable cloud could help companies trying to lower costs and governments trying to connect constituents. Cloud services are increasingly seen as a way for businesses and governments to scale IT systems for the future, consolidate IT infrastructure, and enable innovative services not possible until now.

Technology vendors are seeking to identify and solve the issues created by operating in mixed IT environments in order to help organizations fully realize the benefits of cloud services. Additionally, vendors are collaborating to make sure that their products work well together. The industry may still be in the beginning stages of collaborating on cloud interoperability, but has already made great strides.

So what exactly is cloud interoperability and how can it benefit companies now? Cloud interoperability specifically concerns one cloud solution working with other platforms and applications—not just other clouds. Customers want to be able to run applications locally or in the cloud, or on a combination of both. Currently, Microsoft is collaborating with others in the industry to make sure that the promise of cloud interoperability becomes a reality.

Microsoft’s general managers Craig Shank and Jean Paoli are spearheading Microsoft’s interoperability efforts. Shank helms the company’s interoperability work on public policy and global standards and Paoli collaborates with the company’s product teams to cater product strategies to the needs of customers. According to Shank, one of the main attractions of the cloud is the amount of flexibility and control it gives customers. “There’s a tremendous level of creative energy around cloud services right now—and the industry is exploring new ideas and scenarios all the time. Our goal is to preserve that flexibility through an open approach to cloud interoperability,” says Shank.

Paoli chimes in to say, “This means continuing to create software that’s more open from the ground up, building products that support technologies such as PHP and Java, and ensuring that our existing products work with the cloud.” Both Shank and Paoli are confident that welcoming competition and choice will allow Microsoft to become more successful down the road. “This may seem surprising,” says Paoli, “but it creates more opportunities for its customers, partners and developers.”

Shank reveals that due to the buzz about the cloud, some forget about the ultimate goal: “To be clear, cloud computing has enormous potential to stimulate economic growth and enable governments to reduce costs and expand services to citizens.” One example of the real-world benefits of cloud interoperability is the public sector. Microsoft is currently showing results in this area via solutions like its Eye on Earth project. Microsoft is helping the European Environment Agency simplify the collection and processing of environmental information for use by the general public and government officials. Eye on Earth obtains data from 22,000 water monitoring points and 1,000 stations that monitor air quality by employing Microsoft® Windows Azure, Microsoft® SQL Azure and already existing Linux technologies. Eye on Earth then helps synthesize the information and makes it accessible to people in 24 different languages in real time.

Product developments like this emerged out of feedback channels the company developed with its partners, customers and other vendors. In 2006, for example, Microsoft created the Interoperability Executive Customer (IEC) Council, which is comprised of 35 chief technology officers and chief information officers from a variety of organizations across the globe. The group meets twice a year in Redmond to discuss issues concerning interoperability and provide feedback to Microsoft executives.

Additionally, Microsoft recently published a progress report which—for the first time—revealed operational details and results achieved by the Council across six work streams (or priority areas). The Council recently commissioned the creation of a seventh work stream for cloud interoperability, geared toward developing cloud-related standards that address topics like data portability, privacy, security and service policies.

Developers are an important part of cloud interoperability, and Microsoft is part of an effort the company co-founded with Zend Technologies, IBM and Rackspace called Simple Cloud. Simple Cloud was created to help developers write basic cloud applications that work on all major cloud platforms.

Microsoft is further engaging in the collaborative work of building technical “bridges” between the company and non-Microsoft technologies, such as the recently released Microsoft® Windows Azure Software Development Kits (SDKs) for PHP and Java, tools for the new Windows® Azure platform AppFabric SDKs for Java, PHP and Ruby (Eclipse version 1.0), the SQL CRUD Application Wizard for PHP and the Bing 404 Web Page Error Toolkit for PHP. These examples show the dedication of Microsoft’s Interoperability team.

Despite the infancy of the industry’s collaboration on cloud interoperability issues, much progress has already been made. This progress has had a major positive impact on the way even average users work and live, even if they don’t realize it yet. A wide perspective and a creative and collaborative approach to problem-solving are required for cloud interoperability. In the future, Microsoft will continue to support more conversation within the industry in order to define cloud principles and make sure all points of view are incorporated. For more information please contact a Nubifer representative today.

Amazon Sets the Record Straight About the Top Five Myths Surrounding Cloud Computing

On April 19, the 5th International Cloud Computing Conference & Expo (Cloud Expo) opened in New York City, and Amazon Web Services (AWS) used the event as a platform to address some of what the company sees as the lingering myths about cloud computing.

AWS officials said that the company continues to grapple with questions about features of the cloud—ranging from reliability and security to cost and elasticity—despite being one of the first companies to successfully and profitably implement cloud computing solutions. Adam Selipsky, vice president of AWS, recently spoke about the persisting myths of cloud computing from Amazon’s Seattle headquarters, specifically addressing five that linger in the face of increased industry adoption of the cloud and continued successful cloud deployments. “We’ve seen a lot of misperceptions about what cloud computing is,” said Selipsky before debunking five common myths.

Myth 1: The Cloud Isn’t Reliable

Chief information officers (CIOs) in enterprise organizations have difficult jobs and are usually responsible for thousands of applications, explains Selipsky in his opening argument, adding that they feel like they are responsible for the performance and security of these applications. When problems with the applications arise, CIOs are used to approaching their own people for answers and take some comfort that there is a way to take control of the situation.

Selipsky says that customers need to consider a few things when adopting the cloud, one of which is that AWS’ operational performance is good. Selipsky reminded users that they own the data, they choose which location to store the data in (and it doesn’t move unless the customer decides to move it) and that regardless of whether customers choose to encrypt or not, AWS never looks at the data.

“We have very strong data durability—we’ve designed Amazon S3 (Simple Storage Service) for eleven 9’s of durability. We store multiple copies of each object across multiple locations,” said Selipsky. He added that AWS has a “Versioning” feature which allows customers to revert to the last version of any object they somehow lose due to application failure or an unintentional deletion. Customers can also build more fault-tolerant applications by deploying them across multiple Availability Zones or using AWS’ Load Balancing and Auto Scaling features.
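To make the Versioning point more concrete, here is a minimal sketch—not taken from Amazon’s remarks—showing how a bucket’s Versioning can be switched on and its retained object versions listed with the boto3 Python SDK; the bucket name is a hypothetical placeholder.

import boto3

s3 = boto3.client("s3")

# Turn on Versioning so that overwritten or deleted objects can be
# recovered later ("example-bucket" is a placeholder name).
s3.put_bucket_versioning(
    Bucket="example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Every retained version of every object can then be listed, which is
# what makes reverting after an accidental deletion possible.
response = s3.list_object_versions(Bucket="example-bucket")
for version in response.get("Versions", []):
    print(version["Key"], version["VersionId"], version["IsLatest"])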

“And, all that comes with no capex [capital expenditures] for companies, a low per-unit cost where you only pay for what you consume, and the ability to focus your engineers on unique incremental value for your business,” said Selipsky, adding that the reliability concerns stem merely from an illusion of control, not actual control. “People think if they can control it they have more say in how things go. It’s like being in a car versus an airplane, but you’re much safer in a plane,” he explained.

Myth 2: The Cloud Provides Inadequate Security and Privacy

When it comes to security, Selipsky notes that it is an end-to-end process and thus companies need to build security at every level of the stack. Taking a look at Amazon’s cloud, it is easy to note that the same security isolations are employed as with a traditional data center—including physical data center security, separation of the network, isolation of the server hardware and isolation of storage. Data centers had already become a frequently-shared infrastructure on the physical data center side before Amazon launched its cloud services. Selipsky added that companies realized that they could benefit by renting space in a data facility as opposed to building it.

When speaking about security fundamentals, Selipsky noted that security could be maintained by providing badge-controlled access, guard stations, monitored security cameras, alarms, separate cages and strictly audited procedures and processes. Not only does Amazon Web Services’ data center security match the best practices employed in private data facilities, there is an added physical security advantage in the fact that customers don’t need access to the servers and networking gear inside. Access to the data center is thus controlled more strictly than in traditional rented facilities. Selipsky also added that the Amazon cloud has equal or better isolation than could be expected from dedicated infrastructure, at the physical level.

In his argument, Selipsky pointed out that networks ceased to be isolated physical islands a long time ago because, as companies increasingly began to need to connect to other companies—and then the Internet—their networks became connected with public infrastructure. Firewalls, switch configurations and other special network functionality were used to prevent bad network traffic from getting in, or conversely from leaking out. Companies began using additional isolation techniques as their network traffic increasingly passed over public infrastructure to make sure that every packet on (or leaving) their network remained secure. These techniques include Multi-protocol Label Switching (MPLS) and encryption.

Amazon used a similar approach to networking in its cloud by maintaining packet-level isolation of network traffic and supporting industry-standard encryption. Amazon Web Services’ Virtual Private Cloud allows a customer to establish their own IP address space, and because of that customers can use the same tools and software infrastructure they are familiar with to monitor and control their cloud networks. Amazon’s scale also allows for more investment in security policing and countermeasures than nearly any large corporation could afford. Maintains Selipsky, “Our security is strong and dug in at the DNA level.”
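As a rough sketch of what “establish their own IP address space” looks like in practice—illustrative only, using the boto3 Python SDK rather than anything cited in the article—a customer can create a VPC from a private address range of their choosing and carve subnets out of it:

import boto3

ec2 = boto3.client("ec2")

# The customer picks the address space; 10.0.0.0/16 is an arbitrary
# RFC 1918 range used purely for illustration.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Subnets are carved out of that space much like on-premises network
# segments, so familiar addressing plans carry over into the cloud.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
print(vpc_id, subnet["Subnet"]["SubnetId"])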

Amazon Web Services also invests significantly in testing and validating the security of its virtual server and storage environment. When discussing the investments made on the hardware side, Selipsky lists the following:

After customers release these resources, the server and storage are wiped clean so no important data can be left behind.

Intrusion from other running instances is prevented because each instance has its own customer firewall.

Those in need of more network isolation can use Amazon VPC, which allows you to carry your own IP address space with you into the cloud; your instances are accessible only through IP addresses that you know.

Those desiring to run on their own boxes—where no other instances are running—can purchase extra large instances where only that XL instance runs on that server.

According to Selipsky, Amazon’s scale allows for more investment in security policing and countermeasures: “In fact, we often find that we can improve companies’ security posture when they use AWS. Take the example lots of CIOs worry about—the rogue server under a developer’s desk running something destructive or that the CIO doesn’t want running. Today, it’s really hard (if not impossible) for CIOs to know how many orphans there are and where they might be. With AWS, CIOs can make a single API call and see every system running in their VPC [Virtual Private Cloud]. No more hidden servers under the desk or anonymously placed servers in a rack plugged into the corporate network. Finally, AWS is SAS-70 certified; ISO 27001 and NIST are in process.”
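The “single API call” claim can be illustrated with a short, hypothetical sketch (again using the boto3 Python SDK, with a placeholder VPC ID): one DescribeInstances call, filtered by VPC, enumerates every instance running there, so nothing attached to the network stays hidden.

import boto3

ec2 = ec2 = boto3.client("ec2")

# One DescribeInstances call, filtered to the VPC, lists every running
# system; the VPC ID below is a placeholder.
paginator = ec2.get_paginator("describe_instances")
pages = paginator.paginate(
    Filters=[{"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]}]
)

for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"],
                  instance["State"]["Name"],
                  instance.get("PrivateIpAddress"))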

Myth 3: Creating My Own In-House Cloud or Private Cloud Will Allow Me to Reap the Same Benefits of the Cloud

According to Selipsky, “There’s a lot of marketing going on about the concept of the ‘private cloud.’ We think there’s a bit of a misnomer here.” Selipsky continued to explain that generally, “we often see companies struggling to accurately measure the cost of infrastructure. Scale and utilization are big advantages for AWS. In our opinion, a cloud has five key characteristics: it eliminates capex; allows you to pay for what you use; provides true elastic capacity to scale up and down; allows you to move very quickly and provision servers in minutes; and allows you to offload the undifferentiated heavy lifting of infrastructure so your engineers work on differentiating problems.”

Selipsky also pointed out the following drawbacks of private clouds: you still own the capex (and it is expensive); you do not pay only for what you use; you do not get true elasticity; and you still manage the undifferentiated heavy lifting. “With a private cloud you have to manage capacity very carefully … or you or your private cloud vendor will end up over-provisioning. So you’re going to have to either get very good at capacity management or you’re going to wind up overpaying,” said Selipsky before challenging the elasticity of the private cloud: “The cloud is shapeless. But if it has a tight box around it, it no longer feels very cloud-like.”

One of AWS’ key offerings is Amazon’s ability to save customers money while also driving efficiency. “In virtually every case we’ve seen, we’ve been able to save people a significant amount of money,” said Selipsky. This is in part because AWS’ business has greatly expanded over the last four years and Amazon has achieved enough scale to secure very low costs. AWS has been able to aggregate hundreds of thousands of customers to have a high utilization of its infrastructure. Said Selipsky, “In our conversations with customers we see that really good enterprises are in the 20-30 percent range on utilization—and that’s when they’re good … many are not that strong. The cloud allows us to have several times that utilization. Finally, it’s worth looking at Amazon’s heritage and AWS’ history. We’re a company that works hard to lower its costs so that we can pass savings back to our customers. If you look at the history of AWS, that’s exactly what we’ve done (lowering price on EC2, S3, CloudFront, and AWS bandwidth multiple times already without any competitive pressure to do so).”

Myth 4: The Cloud Isn’t Ideal Because I Can’t Move Everything at Once

Selipsky debunks this myth by saying, “We believe this is nearly impossible and ill-advised. We recommend picking a few apps to gain experience and comfort then build a migration plan. This is what we most often see companies doing. Companies will be operating in hybrid environments for years to come. We see some companies putting some stuff on AWS and then keeping some stuff in-house. And I think that’s fine. It’s a perfectly prudent and legitimate way of proceeding.”

Myth 5: The Biggest Driver of Cloud Adoption is Cost

In busting the final myth, Selipsky said, “There is a big savings in capex and cost but what we find is that one of the main drivers of adoption is that time-to-market for ideas is much faster in the cloud because it lets you focus your engineering resources on what differentiates your business.”

Summary

Speaking about all of the myths surrounding the cloud, Selipsky concludes that “a lot of this revolves around psychology and fear of change, and human beings needing to gain comfort with new things. Years ago people swore they would never put their credit card information online. But that’s no longer the case. We’re seeing great momentum. We’re seeing, more and more, over time these barriers [to cloud adoption] are moving.” For additional debunked myths regarding Cloud Computing visit Nubifer.com.

IBM Elevates Its Cloud Offerings with Purchase of Cast Iron Systems

IBM Senior Vice President and Group Executive for IBM Software Group Steve Mills announced the acquisition of cloud integration specialist Cast Iron Systems at the IBM Impact 2010 conference in Las Vegas on May 3. The privately held Cast Iron is based in Mountain View, California and delivers cloud integration software, appliances and services, so the acquisition broadens the delivery of cloud computing services for IBM’s clients. IBM’s business process and integration software portfolio grew over 20 percent during the first quarter and the company sees this deal as a way to expand it further. The financial terms of the acquisition were not disclosed, although Cast Iron Systems’ 75 employees will be integrated into IBM.

According to IBM officials, Big Blue anticipates that the worldwide cloud computing market will grow at a compounded annual rate of 28 percent, from $47 billion in 2008 to a projected $126 billion by 2012. The acquisition of Cast Iron Systems reflects IBM’s expansion of its software business around higher value capabilities that help clients run companies more effectively.

IBM has transformed its business model to focus on higher value, high-margin capabilities through organic and acquisitive growth in the past ten years–and the company’s software business has been a key catalyst in this shift. IBM’s software revenue grew at 11 percent year-to-year during the first quarter and the company generated $8 billion in software group profits in 2008 (up from $2.8 billion in 2000).

Since 2003, the IBM Software Group has acquired over 55 companies, and the acquisition of Cast Iron Systems is part of that. Cast Iron Systems’ clients include Allianz, Peet’s Coffee & Tea, NEC, Dow Jones, Schumacher Group, ShoreTel, Time Warner, Westmont University and Sports Authority and the cloud integration specialist has completed thousands of cloud integrations around the globe for retail organizations, financial institutions and media and entertainment companies.

IBM’s acquisition comes at a time when one of the major challenges facing businesses adopting cloud delivery models is integrating the disparate systems running in their data centers with new cloud-based applications—traditionally time-consuming work that drained resources. With the acquisition of Cast Iron Systems, IBM gains the ability to help businesses rapidly integrate their cloud-based applications and on-premises systems. Additionally, the acquisition advances IBM’s capabilities for a hybrid cloud model—which allows enterprises to blend data from on-premises applications with public and private cloud systems.

IBM, which is known for offering application integration capabilities for on-premises and business-to-business applications, will now be able to offer clients a complete platform to integrate cloud applications from providers like Amazon, Salesforce.com, NetSuite and ADP with on-premises applications like SAP and JD Edwards. Relationships between IBM and Amazon and Salesforce.com will essentially become friendlier due to this acquisition.

IBM said that it can use Cast Iron Systems’ hundreds of prebuilt templates and services expertise to eliminate expensive coding, thus allowing cloud integrations to be completed in mere days (rather than weeks, or even longer). These results can be achieved through using a physical appliance, a virtual appliance or a cloud service.

Craig Hayman, general manager for IBM WebSphere said in a statement, “The integration challenges Cast Iron Systems is tackling are crucial to clients who are looking to adopt alternative delivery models to manage their businesses. The combination of IBM and Cast Iron Systems will make it easy for clients to integrate business applications, no matter where those applications reside. This will give clients greater agility and, as a result, better business outcomes.”

IBM cited as an example Cast Iron Systems helping pharmaceutical distributor AmerisourceBergen Specialty Group connect Salesforce CRM with its on-premises corporate data warehouse. The company has since been able to give its customer service associates access to the accurate, real-time information they need to deliver a positive customer experience while realizing $250,000 in annual cost savings.

Cast Iron Systems additionally helped a division of global corporate insurance leader Allianz integrate Salesforce CRM with its on-premises underwriting applications to offer real-time visibility into contract renewals for its sales team and key performance indicators for sales management. IBM said that Allianz beat its own 30-day integration project deadline by replacing labor-intensive custom code with Cast Iron Systems’ integration solution.

President and chief executive officer of Cast Iron Systems Ken Comee said, “Through IBM, we can bring Cast Iron Systems’ capabilities as the world’s leading provider of cloud integration software and services to a global customer set. Companies around the world will now gain access to our technologies through IBM’s global reach and its vast network of partners. As part of IBM, we will be able to offer clients a broader set of software, services and hardware to support their cloud and other IT initiatives.”

IBM will remain consistent with its software strategy by supporting and enhancing Cast Iron Systems’ technologies and clients while simultaneously allowing them to utilize the broader IBM portfolio. For more information, visit Nubifer.com.

Transforming Into a Service-Centric IT Organization By Using the Cloud

While IT executives typically approach cloud services from the perspective of how they are being delivered, this model neglects what cloud services are and how they are consumed. These two facets can have a large impact on the overall IT organization, points out eWeek Knowledge Center contributor Keith Jahn. Jahn maintains that it is very important for IT executives to veer away from the current delivery-only focus by creating a world-class supply chain for managing the supply and demand of cloud services.

Using the popular fable The Sky Is Falling, known lovingly as Chicken Little, Jahn explains a possible future scenario that IT organizations may face due to cloud computing. As the fable goes, Chicken Little embarks on a life-threatening journey to warn the king that the sky is falling and on this journey she gathers friends who join her on her quest. Eventually, the group encounters a sly fox who tricks them into thinking that he has a better path to help them reach the king. The tale can end one of two ways: the fox eats the gullible animals (thus communicating the lesson “Don’t believe everything you hear”) or the king’s hunting dogs can save the day (thus teaching a lesson about courage and perseverance).

So what does this have to do with cloud computing? Cloud computing has the capacity to bring on a scenario that will force IT organizations to change, or possibly be eliminated altogether. The entire technology supply chain as a whole will be severely impacted if IT organizations are wiped out. Traditionally, cloud is viewed as a technology disruption, and is assessed from a delivery orientation, posing questions like how can this new technology deliver solutions cheaper, better and faster? An equally important yet often ignored aspect of this equation is how cloud services are consumed. Cloud services are ready to run, self-sourced, available wherever you are and are pay-as-you-go or subscription based.

New capabilities will emerge as cloud services grow and mature and organizations must be able to solve new problems as they arise. Organizations will also be able to solve old problems cheaper, better and faster. New business models will be ushered in by cloud services and these new business models will force IT to reinvent itself in order to remain relevant. Essentially, IT must move away from its focus on the delivery and management of assets and move toward the creation of a world-class supply chain for managing supply and demand of business services.

Cloud services become a forcing function in this scenario because they are forcing IT to transform. CIOs that choose to ignore this and neglect to make transformative measures will likely see their role shift from innovation leader to CMO (Chief Maintenance Officer), in charge of maintaining legacy systems and services sourced by the business.

Analyzing the Cloud to Pinpoint Patterns

The cloud really began in what IT folks now refer to as the “Internet era,” when people were talking about what was being hosted “in the cloud.” This was the first generation of the cloud, Cloud 1.0 if you will—an enabler that originated in the enterprise. Supply Chain Management (SCM) processes were revolutionized by commercial use of the Internet as a trusted platform and eventually the IT architectural landscape was forever altered.

This model evolved and produced thousands of consumer-class services, which used next-generation Internet technologies on the front end and massive scale architectures on the back end to deliver low-cost services to economic buyers. Enter Cloud 2.0, a more advanced generation of the cloud.

Beyond Cloud 2.0

Cloud 2.0 is driven by the consumer experiences that emerged out of Cloud 1.0. A new economic model and new technologies have surfaced since then, due to Internet-based shopping, search and other services. Services can be self-sourced from anywhere and from any device—and delivered immediately—while infrastructure and applications can be sourced as services in an on-demand manner.

Currently, most of the attention when it comes to cloud services remains focused on the new techniques and sourcing alternatives for IT capabilities, aka IT-as-a-Service. IT can drive higher degrees of automation and consolidation using standardized, highly virtualized infrastructure and applications. This results in a reduction in the cost of maintaining existing solutions and delivering new solutions.

Many companies are struggling with the transition from Cloud 1.0 to Cloud 2.0 due to the technology transitions required to make the move. As this occurs, the volume of services in the commercial cloud marketplace is increasing, propagation of data into the cloud is taking place and Web 3.0/semantic Web technology is maturing. The next generation of the cloud, Cloud 3.0 is beginning to materialize because of these factors.

Cloud 3.0 is significantly different because it will enable access to information through services set in the context of the consumer experience. This means that processes can be broken into smaller pieces and subsequently automated through a collection of services, which are woven together with massive amounts of data able to be accessed. With Cloud 3.0, the need for large-scale, complex applications built around monolithic processes is eliminated. Changes will be able to be made by refactoring service models and integration achieved by subscribing to new data feeds. New connections, new capabilities and new innovations—all of which surpass the current model—will be created.

The Necessary Reinvention of IT

IT is typically organized around the various technology domains taking in new work via project requests and moving it through a Plan-Build-Run Cycle. Here lies the problem. This delivery-oriented, technology-centric approach has inherent latency built-in. This inherent latency has created increasing tension with the business it serves, which is why IT must reinvent itself.

IT must be reinvented so that it becomes the central service-sourcing control point for the enterprise, or it must accept that the business will source services on its own. By becoming the central service-sourcing control point for the enterprise, IT can maintain the required service levels and integrations. Changes to behavior, cultural norms and organizational models are required to achieve this.

IT Must Become Service-Centric in the Cloud

IT must evolve from a technology-centric organization into a service-centric organization in order to survive, as service-centric represents an advanced state of maturity for the IT function. Service-centric allows IT to operate as a business function—a service provider—created around a set of products which customers value and are in turn willing to pay for.

As part of the business strategy, these services are organized into a service portfolio. This model differs from the capability-centric model because the deliverable is the service that is procured as a unit through a catalog and for which the components—and sources of components—are irrelevant to the buyer. With the capability-centric model, the deliverables are usually a collection of technology assets which are often visible to the economic buyer and delivered through a project-oriented life cycle.

With the service-centric model, some existing roles within the IT organization will be eliminated and some new ones will be created. The result is a more agile IT organization which is able to rapidly respond to changing business needs and compete with commercial providers in the cloud service marketplace.

Cloud 3.0: A Business Enabler

Cloud 3.0 enables business users to source services that meet their needs quickly, cost-effectively and at a good service level—and on their own, without the help of an IT organization. Cloud 3.0 will usher in breakthroughs and innovations at an unforeseen pace and scope and will introduce new threats to existing markets for companies while opening new markets for others. In this way, it can be said that cloud is more of a business revolution than a technology one.

Rather than focusing on positioning themselves to adopt and implement cloud technology, a more effective strategy for IT organizations would be to focus on transforming the IT organization into a service-centric model that is able to source, integrate and manage services with high efficiency.

Back to the story and its two possible endings:

The first scenario suggests that IT will choose to ignore that its role is being threatened and continue to focus on the delivery aspects of the cloud. Under the second scenario, IT is rescued by transforming into the service-centric organization model and becoming the single sourcing control point for services in the enterprise. This will effectively place IT in control of fostering business innovation by embracing the next wave of cloud. For more information please visit Nubifer.com.

New Cloud-Focused Linux Flavor: Peppermint

A new cloud-focused Linux flavor is in town: Peppermint. The Peppermint OS is currently a small, private beta which will open up to more testers in early to late May. Aimed at the cloud, the Peppermint OS is described on its home page as: “Cloud/Web application-centric, sleek, user friendly and insanely fast! Peppermint was designed for enhanced mobility, efficiency and ease of use. While other operating systems are taking 10 minutes to load, you are already connected, communicating and getting things done. And, unlike other operating systems, Peppermint is ready to use out of the box.”

The Peppermint team announced the closed beta of the new operating system in a blog post on April 14, saying that the operating system is “designed specifically for mobility.” The description of the technology on Launchpad describes Peppermint as “a fork of Lubuntu with an emphasis on cloud apps and using many configuration files sourced from Linux Mint. Peppermint uses Mozilla Prism to create single site browsers for easily accessing many popular Web applications outside of the primary browser. Peppermint uses the LXDE desktop environment and focuses on being easy for new Linux users to find their way around in.”

Lubuntu is described by the Lubuntu project as a lighter, faster and energy-saving modification of Ubuntu using LXDE (the Lightweight X11 Desktop Environment). Kendall Weaver and Shane Remington, a pair of developers in North Carolina, make up the core Peppermint team. Weaver is the maintainer for the Linux Mint Fluxbox and LXDE editions as well as the lead software developer for Astral IX Media in Asheville, NC and the director of operations for Western Carolina Produce in Hendersonville, NC. Based in Asheville, NC, Remington is the project manager and lead Web developer for Astral IX Media and, according to the Peppermint site, “provides the Peppermint OS project support with Web development, marketing, social network integration and product development.” For more information please visit Nubifer.com.

Using Business Service Management to Manage Private Clouds

Cloud computing promises an entirely new level of flexibility through pay-as-you-go, readily accessible, infinitely scalable IT services, and executives in companies of all sizes are embracing the model. At the same time, they are also posing questions about the risks associated with moving mission-critical workloads and sensitive data into the cloud. eWEEK’s Knowledge Center contributor Richard Whitehead has four suggestions for managing private clouds using service-level agreements and business service management technologies.

“Private clouds” are what the industry is calling hybrid cloud computing models which offer some of the benefits of cloud computing without some of the drawbacks that have been highlighted. These private clouds host all of the company’s internal data and applications while giving the user more flexibility over how service is rendered. The transition to private clouds is part of the larger evolution of the data center, which makes the move from a basic warehouse of information to a more agile, smarter deliverer of services. While virtualization helps companies save on everything from real estate to power and cooling costs, it does pose the challenge of managing all of the physical and virtual servers—or virtual sprawl. Basically, it is harder to manage entities when you cannot physically see and touch them.

A more practical move into the cloud can be facilitated through technology, with private clouds being managed through the use of service-level agreements (SLAs) and business service management (BSM) technologies. The following guide is a continuous methodology to bring new capabilities into an IT department within a private cloud network. Its four steps will give IT the tools and knowledge to overcome common cloud concerns and experience the benefits that a private cloud provides.

Step 1: Prepare

Before looking at alternative computing processes, an IT department must first logically evaluate its current computing assets and ask the following questions. What is the mixture of physical and virtual assets? (The word asset is used because this process should examine the business value delivered by IT.) How are those assets currently performing?

Rather than thinking in terms of server space and bandwidth, IT departments should ask: will this private cloud migration increase sales or streamline distribution? This approach positions IT as a resource rather than as a line item within an organization. Your private cloud migration will never take off if your resources aren’t presented in terms of assets and ROI.

Step 2: Package

Package refers to resources and requires a new set of measurement tools. IT shops are beginning to think in terms of packaging “workloads” in the virtualized world as opposed to running applications on physical servers. Workloads are portable, self-contained units of work or services built through the integration of the JeOS (“just enough” operating system), middleware and the application. They are portable and able to be moved across environments ranging from physical and virtual to cloud and heterogeneous.

A business service is a group of workloads, and this shows a fundamental shift from managing physical servers and applications to managing business services composed of portable workloads that can be mixed and matched in the way that will best serve the business. Managing IT to business services (aka the service-driven data center) is becoming a business best practice and allows the IT department to price and validate its private cloud plan as such.

Step 3: Price

A valuation must be assigned to each IT unit after you’ve packaged your IT processes into workloads and services. How much does it cost to run the service? How much will it cost if the service goes offline? The analysis should be presented around how these costs affect the business owner, because the cost assessments are driven by the business need.

One of the major advantages of a service-driven data center is that business services are able to be dynamically managed to SLAs and moved around appropriately. This allows companies to attach processes to services by connecting workloads to virtual services and, for the first time, connects a business process to the hardware implementing that business process.

The business service can be managed independently of the hardware because it isn’t tied to a particular server and can thus be moved around on an as-needed basis.

Price depends on the criticality of the service, what resources it will consume and whether it warrants backup and/or disaster recovery support. This represents a degree of transparency not usually offered by IT, and such transparency in a cloud migration plan is a crucial part of demonstrating the value the cloud provides in a cost-effective way.

Step 4: Present

After you have an IT service package, you must present a unified catalog to the consumers of those services. This catalog must be visible to all relevant stakeholders within the organization and can be considered an IT storefront or showcase featuring various options and directions for your private cloud to demonstrate value to the company.

This presentation allows your organization the flexibility to balance IT and business needs for a private cloud architecture that works for all parties; the transparency gives customers a way to interact directly with IT.

Summary

Although cloud computing remains an intimidating and abstract concept for many companies, enterprises can still start taking steps towards extending their enterprise into the cloud with the adoption of private clouds. An organization can achieve a private cloud that is virtualized, workload-based and managed in terms of business services with the service-driven data center. Workloads are managed in a dynamic manner in order to meet business SLAs. The progression from physical server to virtualization to the workload to business service to business service management is clear and logical.

In order to ensure that your private cloud is managed effectively—thus providing optimum visibility into the cloud’s business value—it is important to evaluate and present your cloud migration in this way. Cloud investment can seem less daunting when viewed as a continuous process, and the transition can be made in small steps, which makes the value a private cloud can provide to a business more easily recognizable to stakeholders. For more information, visit Nubifer.com.

Microsoft and Intuit Pair Up to Push Cloud Apps

Despite being competitors, Microsoft and Intuit announced plans to pair up to encourage small businesses to develop cloud apps for the Windows Azure platform in early January 2010.

Intuit is offering a free, beta software development kit (SDK) for Azure and citing Azure as a “preferred platform” for cloud app deployment on the Intuit Partner Platform as part of its collaboration with Microsoft. This marriage opens up the Microsoft partner network to Intuit’s platform and also grants developers on the Intuit cloud platform access to Azure and its tool kit.

As a result of this collaboration, developers will be encouraged to use Azure to make software applications that integrate with Intuit’s massively popular bookkeeping program, QuickBooks. The companies announced that the tools will be made available to Intuit partners via the Intuit App Center.

Microsoft will make parts of its Online Business Productivity Suite (such as Exchange Online, SharePoint Online, Office Live Meeting and Office Communications Online) available for purchase via the Intuit App Center as well.

The agreement occurred just weeks before Microsoft began monetizing the Windows Azure platform (on February 1)—when developers who had been using the Azure beta free of charge began paying for use of the platform.

According to a spokesperson for Microsoft, the Intuit beta Azure SDK will remain free, with the timing for stripping the beta tag “unclear.”

Designed to automatically manage and scale applications hosted on Microsoft’s public cloud, Azure is Microsoft’s latest Platform-as-a-Service. Azure will serve as a competitor for similar offerings like Force.com and Google App Engine. Contact a Nubifer representative to see how the Intuit – Microsoft partnership can work for your business.

Public vs. Private Options in the Cloud

The demand for cloud computing is perpetually increasing, which means that business and technology managers need to clear up any questions they have about the differences between public and private clouds—and quickly at that.

The St. Louis-based United Seating and Mobility is one company that faced the common dilemma of choosing between a public or private cloud. The company—which sells specialized wheelchairs at 30 locations in 12 states—initially used phones and email to stay up to date on vendor contracts and other matters before monitoring these developments with off-the-shelf applications on its own servers. Finally, United Seating and Mobility decided to move to the public cloud.

United Seating and Mobility’s director of operations Michael DeHart tells Baseline Magazine of the move, “The off-the-shelf applications didn’t collaborate. You’d log on to all of the apps and try to remember which one needed which password.” Staffers across the nation now share the information seamlessly via the enhanced tools available in the public cloud.

Another example illustrating the difference between the public and private cloud is the Cleveland Cavaliers. The NBA team uses a private cloud to run its arena’s website. Going private allowed for increased one-on-one interaction with the cloud provider partner while simultaneously giving the franchise more resources to handle increased traffic to the site. Traffic on the arena site has been known to spike when, for example, the team makes the playoffs or a major artist is coming to the venue. “When you’ve booked Miley Cyrus you’d better be ready,” says the Cleveland Cavaliers director of web services Jeff Lillibridge.

Despite choosing different versions of the cloud, both United Seating and Mobility and the Cleveland Cavaliers have noticed that few enterprise managers will be able to avoid the topic of private versus public clouds. According to research firm IDC, worldwide cloud services revenue will reach $44.2 billion in 2013, compared to $17.4 billion last year.

Business and technology professionals remain stumped about what private and public clouds are despite the increased demand for worldwide cloud services. Examples of public clouds include Google AppEngine, IBM’s Blue Cloud, LotusLive Engage and Amazon’s Elastic Compute Cloud (EC2). A public cloud is a shared technology resource used on an as-needed basis and available via the Internet while a private cloud is created specifically for the use of one organization.

Enhanced by virtualization technologies, both concepts are making way for an “evergreen” approach to IT in which enterprises can obtain technologies when they need them without purchasing and maintaining a host of in-house services.

Bob Zukis, national leader of IT strategy for PricewaterhouseCoopers (PwC) says, “It all stems from the legacy model of ‘build it and forget about it.’ Changes taking place in the industry are making it much more efficient and effective to provision what IT needs. So ‘build it and forget about it’ no longer meets the needs of the business. Whether you’re going with a public or private cloud, you’re pursuing a way to increase your technological resources in a more efficient flexible way.”

In addition to being evergreen, this movement is also green-friendly. Says Frost and Sullivan’s Vanessa Alvarez, “Cloud computing allows for sharing resources and paying only for what they use. When an application is not utilizing resources, those resources can be moved to another application that needs them, enabling maximum resource efficiencies. If additional capacity or resources are no longer needed, virtual servers can be powered down or shut off.”

Organizations continue to struggle to choose between private and public clouds. On one hand, private clouds offer security and increased flexibility compared to traditional legacy systems, but they have a higher barrier of entry than public clouds. Moreover, private cloud services require that an enterprise IT manager handle technology standardization, virtualization and operations automation in addition to operations support and business support systems.

“With public clouds, you provision your organization very quickly, by increasing service, storage and other computing needs,” says Zukis. “A private cloud takes a lot more time because you’re essentially rearchitecting your legacy environment.” Although public clouds don’t require this organizational shift and are thus faster and more convenient, they fail to provide the same amount of transparency as private clouds. Says Zukis, “It’s not always clear what you’re buying off the shelf with public clouds.”

Assessing the Value of Security

Another major issue in the cloud debate is security. All organizations value security, but each has to strike a balance between cost and convenience, on the one hand, and data security, on the other. Some organizations might have a higher threshold for potential violations than others and thus require a need-for-speed strategy.

Head of strategic sales and marketing at NIIT Technologies Aninda Bose, who has analyzed both cloud structures through her job and also in her position with nonprofit research organization Project Management Institute, states that the public cloud is the better option for an enterprise dealing with high-transaction/low-security or low data value. An example illustrating this is a local government office, which needs to tell a citizen that their car registration is up for renewal and simply needs to give the citizen a renewal date—a perfect situation for public cloud hosting.

Examples better suited for the private cloud model due to the sensitivity of their data include a federal agency, financial institution or health care provider. Mark White, principal with Deloitte Consulting, explains, “Accounting treatments and taxation applications are not yet fully tested for public cloud services. So enterprises with significant risk from information exposure may want to focus on the private cloud approach. This caution is most relevant for systems that process, manage and report key customer, financial or intelligence information. It’s less important for ‘edge’ systems, such as salesforce automation and Web order-entry applications.”

Sioux Falls, South Dakota-based medical-practice company The Orthopedic Institute is very data-dependent and concluded that the private cloud structure best fit its needs—specifically because the company must comply with strict rules for protecting patient information laid out by HIPAA (Health Insurance Portability and Accountability Act).

IT Director David Vrooman explains that The Orthopedic Institute was seeking to change its domain name from Orth-I.com, but after exploring possibilities with MaxMD, the exclusive provider of .md domains, it determined that MaxMD could also provide private cloud services for highly secured, encrypted email transmissions. Moreover, the cost of entry was less than doing it in-house. “We didn’t want to use one of our servers for this because it would have amounted to a $20,000 startup cost. By going with a private cloud option, we launched this at one-fifth of that expense—and it only took an afternoon to get it started,” says Vrooman. “It would have taken at least a week for my staff and me to get this done. And because MaxMD has taken over the email encryption, I’m not getting up at 3am to find out what’s wrong with the server.”

Some industry experts warn that traditional views about security and cloud computing may be changing, however, and that includes organizations which are dependent on highly secured data. The New York-based American Institute of Certified Public Accountants wanted to provide its 350,000 members with access to the latest software tools through CPA2Biz, its business resources subsidiary. CPA2Biz worked with Intacct to create a public cloud model for its CPA members. The program was launched in April, and since then concerns about security have been addressed and hundreds of firms are supporting approximately 2,000 clients through the public cloud services offered through CPA2Biz.

“Only those in the largest of member organizations would be able to consider a private cloud system. Plus, we don’t believe there are security advantages to a private cloud system,” says vice president of corporate alliances at CPA2Biz Michael Cerami. “We’ve selected partners who operate highly secure public cloud environments. This allows us to provide our members with great collaborative tools that enable them to work proactively with their clients in real time.”

The Choice

Going back to United Seating and Mobility, the organization was interested in the public cloud structure because it isn’t dependent on high-volume, automated sales. The company uses IBM’s LotusLive Engage for online meetings, file-sharing and project-management tasks.

DeHart estimates that it would have taken up a server and a half had it done this in-house, saying, “Being on the public cloud allows us to avoid this entirely. It’s a leasing-versus-owning concept—an operational expense versus a capital one. And the Software-as-a-Service offerings are better than what we could get off the shelf. We certainly can’t use this cloud to work with any sensitive health data. But we can run much of our business operations on it, freeing up our IT people to focus on email, uptime and cell phone services.”

Now, take the Cleveland Cavaliers. They opted for private cloud services to support the website for their venue, Quicken Loans Arena, aka “the Q.” Fans can search for information about upcoming events on TheQArena.com and are directed to a business called Veritix if they want to buy tickets. The arena site acts as a traffic conduit for Veritix, so a private cloud was the best option and the team partnered with Hosted Solutions. Since the current NBA season began last fall, the site’s page views and visits have increased by over 60 percent and the number of unique visitors has increased by 55 percent. The team avoids uncertainty about who is minding the data by employing Hosted Solutions.

The private cloud also enables the team to manage site traffic that can jump significantly in the case of a last-second, playoff-determining shot, for example. “The need to scale was significant but we didn’t want to oversee our own dedicated hosting,” says Lillibridge. “It would have been more expensive, and we would have had the headache of managing our own servers. We needed dedicated services that would avoid this, while allowing our capacity to increase during peak times and decrease when we don’t have a lot of traffic.”

There is no clear-cut answer for whether the private or public cloud is better; rather, companies need to assess their own individual requirements for speed, security, resources and scalability. To learn more about which Cloud option is right for your enterprise, contact a Nubifer representative today.

A Guide to Windows® Azure Platform Billing

Understanding billing for Windows® Azure Platform can be a bit daunting, so here is a brief guide, including useful definitions and explanations.

The Microsoft® Online Customer Service Portal (MOCP) limits one Account Owner Windows Live ID (WLID) per MOCP account, and the Account Owner has the ability to create and manage subscriptions, view billing and usage data and specify the Service Administrator for each subscription. While this is convenient for smaller companies, large corporations may need to create multiple subscriptions in order to design an effective account structure that will be able to support and also reflect their market strategy. Although the Service Administrator (Service Admin WLID) manages deployments, they cannot create subscriptions.

The Account Owner can create one or more subscriptions for each individual MOCP account, and for each subscription the Account Owner can specify a different WLID as the Service Administrator. It is also important to note that the Service Administrator WLID can be the same as or different from the Account Owner’s, and is the person actually using the Windows® Azure Platform. Once a subscription is created in the Microsoft® Online Customer Service Portal (MOCP), a Project appears in the Windows® Azure portal.

In short, the account, its subscriptions, and the resulting Projects and Services relate to one another hierarchically, as described below.

Projects:

Up to twenty Services can be allocated within one Project. Resources in the Project are shared between all of the Services created, and those resources are divided into Compute Instances/Cores and Storage accounts.

By default, the Project will have 20 Small Compute Instances that you can utilize. These can be allocated across a variety of combinations of VM sizes, as long as the total number of Cores across all deployed Services within the Project doesn’t exceed 20.

To increase the number of Cores, simply contact Microsoft® Online Services customer support to verify the billing account and request the additional Small Compute Instances/Cores (subject to a possible credit check). You also have the ability to decide how you want the Cores allocated, although by default the available resources are counted as a number of Small Compute Instances. See the Compute Instance comparison below:

Compute Instance Size | CPU | Memory | Instance Storage
Small | 1.6 GHz | 1.75 GB | 225 GB
Medium | 2 x 1.6 GHz | 3.5 GB | 490 GB
Large | 4 x 1.6 GHz | 7 GB | 1,000 GB
Extra Large | 8 x 1.6 GHz | 14 GB | 2,040 GB

Table 1: Compute Instances Comparison

The Compute Instances are shared between all running Services in the Project, including Production and Staging environments. This allows you to have multiple Services, each with a different number of Compute Instances (up to the maximum available for that Project).

Five Storage accounts are available per Project, although you can request an increase of up to 20 Storage accounts per Project by contacting Microsoft® Online Services customer support. You will need to purchase a new subscription if you need more than 20 Storage accounts.

Services:

A total of 20 Services per project are permitted. Services are where applications are deployed; each Service provides two environments: Production and Staging. This is visible when you create a service in the Windows ® Azure portal.

A maximum of five roles per application is permitted within a Service; this includes any combination of different web and worker roles in the same configuration file, up to a maximum of five. Each role can have any number of VMs, as in the example below:

In this example, the Service has two roles, each with a specific function: the Web Role (web tier) handles the Web interface, while the Worker Role (business tier) handles the business logic. Each role can have any number of VMs/Cores, up to the maximum available on the Project.

If this Service is deployed, the following Azure® resources will be used (a short worked sketch of the arithmetic follows the list):

1 x Service

–       Web Role = 3 Small Compute Nodes (3 x Small VMs)

–       Worker Role = 4 Small Compute Nodes (2 x Medium VMs)

–       2 Roles used

Total resources left on the Project:

–       Services (20 -1) = 19

–       Small Compute Nodes (20 – 7) = 13 small compute instances

–       Storage accounts = 5
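To make the Core accounting concrete, here is a minimal worked sketch in Python; the role names, VM counts and the 20-Core default quota simply mirror the hypothetical example above, and this illustrates the arithmetic rather than Microsoft’s actual billing logic.

```python
# Illustrative only: tally Small-equivalent Cores for one Service against
# a Project's default quota of 20, using the conversion from Table 1.
CORES_PER_SIZE = {"Small": 1, "Medium": 2, "Large": 4, "Extra Large": 8}

def cores_used(roles):
    """roles: list of (vm_size, vm_count) pairs for one Service."""
    return sum(CORES_PER_SIZE[size] * count for size, count in roles)

service_roles = [("Small", 3),   # Web Role: 3 x Small VMs
                 ("Medium", 2)]  # Worker Role: 2 x Medium VMs

used = cores_used(service_roles)                 # 3*1 + 2*2 = 7
print("Small-equivalent Cores used:", used)      # 7
print("Cores remaining in Project:", 20 - used)  # 13
print("Services remaining in Project:", 20 - 1)  # 19
```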

For more information regarding the Windows Azure pricing model, please contact a Nubifer representative.

Amazon’s Elastic Compute Cloud Platform EC2 Gets Windows Server Customers from Microsoft

Amazon has launched an initiative for Microsoft customers to bring their Windows Server licenses to Amazon’s EC2, the Elastic Compute Cloud Platform. This initiative is in tandem with a brand new Microsoft pilot program which allows Windows Server customers with an EA (Enterprise Agreement) with Microsoft to bring their licenses to Amazon EC2. Peter DeSantis, general manager of EC2 at Amazon, said in a recent interview with eWEEK that these customers will pay Amazon’s Linux On-Demand or Reserved Instance rates and thus save between 35 and 50 percent, depending on the type of customer and instance.

Also in his interview with eWEEK, DeSantis said that Amazon customers have sought support for Windows Server, and Amazon has delivered support for Windows Server 2003 and Windows Server 2008. Customers with Enterprise Agreements with Microsoft began to ask if those agreements could be applied to EC2 instances, hence the new pilot program. Amazon announced the new initiative on March 24 and began enrolling customers immediately. According to DeSantis, enrollment will continue through September 12, 2010.

Amazon sent out a notice announcing the program and stated the following criteria as requirements laid out by Microsoft to participate in the pilot: your company must be based or have a legal entity in the United States; your company must have an existing Microsoft Enterprise Agreement that doesn’t expire within 12 months of your entry into the Pilot; you must already have purchased Software Assurance from Microsoft for your EA Windows Server licenses; and you must be an Enterprise customer (this does not include Academic or Government institutions).

eWEEK revealed some of the fine print for the project released by Amazon:

“Once enrolled, you can move your Enterprise Agreement Windows Server Standard, Windows Server Enterprise, or Windows Server Datacenter edition licenses to Amazon EC2 for 1 year. Each of your Windows Server Standard licenses will let you launch one EC2 instance. Each of your Windows Server Enterprise or Windows Server Datacenter licenses will let you launch up to four EC2 instances. In either case, you can use any of the EC2 instance types. The licenses you bring to EC2 can only be moved between EC2 and your on-premises machines every 90 days. You can use your licenses in the US East (Northern Virginia) or US West (Northern California) Regions. You will still be responsible for maintaining your Client Access Licenses and External Connector licenses appropriately.” To learn more about Microsoft’s and Amazon’s Cloud offerings visit Nubifer.com.

Microsoft Not Willing to Get Left in the Dust by the Cloud Services Business

Microsoft may be the largest software company on the globe, but that didn’t stop it from being left in the dust by other companies more than once, and eWEEK reports that when it comes to cloud services, Microsoft is not willing to make the same mistake.

Although Microsoft was initially wary of the cloud, the company is now singing a different tune and trying to move further into the data center. Microsoft had its first booth dedicated solely to business cloud services at SaaSCon 2010, held at the Santa Clara Convention Center April 6 and 7. Microsoft is positioning Exchange Online (email), SharePoint Online (collaboration), Dynamics CRM Online (business apps), SQL Azure (structured storage) and AD/Live ID (Active Directory access) as its lead services for business. All of these services are designed to run on Windows Server 2008 in the data center and sync up with the corresponding on-premises applications.

The services are designed to work hand-in-hand with standard Microsoft client software (including Windows 7, Windows Phone, Office and Office Mobile); the overarching strategy is in place, and users will be able to judge its cohesiveness over time. Microsoft is also offering its own data centers and its own version of Infrastructure-as-a-Service for hosting client enterprise apps and services. Microsoft is using Azure (a full online stack comprising Windows Azure, SQL Azure and additional Web services) as a Platform-as-a-Service for developers.

Microsoft Online Services, featuring the Business Productivity Online Suite, Exchange Hosted Services, Microsoft Dynamics CRM Online and MS Office Web Apps, are up and running. In mid-March Microsoft launched a cloud backup service on the consumer side called SkyDrive, an online storage repository for files that users can access from anywhere via the Web. SkyDrive may prove a very popular service, as it offers a neat (in both senses of the word) 25GB of online space for free (more than the 2GB offered as a motivator by other services).

SkyDrive simply requires a Windows Live account (also free) and shows that Microsoft really is taking the plunge. For more information on Microsoft’s Cloud offerings, please visit Nubifer.com.

ERP and CRM Integration Via Business Intelligence for the Cloud

The masterminds behind Crystal Reports are unveiling a new business intelligence cloud offering being sold through channel partners. Not only do solution providers get an ongoing annuity on the sale, but they can perform the integration work to link the cloud-based BI to the data source (whichever ERP/CRM solution it is, such as Oracle, Salesforce.com, SAP or something else).

Traditional VARs gauging the potential of the cloud business model may have a difficult time seeing how a per-user, per-month fee will be enough for a business to reap the benefits of the cloud. Indicee executives Mark Cunningham, CEO, and Craig Todd, director of partnerships, understand that businesses are accustomed to the big sale upfront and ongoing services after that sale. Cunningham and Todd were both part of the team that created the Crystal Reports business intelligence software (which sold to Seagate before becoming part of SAP) and decided to bring their technology expertise into the cloud.

Although Cunningham and Todd knew that business was moving into the cloud, and their experience had shown that channel partners are the ideal way to connect with end customers, they just didn’t know how to merge those two ideas. Said Todd to Channel Insider, “The biggest single difference in what SaaS is, is that it removes those boxes. It has initially been seen as a threat by some of our partners.”

“A lot of VARs are worried about being disintermediated. Their expertise in installing software is no longer required. But the ones we’ve been working with the last few months see it as an opportunity,” continued Todd.

Arxis Technology in Simi Valley, California, an ERP, CRM and BI specialist, is one such partner. The 25-person company has two offices in California as well as offices in Chicago and Phoenix. Director of sales and marketing Mark Severance told Channel Insider that whether the customer is deploying on-premises or in-the-cloud solutions, the revenue comes out even. “The biggest thing people are having a hard time with is that you are used to the big upfront sale. But, honestly, from our perspective, if you have great products and do a great job taking care of the customer, then there’s a business model for what you do,” explains Severance.

Severance said that the annuity part of the business (in which Arxis receives a commission per user per month on an ongoing basis) will eventually make up for the lack of a large upfront sale. Additionally, Arxis can offer the integration and implementation services that customers need, which means setting up the BI solution’s data sources, whether they are Salesforce.com or an internal CRM or ERP solution.

Arxis continues to offer traditional on-premises CRM and ERP software sales and implementation; the biggest vendor Arxis works with currently is Sage. Arxis offers a BI solution from Business Objects in on-premises and cloud form and recently added Indicee’s cloud-based BI solution for a variety of reasons. One major reason is that some customers are unable to afford an on-premises-based BI solution and thus a cloud-based solution is more economically accessible.

Severance further pointed out that most of computing is making the transition into the cloud. While companies used to feel safe having their server in-house, they now want to be able to access their data whenever and wherever they are, from whichever device they are using.

Indicee’s Cunningham and Todd also pointed out that VARs can provide their end customers with training services as well as services like change management. Said Todd, “There’s an exciting opportunity here for traditional VARs. This creates a platform that allows partners to focus on the V and A in the VAR–the value add.”

Pricing at Indicee starts at $69 per user per month, with a five-user pack priced at $150 per month. The VAR cut generally is a 20 percent commission on sales of five packs or more, calculated monthly and paid out quarterly, but Todd noted that it is dependent on how much work the VAR is completing to get the customer.

Gartner predicts sales of $150 million by 2013. Cunningham notes that SaaS is poised for growth and that if solution providers are seeking to enter the cloud, business intelligence is a lucrative starting point, even with its required integrative work. To learn more about CRM Applications in the Cloud, please visit Nubifer.com.

The Role of Multitenancy in the Cloud

The debate over whether or not multitenancy is a prerequisite for cloud computing rages on. While those pondering the use of cloud apps might think they are removed from this debate, they might want to think again, because multitenancy is the clearest path to getting more from a cloud app while spending less.

Those in the multitenancy camp, so to speak, point out that if the only difference between two subscription-based cloud apps is that one is multitenant and the other single-tenant, the multitenant option will offer more value over time while lowering a customer’s costs. The higher the degree of multitenancy (that is, the more a cloud provider’s infrastructure and resources are shared), the lower the customer cost.

At the root of the debate are the revenue and cost economics of cloud services. Revenues for most cloud app providers come from selling monthly or annual per-seat subscriptions. These bring in just a portion of the annual revenue that would be generated by an on-premise software license with comparable functionality. The challenge of selling software subscriptions therefore comes down to reducing operating costs in order to manage with less. If this is not achieved, the provider may have to do more than an on-premise vendor does (run multiple infrastructures, maintain multiple versions, perform upgrades and maintain customer-specific code) with less money. The answer to this conundrum is multitenancy. Multitenancy spreads the cost of infrastructure and labor across the customer base, and having customers share resources all the way down to the database schema is what makes the model scale.
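To see why shared infrastructure changes the economics, here is a minimal cost-amortization sketch; the dollar figures are purely illustrative assumptions, not any vendor’s actual pricing.

```python
# Illustrative only: a fixed multitenant platform cost is spread across all
# tenants, while a single-tenant model carries a full stack per customer.
SHARED_PLATFORM_COST = 100_000  # assumed annual cost of one multitenant stack
SINGLE_TENANT_COST = 20_000     # assumed annual cost of one dedicated stack

for tenants in (10, 100, 1000):
    multitenant_per_customer = SHARED_PLATFORM_COST / tenants
    print(f"{tenants:>4} tenants: multitenant ${multitenant_per_customer:,.0f}/yr "
          f"vs single-tenant ${SINGLE_TENANT_COST:,.0f}/yr per customer")
```

The per-customer cost of the shared platform falls as tenants are added, which is the mechanism behind the improving economies of scale described below.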

As the provider adds customers, and those customers benefit from this scaling up, the economies of scale improve. The cloud app provider is able to grow and innovate more as costs decrease and, in turn, value increases. Over time customers can expect to see more value (for example, in the form of increased functionality), even if costs don’t fall. For more information on multitenancy, visit Nubifer.com.

Microsoft and Citrix Come to a Desktop Virtualization Agreement

On March 18, Microsoft announced a partnership with Citrix Systems which seeks to promote the two companies’ end-to-end virtualization packages for businesses. One aspect of the broad-based partnership sees Microsoft and Citrix aggressively offering customers of rival VMware View the option of trading in 500 licenses at no additional cost. This highly aggressive facet of the recent alliance between Microsoft and Citrix highlights the ever-increasing competitiveness of the entire virtualization industry.

Also during the March 18 announcement, Microsoft put a number of changes in place in its virtualization policy. One such change was making virtual desktop access rights a Windows Client Software Assurance benefit. Beginning on July 1, Software Assurance clients will no longer need to buy a separate license in order to access Windows in a virtual environment.

Windows Client Software Assurance and Virtual Desktop Access license customers will be able to access virtualized Windows and Office applications beginning on July 1 as well. These applications will be accessible through non-corporate network devices, like home PCs. Under Microsoft’s agreement with Citrix, Windows XP Mode will no longer require hardware virtualization technology, and assets like Citrix XenDesktop’s HDX technology will be able to be applied to the capabilities of the Microsoft RemoteFX platform.

In an interview with eWEEK one day before the March 18 announcement, Brad Anderson, corporate vice president of Microsoft’s management and Services Division, said, “What we’re bringing to the market together is this end-to-end experience with a simple and consistent interface for the end user. It’s comprehensive, and it leverages what customers already have. If you take a look at the assets that our companies already have in virtualization, it’s the most comprehensive group of assets on the market.”

Together, Microsoft and Citrix are trying to fire a broadside into rival VMware with the “rescue for VMware VDI” promotion. The promotion allows VMware View customers to trade in up to 500 licenses for no additional cost. New Microsoft-Citrix customers also receive about 50 percent off the estimated retail price for virtual desktop infrastructure through another promotion.

In its media portrayal, Microsoft emphasized the announcement as a value proposition. “Two infrastructures are more expensive than one infrastructure,” said Anderson before adding, “When customers see the chance to consolidate multiple infrastructures into one, it’s a chance to manage virtual and hardware desktop so it’s truly one infrastructure. It enables administrators to do everything through system center. And reducing infrastructure reduces cost.”

The partnership with Citrix comes on the heels of another Microsoft virtualization initiative, which arrived on February 22. Microsoft unveiled two business-focused virtualization applications, App-V 4.6 and MED-V 1.0 SP1 Release Candidate, designed to better integrate proprietary applications into businesses’ evolving IT infrastructures. App-V 4.6 extends 64-bit support for Microsoft’s application virtualization product to streaming applications. MED-V 1.0 SP1 RC allows applications which require Internet Explorer 6—or that otherwise cannot be supported on Windows 7—to run in a managed virtual desktop environment. For more information about Cloud Computing, please visit Nubifer.com.

Apple iPad Tests the Limits of Google’s Chrome Running on Cloud Computing Devices

With the recent release of its iPad, Apple is poised to challenge Google in the current cloud computing crusade, say Gartner analysts. Apple’s iPad is expected to offer the most compelling mobile Internet experience to date, but later on in 2010 Google is predicted to introduce its own version for mobile Web consumption in the form of netbooks built on its Chrome Operating System.

If Apple’s tablet PC catches on like the company hopes it will, then it could serve as a foil for Google’s cloud computing plans. Apple CEO Steve Jobs has already proclaimed that holding the iPad is like “holding the Internet in your hand.” The 9.7-inch IPS screen on the device displays high-def video and other content, like e-mail, e-books and games, to be consumed from the cloud.

Author Nicholas Carr, an avid follower of cloud happenings, explains the intentions of Apple in introducing the iPad by saying, “It wants to deliver the killer device to the cloud era, a machine that will define computing’s new age in the way that the Windows PC defined the old age. The iPad is, as Jobs said today, ‘something in the middle,’ a multipurpose gadget aimed at the sweet spot between the tiny smartphone and the traditional laptop. If it succeeds, we’ll all be using iPads to play iTunes, read iBooks, watch iShows, and engage in iChats. It will be an iWorld.”

An iWorld? Not if Google has its say! Later on in 2010 Google is expected to unveil its very own version of the Internet able to be held in users’ hands: netbooks based on Chrome. Companies like Acer and Asustek Computer are also building a range of Android-based tablets and netbooks, while Dell CEO Michael Dell was recently seen showcasing the Android-based Dell Mini 5 tablet at the World Economic Forum in Davos, Switzerland. It sounds like Apple may have more competition than just Google!

The iPad will undoubtedly be a challenge to Google’s plans for cloud computing, which include making Google search and Google apps able to reach any device connected to the Web. According to Gartner analyst Ray Valdes, Apple and Google are bound to face off with similar machines. Said Valdes to eWeek, “You could look and say that iPad is being targeted to the broad market of casual users rather than, say, the road warrior who needs to run Outlook and Excel and the people who are going to surf the Net on the couch. One could say that a netbook based on Chrome OS would have an identical use case.”

Consumers will eventually have to choose between shelling out around $499 for an iPad (that is just a base price, mind you) or a similar fee (or possibly lower) for a Chrome netbook. Valdes thinks that there are two types of users: a parent figure consuming Internet content on a Chrome OS netbook and a teenager playing games purchased on Apple’s App Store on an iPad. Stay tuned to see what happens when Apple and Google collide with similar machines later on in 2010.

Looking Back at the Changing Face of the Software Industry from 2004 and Beyond

Bill Gates may have made a whole lot of predictions about the future of software in the first edition of his 1995 book The Road Ahead, but even the founder of Microsoft couldn’t imagine the magnitude of the impact of the Internet.

Within a few years, the Web altered everything. As old software companies faded away—unable to adjust to the new paradigm—new ones cropped up in their place. Although many of these new companies weren’t able to survive the dot-com bust, they did make an impact on the software industry as a whole. The way in which companies coped with the industry in flux back then can be easily applied to the way companies are adopting the cloud computing model in 2010.

Driven by emerging business needs, new customer demands and market forces, the way software was developed and the vendors that deliver it were greatly altered in the mid-2000s. Said Microsoft’s platform strategy general manager Charles Fitzgerald in 2004, “There’s an argument that almost every company is in the software business in one way or another.” Fitzgerald added that although American Express and eBay aren’t commonly thought of as being in the software business, they are. “If you participate in the information economy, you will be a software company. If you’re in a customer-facing business, software is the way you’re going to differentiate yourself,” he explained.

The fact of the matter is that the industry that provided much of the software in 2004-05 was poised to change dramatically in the years that followed. The industry would continue to enter periodic waves of consolidation and expansion, and the industry consensus at the time was that it would remain in consolidation mode for the next couple of years. Larry Ellison, CEO of Oracle, predicted that within a few years the software market would be dominated by just a few companies: Oracle, Microsoft, Salesforce.com, Adobe and SAP.

Ellison wasn’t alone in his predictions, as some software buyers, like Mani Shabrang, head of technology deployment and research and development in Dow Chemical Co.’s business-intelligence center, agreed with him. “The number of software vendors will definitely get smaller and smaller,” said Shabrang in 2004. Another variable to consider, brought up by Shabrang, was that vendors of new types of software would emerge as vendors of mature software categories (like enterprise resource planning) consolidated. Shabrang predicted that a new generation of tools for visualizing data and intelligent software that recognizes the tone and meaning of written prose (in addition to mining text) would pop up as well.

Another group believed that there would be just as many software vendors in the future as there were back then. Danny Sabbah, chief technology officer of IBM’s software group, said that new companies would develop higher-level applications, thus leaving the markets for infrastructure software, middleware and even core applications such as ERP to a few major companies.

CEO of business-intelligence software vendor Information Builders Inc. Gerald Cohen said, “Roughly every two or three years, new software categories appear. As long as there’s a venture-capital industry, there will be new categories of software.”

So what would the next application be? No one knew, although emerging service-oriented architecture technology was poised to lay the foundation for a new generation of software applications. The software of the future was predicted to be made up of components, many of which would be developed in-house by the business requiring them. This is in contrast to what was the model back in 2004, in which vendors developed ever-larger applications that often took months to install.

According to Sabbah, software would likely switch from integrating business processes within a company to integrating these processes between companies. For example, applications might link ordering, invoicing, and inventory-management tasks up and down a supply chain within an industry in the not-so-distant future.

Another looming question was what the predominant operating system and underlying new applications would be. Microsoft ® Windows and Linux distributions would continue to compete, that much was sure, and the battle only got fiercer when Microsoft unveiled its next-generation Longhorn client and server in 2006-07, respectively.

Even in 2004, industry prognosticators knew that larger and more-complex systems weren’t going anywhere. The question was, how would the process of developing software be managed, especially as geographically dispersed programmers and offshore developers were doing an increasing amount of development work? The challenges awaiting users of the complex applications they create also needed to be addressed.

IBM’s Sabbah had this to say about the future of software, “The real challenge of our industry is to build software that is [easy to use] and simple to deploy but not simplistic.”

As shown by the growth of companies which provide software on a hosted basis, like Salesforce.com, it became increasingly important to pay attention to changes in vendor-buyer relationships and how software functionality was delivered.

Co-founder and CEO of business-intelligence and data-analysis software vendor SAS Institute Inc. Jim Goodnight wasn’t worried by these potential changes, instead focusing on the new opportunities awaiting him and his company. In 2004 Goodnight said, “The IT industry needs to keep a fairly shortened horizon. Our horizon is about two years. We make it a practice not to have these big five-year plans. If you do, you’re going to get about halfway through, and the world is going to change.” In 2010 Goodnight’s words still ring true. For more information regarding the changing Software landscape, please visit Nubifer.com.

Microsoft and IBM Compete for Space in the Cloud as Google Apps Turns 3

Google may have been celebrating the third birthday of Google Apps Premier Edition on February 22, but Microsoft and IBM want a piece of the cake, errr cloud, too. EWeek.com reports that Google is trying to dislodge legacy on-premises installations from Microsoft and IBM while simultaneously fending off SaaS solutions from said companies. In addition, Google has to fend off offerings from Cisco Systems and startups like Zoho and MindTouch, to name a few. Despite the up-and-comers, Google, Microsoft and IBM are the main three companies competing for pre-eminence in the market for cloud collaborative software.

Three years ago, Google launched its Google Apps Premier Edition, marking a bold gamble on the future of collaborative software. Back then, and perhaps even still, the collaborative software market was controlled by Microsoft and IBM. Microsoft and IBM have over 650 million customers for their Microsoft® Office, SharePoint and IBM Lotus suites combined. These suites are licensed as “on-premises” software which customers install and maintain on their own servers.

When Google launched Google Apps Premier Edition (GAPE), it served as a departure from this on-premises model by offering collaboration software hosted on Google’s servers and delivered via the Web. We now know this method as cloud computing.

Until the introduction of GAPE, Google Apps was available in a free standard edition (which included Gmail, Google Docs word processing, spreadsheet and presentation software), but with GAPE Google meant to make a profit. For just $50 per user per year, companies could provide their knowledge workers with GAPE, which featured the aforementioned apps as well as additional storage, security and, most importantly, 24/7 support.

Google Apps now has over two million business customers (of all shapes and sizes) and is designed to appeal both to small companies that want low-cost collaboration software but lack the resources to manage it, and to large enterprises that want to eliminate the cost of managing collaboration applications on their own. At the time, Microsoft and IBM were not aggressively exploring this new cloud approach.

Fast-forward to 2009. Microsoft and IBM had released hosted collaboration solutions (Microsoft® Business Productivity Online Suite and LotusLive, respectively) to keep Google Apps from being lonely in the cloud.

On the third birthday of GAPE, Google has its work cut out for it. Google is trying to dislodge legacy on-premises installations from Microsoft and IBM while fending off SaaS solutions from Microsoft, IBM, Zoho, MindTouch and the list goes on.

Dave Girouard, Google Enterprise President, states that while Google spent 2007 and 2008 debating the benefits of the cloud, the release of Microsoft and IBM products validated the market. EWeek.com quotes Girouard as saying, “We now have all major competitors in our industry in full agreement that the cloud is worth going to. We view this as a good thing. If you have all of the major vendors suggesting you look at the cloud, the consideration of our solutions is going to rise dramatically.”

For his part, Ron Markezich, corporate vice president of Microsoft Online Services, thinks that there is room for everyone in the cloud because customer needs vary by perspective. Said Markezich to EWeek.com, “Customers are all in different situations. Whether a customer wants to go 100 percent to the cloud or if they want to go to the cloud in a measured approach over a period of years, we want to make sure they can bet on Microsoft to serve their needs. No one else has credible services that are adopted by some of the larger companies in the world.”

Microsoft’s counter to Google Apps is the Microsoft® Business Productivity Online Suite (BPOS). It includes Microsoft® Exchange Online with Microsoft® Exchange Hosted Filtering, Microsoft® SharePoint Online, Microsoft® Office Communications Online and Microsoft® Office Live Meeting. Microsoft also offers the Business Productivity Online Deskless Worker Suite (which includes Exchange Online Deskless Worker for email, calendars and global address lists, plus antivirus and anti-spam filters) and Microsoft® Outlook Web Access Light (for access to company email) for companies with tighter budgets or those in need of lower cost email and collaboration software. SharePoint Online Deskless Worker provides easy access to SharePoint portals, team sites and search functionality.

The standard version of BPOS costs $10 per user per month or $120 per user per year, while the BPOS Deskless Worker Suite is $3 per user per month or $36 per user per year. Users may also license single apps as stand-alone services from $2 to $5 per user per month, which serves as a departure from Google’s one-price-for-the-year GAPE package.

The same code base is used by Microsoft for its BPOS package and the on-premises versions of Exchange and SharePoint, thus making legacy customers’ transition into the cloud easier should they decide to migrate to BPOS. Microsoft thinks that this increases the likelihood that customers will remain with Microsoft rather than switching to Google Apps or IBM Lotus.

At Lotusphere 2008, IBM offered a hint at its cloud computing goals with Bluehouse, a SaaS extranet targeted toward small- to mid-size businesses. The product evolved into LotusLive Engage, a general business collaboration solution with social networking capabilities from IBM’s LotusLive Connections suite, at Lotusphere 2009. In the latter half of 2009, the company sought to fill the void left by the absence of email by introducing its hosted email solution, LotusLive iNotes. iNotes costs $3 per user per month, or $36 per user per year. Additionally, IBM offers LotusLive Connections, a hosted social networking solution, as well as the aforementioned LotusLive Engage.

Vice president of online collaboration for IBM Sean Poulley told EWeek.com that IBM is banking on companies using its email service to adopt its social networking services, saying, “It’s unusual that they just buy one of the services.” Currently over 18 million paid seats use hosted versions of IBM’s Lotus software.

IBM’s efforts in the cloud began to really get attention when the company scored Panasonic as a customer late last year. In its first year of implementing LotusLive iNotes, the consumer electronics maker plans on migrating over 100,000 users from Lotus Notes, Exchange and Panasonic’s proprietary email solution to LotusLive.

When it comes down to it, customers have different reasons for choosing Google, Microsoft or IBM. All three companies have major plans for 2010, and each company has a competitive edge. For more information regarding Cloud Computing please visit Nubifer.com.

The Main Infrastructure Components of Cloud Computing

Cloud computing is perhaps the most-used buzz word in the tech world right now, but to understand cloud computing is to be able to point out its main infrastructure components in comparison to older models.

So what is cloud computing? It is an emerging computing model that allows users to gain access to their applications from virtually anywhere by using any connected device they have access to. The cloud infrastructure supporting the applications is made transparent to users by a user-centric interface. Applications live in massively scalable data centers where computational resources are able to be dynamically provisioned and shared in order to achieve significant economies of scale. The management costs of bringing more IT resources into the cloud can be significantly decreased due to a strong service management platform.

Cloud computing can be viewed simultaneously as a business delivery model and an infrastructure management methodology. As a business delivery model, it provides a user experience through which hardware, software and network resources are optimally leveraged in order to provide innovative services on the web. Servers are provisioned in adherence with the logical requirements of the service using advanced, automated tools. The cloud enables program administrators and service creators to use these services via a web-based interface that abstracts away the complex nature of the underlying dynamic infrastructure.

IT organizations can manage large numbers of highly virtualized resources as a single large resource thanks to the infrastructure management methodology. Additionally, it allows IT organizations to greatly increase their data center resources without ramping up the number of people typically required to maintain that increase. A cloud will thus enable organizations currently using traditional infrastructures to consume IT resources in the data center in new, exciting, and previously-unavailable ways.

Companies with traditional data center management practices know that it can be time-intensive to make IT resources available to an end user because of the many steps it involves. These include procuring hardware, locating raised floor space, not to mention sufficient power and cooling, allocating administrators to install operating systems, middleware and software, provisioning the network and securing the environment. Companies have discovered that this process can take two to three months, if not more, while IT organizations re-provisioning existing hardware resources find that it takes weeks to finish.

The cloud solves this problem: it implements automation, business workflows and resource abstraction that permit a user to browse a catalog of IT services, add them to a shopping cart and submit the order. Once the order is approved by an administrator, the cloud handles the rest. In this way, the process cuts the time usually required to make those resources available to the customer from long months to mere minutes.
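As a rough illustration of that catalog-and-order flow, here is a minimal sketch; the classes, service names and statuses are hypothetical and do not correspond to any particular cloud provider’s API.

```python
# Hypothetical model of a self-service cloud order: the user picks items from
# a catalog, submits the order, and automation provisions it after approval.
from dataclasses import dataclass, field

@dataclass
class OrderItem:
    service: str   # e.g. "Linux VM (2 cores, 4 GB)" from the service catalog
    quantity: int

@dataclass
class Order:
    items: list = field(default_factory=list)
    status: str = "draft"

    def submit(self):
        self.status = "pending_approval"

    def approve(self):
        self.status = "provisioning"
        for item in self.items:
            # In a real cloud, this step is handled by the provider's automation.
            print(f"Provisioning {item.quantity} x {item.service} ...")
        self.status = "delivered"

order = Order()
order.items.append(OrderItem("Linux VM (2 cores, 4 GB)", 3))
order.items.append(OrderItem("MySQL database", 1))
order.submit()
order.approve()  # minutes, not months
```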

Additionally, the cloud provides a user interface that allows the user and the IT administrator to manage the provisioned resources through the life cycle of the service request very easily. Once a user’s resources have been delivered by the cloud, the user can track the order (which usually consists of a variable number of servers and software); view the health of those resources; add additional servers; change the installed software; remove servers; increase or decrease the allocated processing power, storage or memory; and start, stop and restart servers. Yes, really. These self-service functions can be performed 24 hours a day and take just minutes to complete. This is in stark contrast to a non-cloud environment, in which it would take hours or even days to have hardware or software configurations changed or to have a server restarted. For more information regarding Infrastructure components for a Cloud ecosystem please visit Nubifer.com.

Media Streaming Added to Amazon CloudFront

Amazon Web Services LLC unveiled media streaming for its content delivery service, Amazon CloudFront, on December 16, 2009. The brand new feature enables streaming delivery of audio and video content, thus providing an alternative to progressive download where end users download a full media file.

According to Amazon officials, Amazon CloudFront streams content from a worldwide network of 14 edge locations, which ensures low latencies and also offers cost-effective delivery. Like all Amazon Web Services, Amazon CloudFront requires no up-front investment, minimum fees or long-term contracts and uses the pay-what-you-use model.

General manager of Amazon CloudFront Tal Saraf said in a statement released in conjunction with the company’s announcement, “Many customers have told us that an on-demand streaming media service with low latency, high performance and reliability has been out of reach—it was technically complex and required sales negotiations and up-front commitments. We’re excited to add streaming functionality to Amazon CloudFront that is so easy, customers of any size can start streaming content in minutes.”

Amazon reports that viewers literally watch the bytes as they are delivered because content is delivered to end users in real time. In addition to giving the end user more control over their viewing experience, streaming also lowers costs for content owners by reducing the amount of data transferred when end users fail to watch the whole video.

Users only need to store the original copy of their media objects in the Amazon Simple Storage Service (Amazon S3) in order to stream content with Amazon CloudFront, and then enable those files for distribution in Amazon CloudFront with a simple command using the AWS Management Console or the Amazon CloudFront API. Amazon officials said that end users requesting streaming content are automatically routed to the CloudFront edge location best suited to serve the stream, thus end users can get the highest bit rate, lowest latency and highest-quality stream possible. Due to multiple levels of redundancy built into Amazon CloudFront, customers’ streams are served reliably and with high quality.
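As a rough illustration of the kind of API call involved, the sketch below uses the AWS SDK for Python (boto3) to create an RTMP streaming distribution that points at an S3 bucket; the bucket name and caller reference are placeholders, and AWS has since retired RTMP streaming distributions, so this reflects the service as it was described at the time rather than current guidance.

```python
# Illustrative sketch only: create a CloudFront RTMP streaming distribution
# that serves media stored in an S3 bucket (placeholders throughout).
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_streaming_distribution(
    StreamingDistributionConfig={
        "CallerReference": "my-video-library-001",  # any unique string
        "S3Origin": {
            "DomainName": "my-media-bucket.s3.amazonaws.com",
            "OriginAccessIdentity": "",             # public bucket in this sketch
        },
        "Comment": "On-demand RTMP streaming for my video library",
        "TrustedSigners": {"Enabled": False, "Quantity": 0},
        "Enabled": True,
    }
)

# The returned domain name (for example, sxxxxxxxx.cloudfront.net) is what a
# Flash player such as the JW Player would use as its RTMP streamer endpoint.
print(response["StreamingDistribution"]["DomainName"])
```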

Daniel Rhodes of video sharing website Vidly said in a statement, “In the five minutes it took us to implement Amazon CloudFront’s streaming service, Vidly was able to both cut costs and offer additional features that significantly improved the in-video experience for our worldwide audience. Without any upfront capital, we are able to side-step the purchase and administration of streaming servers while still getting all the same benefits. Amazon CloudFront brings all the benefits together in such a great tightly integrated way with Amazon’s other services we use and is reliably distributed worldwide, all with barely any work on our part.”

LongTail Video has added support for Amazon CloudFront streaming to its popular open source video player, the JW Player. “There was a great fit between the JW Player and Amazon CloudFront streaming: both focus on making it as easy as possible for anyone to incorporate high quality video into Websites,” said LongTail Video co-founder Jeroen “JW” Wijering.

Using Adobe’s Flash Media Server 3.5.3 (FMS), Amazon CloudFront lets developers take advantage of many features of FMS. Customers can decide to deliver their content via the Flash standard Real Time Messaging Protocol (RTMP) or using its encrypted version, RTMPE (for added security). Customers can also use advanced features like dynamic bit rate streaming (which automatically adjusts the bit rate of the stream played to the end user based on the quality of the user’s connection). Amazon CloudFront streaming currently supports on-demand media; support for live events is slated for 2010. For more information regarding Cloud Hosting options please visit Nubifer.com.

The Effects of Platform-as-a-Service (PaaS) on ISVs

Over the past decade, the ascent of Software-as-a-Service (SaaS) has allowed Independent Software Vendors (ISVs) to develop new applications hosted and delivered on the Web. Until recently, however, any ISV creating a SaaS offering has been required to create its own hosting and service delivery infrastructure. With the rise of Platform-as-a-Service (PaaS) over the past two years, this has all changed. As the online equivalent of conventional computing platforms, PaaS provides an immediate infrastructure on which an ISV can quickly build and deliver a SaaS application.

Many ISVs are hesitant to bind their fate to an emerging platform provider, yet those that have taken a leap of faith and adopted PaaS early on have reaped the benefits, seeing dramatic reductions in development costs and timescales. PaaS supercharges SaaS by lowering barriers to entry and shortening time-to-market, thus quickening the pace of innovation and intensifying competition.

The advent of PaaS will forever alter the nature of ISVs: not only those who choose to introduce SaaS offerings, but also those who remain tethered to conventionally-licensed, customer-operated software products. PaaS alters the competitive landscape across a variety of parameters:

Dramatically quicker cycles of innovation

By implementing the iterative, continuous improvement upgrade model of SaaS, PaaS allows developers to monitor and subsequently respond to customer usage and feedback and quickly incorporate the latest functionality into their own applications.

Lowered price points

Developers’ costs are cut down across multiple dimensions by the shared, pay-as-you-go, elastic infrastructure of PaaS. This results in greatly reduced development and operations costs.

Multiplicity of players from reduced barriers to entry

Large numbers of market entrants are attracted to the low costs of starting on a PaaS provider’s infrastructure. These entrants, which would not otherwise be able to fund their own infrastructure, significantly increase innovation and competition.

New business models, propositions, partner channels and routes to market

New ways of offering products and bringing them to market, many of them highly disruptive to established models, are created by the “as-a-service” model.

It is important for ISVs to understand and evaluate how PaaS differs from other platforms in order for them to remain in control of their own destiny. PaaS is a new kind of platform, the dynamics of which are different from those of conventional software platforms. Developers need to be wary of assessing PaaS alternatives on the basis of criteria that are not valid when applied to PaaS. For more information on Platform as a Service please visit Nubifer.com.

Collaboration Transitioned to the Cloud

Cloud computing provides ample possibilities when enabling richer communication, whether inside or outside the firewall. Regardless of the location, area of specialization or the format of information, the Web offers an ideal forum for project stakeholders to share ideas. Collaboration can play a vital role in the discovery process when a browser is all that is required to interact.

There are many technical considerations that need to be addressed when moving collaboration into the cloud. The data involved in modern scientific research is vast and complex, and as such it isn’t possible to take legacy infrastructure that is firmly planted on the ground and move it into the cloud. There are simply too many transactional systems bundled around these data hubs to get to the core.

On balance, too much latency would be introduced if thick-client technologies were installed at every site to transact on one or many data warehouses. Organizations should instead focus on enabling the integration, shared access and reporting of project-centric data via a cloud-based project data mart. This should be done rather than isolating information within disciplinary silos, and it requires a services-based information platform. The services-based information platform must be capable of extracting the most relevant scientific intelligence from diverse systems and formats.

Take a fictional pharmaceutical company, for example, that is working on a drug discovery project with a Contract Research Organization (CRO). Many scientific organizations actually install their legacy IT systems at the outsourcer’s site as a way to exchange and analyze data. This is costly and also inefficient because systems need to be maintained within the organization’s internal IT infrastructure and at the CRO site.

The redundancies multiply with each department, location and partner involved. With a cloud-based project data mart and reporting built on top of a services-based architecture, the workflows, critical information and transactions that collaborators need to access can be maintained globally with a lower support burden and seat cost. To learn more about Collaboration in the Cloud, please visit Nubifer.com.

Nubifer Cloud:Portal

Reducing capital expenditure for hardware supporting your software is a no-brainer, and Nubifer Cloud:Portal allows you to leverage the computing power and scalability of the top-tier cloud platforms. A powerful suite of core portal technologies, interfaces, database schematics and service-oriented architecture libraries, Cloud:Portal comes in several configuration options and you are sure to find the right fit for your enterprise.

Nubifer understands that certain clients requiring custom on-premise and cloud-hosted portals may also require different application layers and data layer configurations. For this reason, Nubifer leverages RAD development techniques to create robust, scalable programming code in ASP.NET (C#), ASP, PHP, Java Servlets, JSP, ColdFusion and Perl. Nubifer also supports a myriad of data formats, database platform types and cloud SOA architectures, such as SQL Server (and Express), Microsoft® Access, MySQL, Oracle and more.

Nubifer Cloud:Portal Provides Enterprise Grade Solutions

Your new Nubifer Cloud:Portal is created by Nubifer’s professional services team through customizing and enhancing one or more of the portal types listed below. In addition, a wide range of cloud modules are compatible and can be added as “plug-in” modules to extend your portal system.

The following portal types are available:

·         Online Store

·         Task Management System

·         Employee Directory

·         Bug / Task Tracker

·         Forum / Message Board

·         Wizard Driven Registration Forms

·         Time Sheet Manager

·         Blog / RSS Engine Manager

·         Calendar Management System

·         Events Management

·         Custom Modules to Match Business Needs

At its most basic, the cloud is a nebulous infrastructure owned and operated by an outside party that accepts and runs workloads created by customers. Nubifer Cloud:Portal is compatible with cloud platforms and APIs like Google APIs for Google Applications and Windows® Azure, and also runs on standard hosting platforms.

Cloud:Portal boasts several attractive portal management features. Multi-level Administrative User Account Management lets you manage accounts securely, search by account and create and edit all accounts. The Public Links and Articles Manager allows you to create, edit or archive articles, search indexed content and use the Dynamic Links manager. Through “My Account” User Management, users can manage their own account and upload and submit custom files and information. The Advanced Security feature enables session-based authentication and customized logic.

That’s not all! There are other great features associated with Nubifer Cloud:Portal. Calendar and Events lets you add and edit calendars; calendars can be user-specific or organization-specific, and events can be tied to calendars. The system features dynamic styles because it supports custom style sheets dynamically triggered by user choice or by configuration settings, which is great for co-branding or a multi-host look and feel. Web Service XML APIs for 3rd party integration feature SOA architecture, are web service enabled and are interoperable with the top-tier cloud computing platforms by exposing and consuming XML APIs. Lastly, submission forms with email and database submission are another important feature; submission forms trigger send-mail functionality and are manageable by Portal Admins.

Cloud:Portal employs R.I.A. reporting, such as User Reports, Search by Category Reports, Transaction Details Reports, Simple Reports and Timesheet Reports, through Flex and Flash reporting.

Companies using Cloud:Portal are delivered a “version release” code base for their independent endeavors. These companies leveraging Nubifer’s professional portal service have access, ownership and full rights to the “code instance” delivered as the final release version of their customized cloud portal. This type of licensing gives companies a competitive edge by making them the sole proprietor of their licensed copy of the cloud portal.

Enterprise companies leverage the rapid and rich offering delivered by our portal code models and methodologies. As a result, companies enjoy the value of rapid prototyping and application enhancement, with faster-to-market functionality in their portals.

Nubifer Cloud:Portal technology is designed to facilitate and support your business model today and in the future, by expanding as your company evolves. Within our process for portal development, we define and design the architecture, develop and enhance the portal code and deliver and deploy to your public or private environment. Please visit nubifer.com to learn more about our proprietary offering, Cloud:Portal.

Security in the Cloud

One major concern has loomed over companies considering a transition into the cloud: security. The “S” word has affected the cloud more than other types of hosted environments, but most concerns about security are not based on reality.

Three factors about cloud security:

1.       Cloud security is almost identical to internal security, and the security tools used to protect your data in the cloud are the same ones you use each day. The only difference is that the cloud is a multi-tenant environment with multiple companies sharing the same cloud service provider.

2.       Security issues within the cloud can be addressed with the very same security tools you currently have in place. While security tools are important, they should not be perceived as a hindrance when making the transition into the cloud. Over time, the commodity nature of IT will require that you transition your technologies to the cloud in order to remain financially competitive. This is why it is important to start addressing security measures now in order to prepare for the future.

3.       As long as you choose a quality cloud provider, your security within the cloud will be as good as, and perhaps even better than, your current security. The level of security within the cloud is designed for the riskiest client in the cloud, and thus you will receive that same security whatever your level of risk.

Internal or External IT?

Prior to asking questions about security within the cloud, you need to ask what exactly should move into the cloud in the first place: namely, commodity IT. Back when companies first began taking advantage of IT, the initial businesses to computerize their organization’s processes had significant gains over competitors. As the IT field grew, however, the initial competitive benefits of computerization began to wane, and computerization thus became a requirement in order to simply remain relevant. As such, an increasing amount of IT operates as a commodity.

Cloud computing essentially allows business to offload commodity technologies and free up resources and time to concentrate on the core business. For example, a company manufacturing paper products requires a certain amount of IT to run its business and also make it competitive. The company also runs a large quantity of commodity IT; this commodity technology takes time, money, energy and people away from the company’s business of producing paper products at a price that rivals competitors. This is where cloud computing comes in.

A commodity IT analysis helps you determine which parts of your IT can be moved externally: list out all of the functions that your IT organization performs and decide whether you think of each activity as a commodity or not. A hypothetical example of that exercise follows.
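As a trivial illustration of that exercise, the sketch below tags a handful of hypothetical IT functions as commodity or core; the functions and their classifications are examples only, not recommendations.

```python
# Hypothetical commodity-IT analysis: each function is tagged as commodity
# (a candidate to move externally) or core (kept and differentiated in-house).
it_functions = {
    "Email and calendaring": "commodity",
    "File storage and backup": "commodity",
    "Help desk ticketing": "commodity",
    "Proprietary pricing engine": "core",
    "Plant-floor control systems": "core",
}

candidates = [name for name, kind in it_functions.items() if kind == "commodity"]
print("Candidates to move externally:", ", ".join(candidates))
```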

Internal IT Security

Some think that internal IT no longer helps businesses set themselves apart from other businesses. This devaluing of IT leads many companies to fail to adequately fund the budgets required to operate a first-class IT infrastructure. In addition, an increasing number of security mandates from external and internal sources means that IT can’t always fund and operate as required.

Another problem involves specialization and its effect on business function, because businesses exist as specialized entities. When it comes to funding and maintaining a non-core part of the business, IT faces a problem. For example, an automotive maker does not start a food production company to feed its employees, because food is not its core business; likewise, it is unlikely that the automotive manufacturer’s IT department will be as successful as its manufacturing operation. By contrast, a business whose only product line or service is IT should be better at providing IT. So if the automotive maker isn’t going to operate a best-in-class IT business, why would its security be expected to be best-in-class? A company with IT as its business is the better choice for securing your data, because the quality of its product and its market success depend on its security being effective.

Factors to consider when picking a cloud provider:

Like internal IT, cloud providers face internal and external threats that can be accepted or mitigated, and these challenges are all manageable:

Security assessment: Most organizations relax their level of security over time; to combat this, the cloud provider must perform regular security assessments. The resulting security report must be given to each client immediately after the assessment is performed so the client knows the current state of its security in the cloud.

Multi-tenancy: The cloud provider should design its security to ensure that it meets the needs of its higher-risk clients, and in turn all clients will reap the rewards of this.

Shared Risk: The cloud service provider will not be the cloud operator in many instances; instead, the cloud service provider may be delivering a value-added service on top of another cloud provider’s service. Take a Software-as-a-Service provider, for example. The SaaS provider needs infrastructure, and it may make more sense to get that infrastructure from an Infrastructure-as-a-Service provider than to build its own. Within this kind of multi-tier service, the risk of security issues is shared by each tier, because a problem affects all parties involved at the various layers. The architecture used by the underlying cloud provider must therefore be examined and that information taken into account when assessing the client’s total risk mitigation plan.

Distributed Data Centers: Because providers can offer a geographically distributed environment, a cloud computing environment should, in theory, be less prone to disasters. In reality, many organizations sign up for cloud computing services that are not geographically distributed, so they should require that their provider have a working and regularly tested disaster recovery plan (including SLAs).

Staff Security Screening: As with other types of organizations, contractors are often hired to work for cloud providers, and these contractors should be subject to a full background investigation.

Physical Security: When choosing a cloud provider, physical and external threats should be analyzed carefully. Some important questions to ask are: Do all of the cloud provider’s facilities have the same levels of security? Is your organization being shown the most secure facility with no guarantee that your data will actually reside there?

Policies: Cloud providers are not immune to data leaks or security incidents, which is why they need incident response policies and procedures for each client that feed into their overall incident response plan.

Data Leakage: From a security standpoint, data leakage is one of the greatest organizational risks. The cloud provider must therefore be able to map its policies to the security mandates you must comply with and be willing to discuss the issues at hand.

Coding: The in-house software used by any cloud provider may contain application bugs. For this reason, each client should make sure that the cloud provider follows secure coding practices. All code should additionally be written using a standard, documented methodology that can be demonstrated to the customer.

In conclusion, security remains a major concern, but it is important to understand that the technology used to secure your organization within the cloud isn’t untested or new. Security questions in the cloud represent the logical continuation of outsourcing commodity services to some of the same IT providers you have been confidently using for years. Moving IT elements into the cloud is simply a natural step in the overall evolution of IT. Visit nubifer.com for more information regarding the ever-changing environment of cloud security.

Survey Reveals Developers Concentrating on Hybrid Cloud in 2010

According to a survey of application developers conducted by Evans Data, over 60 percent of the IT shops polled plan to adopt a hybrid cloud model in 2010. The results of the poll, released on January 12, 2010, indicate that 61 percent of the more than 400 participating developers stated that some portion of their companies’ IT resources will move into the public cloud within the next year.

The hybrid cloud is set to dominate the IT landscape in 2010 because, of those surveyed, over 87 percent of the developers said that half or less of their resources will move. A statement obtained by eWeek.com quotes Evans Data CEO Janel Garvin as saying, “The hybrid Cloud presents a very reasonable model, which is easy to assimilate and provides a gateway to Cloud computing without the need to commit all resources or surrender all control and security to an outside vendor. Security and government compliance are primary obstacles to public cloud adoption, but a hybrid model allows for selective implementation so these barriers can be avoided.”

Evans Data conducted its survey over November and December of last year to examine timelines for public and private cloud adoption, ways to collaborate and develop within the cloud, obstacles and benefits of cloud development, architectures and tools for cloud development, virtualization in the private data center and other aspects of cloud computing. The survey also found that 64 percent of developers surveyed expect their cloud apps to reach mobile devices in the near future.

The poll also revealed that the preferred database for use in the public cloud is MySQL, favored by over 55 percent of developers, and that VMware is the preferred hypervisor vendor for use in a virtualized private cloud, followed by Microsoft and IBM. To learn more, please visit nubifer.com.

Maximizing Effectiveness in the Cloud

At its most basic, the cloud is a nebulous infrastructure owned and operated by an outside party that accepts and runs workloads created by customers. When thinking about the cloud in this way, the basic question concerning cloud computing becomes, “Can I run all of my applications in the cloud?” If you answer “no” to that question, then ask yourself, “Which portions of my applications and data can safely run in the cloud?” When assessing how to include cloud computing in your architecture, one way to maximize your effectiveness in the cloud is to see how it can complement your existing architectures.

Current cloud tools strive to manage provisioning and a degree of mobility, with security and audit capabilities on the horizon, in addition to the ability to move the same virtual machine in and out of the cloud. This is where virtualization comes into play: it creates a new kind of data center that poses a range of challenges for traditional data center management tools. Identity, mobility and data separation are a few of the obvious issues virtualization raises.

1. Identity

Server identity becomes crucial when you can make 20 identical copies of an existing server and then distribute them around the environment with just a click of a mouse. In this situation, traditional identity based on physical attributes no longer measures up (a brief illustrative sketch of tracking logical VM identity follows this list).

2. Mobility

While physical servers are stationary, VMs are designed to be mobile, and tracking and tracing them throughout their life cycles is an important part of maintaining and proving control and compliance.

3. Data separation

Resources are shared between host servers and the virtual servers running on them; portions of the host’s hardware (such as the processor and memory) are allocated to each virtual server. There have not yet been any breaches of isolation between virtual servers, but that may not last.
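To make the identity issue concrete, here is a minimal, purely illustrative Java sketch of tracking virtual machines by an assigned logical identifier rather than by physical attributes such as hostname or MAC address, which are copied verbatim when an image is cloned. The class and method names are hypothetical and do not belong to any particular virtualization product.

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    /** Hypothetical registry that gives every VM a logical identity that
     *  survives cloning and migration, unlike hostname or MAC address. */
    public class VmRegistry {

        /** Metadata kept per VM instance. */
        public record VmRecord(UUID logicalId, String imageTemplate, String currentHost) {}

        private final Map<UUID, VmRecord> registry = new ConcurrentHashMap<>();

        /** Called whenever a VM is cloned from a template: a fresh logical id is
         *  minted even though the disk image (and its embedded identity) is identical. */
        public UUID registerClone(String imageTemplate, String currentHost) {
            UUID id = UUID.randomUUID();
            registry.put(id, new VmRecord(id, imageTemplate, currentHost));
            return id;
        }

        /** Called when a VM migrates to another host; its logical identity is preserved. */
        public void recordMigration(UUID id, String newHost) {
            registry.computeIfPresent(id,
                    (key, record) -> new VmRecord(record.logicalId(), record.imageTemplate(), newHost));
        }
    }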

These challenges are highlighted by cloud governance. In the cloud, these three issues are managed and controlled by someone outside of your IT department, and additional challenges that are specific to the cloud now exist. Some of them include life cycle management, access control, integrity and cloud-created VMs.

1. Life cycle management

How is a workload’s life cycle managed once it has been transferred to the cloud?

2. Access control

Who was given access to the application and its data while it was in the cloud?

3. Integrity

Did the workload’s integrity remain intact while it was in the cloud, or was it altered?

4. Cloud-created VMs

Clouds generate their own workloads and subsequently transfer them into the data center. These so-called “virtual appliances” are downloaded into data centers every day, and their identity, integrity and configuration need to be managed and controlled there.

Cloud computing has the potential to increase the flexibility and responsiveness of your IT organization, and there are things you can do to be pragmatic about its evolution. They include understanding what is needed to play in the cloud, gaining experience with “internal clouds” and testing external clouds.

1. Understanding what is needed to play in the cloud

The term “internal clouds” has arisen from the use of virtualization in the data center. It is important to discuss with auditors how virtualization is affecting their requirements; new policies may subsequently be added to your internal audit checklists.

2. Gaining experience with “internal clouds”

It is important to be able to efficiently implement and enforce those policies with the right automation and control systems. Once you have established what you need internally, it becomes easier to practice the same discipline in the cloud.

3. Testing external clouds

Using low-priority workloads helps provide a better understanding of what is needed for life cycle management and establishes what role external cloud infrastructures may play in your overall business architecture.

Essentially, you must be able to manage, control and audit your own internal virtual environment before you can do so with an external cloud environment. Please visit nubifer.com to learn more about maximizing effectiveness in the cloud.

Scaling Storage and Analysis of Data Using Distributed Data Grids

One of the most important new methods for overcoming performance bottlenecks in a large class of applications is data-parallel programming on a distributed data grid. This method is predicted to have important applications in cloud computing over the next couple of years, and eWeek Knowledge Center contributor William L. Bain describes ways in which a distributed data grid can be used to implement powerful, Java-based applications for parallel data analysis.

In the current Information Age, companies must store and analyze enormous amounts of business data. Companies that can efficiently search that data for important patterns will have a competitive edge over others. An e-commerce Web site, for example, needs to monitor online shopping carts in order to see which products are selling faster than others. Another example is a financial services company, which needs to hone its equity trading strategy as it optimizes its response to rapidly changing market conditions.

Businesses facing these challenges have turned to distributed data grids (also called distributed caches) in order to scale their ability to manage rapidly changing data and sort through data to identify patterns and trends that require a quick response. A few key advantages are offered by distributed data grids.

Distributed data grids store data in memory instead of on disk for quick access. Additionally, they run seamlessly across multiple servers to scale performance. Lastly, they provide a fast, easy-to-use platform for running “what if” analyses on the data they store. By breaking the sequential bottleneck, they can take performance to a level that stand-alone database servers cannot match.

Three simple steps for building a fast, scalable data storage and analysis solution:

1. Store rapidly changing business data directly in a distributed data grid rather than on a database server

Distributed data grids are designed to plug directly into the business logic of today’s enterprise applications and services. They match the in-memory view of data already used by business logic by storing data as collections of objects rather than as relational database tables. Because of this, distributed data grids are easy to integrate into existing applications using simple APIs, which are available for most modern languages such as Java, C# and C++.

Distributed data grids run on server farms, so their storage capacity and throughput scale simply by adding more grid servers. When hosted on a large server farm or in the cloud, a distributed data grid can store and quickly access quantities of data far beyond what a stand-alone database server can handle.
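As a rough sketch of what this looks like from the application’s point of view, the Java fragment below stores shopping-cart objects in a distributed data grid through a plain key/value interface; the DataGrid interface and its methods are hypothetical stand-ins, since each commercial grid product exposes its own client API.

    import java.io.Serializable;
    import java.math.BigDecimal;
    import java.util.List;

    /** Business object stored directly in the grid; it must be serializable so the
     *  grid can move and replicate it between servers. */
    public class ShoppingCart implements Serializable {
        private final String customerId;
        private final List<String> productIds;
        private final BigDecimal total;

        public ShoppingCart(String customerId, List<String> productIds, BigDecimal total) {
            this.customerId = customerId;
            this.productIds = productIds;
            this.total = total;
        }
        public String getCustomerId() { return customerId; }
        public List<String> getProductIds() { return productIds; }
        public BigDecimal getTotal() { return total; }
    }

    /** Hypothetical client-side view of a distributed data grid. */
    interface DataGrid<K, V> {
        void put(K key, V value);   // the grid decides which server owns the key
        V get(K key);               // returns null if the key is not in the grid
        void remove(K key);
    }

    class CartService {
        private final DataGrid<String, ShoppingCart> grid;

        CartService(DataGrid<String, ShoppingCart> grid) { this.grid = grid; }

        void saveCart(String sessionId, ShoppingCart cart) {
            grid.put(sessionId, cart);   // in-memory write, partitioned and replicated by the grid
        }

        ShoppingCart loadCart(String sessionId) {
            return grid.get(sessionId);  // served from memory by whichever server owns the key
        }
    }

Note that adding grid servers increases the capacity and throughput available to code like this without changing it.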

2. Integrate the distributed data grid with database servers in an overall storage strategy

Distributed data grids are used to complement, not replace, database servers, which remain the authoritative repositories for transactional data and long-term storage. With an e-commerce Web site, for example, a distributed data grid would hold shopping carts to efficiently manage a large workload of online shopping traffic, while a back-end database server would store completed transactions, inventory and customer records.

Carefully separating application code used for business logic from code used for data access is an important factor in integrating a distributed data grid into an enterprise application’s overall storage strategy. Distributed data grids fit naturally into business logic, which manages data as objects. This is the code where rapid access to data is required and where distributed data grids provide the greatest benefit. The data access layer, in contrast, usually focuses on converting objects into a relational form for storage in database servers (or vice versa).

A distributed data grid can be integrated with a database server so that it automatically retrieves data from the database server when it is missing from the grid. This is incredibly useful for certain types of data, such as product or customer information, which is stored in the database server and retrieved when the application needs it. Most types of rapidly changing business logic data, however, can be stored solely in the distributed data grid without ever being written out to a database server.
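One common way to wire the grid to the authoritative database is a read-through pattern: serve from the grid when possible and fall back to the database server on a miss, caching the result for the next reader. The sketch below is illustrative only; it reuses the hypothetical DataGrid interface from the earlier example and assumes a generic repository standing in for the database server.

    import java.util.Optional;

    /** Hypothetical repository representing the authoritative database server. */
    interface CustomerRepository {
        Optional<CustomerRecord> findById(String customerId);   // e.g. backed by JDBC
    }

    record CustomerRecord(String customerId, String name, String email) {}

    /** Read-through wrapper: check the grid first, fall back to the database on a
     *  miss, then populate the grid so subsequent reads stay in memory. */
    class ReadThroughCustomerCache {
        private final DataGrid<String, CustomerRecord> grid;
        private final CustomerRepository database;

        ReadThroughCustomerCache(DataGrid<String, CustomerRecord> grid, CustomerRepository database) {
            this.grid = grid;
            this.database = database;
        }

        Optional<CustomerRecord> get(String customerId) {
            CustomerRecord cached = grid.get(customerId);
            if (cached != null) {
                return Optional.of(cached);                            // fast in-memory hit
            }
            Optional<CustomerRecord> fromDb = database.findById(customerId);
            fromDb.ifPresent(record -> grid.put(customerId, record));  // warm the grid for next time
            return fromDb;
        }
    }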

3. Analyze grid-based data by using simple analysis codes as well as the MapReduce programming pattern

After a collection of objects, such as a Web site’s shopping carts, has been hosted in a distributed data grid, it is important to be able to scan this data for patterns and trends. Researchers have developed a two-step method called MapReduce for analyzing large volumes of data in parallel.

In the first step, each object in the collection is analyzed for a pattern of interest by writing and running a simple algorithm that examines one object at a time. This algorithm is run in parallel on all objects so that all of the data is analyzed quickly. In the second step, the results generated by this algorithm are combined to determine an overall result, which will hopefully identify an important trend.

Take an e-commerce developer, for example. The developer could write a simple piece of code that analyzes each shopping cart to rate which product categories are generating the most interest, and run it on all shopping carts throughout the day in order to identify important shopping trends.
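A minimal, purely illustrative sketch of that two-step pattern is shown below, reusing the ShoppingCart type from the earlier example. A real data grid would supply its own mechanism for running the per-object step on every grid server in parallel and combining the partial results, so the product-to-category lookup and method names here are assumptions made for illustration.

    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;

    class CartTrendAnalysis {

        /** Step one ("map"): analyze a single cart in isolation, counting products by
         *  category. The grid runs this on every cart, on every server, in parallel. */
        static Map<String, Integer> analyzeCart(ShoppingCart cart, Map<String, String> productToCategory) {
            Map<String, Integer> counts = new HashMap<>();
            for (String productId : cart.getProductIds()) {
                String category = productToCategory.getOrDefault(productId, "unknown");
                counts.merge(category, 1, Integer::sum);
            }
            return counts;
        }

        /** Step two ("reduce"): combine the per-cart results into one overall tally;
         *  the categories with the highest counts are generating the most interest. */
        static Map<String, Integer> combine(Collection<Map<String, Integer>> partialResults) {
            Map<String, Integer> overall = new HashMap<>();
            for (Map<String, Integer> partial : partialResults) {
                partial.forEach((category, count) -> overall.merge(category, count, Integer::sum));
            }
            return overall;
        }
    }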

Using this MapReduce programming pattern, distributed data grids offer an ideal platform for analyzing data. Because distributed data grids store data as memory-based objects, the analysis code is easy to write and debug as simple “in-memory” code. Programmers don’t need to learn parallel programming techniques or understand how the grid works. Distributed data grids also provide the infrastructure needed to automatically run this analysis code on all grid servers in parallel and then combine the results. The net result is that the application developer can easily and quickly harness the full scalability of the grid to discover data patterns and trends that are important to the success of the enterprise. For more information, please visit www.nubifer.com.

Answers to Your Questions on Cloud Connectors

Jeffrey Schwartz and Michael Desmond, both editors of Redmond Developer News, recently sat down with corporate vice president of Microsoft’s Connected Systems Division, Robert Wahbe, at the recent Microsoft Professional Developers Conference (PDC) to talk about Microsoft Azure and its potential impact on the developer ecosystem at Microsoft. Responsible for managing Microsoft’s engineering teams that deliver the company’s Web services and modeling platforms, Wahbe is a major advocate of the Azure Services Platform and offers insight into how to build applications that exist within the world of Software-as-a-Service, or as Microsoft calls it, Software plus Services (S + S).

When asked how much of Windows Azure is based on Hyper-V and how much is an entirely new set of technologies, Wahbe answered, “Windows Azure is a natural evolution of our platform. We think it’s going to have a long-term radical impact with customers, partners and developers, but it’s a natural evolution.” Wahbe continued to explain how Azure brings current technologies (i.e. the server, desktop, etc.) into the cloud and is fundamentally built out of Windows Server 2008 and .NET Framework.

Wahbe also referenced the PDC keynote of Microsoft’s chief software architect, Ray Ozzie, in which Ozzie discussed how most applications are not initially created with the idea of scale-out. Explained Wahbe, expanding upon Ozzie’s points, “The notion of stateless front-ends being able to scale out, both across the data center and across data centers requires that you make sure you have the right architectural base. Microsoft will be trying hard to make sure we have the patterns and practices available to developers to get those models [so that they] can be brought onto the premises.”

As an example, Wahbe described a hypothetical situation in which Visual Studio and .NET Framework are used to build an ASP.NET app, which in turn can be deployed either locally or to Windows Azure. The only extra step taken when deploying to Windows Azure is to specify additional metadata, such as what kind of SLA you are looking for or how many instances you are going to run on. As Wahbe explained, the metadata is an XML file and, being an executable model, is one that Microsoft can easily understand. “You can write those models in ‘Oslo’ using the DSL written in ‘M,’ targeting Windows Azure in those models,” concludes Wahbe.
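As a rough illustration of the kind of metadata involved, a Windows Azure service configuration file of that era declared, among other things, how many instances of each role to run. The snippet below is abbreviated and its names are indicative rather than authoritative, so treat it as an assumption about the format rather than a definitive example.

    <!-- ServiceConfiguration.cscfg (abbreviated, illustrative) -->
    <ServiceConfiguration serviceName="MyCloudService">
      <Role name="WebRole">
        <Instances count="3" />  <!-- how many instances to run -->
        <ConfigurationSettings>
          <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>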

Wahbe answered a firm “yes” when asked if there is a natural fit for applications developed in Oslo, saying that it works because Oslo is “about helping you write applications more productively,” and adding that you can write any kind of application, including cloud applications. Although new challenges undoubtedly face development shops, the basic process of writing and deploying code remains the same. According to Wahbe, Microsoft Azure simply provides a new deployment target at a basic level.

As for the differences, developers are going to need to learn a new set of services. One example used by Wahbe: if two businesses were going to connect through a business-to-business messaging app, technology like Windows Communication Foundation can make this an easy process. With the integration of Microsoft Azure, questions about the pros and cons of using the Azure platform and the service bus (which is part of .NET Services) will have to be evaluated. Azure “provides you with an out-of-the-box, Internet-scale, pub-sub solution that traverses firewalls,” according to Wahbe. And what could be bad about that?

When asked if developers should expect new development interfaces or plug-ins to Visual Studio, Wahbe answered, “You’re going to see some very natural extensions of what’s in Visual Studio today. For example, you’ll see new project types. I wouldn’t call that a new tool … I’d call it a fairly natural extension to the existing tools.” Additionally, Wahbe expressed Microsoft’s desire to deliver tools to developers as soon as possible. “We want to get a CTP [community technology preview] out early and engage in that conversation. Now we can get this thing out broadly, get the feedback, and I think for me, that’s the most powerful way to develop a platform,” explained Wahbe of the importance of developers’ using and subsequently critiquing Azure.

When asked about the possibility of competitors like Amazon and Google gaining early share due to the ambiguous time frame of Azure, Wahbe responded serenely, “The place to start with Amazon is [that] they’re a partner. So they’ve licensed Windows, they’ve licensed SQL, and we have shared partners. What Amazon is doing, like traditional hosters, is they’re taking a lot of the complexity out for our mutual customers around hardware. The heavy lifting that a developer has to do to take that and then build a scale-out service in the cloud and across data centers—that’s left to the developer.” Wahbe detailed how Microsoft has base computing and base storage (the foundation of Windows Azure) as well as higher-level services such as the database in the cloud. According to Wahbe, developers no longer have to build an Internet-scale pub-sub system, find a new way to do social networking and contacts, or create reporting services themselves.

In discussing the impact that cloud connecting will have on the cost of development and the management of development processes, Wahbe said, “We think we’re removing complexities out of all layers of the stack by doing this in the cloud for you … we’ll automatically do all of the configuration so you can get load-balancing across all of your instances. We’ll make sure that the data is replicated both for efficiency and also for reliability, both across an individual data center and across multiple data centers. So we think that by doing that, you can now focus much more on what your app is and less on all that application infrastructure.” Wahbe predicts that it will be simpler for developers to build applications with the adoption of Microsoft Azure. For more information on Cloud Connectors, contact a Nubifer representative today.

Nubifer Cloud:Link

Nubifer Cloud:Link monitors your enterprise systems in real time and strengthens interoperability with disparate owned and leased SaaS systems. When building enterprise mash-ups, engineers create custom addresses and custom source code to bridge the white space, also known as the electronic hand-shakes, between the various enterprise applications within your organization. By utilizing Nubifer Cloud:Link, you gain a real-time and historic view of these system-based interactions.

Cloud:Link is designed and configured via robust administrative tools to monitor custom enterprise mash-ups and deliver real-time notifications, warnings and performance metrics for your separate yet interconnected business systems. Cloud:Link offers the technology and functionality to help your company monitor and audit your enterprise system configurations.

ENTERPRISE MONITORING
Powerful components of Cloud:Link make managing enterprise grade mash-ups simple and easy.

  • Cloud:Link interoperates with other analytic engines, including popular tracking engines (e.g., Google Analytics)
  • RIA (Rich Internet Applications): reporting, graphs and charts
  • WEB API handles secure key param calls
  • Verb- and Action-based scripting language powered by “Verbal Script”
  • XML Schema Reporting capabilities
  • Runs on-premise, as an installed solution, or in the cloud as a SaaS offering
  • Client-side recording technology tracks and stores ‘x’ and ‘y’ coordinate usage of enterprise screens for compliance, legal and regulatory play back
  • Graphical snapshots of hot maps show historical views of user interaction and image hit state selections
  • Creates a method for large systems to employ “data and session playback” technologies of system-generated and user-generated interaction sessions in a meaningful and reproducible way

USE CASE
Cloud:Link monitors and reports on enterprise system handshakes, configurations, connections and latency in real time. Additionally, Cloud:Link rolls the data view up to your IT staff and system stakeholders via rich dashboards of charts and performance metrics. Cloud:Link also has a robust and scalable analytic data repository that keeps an eye on the connection points of enterprise applications and audits things like “valid SSL cert warnings or pending expirations”, “mid to high latency warnings”, “IP logging” and “custom gateway SSO (Single Sign-On) landing page monitoring”, among many other tracking features.

SUPPORTS POPULAR WEB ANALYTICS
Cloud:Link also leverages Google Analytics by way of the Cloud:Link extended API, which can complete parallel calls to your Google Analytics account API and send data, logs, analytic summaries, and the physical click and interface points of end users to any third-party provider or data store for use in your own systems.

SERVER SIDE
On the server side, Cloud:Link is a server-based application you can install or subscribe to as a service. Data points and machine-to-machine interactions are tracked at every point during a system interaction. The Cloud:Link monitor can track remote systems without being embedded in or adopted by the networked system; however, if your company chooses to leverage the Cloud:Link API for URI Mashup Tracking, you can see even more detailed real-time reports of system interoperability and up-time.
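Purely as a hypothetical sketch of what reporting into such a Web API might look like, the Java fragment below posts a handshake/latency event over HTTP. The endpoint URL, header name, field names and key value are all assumptions made for illustration and are not Nubifer’s documented Cloud:Link interface.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class HandshakeReporter {

        // Hypothetical endpoint and secure key; real values would come from your Cloud:Link configuration.
        private static final String ENDPOINT = "https://cloudlink.example.com/api/events";
        private static final String API_KEY = "YOUR-SECURE-KEY";

        public static void main(String[] args) throws Exception {
            String payload = """
                {
                  "sourceSystem": "crm-portal",
                  "targetSystem": "billing-saas",
                  "eventType": "handshake",
                  "latencyMillis": 420,
                  "sslCertDaysToExpiry": 37
                }
                """;

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(ENDPOINT))
                    .header("Content-Type", "application/json")
                    .header("X-Api-Key", API_KEY)   // secure key passed with each call
                    .POST(HttpRequest.BodyPublishers.ofString(payload))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Monitoring service responded with status " + response.statusCode());
        }
    }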

CLIENT SIDE
On the client side, leverage Cloud:Link’s browser plug-in within your enterprise to extend your analytic reach into the interactions of your end users. This approach is particularly powerful when tracking large systems used by all types of users. Given the proper installation and setup, your company can leverage robust “Session Playback” of human interaction with your owned and leased corporate business systems.

ADMIN FUNCTIONALITY
Nubifer Inc. focuses on interoperability in the enterprise. Disparate applications operating in independent roles and duties need unified index management, Single Sign-On performance tracking, and application integration monitoring.

  • User Admin logs in and sees a dashboard with default reporting widgets configurable by the admin user
  • “My Reports” (saved, wizard-generated reports), which can be set up to automatically send reports to key stakeholders in your IT or Operations group
  • Logs (raw log review in a text area, exportable to CSV or posted via API to a remote FTP account)
  • Users (known vs. unknown connecting IPs)
  • Systems (URI lists of SSO (Single Sign-On) paths to your SaaS and on-premise apps) – an enterprise schematic map of your on-premise and cloud-hosted applications

At the core of Nubifer’s products are Nubifer Cloud:Portal, Nubifer Cloud:Link and Nubifer Cloud:Connector, which offer real-time machine-to-machine analytics, plus tracking and playback of machine-to-machine interaction for human viewers, using Rich Internet Application components on customizable dashboards. Nubifer Cloud:Link enables large publicly traded or heavily regulated companies to follow compliance laws and regulations, such as SOX, SAS 70 and HL7/HIPAA, and to mitigate the risk of not knowing how your systems are interacting on a day-to-day basis.

PUBLIC AND PRIVATE CLOUD PLATFORM SUPPORT
Currently Cloud:Link is hosted on, and compatible with:

  • Microsoft® Windows Azure™ Platform
  • Amazon® EC2
  • Google® App Engine
  • On-Premise Hosted

To learn more about Cloud:Link technology please contact cloudlink@Nubifer.com or visit nubifer.com/cloud:link to find out how you can begin using the various features offered by Nubifer Cloud:Link.

Welcome to Nubifer Cloud Computing blogs

In this location, we share blogs, research, tutorials and opinions about the ever-changing and emerging arena of cloud computing, software-as-a-service, platform-as-a-service, hosting-as-a-service, and user-interface-as-a-service. We also share key concepts focused on interoperability, while always maintaining an agnostic viewpoint of the technologies and services offered by the top cloud platform providers. For more information, please visit Nubifer.com.