Archive for the ‘ Cloud Monitoring ’ Category

Cloud Computing in 2012 (continued) – On-Demand Elasticity

Cloud computing, at its core, offers a large pool of resources that enables a concept known as elasticity, one of the defining features of the cloud model. The concept is so integral to cloud computing that Amazon Web Services named the major offering in its cloud Amazon EC2 (Elastic Compute Cloud).

Elasticity is sometimes described as dynamic scaling: the ability to change resource consumption in direct response to runtime requirements. This ability is what makes elasticity such an integral part of the cloud model. Most applications require a standard level of resources under normal, steady-state conditions, but need additional computing resources during peak usage.

Before the advent of the cloud model, companies were required to pre-purchase and configure enough capacity not just to operate properly under standard load, but also to handle extended peak loads with sufficient performance. In the self-hosted model, past and present, this means over-provisioning: buying additional hardware and software beyond the application's normal requirements, and asking engineers to accurately predict end-user demand in peak-load scenarios.

With managed hosting, it is possible to start with a small set of computing resources and grow them as the application's requirements grow. But in the managed-hosting model, provisioning new hardware and software dedicated to the application's needs can take weeks, or, for larger companies, even months.

Cloud computing offers thousands of virtualized computing resources that can be leveraged, provisioned, and released on demand in step with the application's peak-load requirements, which makes the elastic cloud model the most powerful and convenient paradigm available to business. When businesses automate this dynamic scaling, the service levels they can offer end users increase substantially.
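The elastic behavior described above can be reduced to a simple sizing rule: provision just enough instances for the current load, within a floor and a ceiling. A minimal sketch follows; the instance counts, per-instance capacity, and bounds are illustrative assumptions, not any particular provider's API.

```python
import math

def desired_instances(current_load, capacity_per_instance,
                      min_instances=2, max_instances=100):
    """Return how many instances are needed to serve the current load."""
    needed = math.ceil(current_load / capacity_per_instance)
    # Never scale below the steady-state floor or above the budget ceiling.
    return max(min_instances, min(needed, max_instances))

# Steady-state traffic: a small, fixed pool suffices.
assert desired_instances(150, capacity_per_instance=100) == 2

# Peak load: the pool grows in direct response to demand...
assert desired_instances(2500, capacity_per_instance=100) == 25

# ...and shrinks again once the peak subsides.
assert desired_instances(90, capacity_per_instance=100) == 2
```

In a real deployment this decision would run periodically against live metrics; the point here is only that capacity tracks demand rather than a pre-purchased worst case.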

Our next blog will focus on virtualization in cloud computing. Please check back often, or subscribe to our blog to stay up-to-date on the latest posts, perspectives, and news about cloud computing. For more information about Nubifer Cloud Computing visit www.NUBIFER.com

Cloud Computing in 2012 (continued) – Shared Resources in the Cloud

A primary characteristic of cloud computing is that the platform leverages pooled, or shared, assets. These computing resources can be bought, controlled externally, and used for public or private purposes. Looking further, it is easy to see that shared computing resources are an integral component of any public or private cloud platform.

Take, for example, a business website and the standard options commonly available in today's market. Shared hosting is one choice companies have had for quite some time; it frees them from managing their own data center by leveraging a third party. Managed hosting services, by contrast, typically lease their customers a dedicated server that is not shared with other users.

Viewed this way, cloud computing looks a lot like the shared hosting model of managed services: the cloud platform provider is a third party that owns, operates, and manages the physical computing hardware and software resources, which are distributed and shared. This, however, is where the similarities between shared or dedicated hosting and cloud computing end.

Setting cloud computing aside for a moment, the move away from self-hosted IT toward outsourced IT services has been evolving for years, and the change has substantial economic impacts. The two main areas of change are capital expenditure (CAPEX) and operational expenditure (OPEX): outsourcing reduces the OPEX associated with operating hardware and software infrastructure, and the shift from CAPEX toward OPEX lowers the barrier to entry when starting a new project.

With self-hosting, companies must allocate funding up front for licenses and hardware purchases: a fixed, out-of-pocket expense at the beginning of the project. With an outsourced offering (a.k.a. managed hosting), the upfront fees are typically about one month's operational cost, plus possibly a set-up fee. Analyzed financially, the annual cost is close to, or slightly lower than, the CAPEX for an equivalent project, and can be further offset by the reduced OPEX needed to manage and care for the infrastructure.
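The structural difference between these cost models is easy to see with a little arithmetic. The figures below are purely hypothetical assumptions chosen for illustration, not quoted rates: self-hosting must buy hardware for the worst case up front, while the cloud pays for peak capacity only while the peak lasts.

```python
# All dollar figures are made-up assumptions for the sake of the math.
server_capex    = 3_000    # purchase price per self-hosted server
cloud_per_hour  = 0.12     # per-instance hourly rate, no upfront fee
steady_servers  = 10       # needed eleven months of the year
peak_servers    = 40       # needed only one month per year
hours_per_month = 730

# Self-hosting: buy enough hardware up front for the worst case.
self_hosted_upfront = peak_servers * server_capex

# Cloud: pay as you go, scaling up only for the peak month.
instance_months = steady_servers * 11 + peak_servers * 1
cloud_annual = cloud_per_hour * hours_per_month * instance_months

print(self_hosted_upfront)    # 120000 spent before the project starts
print(round(cloud_annual))    # 13140 spread across the year
```

The exact numbers matter less than the shape: a large fixed CAPEX outlay versus a smaller, usage-driven OPEX stream with no day-one expense.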

In stark contrast, the cloud model typically involves no up-front fees at all. On closer examination, a subscriber to cloud services can register, purchase, and be using the services in less time than it takes to read this blog.

The dramatic difference in expenditures between these hosting models and the cloud model exists because cloud cost structures are drastically more attractive than anything previously offered to IT. On closer investigation, the economies of scale are multi-faceted and driven by volume: the largest cloud platform providers can offer better price points because they purchase in bulk the goods and services that matter in this paradigm, namely capacity, power, data storage, and compute processing power.

And so continues our 2012 blog series dedicated to understanding the core layers of cloud computing. Our next blog will focus on elasticity in cloud computing. Please check back often, or subscribe to our blog to stay up-to-date on the latest posts, perspectives, and news about cloud computing. For more information about Nubifer Cloud Computing visit www.NUBIFER.com

Guidelines for Cloud Consumers and Providers

Business users are drawn to the cloud. That’s not surprising, considering they tend to see mostly benefits: self-service freedom, scalability, availability, flexibility, and the pleasure of avoiding various nasty hardware and software headaches. IT leaders, though, are a different story; they are not always as ecstatic. They voice uneasiness about cloud security and have legitimate concerns that unauthorized users could get their hands on their applications and data. Moreover, retaining a level of influence and control is a must for them. Can both “sides” meet halfway? Is it attainable to provide the freedom that users want while maintaining the control that IT leaders need?
Simply put, yes. However, doing so entails a collaborative effort: both business users and IT leaders have to assume a few key responsibilities, and you will have to make certain that your cloud provider is doing its part as well.


Your 5 Responsibilities

Here are a few things you need to be held accountable for:
1. Define the business need. Identify the root problem you want to solve with a cloud technology. Is it a perpetually recurring concern, or one that happens irregularly? Did you need an answer “last week,” or do you have time to construct a solution?

Important note: Not all clouds are created equal. Some can run your applications unchanged, with instant access, while others require some tweaking. Recognizing your needs and differentiating cloud technologies will help you determine the correct strategy for the particular business problem that needs attention.

2. Identify your application and process requirements. Once you have accurately defined your business needs, it is time to select the application best suited to meet them. Be clear and precise about the nature of the application, the development process you want to adopt, and the roles and access permissions for each user.

Your teams no longer have to struggle through traditional linear and slow development processes. Instead, the cloud can give them access to the best practices that are fluid and agile. Many self-service solutions can even empower them to run copies of the same environment in parallel.

Simply put, the cloud may lead to breakthrough productivity when used properly. However, if used incorrectly it can also lead to enormous amounts of wasted resources. Having said this, take your time to do your research and choose wisely.

3. Determine your timetable. Contrary to popular belief, cloud projects are not short sprints; they are better described as long journeys. Please plan accordingly.

Because cloud technology is transformative, Nubifer recommends defining your early experiments on a quarterly basis. Learn from the first quarter, take notes, make the necessary adjustments, and then move on to the next. The objective is to build a learning organization that increases control over time and progresses based on data and experience.

4. Establish success factors. Define what success is for you. Do you want to improve the agility of the development process? Maybe you want to increase the availability of your applications? Or perhaps you want to enhance remote collaboration? Define achievement, and have a tool to measure progress as well. Identifying metrics and establishing realistic goals will help you achieve the solution that meets not only your needs, but also your budget and payback time frame.

5. Define data and application security. Companies overlook this critical responsibility more often than they realize. Make sure to do your due diligence and attentively determine whom you can trust with cloud applications. After that, empower them. The following questions need unambiguous answers: What specific roles will team members take in the cloud model? Does everyone fully comprehend the nature of the application and data they are planning to bring to the cloud? Does everyone know how to protect your data? Do they understand your password policies? Dealing with these security factors early on enables you to create a solid foundation for cloud success, while giving you peace of mind about the issue.

Your Provider’s 5 Responsibilities

Meanwhile, make sure your cloud provider offers the following to attain better cloud control:
1. Self-service solutions. Time equals money, so waiting wastes both. Search for cloud applications that are ready from the get-go. Determine whether the solution you are considering can implement the applications and business processes you have in mind immediately, or whether the provider requires you to rewrite the application or change the process entirely.

You also need to determine whether users will require training, or whether they are already equipped to handle a self-service Web interface. The answers to these questions can determine whether adoption will be rapid and smooth, or slow and bumpy.

2. Scale and speed. A well-constructed cloud solution provides the unique combination of scale and speed. It gives you access to the resources at a scale that you need with on-demand responsiveness. This combination will empower your team to run several instances in parallel, snapshot, suspend/resume, publish, collaborate, and accelerate the business cycle.

3. Reliability and availability. It is the responsibility of the cloud provider to make the system reliable and available, as articulated in its Service Level Agreements (SLAs). The provider should set clear and precise operational expectations with you, the consumer, such as 99.9 percent availability.
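It is worth translating an availability figure like 99.9 percent into concrete downtime, since that is what you will actually negotiate against. The arithmetic below is generic; no provider-specific SLA terms are assumed.

```python
def allowed_downtime_minutes(availability, period_hours):
    """Downtime budget implied by an availability target over a period."""
    return (1 - availability) * period_hours * 60

# "Three nines" over a 30-day month (720 hours): about 43.2 minutes.
assert round(allowed_downtime_minutes(0.999, 720), 1) == 43.2

# Over a full year (8,760 hours): nearly 9 hours of permitted downtime.
assert round(allowed_downtime_minutes(0.999, 8760) / 60, 1) == 8.8
```

Knowing the downtime budget makes it easier to judge whether a provider's stated target is actually adequate for your application.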

4. Security. Ask for a comprehensive review of your cloud provider’s security technology and processes. Specifically, ask about the following:

  • Application and data transportability. Can your provider give you the ability to export existing applications, data and processes into the cloud with ease? And can you import them back just as easily?
  • Data center physical security. How does the provider protect its physical data centers? Are they SAS 70 Type II data centers? Are trained and skilled data center operators on site?
  • Access and operations security. Your provider must be clear about how access to physical machines is controlled. How are these machines managed, and who is able to access them?
  • Virtual data center security. In terms of scale and speed, most cloud efficiency derives from how the cloud is architected. Be sure to understand how the individual pieces (the compute nodes, network nodes, storage nodes, etc.) are architected, and how they are secured and integrated.

  • Application and data security. In order to implement your policies, the cloud solution must permit you to define groups and roles with granular role-based access control, proper password policies, and data encryption, both in transit and at rest.
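The granular role-based access control mentioned here is conceptually simple: users belong to groups or roles, and roles carry permissions. A minimal sketch follows; the role names, users, and permission strings are hypothetical examples, not any vendor's schema.

```python
# Roles map to sets of permissions; assignment is indirect, via the role.
ROLE_PERMISSIONS = {
    "admin":     {"read", "write", "delete", "manage_users"},
    "developer": {"read", "write"},
    "auditor":   {"read"},
}

# Group membership assigns each user exactly one role in this sketch.
USER_ROLES = {
    "alice": "admin",
    "bob":   "developer",
    "carol": "auditor",
}

def can(user, permission):
    """Check a user's permission via their assigned role."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

assert can("alice", "manage_users")
assert can("bob", "write")
assert not can("carol", "write")     # auditors are read-only
assert not can("mallory", "read")    # unknown users get nothing
```

The design point is that policy changes touch the role table, not every individual user, which is what makes the control "granular" yet manageable.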

5. Cost efficiencies. With no upfront commitments, cloud solutions let your usage, and your costs, grow with your success. Unlike a managed service or a hosting solution, a cloud solution uses technology to automate the back-end systems, and can therefore operate large resource pools without immense human costs. This translates into real cost savings for you.

Despite business leaders recognizing the benefits of cloud computing technologies, more than a handful still have questions about cloud security and control. Indeed, that is understandable. However, by adopting a collaborative approach and aligning their responsibilities with those of the cloud provider, these leaders can find solutions that offer the best of both worlds. They get the visibility and control they want and need, while giving their teams access to the huge performance gains only the cloud can provide.

Contact Nubifer for a free, no-obligation Cloud Migration consultation.

Has Your Organization Adopted a Cloud Migration Strategy?

Although a growing body of recent research indicates that many organizations will move to the cloud in the short term, there isn’t a lot of information detailing who is using it now and what they are using it for.

A published study by CDW reported that a number of enterprises are actually unaware that they are already using cloud applications and have a limited cloud adoption strategy.

It must be noted, though, that this does not mean these enterprises have no intention of moving to the cloud. It just means that they have not yet approached cloud computing strategically, and have not implemented an organization-wide adoption strategy.

Cloud Computing Strategies

Another interesting note from the CDW report is the percentage of companies claiming to have an enterprise policy on the adoption of cloud computing: only 38%. This comes as a surprise, as the report also concludes that 84% of organizations have already installed, at a minimum, one cloud application.

In March 2011, more than 1,200 IT professionals answered surveys for the CDW 2011 Cloud Computing Tracking Poll, which drew some interesting conclusions. It was discovered that these enterprises are uneasy with public clouds and would rather use private clouds.

Cloud Application Usage

However, it is necessary to examine these statistics with more caution. As mentioned above, 84% of these organizations claim to have, at a bare minimum, one cloud application, yet they still do not consider themselves cloud users.

The reason behind this discrepancy has yet to be determined. In other words, organizations are still unclear as to whether and how cloud computing can integrate with their current enterprise architecture.

This is underscored by the fact that only 42% of those surveyed are convinced that their operations and services can run efficiently in the cloud. Statistics show that the applications operated in the cloud most frequently are the following:

  • Commodity applications such as email (50% of cloud users)
  • File storage (39%)
  • Web and video conferencing (36% and 32%, respectively)
  • Online learning (34%)

Developing a Cloud Strategy

Eight industries were surveyed as part of the CDW Cloud Computing Tracking Poll in March 2011: small businesses, medium businesses, large businesses, the Federal government, State and Local governments, healthcare, higher education, and K-12 public schools. The poll drew conclusions specific to each of the eight industries, and included 150 individuals from each industry who identified themselves as knowledgeable about the current uses and future plans for cloud applications within their respective organizations.

Although there are various hurdles to consider prior to adoption, they can primarily be divided into four segments:

1. Adoption Strategy

Despite as many as 84% of organizations using at least one cloud-based application, only 25% of them have an organization-wide adoption strategy and recognize themselves as cloud users. Just over a third have a formal plan for cloud adoption.

2. ROI Considerations

Approximately 75% of respondents reported cost reductions after migrating applications to a cloud platform.

3. Security

Security is one of the primary obstacles, if not the primary obstacle, holding both current and potential users back. However, quite a number of users, including those currently running cloud applications, have yet to realize the full potential of the security offerings available.

4. Future spending

It is necessary for organizations to discover what future hardware and software acquisitions can be migrated into a cloud ecosystem.

Cloud Computing Now

A lot can happen in five years, and this is especially true for the cloud industry. Currently, this study does not discuss in depth the difference between cloud computing and SaaS. However, it is likely that SaaS could be included in the study, as it defined cloud computing as a “model for enabling convenient, on-demand access to a shared pool of configurable computing resources.”

With this in mind, along with the recent Forrester research on IT spending, it is highly likely that the data CDW has outlined will be significantly different five years from now.

According to Forrester, a record number of organizations will be investing in SaaS technologies, which, broadly, are a subset of cloud computing. The data includes a finding that 25% of the enterprises examined have adopted a new cloud technology this year, with 14% using IaaS, 8% using PaaS, and 6% using business-process-as-a-service.

Does Your Organization Have a Cloud Migration Strategy?

In the end, the research provided some thought-provoking data. It showed that many companies are already leveraging the cloud without even knowing it.

Regardless of the potential ROI and efficiency gains offered by cloud computing, a significant number of companies have yet to seize the opportunity to leverage the scalability and efficiency of modern cloud applications.

Aside from this, according to the research, many companies find themselves without a coherent company-wide strategy for dealing with cloud adoption. This is important to note because it is no secret that a lack of planning can lead to disastrous results, and results like these take substantial financial and organizational effort to fix.

If your organization is one of those lacking a coherent and comprehensive cloud adoption strategy, contact the Cloud accelerator experts at Nubifer to help guide the way. Nubifer partners with the leading vendors in order to provide unbiased cloud application architecture diagrams, white papers, security and compliance risk analysis and migration consulting services.


Developing Cloud Applications: Pattern Usage and Workload Modeling

For enterprise companies today, ‘application workload modeling’ is the process of determining one or more common application usage profiles for use in cloud platform performance testing. Cloud application workload modeling can be accomplished in myriad ways, and is a critical piece of properly planning, developing and implementing successful cloud solution technologies.

Some General Best Practices when Developing Cloud Applications.

  • Understand your application usage patterns. New business processes are prime candidates for building out such apps. Siloed departmental initiatives often evolve into organizational best practices that get adopted by the entire enterprise, and because most of these programs are developed organically from the ground up, they can leverage the interoperability of the cloud and be scaled depending on demand. This also allows the app to be discontinued with minimal cost if the initiative isn’t deemed efficient or necessary to the organization.

  • Develop and Deploy Your Application. Creating a plan and a sequence of key metric drivers helps keep your cloud deployment efforts on track. “Start small, grow fast” is a common mantra of many start-ups (including ours), the overwhelming majority of which are intimidated by the significant cost of on-premise infrastructure.
  1. Define and Identify the objectives
  2. Document and Identify primary usage scenarios
  3. Develop and Determine navigation paths for key scenarios
  4. Design and Determine individual user data and variances
  5. Determine the likelihood of each scenario
  6. Identify peak target load levels
  7. Prepare and Deploy the new cloud solution
  • Monitor Spiked Usage Patterns for “Common Utility Apps”. Within every organization, large or small, there’s at least one program or application that receives spiked usage during a certain time of the year, quarter or month. One example of this pattern is related to corporate tax software, as this app is lightly used for many months, but becomes a highly leveraged application during the end of the fiscal year tax calculation process. Another example is Human Resource Information Systems (HRIS) and the periodic need for employees to subscribe to new company health plans, insurance plans, etc. Other examples include e-commerce websites like Ebay and Buy.com which experience this “peak load” requirement during holiday or special sales seasons.

The common thread across all of these types of “on-demand” cloud apps is that their usage rate is relatively standard or predictable most of the time, but become the most demanded of resources periodically. Utilizing a scalable cloud solution approach in this manner enables greater cost savings and ensures high availability of your enterprise business systems.
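The seven deployment-planning steps listed above can be captured as a simple workload-model checklist before any testing begins. The sketch below shows one way to structure it; the field names and example values are illustrative assumptions, not a prescribed schema.

```python
# A workload model covering objectives, scenarios, navigation paths,
# per-user data variance, scenario likelihoods, and peak target load.
workload_model = {
    "objectives": ["sustain 2,000 concurrent users at < 300 ms latency"],
    "usage_scenarios": ["browse catalog", "checkout", "account signup"],
    "navigation_paths": {
        "checkout": ["cart", "shipping", "payment", "confirm"],
    },
    "user_data_variance": {
        "cart_size": (1, 25),          # min/max items per session
        "session_minutes": (2, 40),
    },
    "scenario_likelihood": {
        "browse catalog": 0.70,
        "checkout": 0.25,
        "account signup": 0.05,
    },
    "peak_target_load": {"requests_per_second": 500},
}

# Scenario likelihoods should account for the full traffic mix.
assert abs(sum(workload_model["scenario_likelihood"].values()) - 1.0) < 1e-9
```

Writing the model down in a structured form like this makes it straightforward to drive load-testing tools from it and to revisit the assumptions after each quarter of real usage data.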

Application Load and Scalability, and Dynamically Reacting to Peak Load

As it is most often associated with consumer-facing web apps, unpredictable load occurs when an inordinate amount of traffic is directed toward your site and the app is subsequently unable to meet the demand, causing the entire website to return a load error. Nubifer has noticed sudden spikes in traffic when organizations launch fresh marketing campaigns or receive extensive back-linking from prominent authority sites. Apps and sites eminently susceptible to these load spikes are ideal candidates for the cloud, and the most prominent advantage of this methodology is the auto-scale, or on-demand, capability.

Monitoring, a Primary Key to Any Successful Cloud Deployment

Your cloud platform monitors the patterns of Internet traffic and the utilization of the infrastructure, adding additional server resources if the traffic crosses your preset threshold. The extra servers that are added can be safely deactivated once the traffic subsides and the environment isn’t so demanding. This creates an extremely cost-efficient use case for leveraging a cloud platform for app and site hosting.
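The monitoring behavior just described, adding servers when traffic crosses a preset threshold and safely deactivating them when it subsides, can be sketched as a simple reactive loop. The thresholds and the traffic trace below are made-up assumptions for illustration.

```python
def react_to_traffic(samples, scale_up_at=800, scale_down_at=300,
                     servers=2, step=1, min_servers=2):
    """Walk a series of traffic samples (requests/sec), returning the
    server count after reacting to each sample."""
    history = []
    for requests_per_sec in samples:
        if requests_per_sec > scale_up_at:
            servers += step          # traffic crossed the threshold: add a server
        elif requests_per_sec < scale_down_at and servers > min_servers:
            servers -= step          # traffic subsided: safely deactivate one
        history.append(servers)
    return history

# Quiet -> campaign spike -> quiet again: capacity follows the traffic.
trace = [200, 900, 950, 1000, 400, 250, 100]
assert react_to_traffic(trace) == [2, 3, 4, 5, 5, 4, 3]
```

Production autoscalers add smoothing and cooldown periods so a single noisy sample doesn't thrash capacity, but the cost-efficiency argument is visible even in this toy version: you pay for the fourth and fifth servers only while the spike lasts.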

In contrast to unpredictable load, e-commerce sites commonly experience predictable spikes in traffic. For instance, when Amazon launches pre-ordering for the next novel in Oprah’s book club, it prepares its infrastructure to handle the peak load. Organizations of this size typically have a ballpark budget figure for the infrastructure cost because of its inherent predictability. Many occurrences in the public sector experience predictable bursts as well, such as electoral results and the examination of the latest census reports.

Understanding Application Usage Pattern Trends

Within your business, these patterns manifest during a virtual company meeting or the initiation of compulsory online training for all employees. The primary difference between this pattern and the first is that there may be no periodic recurrence of this particular spike in resource demand.

It’s paramount that your IT personnel remain cognizant of these peak load times, whether they are predictable or not, as this is a key element for effectively leveraging a cloud solution that offers support and business intelligence data regarding peak load and latency issues.

How We Have Evolved to Solve for Peak Load and Usage Monitoring

Nubifer has solved these business scenarios by developing a robust set of tools and monitoring applications for private and public clouds, named Nubifer Cloud:Link. To learn more about Cloud:Link and Nubifer’s approach to enterprise cloud monitoring visit CloudLink.pro

Two Kinds of Cloud Agility

CIO.com’s Bernard Golden defines cloud agility and provides examples of how cloud computing fosters business agility in the following article.

Although agility is commonly described as a key benefit of cloud computing, there are two distinct types of agility. Both are real, but one of them packs more of a punch.

First, however, it is important to define cloud agility. Cloud agility is tied to the rapid provisioning of computing resources. In typical IT shops, provisioning new compute instances or storage can take weeks (or even months!), but the same process takes just minutes in cloud environments.

Work can commence at a rapid pace thanks to this dramatic shortening of the provisioning timeframe. In a cloud environment, for example, there is no submitting a request for computing resources and waiting anxiously for a fulfillment response via email. Agility can be defined as “the power of moving quickly and easily; nimbleness,” and in this light it is clear why rapid provisioning is commonly described as advancing agility.

It is at this point that the definition of agility becomes confusing, as people often conflate both engineering resource availability and business response to changing conditions or opportunity under agility.

While both types of agility are useful, business response to changing conditions or opportunity will prove to be the more compelling type of agility. It will also come to be seen as the real agility associated with cloud computing.

The issue with the first type of agility, engineering resource availability, is that it is a local optimization: it makes a portion of internal IT processes more agile, but doesn’t necessarily shorten the overall application supply chain, which extends from initial prototype to production rollout.

It is, in fact, very common for cloud agility to enable developers and QA to begin their work more quickly, but for the overall delivery time to stay the same, stretched by slow handover to operations, extended shakedown time in the new production environment and poor coordination with release to the business units.

Additionally, if cloud computing comes to be seen as an internal IT optimization with little effect on how quickly compute capability rolls out into mainline business processes, IT may never receive the business unit support it requires to fund the shift to cloud computing. Cloud computing could end up like virtualization, which in many organizations remains at 20 or 30 percent penetration, unable to gather the funding necessary for wider implementation. The necessary funding will probably never materialize if the move to cloud computing is presented as “helps our programmers program faster.”

Now for the second type of agility, which affects how quickly business units can roll out new offerings. This type does not suffer the same problems as the first. Funding will not be an issue if business units can see a direct correlation between cloud computing and stealing a march on the competition; funding is never an issue when the business benefit is clear.

The following three examples show the kind of business agility fostered by cloud computing in the world of journalism:

1. The Daily Telegraph broke a story about a scandal over Members of Parliament expenses, a huge cause célèbre featuring examples of MPs seeking reimbursement for building a duck house and other equally outrageous claims. As can be imagined, the number of expense forms was huge and overtaxed the resources the Telegraph had available to review and analyze them. The Telegraph loaded the documents into Google Docs and allowed readers to browse them at will. Toby Wright, CIO of the Telegraph Media Group, used this example during a presentation at the Cloud Computing World Forum and pointed out how interesting it was to see several hundred people clicking through the spreadsheets at once.

2. The Daily Telegraph’s competitor, the Guardian, of course featured its own response to the expenses scandal. The Guardian quickly wrote an application to let people examine individual claims and identify ones that should be examined more closely. As a result, more questionable claims surfaced more quickly and allowed the situation to heat up. Simon Willison of the Guardian said of the agility that cloud computing offers, “I am working at the Guardian because I am interested in the opportunity to build rapid prototypes that go live: apps that live for two or three days.” Essentially, the agility of cloud computing enables quick rollout of short-lived applications to support the Guardian’s core business: delivery of news and insight.

3. Now for an example from the United States. The Washington Post took static PDF files of former First Lady Hillary Clinton’s schedule and used Amazon Web Services to transform them into a searchable document format. The Post then placed the documents into a database and put a simple graphic interface in place so that members of the public could click through them as well, once again crowdsourcing the analysis of documents to accelerate it.

It can be argued that these examples don’t prove the overall point about cloud computing improving business agility: they are media businesses, after all, not “real” businesses that deal with physical objects and can’t be satisfied with a centralized publication site. This argument ignores the fact that modern economies are becoming more IT-infused, and digital data is becoming a key part of every business offering. The ability to turn out applications associated with the foundational business offering will be a critical differentiator in the future economy.

Customers get more value and the vendor gains competitive advantage from this ability to surround a physical product or service with supporting applications. To win in the future, it is important to know how to take advantage of cloud computing to speed delivery of complementary applications into the marketplace. As companies battle it out, those that fail to optimize the application delivery supply chain will be at a disadvantage.

It is a mistake to view cloud computing merely as a technology that helps IT do its job quicker; internal IT agility is necessary but not sufficient for the future. It will be more important to link cloud computing to business agility, speeding business innovation to the marketplace. In summary, both types of agility are good, but the latter should be the aim of cloud computing efforts.

Updated User Policy Management for Google Apps

Google has released a series of new features granting administrators more controls to manage Google Apps within their organizations, including new data migration tools, SSL enforcement capabilities, multi-domain support and the ability to tailor Google Apps with over 100 applications from the recently-introduced Google Apps Marketplace. On July 20 Google announced one of the most-requested features from administrators: User Policy Management.

With User Policy Management, administrators can segment their users into organizational units and control which applications are enabled or disabled for each group. Take a manufacturing firm, for example: the company might want to give its office workers access to Google Talk, but not its production line employees, and this is possible with User Policy Management.

Additionally, organizations can use this functionality to test applications with pilot users before making them available on a larger scale. Sheri Stahler, Associate Vice President for Computer Services at Temple University, says, “Using the new User Policy Management feature in Google Apps, we’re able to test out new applications like Google Wave with a subset of users to decide how we should roll out new functionality more broadly.”

User Policy Management also helps customers transition to Google Apps from on-premises environments, since it grants the ability to toggle services on or off for groups of users. For example, a business can enable just the collaboration tools, like Google Docs and Google Sites, for users who have yet to move off old on-premises messaging solutions.

Administrators can manage these settings on the ‘Organizations & Users’ tab in the ‘Next Generation’ control panel. In addition, organizations can mirror their existing LDAP organizational schema using Google Apps Directory Sync, or programmatically assign users to organizational units using the Google Apps Provisioning API.

Premier and Educational edition users can begin using User Policy Management for Google Apps at no additional charge.