Archive for February, 2010

Google’s Power Play

Seeking to keep its large data centers supplied with power, Google’s Google Energy subsidiary has asked the Federal Energy Regulatory Commission for the right to purchase and re-sell electricity to consumers. A vast amount of electricity is required for Google’s cloud computing model, which includes its Google Apps collaboration applications and its popular search engine, and by becoming a player in the energy game Google Energy feels it will be able to contain the cost of energy for Google at the very least.

Google is all too aware of its enormous power consumption, as the leading search provider with the desire to expand its purview online via other Web services. Google Energy filed its request to buy and resell electricity to consumers with the Federal Energy Regulatory Commission (FERC) on December 23, 2009, asking for approval by February 23, 2010. Google’s request is a common one among companies that consume a tremendous amount of power, such as the Safeway grocery store chain and retailer Wal-Mart, to name a few.

Google has thousands of inexpensive, thin rack-mount computers and other servers stashed in large facilities scattered across the globe. Working in parallel, these servers route search engine requests and queries for data from the company’s Google Apps to the next available computers and send the data back to consumers’ PCs and mobile devices. A large amount of energy, and thus a large sum of money, is required for the cloud computing model, and in its application to FERC Google stated that by playing the energy game it can “contain and manage the cost of energy for Google.”

In a statement, a Google spokesperson said, “Google is interested in procuring more renewable energy as part of our carbon neutrality commitment, and the ability to buy and sell energy on the wholesale market could give us more flexibility in doing so. We made this filing so we can have more flexibility in producing power for Google’s own operations, including our data centers. This FERC authority would improve our ability to hedge our purchases of energy and incorporate renewables into our energy portfolio.”

Google Energy guru Bill Weihl described the company’s objective in layman’s terms during a January 7 interview with the New York Times. “One [motivation] is that we use a moderate amount of energy ourselves: we have a lot of servers, and we have 22,000 employees around the world with office buildings that consume a lot of energy. So we use energy and we care about the cost of that, we care about the environmental impact of it, and we care about the reliability of it,” said the Google Energy czar.

While some might argue that Google’s consumption of power is far more than “moderate,” due to its rather large cloud computing footprint, there are companies out there that consume more energy and are not taking measures to account for it. Also during his interview with the Times, Weihl described Google’s intentions to profit from alternative energy, saying, “We’d be delighted if some of this stuff actually made money, obviously; it is not our goal not to make money. All else being equal, we’d like to make as much money as we can, but the principal goal is to have a big impact for good.”

Google has invested about $45 million in alternative energy over the past few years, with some of that money going toward eSolar and BrightSource. (Both companies are building towers that capture sunlight to be used as a power source.) Thus while Google’s power plans can be deemed capitalistic, they are nonetheless altruistic as well. For more information on Google’s Cloud offerings, contact a Nubifer representative today.


Survey Reveals Developers Concentrating on Hybrid Cloud in 2010

According to a survey of application developers conducted by Evans Data, over 60 percent of IT shops polled have plans to adopt a hybrid cloud model in 2010. The results for the poll, released on January 12, 2010, indicate that 61 percent of over 400 participating developers stated that some portion of their companies’ IT resources will transition into the public cloud within the next year.

The hybrid cloud is set to dominate the IT landscape in 2010: of those surveyed, over 87 percent of the developers said that half or less of their resources will move to the public cloud. In a statement, Evans Data CEO Janel Garvin said, “The hybrid Cloud presents a very reasonable model, which is easy to assimilate and provides a gateway to Cloud computing without the need to commit all resources or surrender all control and security to an outside vendor. Security and government compliance are primary obstacles to public cloud adoption, but a hybrid model allows for selective implementation so these barriers can be avoided.”

Evans Data conducted its survey over November and December of last year as a way to examine timelines for public and private cloud adoption, ways in which to collaborate and develop within the cloud, obstacles and benefits of cloud development, architectures and tools for cloud development, virtualization in the private data center and other aspects of cloud computing. The survey also concluded that 64 percent of developers surveyed expect their cloud apps to venture into mobile devices in the near future as well.

Additional findings from Evans Data’s poll reveal that the preferred database for use in the public cloud is MySQL, preferred by over 55 percent of developers. VMware, followed by Microsoft and IBM, was revealed to be the preferred hypervisor vendor for use in a virtualized private cloud. To learn more, contact a Nubifer representative today.

Maximizing Effectiveness in the Cloud

At its most basic, the cloud is a nebulous infrastructure owned and operated by an outside party that accepts and runs workloads created by customers. When thinking about the cloud in this way, the basic question concerning cloud computing becomes, “Can I run all of my applications in the cloud?” If you answer “no” to that question, then ask yourself, “What divisions of my data can safely be run in the cloud?” When assessing how to include cloud computing in your architecture, one way to maximize your effectiveness in the cloud is to see how you can effectively complement your existing architectures.

The current cloud tools strive to manage provisioning and a level of mobility management, with security and audit capabilities on the horizon, in addition to the ability to move the same virtual machine in and out of the cloud. This is where virtualization comes into play: it creates a new kind of data center, one that poses a range of challenges for traditional data center management tools. Identity, mobility and data separation are a few obvious issues for virtualization.

1. Identity

Server identity becomes crucial when you can make 20 identical copies of an existing server and then distribute them around the environment with just a click of a mouse. In this way, the traditional identity based on physicality doesn’t measure up.

2. Mobility

While physical servers are stationary, VMs are designed to be mobile, and tracking and tracing them throughout their life cycles is an important part of maintaining and proving control and compliance.

3. Data separation

Resources are shared between host servers and the virtual servers running on them, thus portions of the host’s hardware (like the processor and memory) are allocated to each virtual server. There have not been any breaches of isolation between virtual servers yet, but this may not last.

These challenges are highlighted by cloud governance. While these three issues are currently managed and controlled by someone outside of the IT department, additional challenges that are specific to the cloud now exist. Some of them include life cycle management, access control, integrity and cloud-created VMs.

1. Life cycle management

How is a workload’s life cycle managed once it has been transferred to the cloud?

2. Access control

Who was given access to the application and its data while it was in the cloud?

3. Integrity

Did its integrity remain while it was in the cloud, or was it altered?

4. Cloud-created VMs

Clouds generate their own workloads and subsequently transfer them into the data center. These so-called “virtual appliances” are being downloaded into data centers each day and identity, integrity and configuration need to be managed and controlled there.

Cloud computing has the potential to increase the flexibility and responsiveness of your IT organization and there are things you can do to be pragmatic about the evolution of cloud computing. They include understanding what is needed in the cloud, gaining experience with “internal clouds” and testing external clouds.

1. Understanding what is needed to play in the cloud

The term “internal clouds” has resulted from the use of virtualization in the data center. It is important to discuss with auditors how virtualization is impacting their requirements; new policies may subsequently be added to your internal audit checklists.

2. Gaining experience with “internal clouds”

It is important to be able to efficiently implement and enforce the policies with the right automation and control systems. It becomes easier to practice that in the cloud once you have established what you need internally.

3. Testing external clouds

Using low-priority workloads helps provide a better understanding of what is needed for life cycle management, as well as establishing what role external cloud infrastructures may play in your overall business architecture.

Essentially, you must be able to manage, control and audit your own internal virtual environment in order to be able to do so with an external cloud environment. To learn more about maximizing effectiveness in the cloud, contact a Nubifer representative today.

The Arrival of Ubiquitous Computing

One of the “ah-ha” moments taken from this year’s CES (the world’s largest consumer technology tradeshow) was the arrival of ubiquitous computing. Formerly a purely academic concept, the convergence of data, voice, devices and displays is now more relevant than ever. The ubiquitous convergence of consumer technology and enterprise software is poised to impact those highly involved in the field of cloud computing, as well as the average consumer, in the near future.

Industry prognosticators are now predicting that consumers will begin to expect the ubiquitous experience in practically everything they use on a daily basis, from their car to small household items. Take those that grew up in the digital world and will soon be entering the workforce; they will expect instant gratification when it comes to work and play and everything in between. For example, Apple made the smartphone popular and a “must-have” item for non-enterprise consumers with its iPhone. The consumer-driven mobile phone revolution will likely seep into other areas as well, with consumers increasingly expecting an iPhone-like experience from their software. Due to this trend, many enterprise software vendors are now making mobile a greater priority than before, and in turn staying ahead of the curve will mean anticipating more and more ubiquitous convergence.

What Does Ubiquitous Computing Mean for ISVs?

CES showcased a wide range of new interface and display technologies, such as a multi-touch screen by 3M, a screen with haptic feedback and a pico projector, to name a few. A cheap projector and a camera can combine to make virtually any surface into an interface or display, which will allow consumers to interact with software in innovative, unimaginable and unanticipated ways, putting ISVs to the task of supporting these new interfaces and displays. This gives ISVs the opportunity to differentiate their offerings by leveraging, rather than submitting to, this new trend in technology.

The Combination of Location-based Apps and Geotagging

Both Google’s Favorite Places and Nokia’s Point and Find seek to organize and essentially own the information about places and objects using QR codes. QR codes are generally easy to generate and have a flexible and extensible structure to hold useful information, while QR code readers are devices—such as a camera phone with a working data connection—that most of us already own. When geotagging is combined with the augmented reality that is already propelling innovation in location-based apps, there is potential for ample innovation. A smarter supply chain, sustainable product life cycle management and efficient manufacturing are all possible outcomes of combining location-based applications with geotagging.

The Evolution of 3D

While 3D currently adds a certain “cool” factor to playing video games or watching movies, it is poised to make the transition from mere novelty into something useful. Although simply replicating the 3D analog world in the digital world won’t make software better, adding a third dimension could aid those working with 2D views. One way 3D technology can be made more effective is by using it in conjunction with complementary technologies, such as multi-touch interfaces to provide 3D affordances, and with location-based and mapping technology to manage objects in the 3D analog world.

Rendering Technology to Outpace Non-Graphics Computation Technology

As shown by Toshiba’s TV with cell processors and ATI and nVidia’s graphics cards, investment in rendering hardware complements the innovation in display elements (like LED and energy-efficient technology). High-quality graphics at all form factors are being delivered via the combination of faster processors and sophisticated software. So far, enterprise software ISVs have been focusing on algorithmic computation of large volumes of data to design various solutions, and rendering computation technology lagged non-graphics data computation technology. Now rendering computation has caught up and will outpace non-graphics data computation in the near future. This will allow for the creation of software that can crunch large volumes of data and leverage high-quality graphics without any lag, delivering striking user experiences as well as real-time analytics and analysis. For more information, contact a Nubifer representative today.

Scaling Storage and Analysis of Data Using Distributed Data Grids

One of the most important new methods for overcoming performance bottlenecks for a large class of applications is data parallel programming on a distributed data grid. This method is predicted to have important applications in cloud computing over the next couple of years, and eWeek Knowledge Center contributor William L. Bain describes ways in which a distributed data grid can be used to implement powerful, Java-based applications for parallel data analysis.

In the current Information Age, companies must store and analyze large amounts of business data. Companies that can efficiently search this data for important patterns will have a competitive edge over others. An e-commerce Web site, for example, needs to be able to monitor online shopping carts in order to see which products are selling faster than others. Another example is a financial services company, which needs to hone its equity trading strategy as it optimizes its response to rapidly changing market conditions.

Businesses facing these challenges have turned to distributed data grids (also called distributed caches) in order to scale their ability to manage rapidly changing data and sort through data to identify patterns and trends that require a quick response. A few key advantages are offered by distributed data grids.

Distributed data grids store data in memory instead of on disk for quick access. Additionally, they run seamlessly across multiple servers to scale performance. Lastly, they provide a quick, easy-to-use platform for running “what if” analyses on the data they store. By breaking the sequential bottleneck, they can take performance to a level that stand-alone database servers cannot match.

Three simple steps for building a fast, scalable data storage and analysis solution:

1. Store rapidly changing business data directly in a distributed data grid rather than on a database server

Distributed data grids are designed to plug directly into the business logic of today’s enterprise applications and services. They match the in-memory view of data already used by business logic by storing data as collections of objects rather than relational database tables. Because of this, distributed data grids are easy to integrate into existing applications using simple APIs (which are available for most modern languages, like Java, C# and C++).

Distributed data grids run on server farms, thus their storage capacity and throughput scale just by adding more grid servers. A distributed data grid’s ability to store and quickly access large quantities of data can expand beyond a stand-alone database server when hosted on a large server farm or in the cloud.
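The object-style APIs described above can be sketched in Python. This is a toy, in-process model for illustration only: `GridClient` and its `region` parameter are hypothetical stand-ins rather than any vendor’s API, and the single dictionary stands in for storage that a real grid would partition across many servers.

```python
# Toy, in-process model of the put/get interface a distributed data
# grid client typically exposes. "GridClient" is a hypothetical
# stand-in, not a vendor API; a real grid would partition this data
# across many servers.

class GridClient:
    def __init__(self):
        self._store = {}  # region -> {key: object}

    def put(self, region, key, obj):
        self._store.setdefault(region, {})[key] = obj

    def get(self, region, key):
        return self._store.get(region, {}).get(key)

# Business logic stores plain objects, not relational rows.
grid = GridClient()
grid.put("shopping-carts", "user-42", {"items": ["book", "lamp"], "total": 37.50})
cart = grid.get("shopping-carts", "user-42")
```

Because the grid holds objects in the same shape the business logic already uses, no object-to-relational mapping is needed on the hot path.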

2. Integrate the distributed data grid with database servers in an overall storage strategy

Distributed data grids are used to complement, not replace, database servers, which remain the authoritative repositories for transactional data and long-term storage. With an e-commerce Web site, for example, a distributed data grid would hold shopping carts to efficiently manage a large workload of online shopping traffic, while a back-end database server would store completed transactions, inventory and customer records.

Carefully separating application code used for business logic from other code used for data access is an important factor in integrating a distributed data grid into an enterprise application’s overall strategy. Distributed data grids naturally fit into business logic, which manages data as objects. This code is where rapid access to data is required and also where distributed data grids provide the greatest benefit. The data access layer, in contrast, usually focuses on converting objects into a relational form for storage in database servers (or vice versa).

A distributed data grid can be integrated with a database server so that it can automatically access data from the database server if it is missing from the distributed data grid. This is incredibly useful for certain types of data such as product or customer information (stored in the database server and retrieved when needed by the application). Most types of rapidly changing, business logic data, however, can be stored solely in a distributed data grid without ever being written out to a database server.
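The read-through behavior described above can be sketched as follows. This is a minimal, single-process Python illustration of the pattern, not a particular grid product’s API; `load_from_db` is a hypothetical stand-in for a real database query.

```python
# Minimal sketch of the read-through pattern: check the grid first and,
# on a miss, load the object from the database and cache it for next time.

def load_from_db(key):
    # Stand-in for a real database query.
    catalog = {"sku-1001": {"name": "Desk Lamp", "price": 24.99}}
    return catalog.get(key)

class ReadThroughCache:
    def __init__(self, loader):
        self._cache = {}
        self._loader = loader

    def get(self, key):
        if key in self._cache:
            return self._cache[key]     # hit: served from memory
        value = self._loader(key)       # miss: fall back to the database
        if value is not None:
            self._cache[key] = value    # populate the grid for next time
        return value

products = ReadThroughCache(load_from_db)
first = products.get("sku-1001")   # loaded from the "database"
second = products.get("sku-1001")  # now served from the cache
```

Slow-changing reference data (products, customers) benefits from this fallback; rapidly changing data such as live shopping carts can live solely in the grid, as the paragraph above notes.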

3. Analyze grid-based data by using simple analysis codes as well as the MapReduce programming pattern

After a collection of objects, such as a Web site’s shopping carts, has been hosted in a distributed data grid, it is important to be able to scan this data for patterns and trends. Researchers have developed a two-step method called MapReduce for analyzing large volumes of data in parallel.

As the first step, each object in the collection is analyzed for a pattern of interest by writing and running a simple algorithm that assesses each object one at a time. This algorithm is run in parallel on all objects to analyze all of the data quickly. The results that were generated by running this algorithm are next combined to determine an overall result (which will hopefully identify an important trend).

Take an e-commerce developer, for example. The developer could write a simple code which analyzes each shopping cart to rate which product categories are generating the most interest. This code could be run on all shopping carts throughout the day in order to identify important shopping trends.
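The two-step analysis such a developer would write can be sketched in Python. The cart structure and `category` field are illustrative assumptions; in a real grid the map step would run in parallel on every grid server, while here the carts are simply processed one after another.

```python
from collections import Counter

# Map step: rate one shopping cart by counting interest per category.
def analyze_cart(cart):
    return Counter(item["category"] for item in cart["items"])

# Reduce step: merge the per-cart results into an overall result.
def merge_results(partials):
    total = Counter()
    for partial in partials:
        total += partial
    return total

carts = [
    {"items": [{"category": "books"}, {"category": "electronics"}]},
    {"items": [{"category": "books"}]},
    {"items": [{"category": "toys"}, {"category": "books"}]},
]

# A real grid would run analyze_cart in parallel on all grid servers.
trends = merge_results(analyze_cart(c) for c in carts)
print(trends.most_common(1))  # [('books', 3)]
```

Note that the per-cart algorithm is plain sequential code; the grid infrastructure, not the programmer, supplies the parallelism.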

Using this MapReduce programming pattern, distributed data grids offer an ideal platform for analyzing data. Distributed data grids store data as memory-based objects, so the analysis code is easy to write and debug as simple “in-memory” code. Programmers don’t need to learn parallel programming techniques or understand how the grid works. Distributed data grids also provide the infrastructure needed to automatically run this analysis code on all grid servers in parallel and then combine the results. The net result is that the application developer can easily harness the full scalability of the grid to quickly discover data patterns and trends that are important to the success of an enterprise. For more information, contact a Nubifer representative today.

Answers to Your Questions on Cloud Connectors

Jeffrey Schwartz and Michael Desmond, both editors of Redmond Developer News, recently sat down with corporate vice president of Microsoft’s Connected Systems Division, Robert Wahbe, at the recent Microsoft Professional Developers Conference (PDC) to talk about Microsoft Azure and its potential impact on the developer ecosystem at Microsoft. Responsible for managing Microsoft’s engineering teams that deliver the company’s Web services and modeling platforms, Wahbe is a major advocate of the Azure Services Platform and offers insight into how to build applications that exist within the world of Software-as-a-Service, or as Microsoft calls it, Software plus Services (S + S).

When asked how much of Windows Azure is based on Hyper-V and how much is an entirely new set of technologies, Wahbe answered, “Windows Azure is a natural evolution of our platform. We think it’s going to have a long-term radical impact with customers, partners and developers, but it’s a natural evolution.” Wahbe continued to explain how Azure brings current technologies (i.e. the server, desktop, etc.) into the cloud and is fundamentally built out of Windows Server 2008 and .NET Framework.

Wahbe also referenced the PDC keynote of Microsoft’s chief software architect, Ray Ozzie, in which Ozzie discussed how most applications are not initially created with the idea of scale-out. Explained Wahbe, expanding upon Ozzie’s points, “The notion of stateless front-ends being able to scale out, both across the data center and across data centers requires that you make sure you have the right architectural base. Microsoft will be trying hard to make sure we have the patterns and practices available to developers to get those models [so that they] can be brought onto the premises.”

As an example, Wahbe described a hypothetical situation in which Visual Studio and the .NET Framework are used to build an ASP.NET app, which in turn can either be deployed locally or to Windows Azure. The only extra step taken when deploying to Windows Azure is to specify additional metadata, such as what kind of SLA you are looking for or how many instances you are going to run on. As explained by Wahbe, the metadata is an XML file, an example of an executable model that Microsoft’s platform can readily understand. “You can write those models in ‘Oslo’ using the DSL written in ‘M,’ targeting Windows Azure in those models,” concluded Wahbe.
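For illustration only, the kind of deployment metadata Wahbe describes might resemble the following sketch, loosely modeled on Windows Azure’s service configuration files; the names and structure here are assumptions, not the exact schema:

```xml
<!-- Illustrative sketch only: element names are assumptions, loosely
     modeled on Azure's service configuration format. -->
<ServiceConfiguration serviceName="MyAspNetApp">
  <Role name="WebRole">
    <!-- How many front-end instances to run in the cloud -->
    <Instances count="3" />
  </Role>
</ServiceConfiguration>
```

The point Wahbe makes is that scaling intent lives in declarative metadata like this, rather than in application code.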

Wahbe answered a firm “yes” when asked if there is a natural fit for applications developed in “Oslo,” saying that it works because Oslo is “about helping you write applications more productively,” and adding that you can write any kind of application—including cloud applications. Although new challenges undoubtedly face development shops, the basic process of writing and deploying code remains the same. According to Wahbe, Microsoft Azure simply provides a new deployment target at a basic level.

As for the differences, developers are going to need to learn a new set of services. An example used by Wahbe is two businesses connecting through a business-to-business messaging app; technology like Windows Communication Foundation can make this an easy process. With the integration of Microsoft Azure, questions about the pros and cons of using the Azure platform and the service bus (which is part of .NET Services) will have to be evaluated. Azure “provides you with an out-of-the-box, Internet-scale, pub-sub solution that traverses firewalls,” according to Wahbe. And what could be bad about that?

When asked if developers should expect new development interfaces or plug-ins to Visual Studio, Wahbe answered, “You’re going to see some very natural extensions of what’s in Visual Studio today. For example, you’ll see new project types. I wouldn’t call that a new tool … I’d call it a fairly natural extension to the existing tools.” Additionally, Wahbe expressed Microsoft’s desire to deliver tools to developers as soon as possible. “We want to get a CTP [community technology preview] out early and engage in that conversation. Now we can get this thing out broadly, get the feedback, and I think for me, that’s the most powerful way to develop a platform,” explained Wahbe of the importance of developers’ using and subsequently critiquing Azure.

When asked about the possibility of competitors like Amazon and Google gaining early share due to the ambiguous time frame of Azure, Wahbe responded calmly, “The place to start with Amazon is [that] they’re a partner. So they’ve licensed Windows, they’ve licensed SQL, and we have shared partners. What Amazon is doing, like traditional hosters, is they’re taking a lot of the complexity out for our mutual customers around hardware. The heavy lifting that a developer has to do to take that and then build a scale-out service in the cloud and across data centers—that’s left to the developer.” Wahbe detailed how Microsoft has base computing and base storage—the foundation of Windows Azure—as well as higher-level services such as the database in the cloud. According to Wahbe, developers no longer have to build an Internet-scale pub-sub system, find a new way to do social networking and contacts, or create reporting services themselves.

In discussing the impact that cloud connecting will have on the cost of development and the management of development processes, Wahbe said, “We think we’re removing complexities out of all layers of the stack by doing this in the cloud for you … we’ll automatically do all of the configuration so you can get load-balancing across all of your instances. We’ll make sure that the data is replicated both for efficiency and also for reliability, both across an individual data center and across multiple data centers. So we think that by doing that, you can now focus much more on what your app is and less on all that application infrastructure.” Wahbe predicts that it will be simpler for developers to build applications with the adoption of Microsoft Azure. For more information on Cloud Connectors, contact a Nubifer representative today.

Nubifer Cloud:Link

Nubifer Cloud:Link monitors your enterprise systems in real time and strengthens interoperability with disparate owned and leased SaaS systems. When building enterprise mash-ups, custom addresses and custom source code are created by engineers to bridge the white space, also known as electronic hand-shakes, between the various enterprise applications within your organization. By utilizing Nubifer Cloud:Link, you gain a real-time and historic view of system-based interactions.

Cloud:Link is designed and configured via robust administrative tools to monitor custom enterprise mash-ups and deliver real-time notifications, warnings and performance metrics for your separated yet interconnected business systems. Cloud:Link offers the technology and functionality to help your company monitor and audit your enterprise system configurations.

Powerful components of Cloud:Link make managing enterprise grade mash-ups simple and easy.

  • Cloud:Link inter-operates with other analytic engines, including popular tracking engines (e.g., Google Analytics)
  • RIA (Rich Internet Applications): reporting, graphs and charts
  • WEB API handles secure key param calls
  • Verb- and Action-based scripting language powered by “Verbal Script”
  • XML Schema Reporting capabilities
  • Runs on-premise, as an installed solution, or in the cloud as a SaaS offering
  • Client-side recording technology tracks and stores ‘x’ and ‘y’ coordinate usage of enterprise screens for compliance, legal and regulatory play back
  • Graphical snapshots of hot maps show historical views of user interaction and image hit state selections
  • Creates a method for large systems to employ “data and session playback” technologies of system-generated and user-generated interaction sessions in a meaningful and reproducible way

Cloud:Link monitors and reports enterprise system handshakes, configurations, connections and latency reports in real time. Additionally, Cloud:Link rolls the data view up to your IT staff and system stakeholders via rich dashboards of charts and performance metrics. Cloud:Link also has a robust and scalable analytic data repository that keeps an eye on the connection points of enterprise applications, and audits things like “valid ssl cert warnings or pending expirations”, “mid to high latency warnings”, “ip logging”, “custom gateway SSO (Single Sign-On) landing page monitoring” among many other tracking features.
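Two of the audit checks listed above, certificate-expiry warnings and latency warnings, can be sketched in Python. The 30-day and 500 ms thresholds are illustrative assumptions, as is the certificate date format (the `notAfter` string as returned by Python’s `ssl.getpeercert()`); this is not Cloud:Link’s actual implementation.

```python
from datetime import datetime, timedelta

# Illustrative thresholds, not Cloud:Link's actual settings.
CERT_WARN_DAYS = 30
LATENCY_WARN_MS = 500

def cert_warning(not_after, now):
    """True if the certificate expires within CERT_WARN_DAYS of `now`."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y GMT")
    return expires - now <= timedelta(days=CERT_WARN_DAYS)

def latency_warning(elapsed_ms):
    """True if a monitored call exceeded the latency threshold."""
    return elapsed_ms >= LATENCY_WARN_MS

now = datetime(2010, 2, 1)
print(cert_warning("Feb 20 12:00:00 2010 GMT", now))  # True: expiring soon
print(cert_warning("Feb 20 12:00:00 2011 GMT", now))  # False: a year away
print(latency_warning(750))                           # True
```

A monitor of this kind would run such checks on a schedule against each connection point and roll the results up to a dashboard, as the paragraph above describes.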

Cloud:Link also leverages Google Analytics by way of its extended API, which can complete parallel calls to your Google Analytics account API and send data, logs, analytic summaries, and the physical click and interface points of end users to any third-party provider or data store for use in your own systems.

On the server side, Cloud:Link is a server-based application you can install or subscribe to as a service. Data points and machine-to-machine interactions are tracked at every point during a system’s interaction. The Cloud:Link monitor can track remote systems without being embedded in or adopted by the networked system; however, if your company chooses to leverage the Cloud:Link API for URI mashup tracking, you can see even more detailed real-time reports of system interoperability and up-time.

On the client side, leverage Cloud:Link’s browser plug-in within your enterprise to extend your analytic reach into the interactions by your end-users. This approach is particularly powerful when tracking large systems being used by all types of users. Given the proper installation and setup, your company can leverage robust “Session Playback” of human interaction with your owned and leased corporate business systems.

Nubifer Inc. focuses on interoperability in the enterprise. Disparate applications operating in independent roles and duties need unified index management, Single Sign-On performance tracking, and application integration monitoring.

  • User Admin logs in and sees a dashboard with default reporting widgets configurable by the admin user
  • “My Reports” (saved, wizard-generated reports), which can be set up to auto-send reports to key stakeholders in your IT or Operations group
  • Logs (raw log review in a text area, exportable to CSV, or API post to a remote FTP account)
  • Users (known vs. unknown connecting IPs)
  • Systems (URI lists of SSO (Single Sign-On) paths to your SaaS and on-premise apps): an enterprise schematic map of your on-premise and cloud-hosted applications

At the core of Nubifer’s products are Nubifer Cloud:Portal, Nubifer Cloud:Link and Nubifer Cloud:Connector, which offer real-time machine-to-machine analytics, plus tracking and playback of machine-to-machine interactions for human viewers, using Rich Internet Application components viewed on customizable dashboards. Nubifer Cloud:Link enables large publicly traded or heavily regulated companies to follow compliance laws and regulations, such as SOX, SAS 70 and HL7/HIPAA, and mitigates the risk of not knowing how your systems are interacting on a day-to-day basis.

Currently Cloud:Link is hosted on, and compatible with:

  • Microsoft® Windows Azure™ Platform
  • Amazon® EC2
  • Google® App Engine
  • On-Premise Hosted

To learn more about Cloud:Link technology and how you can begin using the various features offered by Nubifer Cloud:Link, please contact a Nubifer representative today.