Wednesday, March 4, 2015

Cloud Computing in Oil & Gas Industry

The oil and gas industry has a rich tradition of embracing new computing models. The industry has responded to, and in many cases driven, every major IT trend of the past 30 years. New platforms, from mainframe computers and supercomputers to workstations and distributed systems, were adopted rapidly by oil and gas companies. More recently, high-performance computing, virtualization, advanced visualization and GPU processing are being driven, and adopted, by the upstream sector. The clear benefits that public Clouds offer (virtually unlimited computing and storage capacity, ease of deployment, and small upfront capital costs) are too compelling for a processing-hungry, data-rich industry to ignore. Cloud infrastructure, for example, takes the storage and processing load off the shoulders of operators, who must manage vast amounts of data—from seismic acquisition, SCADA systems, well logs, surface and subsurface sensors, production meters, contracts, etc.—to derive value. In addition, Cloud technology is a natural fit for an industry that has a globally dispersed workforce and where joint ventures between multiple companies are a common business model.

Just as the personal computer displaced the mainframe, putting computing power in the hands of anyone with a desktop, cloud computing is displacing the notion of limited computing power, and indeed the desktop itself, placing ‘infinite computing power’ into the hands of anyone - on demand, in real time, as needed, anywhere, and from any device.

As Oil & Gas projects become larger and more complex, so do the business and design challenges. Preventing cost and schedule blowouts on capital projects by improving the Front End Loading process, and improving return on capital by driving out waste across the asset lifecycle through better design, are just two of the ways this power could deliver dividends for the industry. Cloud-enabled, on-demand, real-time access to intelligent 3D models is transforming analysis and simulation.
Organizations adopt cloud for various reasons:
  • Cost reduction by leveraging economies of scale beyond the four walls of the data centre
  • Lower capital expenditure, with IT resources (licences, storage, compute, bandwidth) provisioned on an as-needed basis
  • IT agility to respond faster to changing business needs
  • Resource utilization approaching 100 per cent
There are various deployment models for cloud technology, and organizations can choose different models for different types of applications and data, depending on their business requirements and other considerations. The most important things to consider when evaluating cloud technology, and when deciding which type of cloud to use for which applications and data, are described below:
  1. Organizations need to evaluate the current state of their IT and analyse how cloud technology will affect organizational dynamics. They should also make sure their contracts with cloud providers are sound and that they understand what they are paying for.
  2. Asset ownership versus asset control will also bring about some challenges. Organizations will need different skillsets from their IT staff to be able to successfully manage the cloud environment.
  3. Organizations need to answer questions about cost effectiveness, security, data privacy and compliance, system availability and performance.
  4. Organizations need to evaluate their business requirements before making a decision about moving to the cloud.
  5. Organizations need to understand the players in the cloud market.        
Cloud technology is maturing to overcome organizations’ security and privacy concerns and will stabilize further in the next couple of years. Gartner research shows that big data will reach the (Hype Cycle) plateau of productivity within the next five to ten years, and cloud security frameworks are on the rise. Just as they invest in security solutions for legacy infrastructure, organizations need to invest in securing their data in the cloud. They will need new technologies and solutions to monitor and manage cloud services; these security solutions are not overly expensive, as they benefit from the scale of cloud operations. Organizations should research the security policies and standards offered by cloud providers and ensure that they meet the organization’s needs, provide audit trails, and comply with local and federal regulations and policies. As mentioned earlier, organizations can also stand up cloud technology in-house to hold their intellectual property, giving them more control over, and visibility into, both the data and the infrastructure hosting it. Irrespective of the type of cloud solution, it is wise to invest in security solutions to manage these services.

One thing that all Oil & Gas companies seem to have in common is that much of what they need from IT tends to end with the words “as a service”: infrastructure as a service, database as a service, monitoring and alerting as a service, and so on. The “as a service” approach to cloud-based resources permeates the industry, and IT teams at energy companies are always looking for ways to make everything more self-service, more efficient, more standardized, and ultimately more compliant. This is, after all, an extremely competitive industry where even small improvements can have an enormous financial impact. The focus on compliance is intense throughout the industry, and it is equally prevalent both in the evaluation of any new technology before it is adopted and in the decisions about how that technology will be used.

One recent example of this process took place at a large Oil & Gas services provider based in Texas, with over 50,000 employees spread across the globe supporting oilfields and equipment. The organization had outsourced most of its datacentre services and had infrastructure spread across a series of owned and leased regional facilities. The key complaint from the IT team was this: “The outsourcing has only solved the ‘who does the work’ problem, but it has failed to make deploying anything happen more efficiently or faster.” After some initial discussions, it turned out that they were seeing deployment times of 6-8 weeks for a single server, even though the environment was already heavily virtualized on VMware. Each datacentre location also had its own processes and unique steps for server requests and other basic functions.

They adopted a solution based on Microsoft System Center and Windows Server that would leverage the strengths of their on-premises resources and virtualize the most common and time-consuming workloads. Within two weeks, the organization was able to build the notification, approval, and provisioning process, and templates were created for deploying virtual machines to Hyper-V using System Center. The result was the ability to provision virtual machines in 1-2 hours and keep better records of provisioning steps, approvals, and configuration data – all of which is vital in such a heavily regulated industry. Building on the success of the initial project, the solution was expanded to eight datacentres and now handles all provisioning requests.
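As a rough illustration of the kind of workflow described above, the sketch below models a self-service VM request moving through notification, approval and provisioning steps, with every step recorded for audit purposes. It is a conceptual Python sketch only: the names (VmRequest, notify_approver, provision, the template name, and so on) are hypothetical, and a real implementation would drive System Center Virtual Machine Manager or Orchestrator rather than plain Python.

    # Conceptual sketch of a self-service VM provisioning workflow.
    # All names are hypothetical; a real deployment would call into
    # System Center Virtual Machine Manager / Orchestrator instead.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List


    @dataclass
    class VmRequest:
        requester: str
        template: str            # e.g. a pre-approved Hyper-V VM template
        cpu: int
        memory_gb: int
        audit_log: List[str] = field(default_factory=list)
        approved: bool = False

        def record(self, event: str) -> None:
            # Every step is time-stamped so the request leaves an audit trail,
            # which matters in a heavily regulated industry.
            self.audit_log.append(f"{datetime.utcnow().isoformat()} {event}")


    def notify_approver(req: VmRequest) -> None:
        req.record(f"notification sent for template '{req.template}'")


    def approve(req: VmRequest, approver: str) -> None:
        req.approved = True
        req.record(f"approved by {approver}")


    def provision(req: VmRequest) -> None:
        if not req.approved:
            raise RuntimeError("request must be approved before provisioning")
        # Placeholder for the call into the virtualization layer
        # (e.g. a VMM template deployment); here we only log it.
        req.record(f"VM deployed from template '{req.template}' "
                   f"({req.cpu} vCPU, {req.memory_gb} GB RAM)")


    if __name__ == "__main__":
        request = VmRequest("j.smith", "win2012r2-standard", cpu=4, memory_gb=16)
        notify_approver(request)
        approve(request, approver="it-ops-lead")
        provision(request)
        print("\n".join(request.audit_log))

The value of structuring it this way is that the audit log is produced as a side effect of the workflow itself, rather than being reconstructed after the fact.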

The success of this system has shown the value of scenarios that draw on even more Hybrid Cloud functions. The organization has already started a project to enable Database as a Service, as well as Infrastructure as a Service with Windows Azure, allowing users to request VMs and databases hosted either on-premises or in Windows Azure. This service-based approach has been enthusiastically received because it gives the IT department (and the entire company) better control over its infrastructure and faster results, whereas outsourcing had previously solved only the labour requirements. A hybrid environment is not only viable for a large, geographically dispersed organization: in a highly regulated industry like Oil & Gas, the size of the organization doesn’t matter, because the same procedures and processes still need to be followed.
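To make the idea of routing a request to an on-premises or Azure target more concrete, here is a small, hypothetical placement rule. The tags, thresholds and workload names are invented for illustration; an actual hybrid catalogue would encode the organization’s own compliance and capacity policies.

    # Hypothetical placement rule for a hybrid (on-premises / Azure) catalogue.
    # The tags and policy below are illustrative only.

    def choose_target(workload: dict) -> str:
        """Return 'on-premises' or 'azure' for a requested VM or database."""
        # Regulated or classified data stays inside the private datacentre.
        if workload.get("data_classification") in {"restricted", "regulated"}:
            return "on-premises"
        # Latency-sensitive workloads tied to field systems also stay local.
        if workload.get("max_latency_ms", 1000) < 20:
            return "on-premises"
        # Everything else can take advantage of public-cloud elasticity.
        return "azure"


    if __name__ == "__main__":
        requests = [
            {"name": "drilling-reports-db", "data_classification": "regulated"},
            {"name": "scada-historian", "max_latency_ms": 10},
            {"name": "dev-test-vm", "data_classification": "internal"},
        ]
        for r in requests:
            print(f"{r['name']:>20} -> {choose_target(r)}")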

The Oil and Gas industry has changed dramatically over the last 20 years. With geopolitical forces creating a highly volatile, rapidly fluctuating crude oil and gas market, the competition for depleting resources continues to grow. The main business drivers include lowering operating costs and increasing finding and recovery rates. Shareholders are pressuring companies for a return on their investments that is commensurate with other long-term investment strategies. Advanced and innovative technology can help reduce uncertainty and increase the success of exploration and production. There is often too much complex information to assimilate and understand in the time available to make quick and accurate decisions. Process efficiency and real-time information are key for decision making, and for automatically monitoring wells and fields with preventative measures to avoid production downtime.
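As a simple illustration of the automated well and field monitoring mentioned above, the sketch below checks streaming sensor readings against operating limits and flags wells that need preventative attention. The tag names, limits and well identifiers are made up for the example; a real system would take these from SCADA tags and engineering limits.

    # Illustrative well-monitoring check; tag names and limits are invented.

    WELL_LIMITS = {
        "tubing_pressure_psi": (500, 3000),   # (low alarm, high alarm)
        "flow_rate_bpd": (200, None),         # low-flow alarm only
        "temperature_c": (None, 120),         # high-temperature alarm only
    }


    def check_reading(well: str, tag: str, value: float) -> list:
        """Return alert messages for a single sensor reading."""
        low, high = WELL_LIMITS.get(tag, (None, None))
        alerts = []
        if low is not None and value < low:
            alerts.append(f"{well}: {tag}={value} below {low} - schedule intervention")
        if high is not None and value > high:
            alerts.append(f"{well}: {tag}={value} above {high} - schedule intervention")
        return alerts


    if __name__ == "__main__":
        readings = [
            ("WELL-07", "flow_rate_bpd", 150.0),
            ("WELL-07", "tubing_pressure_psi", 2800.0),
            ("WELL-12", "temperature_c", 131.5),
        ]
        for well, tag, value in readings:
            for alert in check_reading(well, tag, value):
                print(alert)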
                                 
In conclusion, it is important to understand that cloud technology necessitates a paradigm shift in organizations and the way they operate day to day. Moving to the cloud can take significant time and effort, so organizations need to develop a Cloud Strategy before investing in the technology. Gartner research shows that the greatest benefits of the cloud are achieved when organizations focus on a very specific strategy and look to cloud-based technologies to accelerate their performance.

The Oil & Gas industry can definitely benefit from cloud technologies, especially given its ever-growing infrastructure requirements. From our point of view, what would benefit it most is a mixture of private and public clouds. With home-grown applications deployed in a private cloud, additional processing power can be obtained by adding a public cloud for computing purposes alone. With this mechanism, security can be thoroughly defined, since applications are accessible only through the private cloud. To keep the cloud infrastructure cost-effective, Linux-based cloud systems can be used and extended onto the supporting public cloud. Managers and employees are given a venue to communicate regularly, even those at faraway remote sites or on oil rig stations. Cloud computing allows them to access the company database wherever they are, preventing any possible communication breakdown.
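A minimal sketch of this “private first, public for overflow” idea follows. The capacity figure, job names and function names are hypothetical; the point is only that compute-heavy jobs (seismic processing, simulation runs) spill over to public-cloud capacity once the private cloud is fully utilized, while the applications themselves remain behind the private cloud.

    # Hypothetical "cloud bursting" scheduler: keep jobs on the private
    # cloud while capacity allows, spill the remainder to a public cloud.

    PRIVATE_CAPACITY_CORES = 512   # illustrative figure


    def schedule(jobs: list) -> dict:
        """Split jobs between private and public clouds by core demand."""
        placement = {"private": [], "public": []}
        used = 0
        for job in sorted(jobs, key=lambda j: j["cores"], reverse=True):
            if used + job["cores"] <= PRIVATE_CAPACITY_CORES:
                placement["private"].append(job["name"])
                used += job["cores"]
            else:
                placement["public"].append(job["name"])
        return placement


    if __name__ == "__main__":
        batch = [
            {"name": "seismic-migration-A", "cores": 384},
            {"name": "reservoir-sim-12", "cores": 256},
            {"name": "log-analysis-7", "cores": 64},
        ]
        print(schedule(batch))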

1 comment:

  1. You mentioned “cloud-enabled, on-demand, real-time access to intelligent 3D models” as key for the oil & gas industry. That’s exactly the type of customer need that drove the development of NVIDIA’s GRID. NVIDIA purposely engineered GRID and its new GPUs to be truly virtualizable to serve multiple concurrent users without any performance or latency penalty.

    With the emphasis on cloud computing and VMIs, companies needed a solution for high quality computer graphics in a virtualized environment. Prior to GRID, the alternative was GPU Sharing or Pass-through but there were often performance issues.

    With the industry’s widespread use of 3D models as well as the industry’s embrace of cloud computing and virtualization, NVIDIA GRID could be ideal for the oil and gas industries’ computing needs.

    This white paper - http://bit.ly/1IMEBRr - NVIDIA GRID: Graphics Accelerated VDI with the Visual Performance of a Workstation - explains the power and design of NVIDIA GRID in more depth.

    Jeff Rutherford, commenting on behalf of IDG, NVIDIA, VMware, and Dell
