2024-08-28

SE Radio 631: Abhay Paroha on Cloud Migration for Oil and Gas Operations

Abhay Paroha, an engineering leader with more than 15 years' experience in leading product dev teams, joins SE Radio's Kanchan Shringi to talk about cloud migration for oil and gas production operations. They discuss Abhay's experiences in building a cloud foundation layer that includes a canonical data model for storing bi-temporal data. They further delve into his teams' learnings from using Kubernetes for microservices, the transition from Java to Scala, and use of Akka streaming, along with tips for ensuring reliable operations. Brought to you by IEEE Computer Society and IEEE Software magazine.



Show Notes

Related Episodes

  • 623: Michael J. Freedman on TimescaleDB

Patents

Contact Info


Transcript

Transcript brought to you by IEEE Software magazine and IEEE Computer Society. This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number.

Kanchan Shringi 00:00:18 Hello everyone. Welcome to this episode of Software Engineering Radio. Our guest today is Abhay Paroha. Abhay is an engineering leader with over 15 years of experience leading product dev teams in building scalable systems. Abhay has built up his expertise in cloud architecture and data integration with a focus on reliability and DevOps. In this episode, we are talking with Abhay about his experience building a cloud foundation for oil and gas production operations. It’s so good to have you on today, Abhay. Welcome to the show. Is there anything you’d like to add to your bio before we get started?

Abhay Paroha 00:00:57 Thank you Kanchan for the nice introduction. I guess you covered everything, but I would like to start this talk with a brief introduction to the oil and gas industry. The industry basically covers two kinds of companies: the oil and gas companies themselves, and the companies that provide services to oil and gas companies. I’m from the domain where we provide services, specifically an upstream service company, where we provide all kinds of services, including hardware, drilling of oil wells, and whatever software is needed to explore and extract hydrocarbons from beneath the earth.

Kanchan Shringi 00:01:38 Can we start with a business goal on the move to the cloud? What outcome was the business looking for?

Abhay Paroha 00:01:46 Sure. I am from the upstream production domain. In any upstream company there can be different segments: drilling, wireline, or production assurance. I am from production assurance, and our goal was to use the existing infrastructure, whatever was available, and expand on that infrastructure to build and deliver innovative solutions to the client. By existing infrastructure I mean that, before moving to the cloud, we were using production data management software: mostly on-prem desktop applications, storing the data in SQL Server or Oracle and developing reports on those desktop apps. But these apps were limited in terms of scalability, because we were not able to run workflows that involve a lot of historical data, like machine learning workflows. So we decided to move this data to the cloud so that we could run advanced workflows: looking at historical records, maybe more than 25 years of data for any well, and developing new apps, like giving recommendations about any well in real time, doing well surveillance, doing forecasting, and things like digital twin operations for any hardware or equipment installed in the field.

Abhay Paroha 00:03:25 So to achieve these kinds of advanced workflows, the first need was to have this data ready and collected on a cloud platform, so that we could build on it using machine learning workflows or a recommendation agent. That was our main business goal.
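The episode description mentions a canonical data model for storing bi-temporal data, which the conversation does not spell out. As a rough sketch only (all names and the query helper here are invented for illustration, not the team's actual schema), a bi-temporal record keeps two timestamps per measurement, so late corrections coexist with the original readings instead of overwriting them:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch of a bi-temporal measurement record. Each value
# carries two timestamps: when it was true in the field (valid time)
# and when it entered the store (recorded time), so a later correction
# never overwrites the original reading.
@dataclass(frozen=True)
class WellMeasurement:
    well_id: str
    property_name: str        # e.g. "oil_rate"
    value: float
    valid_time: datetime      # when the measurement applied in the field
    recorded_time: datetime   # when it was written to the cloud store

def value_as_of(records, well_id, prop, valid_time, as_of):
    """Latest recorded value for (well, property, valid_time), ignoring
    any corrections recorded after `as_of`."""
    candidates = [
        r for r in records
        if r.well_id == well_id
        and r.property_name == prop
        and r.valid_time == valid_time
        and r.recorded_time <= as_of
    ]
    return max(candidates, key=lambda r: r.recorded_time, default=None)
```

Querying "as of" a past date then reproduces exactly what the system knew at that time, which is what makes decades of historical well data auditable after corrections.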

Kanchan Shringi 00:03:42 Abhay, I’m sorry, what is digital twins operation?

Abhay Paroha 00:03:45 So digital twins are an important concept for any upstream or midstream operation, where we basically model a physical process or piece of physical equipment. Whenever there is some equipment installed in a field where oil is producing, or in any production facility, we can create a model of that physical equipment in the cloud. As we ingest data into the cloud, we represent the physical equipment as a digital entity so that we can understand the equipment’s behavior when some incident happens. So a digital twin basically helps us model this kind of equipment behavior in the digital world.
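As a minimal illustration of that idea (not the team's actual implementation; the class name, sensor fields, and threshold are all invented), a digital twin can be a cloud-side object that replays equipment telemetry and checks the mirrored state against expected operating behavior:

```python
# Illustrative-only digital twin sketch: a cloud-side object that mirrors
# a piece of field equipment by ingesting its telemetry and exposing a
# simple behavioral check. All names and limits are hypothetical.
class PumpTwin:
    def __init__(self, equipment_id, max_temp_c=90.0):
        self.equipment_id = equipment_id
        self.max_temp_c = max_temp_c   # assumed safe operating limit
        self.state = {}                # last known sensor readings
        self.incidents = []            # behavioral deviations observed

    def ingest(self, reading):
        """Update the twin from one telemetry sample (dict of sensor values)."""
        self.state.update(reading)
        temp = self.state.get("temperature_c")
        if temp is not None and temp > self.max_temp_c:
            self.incidents.append(("over_temperature", temp))

    def is_healthy(self):
        return not self.incidents
```

The point of the pattern is that incident analysis and surveillance run against the twin in the cloud, without touching the physical equipment in the field.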

Kanchan Shringi 00:04:34 Can you give us an idea of the size of the data that we’re talking about here?

Abhay Paroha 00:04:39 A typical small oil and gas company may own, let’s say, 500 wells, and an oil well can have oil flowing for maybe the last 20 years; there are some wells that have been producing oil for maybe the last hundred years as well. So the data volume depends on the life of the oil well and the number of oil wells owned by the company. For example, a major national oil company that has more than 10,000 wells, and has been producing oil for a long time, may have data for more than 50 years. The second aspect of volume is related to frequency. In any oil field, the frequency of data depends on the kind of operation we are performing: it could be second-based frequency data, or it could be weekly, daily, monthly, half-yearly, or yearly. These different aspects define the volume of the data, but the bare minimum could be second-based frequency data coming from more than a hundred properties of any oil well. And a simple multiplication will give you millions or billions of data points streaming from any field.
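That "simple multiplication" can be made concrete. Plugging in the example figures from the answer (500 wells, roughly a hundred properties per well, one reading per second), a back-of-the-envelope calculation lands in the billions of points per day:

```python
# Back-of-the-envelope data-volume check using the figures from the answer.
wells = 500                      # small operator in the example
properties_per_well = 100        # "more than a hundred properties"
samples_per_day = 24 * 60 * 60   # one reading per second = 86,400/day

points_per_day = wells * properties_per_well * samples_per_day
print(points_per_day)            # 4,320,000,000 -> about 4.3 billion points/day
```

Even before multiplying by years of history, a per-day volume in the billions already rules out the desktop-app architecture described earlier.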

Kanchan Shringi 00:06:01 For second-based frequency data with millions or billions of data points, I do expect that it’s really critical to have your cloud deployment close to where the sensors are. What was your strategy in picking the cloud regions that you’re deploying to?

Abhay Paroha 00:06:19 Yeah, whenever we make a commercial agreement with a client, these clients are mostly oil and gas companies, and whenever they discuss requirements they always bring up the data residency aspect. For example, some clients in the Middle East and Asia are not very comfortable sending their data outside the Middle East, but there are some clients, like in South America, who are still okay having their data in the USA. Based on the client requirement, we just set a clear expectation. We always want to have the cloud project or cloud cluster closer to the actual data source, because it helps us ingest data as fast as we can. But if there is some limitation on the client side and they are okay with the ingest latency, then we basically still try to pick whichever region is closest to that particular client.

[...]

