Under Pressure to PERForm?

A simple way to use reservoir pressure to predict and improve well performance

John Kalfayan
17 min read · Jul 30, 2020

Global oversupply of crude and decreased demand — due to the COVID-19 pandemic — has led to a collapse in oil prices and, in turn, another massive round of layoffs. In times like these, engineers in the energy industry are under a tremendous amount of pressure to do more with less. If we as an industry don’t start doing things differently, I fear that the shale boom may have been the beginning of the end of the industry (that I love dearly) as we know it.

That is why I am writing this post: to show my fellow engineers a quick and easy workflow they can use to leverage their existing data, with some free tools, to make better and more informed decisions. This workflow is not a magic pill that will instantly drop your costs or double your production, but my hope is that the ideas I present will help you start thinking differently about how you prioritize your data collection and how you can leverage the (absurd amount of) data that we already have sitting around collecting dust.

In this post I will take a sample set of wells (from public data) and show you how to use reservoir pressure data, visually, to quickly gain insights into your well spacing, completions design, and production performance.

Table of Contents

· Reservoir Pressure data
· The Data
· Power BI
· Map Setup
· Analyzing the data in Kepler
· Conclusion

Reservoir Pressure data

There are a variety of ways to get pore pressure data, but for the purpose of this workflow I’m going to focus on Diagnostic Fracture Injection Tests (DFITs). In my decade of experience this is one of the simplest, most cost-effective, and reliable (when performed correctly) ways to get good reservoir data. This is a data set that most people probably already have access to, but few ever utilize to its fullest potential.

Example DFIT

What is a DFIT? DFIT stands for Diagnostic Fracture Injection Test, also known as a minifrac (there are some nuances between the two, but generally speaking, their objective is the same). It’s a type of well test performed before you stimulate a well. The test is relatively simple: you open a toe sleeve or TCP the well so that it is open to the formation, then you inject a small volume of fluid (5–40 bbls) at a steady rate (3–15 bpm) to ensure that you’ve created a fracture (see plot above). The entire time you are performing this test you should be monitoring injection rate (via an inline flow meter — DO NOT measure rate off of pump strokes) and surface pressure with a high-resolution, high-accuracy, thermally compensated gauge. When you are done injecting, you shut the well in and monitor the pressure fall-off (there’s lots of debate around injection volumes, rates, and fluids, and even how you perform the analysis, but that’s a topic for another time). Ideally, you’ll monitor the fall-off long enough to reach a pseudo-linear or pseudo-radial flow regime, allowing you to extract some very valuable reservoir data.

Ex. After-Closure Analysis Plot and Outputs

The test itself isn’t valuable until you analyze it (see the previous links for more information on that), which results in some very cheap, useful, and accurate (when tests are performed correctly) reservoir data. The main outputs of a DFIT analysis are closure stress, permeability, and reservoir pressure. In this instance we will be discussing reservoir pressure, but you can do this exact same workflow with closure stress and permeability and extract quite a bit of additional value as well.

VERY IMPORTANT — READ ME

If you are doing any type of well test that results in useful reservoir values and information, (please please please) do not let that extremely valuable data get put out to pasture to die in a PDF or PowerPoint. Make it a part of your workflow to take that data and put it in an existing or new database (or even a spreadsheet, gross) where it can be related back to the well and its geographic data. As an industry we have a ton of data — but the majority of that data is sitting in a corner and dying because it’s in a PowerPoint or PDF (I’m looking at you, post-job treatment reports) where it can’t be utilized. THIS IS A TERRIBLE PRACTICE AND WE HAVE GOT TO STOP!

Something as simple as putting this data into a database or spreadsheet so that you can actually use it will open up a tremendous amount of new information and insights to you from data that you already have! Ok, I’ll get off my soapbox and get back to the workflow.

Below is everything we will utilize in this write up:

  • Sample data set available to download if you’d like to try this with me
  • Kepler.gl (MAKE SURE TO OPEN IN A CHROME browser — some have run into issues when trying to use Edge or Firefox), an open-source tool written by Uber that…

“is a data-agnostic, high-performance web-based application for visual exploration of large-scale geolocation data sets.”

In non-tech terms this means: most of Uber’s data (Uber “fulfills 40 million rides per month”) is geospatial data, so they wrote this tool (and made it open source — HOORAY!) that allows them to visualize large-scale geographic data quickly and easily. Check out this link to learn more.

The Data

Since I’d be fired and banned for life from the industry if I shared any real customer data, I pulled some public data from the state of Texas for a few counties in the Permian Basin as the foundation for the data set. The specifics of that raw data are below:

  • Counties: Andrews, Ector, Loving, Winkler, Midland
  • Dates: IP Dates from 1/1/2016 to 12/31/2019
  • Header information: State, Lease Name (I removed well names/API numbers), Orientation (H, V, D), County, Status, Phase, Spud Date, IP Date, Frac Date, Vertical Depth, Measured Depth, Lateral Length, Permitted Depth, IP oil/gas/water/BOE, latitude, & longitude

If you don’t care about how I generated the reservoir pressure numbers (because in reality those numbers are fake and you’ll end up using your real data anyway), please skip ahead by clicking Map Setup.

Power BI

I then pulled the data into Power BI for some further transformation. Since I have to generate the pore pressure data (sorry, but it is what it is), I used some DAX code (the formula language used in Power BI) to generate it for each of the wells in the sample set. I tried to put as much logic as I could into this without getting really crazy, but again, this is just an example scenario. In your scenario, all you would have to do is pull in your well information with the lat/long values and populate a reservoir pressure column from your DFIT, RTA/PTA, or shut-in analysis.

For the sake of time, we won’t go into much detail about the actual code I used in Power BI, but I will outline the logic behind it. As we know, reservoirs are a finite source of pressure, and once you begin to drain that pressure (produce your wells) it takes a lot to get it back. Therefore, in order to calculate the reservoir pressure column in the data set I did a few things.

Transformations & Calculations

I created a “Rank1” column that uses some filtering and the RANKX function to rank the wells based on lease name and IP date. The logic of the function basically says: if wells have the same lease name, rank them from low to high based on IP date. So if there were multiple wells on the same lease, the well with the earliest (oldest) IP date would be ranked 1, the next oldest would be 2, and so on and so forth. The rationale behind this is that the oldest (earliest) wells should see the highest reservoir pressure, and the assumption is that this pressure declines the longer those wells are into their production life cycle. Is this a perfect way of doing this? Absolutely not, but it allows us to differentiate enough to prove the point. (You’re going to be using your real-world pressures anyway, right?)
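
If you’d like to reproduce that ranking logic, here is a minimal DAX sketch of the calculated column. The table name (Wells) and exact column names are assumptions based on the header list above, not a copy of my actual code:

    // Calculated column: rank wells on the same lease by IP date (earliest = 1)
    Rank1 =
    RANKX (
        FILTER ( Wells, Wells[Lease Name] = EARLIER ( Wells[Lease Name] ) ),
        Wells[IP Date],
        ,
        ASC,
        DENSE
    )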

Power BI — BBL/Ft Calculated Column

Next I created a “BBL/Ft” column. Since this is public data, which is notorious for being error-filled, before creating the “BBL/Ft” column I filtered out wells that have a lateral length of 0 or less. The “BBL/Ft” calculated column then takes the IP BOE and divides it by lateral length as a way to help normalize the data (as we all know, frac designs, perf schedules, and lateral lengths have all changed throughout the years). Don’t worry, we can do more filtering in Kepler later. I also created “BH lat” and “BH long” columns that are calculated from well bearing (in this case 0 because I don’t have that information), the surface lat/longs, and lateral length. I used this code to calculate these columns (if you use this code, make sure to convert your degrees lat/long values into radians first).
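
Here is a rough DAX sketch of those columns. The bottom-hole location below uses a simple flat-earth approximation (roughly 364,600 ft per degree of latitude, with longitude scaled by the cosine of latitude) rather than the spherical formula in the code I linked, and the table, column names, and a “Bearing” column (which in this data set would just be 0) are again assumptions:

    // Calculated column: IP BOE normalized by lateral length
    BBL/Ft = DIVIDE ( Wells[IP BOE], Wells[Lateral Length] )

    // Calculated columns: approximate bottom-hole location from the surface
    // location, well bearing (degrees), and lateral length (ft)
    BH Lat =
    Wells[Latitude]
        + ( Wells[Lateral Length] * COS ( RADIANS ( Wells[Bearing] ) ) ) / 364600

    BH Long =
    Wells[Longitude]
        + ( Wells[Lateral Length] * SIN ( RADIANS ( Wells[Bearing] ) ) )
            / ( 364600 * COS ( RADIANS ( Wells[Latitude] ) ) )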

Power BI PorePressGen Code

Then I created a column called “PorePressGen” that uses a nested IF statement and the RANDBETWEEN function to randomly generate the pore pressure value based on the rank. The higher the rank (1 being highest), the higher the pore pressure value. The RANDBETWEEN function generates a random number between two numbers, which allowed me to bound the high and low values of the random numbers that were generated.
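
As an example, a nested IF with RANDBETWEEN might look like the sketch below. The psi bounds here are placeholder values to illustrate the idea, not the ones from my actual code:

    // Calculated column: generated pore pressure, higher for earlier wells on a lease
    PorePressGen =
    IF (
        Wells[Rank1] = 1,
        RANDBETWEEN ( 7500, 9000 ),
        IF (
            Wells[Rank1] = 2,
            RANDBETWEEN ( 6000, 7500 ),
            RANDBETWEEN ( 4500, 6000 )
        )
    )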

Finally, I created another column called “Pore Press Round” that uses the MROUND function to round the “PorePressGen” column to the nearest 750 psi. I did this to help bin or group the data so the legend would look a lot cleaner.
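
That rounding step is a one-liner in DAX (again assuming the same table and column names):

    // Calculated column: bin the generated pressures to the nearest 750 psi
    Pore Press Round = MROUND ( Wells[PorePressGen], 750 )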

Now that the data has been cleansed and transformed, I just went to the table view in Power BI, right-clicked, copy/pasted it into a new Excel sheet, and saved it as a CSV.

Make sure you save it as a CSV or, if you have it, GeoJSON, as these are the only two formats that Kepler can ingest.

If y’all would like more posts on these different Power BI tools and how to use them, make sure to comment below and let me know!

Map Setup

Grid map of reservoir pressure

Now that the data has been cleansed and transformed, we are ready to pull it into Kepler. Make sure you save your data as a CSV or, if you have it, GeoJSON, as these are the only two formats that Kepler can ingest. Here’s the link again to the cleansed data set we are importing into Kepler.

Open Kepler (IMPORTANT — MAKE SURE TO OPEN IN A CHROME browser — some have run into issues when trying to use Edge or Firefox) and import your cleansed CSV file.

Once you have done this it will show up in the “Layers” tab as seen below “Keplr_Clean.csv”.

Map Setup

Your first step is to add a new layer, so click the green “add layer” button. As you can see there are a lot of different layer type options to choose from. For this instance we will be using the GRID type. But this data set is just as valuable if you go with the heatmap or the point type (more on these later) as well.

We can now pick the data we want to display in this layer and start to customize it. Obviously your lat and long values go into the “lat” and “long” fields. Next, click on the 3 dots next to the “Color” item. Anytime you see the “3 Dots” this means that you can expand/collapse that menu by clicking on them. Make sure you explore all the menus for each option as there are a lot of customization options.

Colors

Since we want to view this data based on the pore pressure, we will select “PorePressGen” or “Pore Press Round” for the “Color Based On” field. We will leave the “Aggregate by” field set to average, but this is another option you can play with. Keep in mind that this data will be aggregated over the area, not the specific individual well values. You can also set your color scale by selecting from the options in the Color Scale setting. For this data set the quantize option (more info on the difference here) gives a cleaner distribution, so that’s what I used. You can also refine the grid map by changing the Grid Size settings under the Radius option.

If you want to map the individual well values instead of the aggregates, during the initial map setup just choose the points or heatmap type.

Finally, click on the color bar and select the color options that you prefer. Note that you can customize the number of steps, reverse the palette colors, or even create your own custom colors.

Now that you’ve got your data in the map and set up, zoom out and head over to Midland, TX on the map itself and you should see something similar to the map below. To skip directly to the data analysis section just click Analyzing the data in Kepler, otherwise we will now quickly review some of the other options and settings that Kepler has to offer.

Kepler Settings & Options Overview

Filters

At the top of the “Layers” tool bar on the left side of the screen, click the filter icon, then click “add filter”. You can filter by county, date, status, depth, lat, long, or any other field that we have in the data set. I also love that Kepler gives you the data filter in a histogram format so you can actually see where there are outliers or how skewed the data might be.

Tool-tip & Base Map

If you click the arrow/pointer with the circle, this pulls up the tool-tip options and allows you to select what values are displayed when you hover over the data points. The last menu option (slider bars) allows you to customize the base map. You can select light, dark, turn on/off streets, city names, etc. You can even create your own style in Mapbox and import it to be used here as well.

Legend

To turn on the legend, go to the menu in the top right corner and click on the list icon seen below. To change the legend scale, go back to the Layer options and under the “Color” menu you can change the color scale from Quantile to Quantize; further, if you click on the actual color scale, you can select the number of “steps” or bins you want the colors broken down into.

Analyzing the data in Kepler

Now that we’ve got everything set up, we can start looking at what the data is telling us. As you can see below, I’ve circled an area that has a very low average pore pressure relative to its neighbors. In theory, this would be a really bad place to drill a new well, as the data is showing us that it is already severely depleted, and it would be easy to hypothesize that a new well in this area might have poorer production than its predecessors. Additionally, this would also indicate that there’s a very high likelihood that any new wells hydraulically fractured here would have pretty severe frac communication or interference. Let’s say you wanted to narrow the data so that you only had wells between specific depths or target zones; this is where Kepler shines: just go to the filter option and filter to your desired depths or formation/zone name.

This same technique can be applied to the closure stress or permeability results from our DFIT data as well. Mapping out the permeability across the field using this same technique could allow you to identify some high or low perm streaks that you might want to focus on or avoid completely.

And there you have it. In a few easy steps we have quickly and significantly enhanced the value of our DFIT data to get a much better and bigger picture of our reservoir. If you want to learn about the other types of maps we can use and how to leverage this data in different ways, continue reading. If you are good for now, please click Conclusion to skip ahead to the end.

3D Map

Want to add another component to this data set, like say average IP or average BBL/Ft? Turn on and expand the “height” option from the left menu and set the “Height based on” field as “BBL/Ft”. Then in the top right icon menu, select the “3D Map” option (it looks like a cube just above the legend option). Now if you right-click and HOLD, you can start exploring the data in 3 dimensions instead of just 2.

Now that we’ve got our base map setup, go ahead and turn off the 3D Map option by toggling that option again and turn off the “height” option from the left layer menu. We are now setup to start exploring the other ways to visualize this data. Below are some other map types and ways you can visualize this data geographically.

Point Map

Now that we’ve looked at the grid map, let’s take a more detailed look using the point map. To do this, just go to the “Basic” menu option and change it from “Grid” to “Point”. This will keep all your previous settings and just change the view of the map.

As you can see we now have even more clarity into the data as it plots every single well location instead of aggregating them into a grid.

Time-lapse

Another great feature is the time/animation function that Kepler has built in. As long as you have a timestamp with your data set, you can use this to animate/visualize your data over time. All you have to do is go to “filter” and select the “timestamp” value and a playback window will appear at the bottom of the screen that will animate the data over the timestamp that you’ve selected as you can see below.

Heat Map

If we want to view this data as a heat map, add a new layer by clicking the green “Add Layer” button and select the “heat map” type. Populate the lat/long fields and change the “Weight” option to “Pore Press Round” and now you’ll see a layered heat map on top of your wells point map. Play with the radius setting to increase/decrease the radius of influence of your heat map. This is yet another way to quickly and easily identify areas of depletion.

Line Map

Since we are talking about oil and gas wells with laterals, we know that the surface lat/longs are not a true representation of the bottom hole locations. Don’t worry, there’s a layer for that. Click the green “Add Layer” button at the bottom of the “Layers” menu. Now select the “line” type of map. Populate the source lat/long fields with the lat/long fields you have been using but for the “Target Lat/Long” fields use the “BH lat/long” fields in the data set. This will allow you to display the well’s trajectory in a 2D/top down view. You can even go into the color options and color the lines by the “pore press” value to make it stand out even more.

Don’t like this view or want to turn it off? No problem, just click the eye icon next to the layer you want to turn off and it will hide it.

Sharing, Saving, & Exporting

Now that you’ve played with Kepler and hopefully discovered some rather interesting information about your wells, you’re obviously going to want to share this with your manager, co-worker, or the rest of the world on IG. To do this, click on the box with an arrow in the top corner of the left menu bar.

Kepler lets you export via:

  • Export Image — set screen ratio, resolution, and turn the legend on/off
  • Export Raw — CSV filtered or unfiltered
  • Export Map — HTML or JSON format. Note: you must have a Mapbox token to do this (don’t worry, it’s very easy to get). Just follow this link and sign up for a free Mapbox account (if you don’t already have one). When you have one, log in and click on the “tokens” menu at the top. Just copy the token and paste it into Kepler and you’re all set.
  • Share Map URL — Kepler uses existing cloud storage options, Dropbox or Carto, as a way to store and then share your maps. Just sign into whichever account you prefer, upload the map from within Kepler, and it’ll spit out a shareable link to a fully interactive map you can send to anyone!

Conclusion

If you’ve reached this point — I want to say a sincere thank you for taking valuable time out of your day to read what I’ve put together. I hope that you’ve found something in this post useful or valuable. I’m not a data scientist, nor do I enjoy (nor am I good at) programming, but as an engineer I do love finding simple and easy ways to explore data to gain new insights and help solve problems. We live in a time where we are constantly pressured to do more with less (money, time, people), so I wanted to share this workflow with you because it’s such a simple and very useful way to do just that.

You don’t always need the sexiest, most expensive technology, a child-prodigy programmer, or the latest industry trend (insert investor-friendly tech buzzword here) to make more informed decisions. There are countless simple (and free!) workflows and tools out there that we, as an industry, have to start leveraging to adapt to this “low” pricing environment. Hopefully, this will be one (of many) that I get to share with you that you can incorporate into your existing workflow. With the storm clouds looming over our industry and the unconventional boom, we have to start doing things differently if we wish to survive. #EvolveOrDie

Most sincerely,

John Kalfayan

I sincerely thank you, again, for taking the time to read what I have to say. If you enjoyed this please give it a clap, share, follow or recommend it.

If you have any questions, comments, feedback — PLEASE, PLEASE, PLEASE comment below!
