Altair is a Python library focused on declarative visualization. Its syntax is concise enough to produce a wide range of visualizations.
One significant difference between Altair and other visualization libraries is Altair's one-liner style: many visualizations can be displayed without writing many lines of code.
Dimensionality reduction is a common step in data processing, useful for feature engineering and data visualization. Too many features in a dataset can complicate visualization and any subsequent analysis, so dimensionality reduction is needed to overcome this problem.
Dimensionality reduction does not simply drop existing features. Instead, it first summarizes all features in a dataset into several components according to the algorithm we use. There are two broad classes of methods: linear (e.g., Principal Component Analysis) and non-linear (manifold learning). In this article we will implement and compare these two methods using well log data.
Principal Component Analysis (PCA) is an unsupervised method that applies a linear transformation to reduce the dimensionality of our data. …
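As a minimal sketch of this idea (using synthetic, correlated stand-in data rather than the actual well logs), scikit-learn's `PCA` can project standardized features onto a few components:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for well log features (the real data is not shown here):
# 100 samples of 5 strongly correlated "log" measurements
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 1))
X = np.hstack([base + 0.1 * rng.normal(size=(100, 1)) for _ in range(5)])

# Standardize each feature, then project onto the first two principal components
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)
```

Because the five columns share a common signal, the first component captures most of the variance; `pca.explained_variance_ratio_` makes that explicit.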
“In today’s analytics world, building machine learning models has become relatively easy (thanks to more robust and flexible tools and algorithms), but still the fundamental concepts are very confusing. One of such concepts is Hypothesis Testing.”
Essentially, hypothesis testing is a statistical method for testing an assumption so that the result can be declared accepted or rejected. Hypothesis testing is part of inferential statistics.
A hypothesis can be divided into two parts: the null hypothesis (H0) and the alternative hypothesis (H1).
In this article, we use facies data derived from well log data. …
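As a minimal sketch of the accept/reject workflow (with synthetic numbers standing in for the facies data, and SciPy's two-sample t-test as one possible choice of test):

```python
import numpy as np
from scipy import stats

# Hypothetical example: GR readings from two facies groups (synthetic numbers,
# not the article's actual facies dataset)
rng = np.random.default_rng(42)
facies_a = rng.normal(loc=60, scale=5, size=30)   # e.g. a shale-like group
facies_b = rng.normal(loc=75, scale=5, size=30)   # e.g. a sandstone-like group

# H0: the two groups have equal means; reject H0 when p < alpha
t_stat, p_value = stats.ttest_ind(facies_a, facies_b)
alpha = 0.05
reject_h0 = p_value < alpha
```

Here the group means differ by construction, so the test rejects H0; with overlapping groups the same code would fail to reject.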
In brief, stationarity is a condition in which the data have a constant mean and variance at every location. Stationarity is widely used in time series analysis; nevertheless, we also need to understand how it applies to spatial data estimation.
There are two important points quoted from one of Michael Pyrcz's lecture courses:
To investigate and assess stationarity in spatial data, in practice we can use two kinds of visualization: a trend plot or a variogram plot. …
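Both checks can also be sketched numerically. Below is a minimal example on a synthetic 1-D profile with a deliberate trend (assumed data, not the article's): the window means drift, which is the trend-plot idea, and the experimental semivariance keeps rising with lag instead of leveling off at a sill, which is the variogram-plot idea.

```python
import numpy as np

# Synthetic 1-D "spatial" profile: a linear drift plus noise, so it is
# non-stationary in the mean by construction
rng = np.random.default_rng(1)
z = np.linspace(0, 10, 200) + rng.normal(scale=0.5, size=200)

# Trend check: compare the mean in consecutive windows; a large spread
# between window means hints at a trend
window = 50
means = [z[i:i + window].mean() for i in range(0, 200, window)]
drift = max(means) - min(means)

# Experimental semivariogram: for trending data the semivariance grows
# with lag instead of reaching a stable sill
def semivariance(values, lag):
    d = values[lag:] - values[:-lag]
    return 0.5 * np.mean(d ** 2)

gammas = [semivariance(z, h) for h in (1, 10, 50, 100)]
```

For stationary data, `drift` would be small and `gammas` would flatten out at large lags; here both diagnostics flag the trend.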
A DEM (Digital Elevation Model) is a dataset/image with X, Y, and Z coordinates, used to represent topography digitally. To extract valuable information from this data, we can perform several operations such as visualization, enhancement, and manipulation; these operations are called image processing. This article introduces how to display topographic data from a DEM and visualize it with various methods in Python.
In this case study, the data downloaded from Badan Informasi Geospasial comes in .TIFF format, which is then converted to .JPG format. Although .TIFF …
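As a small sketch of the kind of processing involved (using a synthetic elevation grid rather than the downloaded DEM), a classic hillshade can be computed directly with NumPy:

```python
import numpy as np

# Synthetic elevation grid standing in for the DEM: a single Gaussian "hill"
y, x = np.mgrid[0:100, 0:100]
elev = 50 * np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 800.0)

def hillshade(z, azimuth_deg=315.0, altitude_deg=45.0):
    """Classic hillshade: illuminate the surface from a given sun position."""
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    gy, gx = np.gradient(z)                     # elevation gradients
    slope = np.pi / 2 - np.arctan(np.hypot(gx, gy))
    aspect = np.arctan2(-gx, gy)
    shaded = (np.sin(alt) * np.sin(slope)
              + np.cos(alt) * np.cos(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0, 1)                # keep values in [0, 1]

shade = hillshade(elev)
```

The resulting `shade` array can be displayed as a grayscale image; the same function applies unchanged to a real DEM array read from the .TIFF.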
Gradient descent is an algorithm used to find a local minimum of a function, that is, a point where the function value is lower than at all nearby points. This algorithm can be applied to various parametric models, such as linear regression.
By using this algorithm, we can determine the model (regression line) that best fits the existing data. In this article, we will discuss the gradient descent algorithm to determine the linear regression model between DT and NPHI well data using the PyTorch library.
DT is a well logging parameter that measures the transit time of sound through rocks along the borehole (which can be associated with seismic P-wave velocity), while NPHI is a well logging parameter that measures the hydrogen index of rocks along the borehole, which indirectly represents rock porosity (the more porous the rock, the higher its hydrogen content and hence its hydrogen index). …
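The fitting loop itself is short. The article uses PyTorch, but the same idea can be sketched framework-free; the numbers below are synthetic stand-ins for the DT/NPHI cross-plot, not the actual well data.

```python
import numpy as np

# Synthetic stand-in for the DT (x) vs NPHI (y) relationship: a noisy line
rng = np.random.default_rng(0)
x = rng.uniform(60, 140, size=200)                       # DT-like values
y = 0.004 * x - 0.1 + rng.normal(scale=0.02, size=200)   # NPHI-like values

# Standardize x so a single learning rate behaves well
xs = (x - x.mean()) / x.std()

# Gradient descent on mean squared error for y_hat = w * xs + b
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = w * xs + b - y
    w -= lr * 2 * np.mean(err * xs)   # dMSE/dw
    b -= lr * 2 * np.mean(err)        # dMSE/db
```

Each iteration steps `w` and `b` downhill along the MSE gradient; after convergence the residual error is down at the noise level. PyTorch automates exactly these gradient computations via autograd.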
In this article, we revisit the prediction results from the previous article (“Porosity Estimation Based on Well Parameters Using Support Vector Machine (Part 1)”). This part re-assesses the results using a quantitative method, and also tries other predictors and compares the accuracy of the results.
The purpose of this step is to quantitatively calculate the correlation between all candidate predictors (well log parameters other than porosity) and the predicted parameter (porosity), since in the previous article the predictors were chosen only by rough approximation.
import seaborn as sns
import matplotlib.pyplot as plt

corrmap = df_com.corr()
sns.heatmap(corrmap, annot=True)
plt.show()
Based on the correlation matrix above, we can determine which parameters correlate strongly with porosity (NPHI), whether close to -1 (inversely proportional) or to 1 (directly proportional). The parameters circled in yellow in the picture above are the ones used to predict porosity. …
In this article we will try to estimate porosity from well parameters using one of the machine learning methods, namely SVM (Support Vector Machine).
Estimation of well parameters commonly uses kriging as the method. So why choose SVM? Honestly, we simply wanted to see whether this method, which does not use “spatial parameters” and looks simpler, can give good results.
There are three wells in total: one well is used as “learning data” and the other two as “testing data”. Because this SVM does not include any spatial parameters, all of the wells used in this study are close to each other (2.5km …
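A minimal version of this setup can be sketched with scikit-learn's `SVR` (synthetic numbers stand in for the three-well dataset; the features and coefficients below are invented for illustration):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for the well data: a porosity-like target driven by
# two other "log" features plus noise
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = 0.2 + 0.05 * X[:, 0] - 0.03 * X[:, 1] + rng.normal(scale=0.01, size=300)

# Mimic the split in the article: one portion as learning data,
# the rest held out as testing data
X_train, y_train = X[:150], y[:150]
X_test, y_test = X[150:], y[150:]

# Scale features on the training set only, then fit an RBF-kernel SVR
scaler = StandardScaler().fit(X_train)
model = SVR(kernel="rbf", C=1.0, epsilon=0.01)
model.fit(scaler.transform(X_train), y_train)
score = model.score(scaler.transform(X_test), y_test)  # R^2 on held-out data
```

The held-out R² gives the kind of quantitative check the follow-up article applies; kriging would additionally use the wells' spatial coordinates, which SVR here deliberately ignores.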
In this article we will try to estimate porosity from well parameters using one of the machine learning methods, namely SVM (Support Vector Machine).
Estimation of well parameters usually uses kriging as its tool. So why choose SVM? Honestly, we just wanted to try it and see whether this approach, which does not use “spatial parameters” and looks simpler, can give reasonably good results.
The well data used comes from an earlier project. There are three wells in total: one well is used as “learning data” and the other two as “testing data”. Because this SVM does not include spatial parameters, we made sure that the wells used in this study are close to one another (2.5km …
My target for the swim leg was actually modest: just to finish in no more than 57 minutes, which means staying around a 2:xx/100 m pace. I was worried that if I aimed for under 50 minutes, I would run out of energy early in the bike leg.
I did not take in much nutrition for this leg: just one SIS gel before the swim and, of course, a breakfast of three pastries from the hotel at half past three in the morning. I felt this was enough to fuel me.
The swim leg went peacefully; I completed it in freestyle, and sighting was still possible even though the route was counterclockwise (since my freestyle breathing is to the right). Although I zigzagged a little, it was still under control. …