Solving a Plane Stress Problem Using a Physics-Informed Neural Network

Alekh Sinha
Published in DataSeries
Nov 24, 2020

I recently came across a wonderful paper titled ‘Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations’, which applies neural networks to solving differential equations. This has opened up a whole new box of opportunities. With its aid, neural networks can be applied even to fields that are not traditionally data-driven, such as solid mechanics, which depends more on differential equations and less on historical data. Some substantial developments have been made based on this paper. One such development is SciANN, a Keras wrapper for scientific computation (https://deepai.org/publication/sciann-a-keras-wrapper-for-scientific-computations-and-physics-informed-deep-learning-using-artificial-neural-networks).

Now let’s understand some of the concepts of this paper. The idea is to convert the differential equation into an optimization problem by rearranging the equation and then using its residual to build the cost function of a neural network. The differential-equation terms are generated using the automatic-differentiation feature of the neural network framework.
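As a toy illustration (not from the paper; the network, the collocation points, and the one-dimensional equation u′(x) = u(x) are all my own choices), this is how automatic differentiation turns an equation into a cost that a network can minimize:

import tensorflow as tf

# A small network mapping x -> u(x); the architecture here is arbitrary
model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation='tanh', input_shape=(1,)),
    tf.keras.layers.Dense(1),
])

x = tf.random.uniform((100, 1))  # collocation points

with tf.GradientTape() as tape:
    tape.watch(x)
    u = model(x)
u_x = tape.gradient(u, x)  # du/dx obtained by automatic differentiation

# Residual of the toy equation u'(x) - u(x) = 0; its mean absolute
# value is the cost to be minimized over the network weights
loss = tf.reduce_mean(tf.abs(u_x - u))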

Let us take the example of a plane stress problem to understand the implementation of this concept.

Problem Definition
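For reference, the governing relations (shown as a figure in the original post) are the standard plane-stress relations:

σ = D·ε, where ε = [ ∂ux/∂x, ∂uy/∂y, ∂ux/∂y + ∂uy/∂x ]ᵀ

eq1: ∂σxx/∂x + ∂τxy/∂y + bx = 0

eq2: ∂τxy/∂x + ∂σyy/∂y + by = 0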

In the above equations:

· D is the material matrix; since it depends on the material used, its value is known

· As seen from the figure, if the displacement u is known, then the other quantities can be calculated automatically

· In this case bx, by = 0

· ux = uy = 0 at x = 0

· Force = F at x = L

Now let us discretize our domain into collocation points Nf (points where we want the solution of the differential equation) and boundary points Nb (points where we know the exact solution). Let us define one more term, yb (the value of the exact solution at the boundary points). The objective of the optimization is now to find a function u = F(Nf) that minimizes eq1 + eq2 (refer to the figure) and also satisfies the boundary conditions.

So the cost function = abs(eq1) + abs(eq2) + abs(F(Nb) − yb). Here F(Nb) − yb is the imposition of the boundary condition: at the Nb points (points where the solution is known), the output of the function should be equal to yb (the value of the exact solution).
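Term by term, and matching the return statement of the code below, the cost reads:

cost = mean|∂σxx/∂x + ∂τxy/∂y| + mean|∂σyy/∂y + ∂τxy/∂x| + mean|ux at x = 0| + mean|uy at x = 0| + |mean(σxx at x = L) − F/A|

The first two terms are the equilibrium residuals (eq1 and eq2) at the collocation points, the next two impose the fixed boundary at x = 0 (where yb = 0), and the last term imposes the applied force at x = L.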

Now this cost function can easily be minimized by a neural network, and we can use any architecture. The reference paper (SciANN) has dealt with different architectures and compared them. For this example I have used 8 hidden layers, each having 20 neurons, with x and y as inputs and ux and uy as outputs.
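A minimal sketch of such a network in Keras (the tanh activation is my own assumption; the post only specifies the layer sizes) could look like this:

import tensorflow as tf

def build_u_model():
    # Inputs: (x, y); outputs: (ux, uy); 8 hidden layers of 20 neurons each
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(20, activation='tanh', input_shape=(2,)))
    for _ in range(7):
        model.add(tf.keras.layers.Dense(20, activation='tanh'))
    model.add(tf.keras.layers.Dense(2))  # linear output layer
    return model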

One might be wondering how to implement all of this in Python. It can be done using TensorFlow’s automatic-differentiation feature (tf.GradientTape). Below is a code snippet to understand it better.

with tf.GradientTape(persistent=True) as tape:
    # Watching the two inputs we'll need later, x and y
    tape.watch(x)
    tape.watch(y)
    tape.watch(x_c)  # force boundary
    tape.watch(y_c)  # force boundary
    # Packing together the inputs
    Xtemp = tf.concat([x, y], axis=1)
    Xtemp_c = tf.concat([x_c, y_c], axis=1)
    # Getting the predictions
    u = self.u_model(Xtemp)
    u_c = self.u_model(Xtemp_c)
    ux = u[:, 0:1]
    uy = u[:, 1:2]
    uxc = u_c[:, 0:1]
    uyc = u_c[:, 1:2]
    ub = self.u_model(X_B)  # bottom fixation boundary (x = 0)
    uxb = ub[:, 0:1]
    uyb = ub[:, 1:2]
    # Deriving INSIDE the tape, so second derivatives remain available
    u_x = tape.gradient(ux, x)
    u_y = tape.gradient(uy, y)
    u_xc = tape.gradient(uxc, x_c)
    u_yc = tape.gradient(uyc, y_c)
    # Strains at the collocation points
    zeta_x = u_x
    zeta_y = u_y
    zeta_xy_1 = tape.gradient(ux, y)
    zeta_xy_2 = tape.gradient(uy, x)
    zeta_xy = zeta_xy_1 + zeta_xy_2
    # Strains at the force-boundary points
    zeta_xc = u_xc
    zeta_yc = u_yc
    zeta_xy_1c = tape.gradient(uxc, y_c)
    zeta_xy_2c = tape.gradient(uyc, x_c)
    zeta_xyc = zeta_xy_1c + zeta_xy_2c
    # Stresses: sigma = D * strain
    zeta = tf.concat([zeta_x, zeta_y, zeta_xy], axis=1)
    zeta_c = tf.concat([zeta_xc, zeta_yc, zeta_xyc], axis=1)
    sigma = tf.matmul(self.d, tf.transpose(zeta))
    sigma_c = tf.matmul(self.d, tf.transpose(zeta_c))
    sigma_xx = tf.transpose(sigma[0, :])
    sigma_yy = tf.transpose(sigma[1, :])
    txy = tf.transpose(sigma[2, :])
    sigma_xx_c = tf.transpose(sigma_c[0, :])
    sigma_yy_c = tf.transpose(sigma_c[1, :])
    txy_c = tf.transpose(sigma_c[2, :])
# Second derivatives: the terms of the equilibrium equations
sigma_xx_x = tape.gradient(sigma_xx, x)
sigma_yy_y = tape.gradient(sigma_yy, y)
sigma_txy_x = tape.gradient(txy, x)
sigma_txy_y = tape.gradient(txy, y)
# Letting the tape go
del tape
return (tf.reduce_mean(tf.math.abs(sigma_xx_x + sigma_txy_y))    # eq1 residual
        + tf.reduce_mean(tf.math.abs(sigma_yy_y + sigma_txy_x))  # eq2 residual
        + tf.reduce_mean(tf.math.abs(uxb))                       # ux = 0 at x = 0
        + tf.reduce_mean(tf.math.abs(uyb))                       # uy = 0 at x = 0
        + tf.math.abs(tf.reduce_mean(sigma_xx_c) - (self.bx) / (1.111 * 50)))  # force / area

As one can observe in the return statement, all the equation residuals and boundary terms are summed together.
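Assuming the snippet above is a method of a PINN class (say pinn.loss_fn, with the network stored in self.u_model; both names and the argument list are my own assumptions for illustration), training then reduces to minimizing this scalar with any gradient-based optimizer, for example:

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

for epoch in range(10000):
    with tf.GradientTape() as loss_tape:
        # hypothetical signature wrapping the cost computation shown above
        loss = pinn.loss_fn(x, y, x_c, y_c, X_B)
    grads = loss_tape.gradient(loss, pinn.u_model.trainable_variables)
    optimizer.apply_gradients(zip(grads, pinn.u_model.trainable_variables))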

Results

Comparison of the Abaqus solver with the physics-informed neural network

Conclusion

The results do not exactly match the Abaqus (FEM) solver, so this code needs to be fine-tuned for better applications.

Furthermore, more investigation needs to be done to incorporate material nonlinearity.
