ChatGPT has lazy and dirty precision when it comes to Math!

Freedom Preetham · Published in Autonomous Agents · 2 min read · Feb 24


A day in the life of a lazy programmer running perturbative tests!!! Let’s summon the glorified calculator, a.k.a. ChatGPT (or Wolfram Alpha). I kinda used to cheat with Wolfram on lazy days, but now ChatGPT is taking over for the more straightforward tests.

I asked ChatGPT to give me two orthogonal vectors of dimension 20. These are the vectors it returned:

v1 = [0.88201679, -0.05758137, -0.42121971, -0.18530611, -0.09016195, -0.19128234, -0.0379655 , 0.13143978, 0.21425105, 0.2246096 , -0.12504853, -0.12579444, -0.06630113, -0.13681961, -0.05066335, 0.07013917, 0.05883755, -0.01472815, -0.14948907, -0.03506877]

v2 = [-0.08950484, 0.28038171, -0.08425878, -0.00887197, 0.02787388, -0.08632868, 0.26158391, 0.0090745 , -0.05498919, -0.10617688, 0.12980132, -0.19480344, 0.10695802, -0.05316702, -0.2302231 , 0.00729124, -0.05123817, 0.0780862 , 0.0232098 , 0.02447112]

The inner product of these vectors is -0.07618303227161559.

CAUTION: The ONLY problem is that these two vectors have a dot product of about -0.076, which is nowhere near good enough for the precision of orthogonality I am working with. Rounded to one decimal place that is -0.1, which is very, very, very far away from zero!
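You don’t have to take ChatGPT’s word for that number. A quick numpy sanity check, with v1 and v2 pasted exactly as given above, reproduces it:

import numpy as np

# v1 and v2 exactly as returned by ChatGPT (copied from above)
v1 = np.array([
    0.88201679, -0.05758137, -0.42121971, -0.18530611, -0.09016195,
    -0.19128234, -0.0379655, 0.13143978, 0.21425105, 0.2246096,
    -0.12504853, -0.12579444, -0.06630113, -0.13681961, -0.05066335,
    0.07013917, 0.05883755, -0.01472815, -0.14948907, -0.03506877,
])
v2 = np.array([
    -0.08950484, 0.28038171, -0.08425878, -0.00887197, 0.02787388,
    -0.08632868, 0.26158391, 0.0090745, -0.05498919, -0.10617688,
    0.12980132, -0.19480344, 0.10695802, -0.05316702, -0.2302231,
    0.00729124, -0.05123817, 0.0780862, 0.0232098, 0.02447112,
])
print(np.dot(v1, v2))  # -0.0761830..., nowhere near zero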

So getting lazy with ChatGPT is kinda dangerous when you care about precision or accuracy.

If I had to do this myself, I would have done:

import numpy as np

n = 20  # dimension

# Generate a random vector of dimension n
v1 = np.random.rand(n)

# Generate a second random vector, subtract its projection onto v1
# (one Gram-Schmidt step), and normalize to a unit vector orthogonal to v1
v2 = np.random.rand(n)
v2 -= v1 * np.dot(v2, v1) / np.dot(v1, v1)
v2 /= np.linalg.norm(v2)

With my code above (the seed is not fixed, so your exact numbers will differ), you get something like:
v1= [0.23246931 0.69022428 0.37562544 0.66935077 0.1359036 0.96530615 0.18023099 0.46983999 0.12821443 0.80855697 0.38360875 0.85697724 0.53610547 0.02802696 0.45083544 0.71838969 0.02250539 0.6686959 0.9551404 0.74936729]

v2 = [-0.02449058 0.2893198 0.21850898 -0.0676647 0.28934451 0.06491934 0.0541631 0.05861308 0.46143552 -0.05489555 0.32269301 -0.2848697 0.26762545 0.29952854 -0.196236 0.01960195 0.23055467 -0.28461804 -0.18155913 0.02208541]

Their inner product is 2.220446049250313e-16.

The way to interpret that is a zero point followed by 15 zeroes before the leading 2. In fact, 2.220446049250313e-16 is exactly the machine epsilon of 64-bit floats, so this is as close to zero as double precision can meaningfully get. This is far more useful than what ChatGPT gave me.
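That value is no accident; you can confirm it is numpy’s float64 machine epsilon directly:

import numpy as np

print(np.finfo(np.float64).eps)  # 2.220446049250313e-16, i.e. 2**-52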

If I had to generate something far more precise and extend it to an orthogonal matrix, I would first work through the following steps (a code sketch follows the list):

  1. Generate a random matrix A of size N×N with elements from a standard normal distribution.
  2. Use QR decomposition of A to obtain an orthogonal matrix Q.
  3. Generate a random vector v of size N with elements from a standard normal distribution.
  4. Compute w, the projection of v onto the orthogonal complement of (the chosen columns of) Q.
  5. Normalize w to obtain a unit vector.
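Here is a minimal numpy sketch of those steps. One assumption on my part: since the full N×N Q spans all of R^N and has a trivial orthogonal complement, I project against only its first k columns; k and the seed are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(42)  # seed chosen arbitrarily, for reproducibility
N, k = 20, 5                     # assumed sizes: keep k < N columns of Q

# 1. Random N x N matrix with standard-normal entries
A = rng.standard_normal((N, N))

# 2. QR decomposition; the columns of Q are orthonormal
Q, _ = np.linalg.qr(A)
Qk = Q[:, :k]  # assumption: project against the first k columns only

# 3. Random vector v with standard-normal entries
v = rng.standard_normal(N)

# 4. Project v onto the orthogonal complement of span(Qk)
w = v - Qk @ (Qk.T @ v)

# 5. Normalize w to obtain a unit vector
w /= np.linalg.norm(w)

print(np.abs(Qk.T @ w).max())  # ~1e-16: w is orthogonal to every kept column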

Your programming skills will deteriorate within a year if you rely on ChatGPT. Skip the five steps above often enough and you will forget the intuition behind orthogonality. Eventually you will probably not even code!

Sigh! I have resolved to stay away from ChatGPT for these reasons.

What started as a lazy day turned into another ChatGPT blog!!
