Can You Trust GPT — When It’s Off-Topic and Wrong?
I will be demoing some AI and cybersecurity over the next few weeks. So, let's test whether GPT's knowledge is merely superficial. Let's ask a fairly trivial question:
The answer looks well written, although it lacks any real style. It is bland and reads like a standard Stack Overflow answer. But, while it seems authoritative, the explanation is poor:
In ECC, the public key is a point on an elliptic curve, and the private key is a random number. To encrypt a message, the sender computes a point on the elliptic curve using the receiver’s public key and then uses this point to derive a shared secret key. The sender then uses this shared secret key to encrypt the message, which can only be decrypted by the receiver, who knows the corresponding private key.
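What GPT is loosely describing here is ECDH-style key agreement, as used in ECIES: each private key is a random number, each public key is a curve point, and both sides derive the same shared secret by combining their own private key with the other's public key. A minimal sketch of that flow, assuming the third-party `cryptography` package is installed (the curve choice and the `info` label are illustrative, not prescribed by GPT's answer):

```python
# Sketch of the ECDH flow behind GPT's description: the sender combines
# an ephemeral private key with the receiver's public key to get a shared
# point, then derives a symmetric key from it with a KDF.
# Assumes: pip install cryptography
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(private_key, peer_public_key):
    # Scalar-multiply the peer's public point by our private scalar,
    # then run the result through HKDF to get a 32-byte symmetric key.
    shared_secret = private_key.exchange(ec.ECDH(), peer_public_key)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"ecies demo").derive(shared_secret)

# Receiver's long-term key pair (private key: random number; public key: curve point)
receiver_priv = ec.generate_private_key(ec.SECP256R1())
# Sender's ephemeral key pair
sender_priv = ec.generate_private_key(ec.SECP256R1())

# Both sides derive the same symmetric key, which the sender would then
# use to encrypt the message (e.g. with AES-GCM).
k_sender = derive_key(sender_priv, receiver_priv.public_key())
k_receiver = derive_key(receiver_priv, sender_priv.public_key())
assert k_sender == k_receiver
```

Note that, contrary to GPT's wording, the message itself is not encrypted with ECC directly: the curve operations only establish the shared key, and a symmetric cipher does the actual encryption.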
As a simple answer, it is acceptable, but it misses a good deal, since the short answers GPT gives tend to generalise. And when it comes to why the security is better, it can only say that the keys are smaller, before going off-topic: