AI/ML: Future of Technology

Totally_Not_A_Haxxer
21 min read · Mar 30, 2023


Introduction

With the rising development and training of AI and ML models, it seems people are becoming a bit too worried about AI taking over the tech field and industry, or really the world. So in today’s article I will be discussing the topic and question everyone seems to ask me: “Will AI/ML take over and ruin jobs for everyone?” This article will not really be statistical; it is a prediction based on the author’s (Totally_Not_A_Haxxer) experience using AI and ML models both personally and in the job world. We will split the article into sections, where each section is its own question that I have answered, or had to answer, personally within servers. Before we hop into these questions and start giving evidence of specific use cases and examples, let’s go ahead and break down some terms so the reader (you) can digest this article a lot more easily.

Terms and Definitions

ML → Machine Learning : a subfield of artificial intelligence (AI) that involves building algorithms and models that can learn and make predictions or decisions based on data.

AI → Artificial Intelligence : the development of computer systems that can perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and natural language processing.

GW → Ghost Writer : a person who writes articles anonymously, under another name, for a specific organization or company. For example, an article about AI that they write and the company posts will not carry their real name or online aliases or be associated with them.

NLPM → Natural Language Processing Model : Often refers to a machine learning technology such as GPT which specializes in natural language processing and understanding.

TL → Transfer Learning : a method in which an already trained machine learning model is reused to help train a new model. This is typically done to cut down training time, since starting from a model that has already learned a great deal means the new model can be trained a lot more quickly.
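Since transfer learning comes up again later in the article, here is a minimal sketch of the idea in Python with Keras (the article already mentions TensorFlow). This is only an illustration of reusing an existing pretrained network as a frozen feature extractor; the dataset is a placeholder and the model choice is arbitrary, and it is not a claim about how any GPT model was actually trained.

import tensorflow as tf

# Load a network that was already trained on ImageNet, without its classifier head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze the pretrained weights

# Stack a small, trainable head on top of the frozen base.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. a two-class problem
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# train_ds would be a placeholder tf.data.Dataset of (image, label) pairs for the new task.
# model.fit(train_ds, epochs=3)

Because only the small head is trained while the large base stays frozen, training finishes far faster than training the whole network from scratch, which is the speed-up the definition above describes.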

Artificial Intelligence and Machine Learning Questions

This section will break down into multiple questions, each of which will be answered with a formulated opinion. Fair warning: these questions are all answered based on the author’s experience within the industry, both using ML and AI technologies and not using them. Before going further into the article I would first like to discuss my experience within the field, my personal understanding of AI and how it works, my respect for it, and how I believe people should use it. Explaining my personal bias towards AI/ML up front may help you fully understand what this article is about, so, without further ado, let’s get into it.

Author’s personal experience using AI/ML

For starters, for those who may not know me beyond my page or social media: I am a GW (Ghost Writer) for multiple companies and organizations, and I am also a developer and security researcher who works as a contractor for multiple groups, open source organizations, and companies across the world. I have nearly 2 years of experience working in the field and a few months of experience working with ML and AI technologies to produce content, which has given me the opportunity to learn more and expand my logical processing capabilities. So, let’s talk beliefs. Coming from personal experience, I have a single line I will not cross when it comes to the proper use of AI and ML versus the abuse of AI and ML. As explained above, I have experience contracting as a GW for multiple companies across the world, and almost half of them wanted me to start using utilities like Jasper.AI and even GPT to generate sustainable articles, which they framed as a way to take the workload of generating 30,000-word articles off my chest. You may immediately think “wow, you are getting paid to do nothing,” but that is actually not true, and here is why. When using ML and AI technologies like Jasper and GPT you still have to KNOW prompting, KNOW writing, and KNOW human emotion in order to read over a 40,000-word article and change about 70% of it. This is also where my line starts to get defined: in the case of these companies and my field of work, it is understandable why they want me to use AI and ML models, because no human wants to write 12 40,000-word articles split into series of information; it is much easier to use an NLPM to generate that data, but it still does not take away as much work as you might think. Despite people like me generating these articles for work, it still takes a lot of editing to make an article stand out and to make it emotional and understandable enough for people to learn from and read. Now, when it comes to other fields, this is where AI starts to clash with my beliefs. Based on my experience, I would say there is a way to use AI and a way to not use AI. For example, consider the following situation.

Bob is studying for his mathematics class but cannot understand or finish a question on his quiz. The question involves a topic Bob forgot to study because he never fully understood it.

If Bob wanted to learn from something like GPT, he would make his best prediction of the answer based on his own knowledge, and once he got his results back he would take the question and ask GPT to break down each part of the mathematics in a learning-oriented way. This is an example of Bob using GPT ethically and respectfully, i.e. not abusing the system to spit out information. Now, if Bob were to copy and paste his question into GPT and tell it to solve it for him, without learning anything and just copying and pasting the answers into the chat, then he would very much be abusing the model to get answers that most likely are not even right. Personally, I am one to say that if someone wants to use AI/ML models, they should use them respectfully as a learning resource when it comes to personal projects, art, etc., but when it comes to work it depends on the field. In the example of a GW, it would make sense for them to use a model to generate a specific article, or to generate conclusions or introductions for that article, but if they were to just say “generate me so-and-so article for my personal blog,” then again, that is just abusing it. My personal belief about how AI should be used is a bit different, as there is a fine line that is hard to define. In the context of programming and technology, I think you get more out of the model if you actually use it to learn, rather than having it solve a programming question for you, copy-pasting the result into your code, and changing a few variables. I hope this section gives you a good understanding of where I am going with this and where the article is heading given this personal bias (which every human has).

Note: The author’s bias will not affect the general research put into this article, and it is important that you (the reader) stay completely open minded when reading these thoughts. Understand that this prediction of where AI and ML technologies are going is based purely on hundreds of hours of experience working with AI and ML models, both for workloads (as companies have assigned the author to use them) and for personal education in topics such as quantum mechanics, mathematics, UI and UX design, programming, and conceptual design.

Now let us finally get to the questions everyone seems to ask me from day to day about the future of AI and ML technologies.

Question #1 — Based on your experience, are programmers in danger?

Short answer: no, development and cyber security jobs, or really any technological job, are not in danger. Here is the long answer. I have been working with AI and ML models and platforms like Watson, Jasper, GPT, and OAI (OpenAI), and with libraries like TensorFlow and SpeechRecognition in Python; that being said, I have spent hundreds of hours utilizing AI and ML models for personal and business-related tasks. Given my experience and the currently popular AI and ML technologies like Jasper and GPT-3/4, I still do not think anyone is in danger, because even if you manage to get some good code out of a model, it will most likely be outdated or no longer really work by the time the model is trained. GPT-4 has already proven to be drastically more advanced than GPT-3, but why is that exactly? GPT-4 went through a different phase in training that was not used in previous models: unlike previous GPT models, the developers decided to use the GPT-3 model to help train GPT-4, which is commonly known as TL (Transfer Learning). Because an already advanced model was used to train the newer model, not only did the training process go by faster, it was also executed a lot more easily than with the previous training method. Despite these changes, that data will still be old and outdated four years down the line. For example, in the case of GPT, if you tell it to generate a program or a set of code instructions, there is a high chance, even with the newer model, that the code will fail a year or so later simply because of how much languages change. And despite GPT-4 being much more advanced, the bias in the model is still very present and still shows up frequently if you push it to work a specific way. This is why programmers are not in danger: the most popular models may be able to generate code, but that does not mean they will always be accurate or perfect, simply because humans are not perfect and, well, these models are made by humans. The following text is a secondary answer I gave in a Discord server along these same lines.

I have multiple jobs, both as a contractor and as a contributor to some pretty technical companies, and most of my work is programming, code security / code review, and writing. You will see where the writing job comes into play with this statement. So, with AI/ML models becoming stronger and much more extensive, I think that sure, some jobs may be taken away, but I do not fear that, simply because from my experience I am already, as a human being, 30,000 times better than an ML model. The only reason ML and AI models are hyped up is because they are fast in response time. Take my writing job, for example: I generally write articles of around 11,000 words at a time, and the most I have ever written was 25,000 words in a single day. Models such as GPT-3 and Jasper AI are technologies that a few of the companies I contract for as a writer make me use, and from that personal experience there is no difference between an AI/ML model and a human other than speed and emotion. When you think of programming, a lot of people want results fast because it makes sense for production, but in all honesty, with the way code generation is going (horribly), it will not be able to start taking over jobs for at least the next 10–20 years. I say this because language processing AI/ML models take years upon years to train and require manual assistance during the training process and constant maintenance from the teams that develop them. Jasper AI is so fucking garbage at generating text that I end up having to rewrite 90% of the 4,000-word conclusions it writes for the articles the company makes me write. I have had past and current jobs utilize AI/ML models for the entire workload, and I can tell you purely from experience that it takes more work to generate programs and articles than to write them yourself (if you are a good writer). For example, the following text document is a prompt I had to use in GPT to generate the most BASIC article, which I had to edit at least 5 times just to get it to work.

TEXT DOCUMENT NOT SHOWN DUE TO PRIVACY REASONS

In conclusion: there is a possibility of it becoming a thing in the future, but I doubt models will take 90% of the jobs, because I know programmers, including myself, who work faster than most models like GPT can. A good developer will build in under an hour a project that a model would take 15 days to generate. It is just slow right now. Again, it is possible, but I do not fear it, based on the thousands of hours I have spent working with models like GPT, Jasper, and other language processing models.

This statement clears up and backs everything I was saying: even models that specialize in specific topics, like Jasper AI for writing, are never perfect, and it still takes hours to fine-tune the results, which in some cases is still better than a few days of writing for authors or a few months of teaching for teachers. Given my personal experience, I still maintain, despite the hype, that it currently will not take jobs, and even then it will still take people and programmers to maintain it.

Question #2 — Do you think AI/ML models can pose a useful set of tools for workers?

This one is a yes and no answer for me. Personally and generally speaking, yes, I do think it can, depending on the job; but if you have a job that requires much deeper logical processing, such as development, then no, I do not think it will pose a useful environment or setting for general workers. Some jobs NEED to be automated; after all, that is why technology was developed in the first place, to take over the dangerous or horrid conditions humans used to have to work in, and the same applies here: it just automates very tedious tasks that are not worth a human’s time. Despite this pro, there is a con as well, and that is that code generation is still a very flawed concept and still has a lot of kinks that the developers of these models and frameworks need to work out, both in the math and in the general computer science behind it. If we were to use GPT-3 and GPT-4 for every bit of code generation, despite them being NLPMs, we would find many vulnerabilities and many bugs. When the developers of GPT-4 showcased it, the model showed a lot of what it can now do in terms of advanced learning. However, that still does not rule out the question of how safe, or how riddled with problems, the generated backend or even frontend code actually is. Sure, the code was nice and everything they showcased was nice, but let’s look at the bigger picture: should we really trust an NLPM-style AI/ML model to generate entire code bases dedicated to backend development? I will admit, from personal usage, that the GPT family of NLPMs is truly amazing and works extremely well at understanding the questions I ask it, especially when it comes to quantum mechanics and quantum mathematics. If you ask it to explain how the mathematics works behind a specific quantum function, it will genuinely do well. For example, a prompt I asked a while back when I first started studying quantum mechanics was “mathematically explain a quantum wave function”; of course the real prompt was a lot deeper and more detailed, but here is the response it gave.

In quantum mechanics, the state of a quantum system is described by a wave function, denoted by the symbol Ψ. The wave function is a mathematical function that describes the probability amplitude of finding a particle in a particular location in space and time.

The wave function Ψ is a complex-valued function, meaning it has both a real and imaginary part. In mathematical notation, the wave function can be written as:

Ψ(x, t) = A(x, t) e^(iφ(x, t))

Where:

x is the position of the particle

t is the time

A(x, t) is the amplitude of the wave function

φ(x, t) is the phase of the wave function

i is the imaginary unit, defined as the square root of -1.

The amplitude A(x, t) represents the probability density of finding the particle at position x and time t. The phase φ(x, t) is a complex-valued function that determines the interference pattern of the wave function, which describes the behavior of the particle.

The wave function satisfies the Schrödinger equation, a partial differential equation that describes the evolution of the wave function over time. The Schrödinger equation is given by:

iħ ∂Ψ/∂t = H Ψ

Where:

i is the imaginary unit

ħ is the reduced Planck constant

∂Ψ/∂t is the partial derivative of the wave function with respect to time

H is the Hamiltonian operator, which describes the total energy of the quantum system.

The Schrödinger equation is used to calculate the evolution of the wave function over time, and thus the probability of finding a particle in a particular state at a given time. The wave function and the Schrödinger equation are fundamental concepts in quantum mechanics and are used to explain many phenomena, including the behavior of electrons in atoms and the properties of superconductors.

An example of a simple wave function:

Ψ(x, t) = A sin(kx - ωt)

Where:

A is the amplitude of the wave

k is the wave number, related to the wavelength λ by the equation k = 2π/λ

x is the position of the particle

ω is the angular frequency, related to the frequency f by the equation ω = 2πf

t is the time.

This wave function represents a sinusoidal wave that propagates through space and time. The amplitude A determines the maximum displacement of the wave, while the wave number k and angular frequency ω determine the wavelength and frequency of the wave, respectively. The wave function satisfies the Schrödinger equation, and can be used to calculate the probability density of finding a particle in a particular location at a given time. However, it’s important to note that this example is a simplified version of a wave function, and that more complex wave functions are typically used to describe quantum mechanical systems.
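To make that quoted example a little more concrete, here is a tiny numeric sketch in Python that simply evaluates the last formula, Ψ(x, t) = A sin(kx - ωt), at a few points. The constants are arbitrary values I picked for illustration, not anything tied to a real physical system.

import math

A = 1.0                       # amplitude
wavelength = 2.0              # arbitrary wavelength (lambda)
frequency = 0.5               # arbitrary frequency (f)
k = 2 * math.pi / wavelength  # wave number, k = 2*pi/lambda
w = 2 * math.pi * frequency   # angular frequency, omega = 2*pi*f

def psi(x, t):
    # The simple example wave from the quoted response: Psi(x, t) = A sin(kx - wt)
    return A * math.sin(k * x - w * t)

for x in [0.0, 0.5, 1.0, 1.5, 2.0]:
    value = psi(x, 0.0)
    print(f"x={x:.1f}  psi={value:+.3f}  |psi|^2={value ** 2:.3f}")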

Anyone with a base understanding of mathematics and quantum mechanics can easily follow that quoted explanation, and especially with the example it was really well done. However, this is where the GPT family of NLPMs and its implementation skills start to fail. If you ask it to explain QKD in simple terms and then ask it to implement an example of it using the BB84 protocol in a language like C++ without third party libraries, it will fail over and over to implement it and just keep breaking, even if you tell it to continue from the last line (a rough sketch of the sifting step of BB84 is included below, just for context on what that request involves). This is where ML becomes a problem: even if a model is trained for specific purposes like code generation, language processing, image processing, or computer vision, it should still be used to educate humans. In a sense, GPT is a faster Google, and that is all it is. This is where people often get confused when they say GPT is great for jobs; it really is not, and if someone plans to use GPT as the actual execution of a job, their company within the next year will be hit hard not only with vulnerabilities but with bugs, and the company will lose money FAST. However, if a company tells its employees to use it educationally, walks through the problems with it, and teaches proper and ethical use cases, then the company and its employees might actually benefit from it.
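For context on what that request actually involves, here is a very rough sketch in Python (standard library only, and purely classical) of the sifting step of BB84: Alice encodes bits in randomly chosen bases, Bob measures in randomly chosen bases, and the two keep only the positions where their bases happened to match. Nothing here models real quantum behavior; it is just the protocol’s bookkeeping, which is roughly the kind of structure I was hoping the model would produce in C++.

import random

N = 16  # number of "qubits" in this toy run

alice_bits = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]  # + = rectilinear, x = diagonal
bob_bases = [random.choice("+x") for _ in range(N)]

# If Bob measures in the same basis he recovers Alice's bit; otherwise his
# result is random, which stands in for what the real physics would give him.
bob_results = [
    bit if a == b else random.randint(0, 1)
    for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
]

# Sifting: both sides publicly compare bases and keep only the matching positions.
sifted_key = [bob_results[i] for i in range(N) if alice_bases[i] == bob_bases[i]]
print("matching bases:", sum(a == b for a, b in zip(alice_bases, bob_bases)))
print("sifted key:", sifted_key)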

Question #3 — Do you think the popularity will die down?

Yes. Like everything AI and ML related, its popularity will sink after a year or so and it will become less used than it was supposed to be. Let us all be honest here: AI and ML are not perfect, BUT they will be used in many things soon enough. Will it get so popular that everyone uses it, and uses it to dictate their everyday lives? No. There will always be that small group of people that tries to make AI/ML overly hyped; for example, the people who, the second something AI related comes out, hype it up and freak everyone out. This is also part of why I think jobs will not be taken away: the hype scares people precisely because they do not understand the technology. Some guy on Reddit, the day before I wrote this article, was talking about how they were using GPT to test human neurons, but no, GPT is an NLP model; it was not designed for and does not specialize in that field, and that is not how NLPMs work. The people that hop on social media and make videos about how AI and ML models will take over the world are the very reason people will lose interest and stop caring; they all hop on the same social media high horse and get knocked down once they have their views and likes for making a GPT related video, and that is all it is to most people. If people were truly concerned, we would see the number of technology classes being taken and people registering for them skyrocket, because someone who thinks they are in danger will secure their spot in the technological world as someone who develops these technologies. Honestly speaking, yes, the popularity will die down: kids will continue to use it to cheat on their exams and get kicked out of college, people will use it to dictate their lives, realize where they went wrong, and then stop using it. This is the same thing that happened with Jasper and other AI/ML-like models; it is the same process on repeat. Meanwhile, while people who do not understand the technology panic, the people who do understand it keep a calmer mindset and just go on their way. Can we say with absolute certainty that AI and ML WON’T take over jobs? According to Peter Unger, no, because absolute certainty does not exist; but even outside of theory, no, we still cannot be certain, though we can make reasonable predictions based on prior experience and knowledge of how these technologies work.

Question #4 — How will GPT and other models affect the lives of people?

With social media and the way people are using GPT as a scare tactic for political interest, I can only assume it will affect people negatively, as it already has. If you look at all the talk around GPT, you will notice that roughly 34% of the videos out there are about abusing it, another 50% are about how GPT will take over jobs, and the last 16% are about how to use it properly and how GPT works, which gets buried under the mass of videos expressing hatred towards the model and its developers. AI and ML are already abused, Jasper being one example, but with the rising popularity of GPT and how mind-blowing it became to people, it is being abused even more. There are articles after articles explaining how students are getting caught cheating, getting kicked out of class, getting kicked out of the military, and so much more for using this model. So, if people do not learn anything soon, then abused AI and ML will result in a negative outcome thanks to a little phenomenon called “karma”. The idea that someone will abuse a model and then get fired or kicked out of school only makes sense, whereas people who use it properly as a learning resource will not really run into that issue. So, honestly speaking, based on what I, the author, have seen and experienced: if it keeps being used this way, it will affect you negatively.

Question #5 — Do you think AI and ML is a good learning resource?

YES! YES I DO! From the hundreds, possibly thousands, of hours I have spent using ML and AI models, I have learned easily 20 times more than I ever would have in the same time spent surfing Google for the same answers. I cannot complain about AI because I use it to teach me things. For example, if I am struggling with an error in my code, I do not ask it to fix it for me; I ask it to give me a description of what is wrong, how I can fix it, better practices, and why so-and-so fix might have been the longer and more annoying road to go down. The way I, and many others I know, personally use AI is the better way of using it because you can learn from it; that is exactly it → ethical usage. If you use something properly, you will get the results you want. For example, let’s make up a question for GPT to answer for us.

Why is the binary number or set of binary numbers 010110 resulting in 4168?

In order for us to properly learn from the process, we can reformat the question to ask for the underlying arithmetic, like so:

Why is the binary number, or set of binary digits, 010110 resulting in the answer of 4168? Can you explain this using binary arithmetic and draw a basic diagram? Explain it in depth as well so I can build a base understanding of the binary representation.

Asking it this will yield…

Sure! In order to understand how the binary number 010110 corresponds to the decimal number 4168, we need to first understand how binary arithmetic works.

Binary arithmetic is a system of arithmetic that uses only two digits, 0 and 1, to represent numbers. Each digit in a binary number is called a bit, and the position of each bit represents a different power of 2. The rightmost bit represents 2⁰ (which equals 1), the next bit to the left represents 2¹ (which equals 2), the next bit to the left represents 2² (which equals 4), and so on.

So, let’s break down the binary number 010110:

The rightmost bit is 0, which represents 2⁰ (which equals 1)

The next bit to the left is 1, which represents 2¹ (which equals 2)

The next bit to the left is 1 again, which represents 2² (which equals 4)

The next bit to the left is 0, which represents 2³ (which equals 8)

The next bit to the left is 1, which represents 2⁴ (which equals 16)

The leftmost bit is 0 again, which represents 2⁵ (which equals 32)

To convert this binary number to decimal, we simply add up the values represented by each bit:

1 x 2¹ + 1 x 2² + 0 x 2³ + 1 x 2⁴ + 0 x 2⁵ = 2 + 4 + 0 + 16 + 0 = 22

Therefore, the decimal representation of the binary number 010110 is 22.

However, you mentioned that the binary number 010110 resulted in the answer of 4168. This is not correct, as 4168 is not the decimal representation of the binary number 010110. It’s possible that there was a mistake in the conversion or calculation process.

As for a diagram, we can represent the binary number 010110 using a positional notation system, where each position represents a power of 2:

0 1 0 1 1 0

2⁵ 2⁴ 2³ 2² 2¹ 2⁰

The value of each bit can be calculated by multiplying the bit by the corresponding power of 2, and then adding up the results:

(0 x 2⁵) + (1 x 2⁴) + (0 x 2³) + (1 x 2²) + (1 x 2¹) + (0 x 2⁰) = 22

I hope this explanation and diagram helps clarify how binary arithmetic works and how to convert binary numbers to decimal.

This is a great example of how we can use ML and AI technologies to learn! When I was working on my programming language, SkyLine, I needed to add binary calculations and general math on binary numbers, since the point and objective of the language required it. So, I asked the GPT family of models to explain how the concept worked and how an implementation might look. With a good understanding of the concept, I was easily able to implement binary numbers directly in the language, which was pretty cool, especially given what it taught me along the way. Working it through on paper with external resources as well showed how much it helped and how much sense it made.
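Out of curiosity, here is a tiny Python sketch that mirrors the positional notation explanation the model gave: each bit is multiplied by its power of two and the results are summed. It is just a quick way to double-check the model’s answer of 22, on paper or in code.

def binary_to_decimal(bits: str) -> int:
    # Convert a binary string like "010110" to its decimal value.
    total = 0
    for power, bit in enumerate(reversed(bits)):  # the rightmost bit is 2**0
        total += int(bit) * (2 ** power)
    return total

print(binary_to_decimal("010110"))  # 22, matching the model's answer
print(int("010110", 2))             # Python's built-in conversion agrees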

Conclusion

The AI and ML world is bound to affect us at some point, but the result can be good or bad; in the end it depends on how humans choose to work with it, how heavily they are willing to use it, and in what state they want to use it. The whole reason humans are over-worrying about this is their viewpoint towards it, and how we keep abusing it instead of using it correctly. If humans took the chance and made the effort to learn from these models, they might actually learn something new from the model they are using instead of over-worrying about the world being controlled by AI and ML. Open your mind up to new ideas, and better yet, explore those new ideas and investigate them! I hope you guys enjoyed this article; it was a bit weird for me to release, but it was nice to actually write, and I feel sharing my opinion given my experience was worth it.

Until the next article: I appreciate the view and the read if you made it this far.

~ Totally_Not_A_Haxxer OUT!

If you want to keep up with me make sure to follow me on other social media forms!

https://instagram.com/Totally_Not_A_Haxxer
