Next-Step AI Implementation

Good Day, Adam
Bouncin’ and Behaving Blogs TOO
Jun 28, 2024

So, what comes next after this VtoV (Visual to Verbal / Verbal to Visual) state we are currently in, in this segment of the AI era?

Well, it does seem like many resources and videos are suggesting swinging VtoV and Generative AI back towards robotics to go with the computers-as-a-service version of the future. However, as we all know, the result of that is just a push towards expanding a country’s military, pushing people onto the street, and leading CEOs into some cocoon state where they live and breathe their long-awaited information utopias.

What if we could jump back to the computers-as-a-tool version of advancement (where humans continue to be at the helm of their own existence), and develop improved performance rather than pushing for automatic behaviors from both humans and their so-called AI response allies?

Having an end result where users turn into beings that type out acronyms for the rest of their lives doesn’t sound fulfilling, but for those making the trade-offs behind this, such instant transactional interplay will be very rewarding.

Universally “paid” purchases through single-word phrases: doesn’t that sound like all those engineering and computer science degrees and boot camps paid off?

Apple Intelligence, at first glance, seems like part of the current VtoV climate, but it is a worthwhile attempt to turn the dial back towards tool construction. After all, what good is life if you replace the cycle of exchanging knowledge and tool development with automation, only to then call it equality?

So, the question is: how can you focus on building new tools that will help humans communicate using AI, or using CenterC2 (Centralized Computer Communication: when computers and devices eventually talk to you from their ongoing prompt training)?

You promote new types of behavior relationships with AI and CenterC2 instead of revolving back to science fiction concepts.

Since the ENIAC, workers with computers (and calculators, for that matter) have been focusing on PRINT, stamping the information on recorded pottery (as far as the civilization wheel goes). If we now go backwards to our tools serving us, our creative efforts will go into making our services shine for our utopian vision — if it is really OUR utopian vision.

Everyone has a different utopian vision, but there is a drive for a universal state of automation that delivers a preconceived utopian state.

Such preconception is derived from religion, philosophy, fantasy, cult followings, secret societies, fan clubs, literature, and even just a few people who meet in a dark room once a month.

While some love to fantasize about a world where we continue the Master-Slavery trend to no end, since there are people who continue to believe they are the rightful heirs of whatever claim to power they may have (out of a sense of deserving service to cater to their own power and utopian dreams): let’s not go there.

Would you rather have billionaires spending their days talking to whatever computers can generate as perceived new types of existences, about “their own” VtoV interpretations of new realities and realizations of a state of life, while such “beings” tell their well-endowed followers how to biologically and genetically warp the Universals (those unemployed and on universal income) into… well, whatever such billionaires and their VtoV CenterC2s decide they should become… or would you rather push the wave of innovation in another direction? Exhale.

Did you know you don’t have to listen to those pushing for a Star Trek or Star Wars take on our future, and you most definitely do not have to listen to those people who want to “build a more open and connected” system for a select few who support a state of a utopian vision?

We have all heard those voices of a sort of directional narration:
“ You have to keep innovating.”
“ You have to stay ‘relevant’ in today’s market. ”
“ Increase ‘our’ protection by advancing today’s technology. ”
“ Robots are the future. ”
“ Technology should ‘serve’ us, not the other way around. ”
“ If you’re not ahead of the curve, you’re way behind. ”
“ If you don’t have the latest, you’re in last place in this industry. ”

But do you ever stop and question why journalists, authors, and YouTubers galore push out similar messages?

These are Human Prompts.

If these promote you, or prompt-mote you to take a stance through your consideration, a part of your brain’s input activity has realized a need to self-VtoV.

What prompts humans is, and has always been, time-related. You are encouraged to buy into something because you will receive a “good” exchange of your time for this interpretive version of a level of time which we all (seemingly) follow and are (seemingly) aware of (despite our different circumstances and ways of living).

Language allows you to move into developing states of understanding, but to develop requires a notion of time. Without such a connection to time, a human, humanity, would cease to go further. An input and an output; an inhale and exhale are necessities developed through and by a sense of time. We know our limits through evaluating how time affects us.

Sometimes a human prompt is an “order”; other times it’s an allusion to something that could be done, i.e. “We should get ready for the early flight tomorrow”. When you hear that, you take into consideration whether other things are already on your plate, and how (or when) you should get ready.

When humans deliver prompts to a computer, or an AI system, humans perceive generative results as a value to their own state of time (their opinions, their tastes, and the leveled response of Interpretive Exaction).

Interpretive Exaction is one’s own perceptive desire to attain by demanding an expected outcome, often to the point of continuous trial and error. Often this is considered testing, or just part of the process of developing programs. But when it is applied as an input to a working function, a level of expectancy derives from the grounds on which such a function exists.

If one expects to receive their parking ticket after a parking machine has validated it, and the machine gives them their parking ticket, input has led to a desired output.

Inputs are a type of “demand”. Even when one fills out a form, the lettering of one’s name in a set area is a demand for a receiver or evaluator of the form to recognize that name and its characteristics.

Dem û Dem

One of the earliest forms of the usage of dem comes from Kurdish (Cyrtian). The phrase above, written in the Latin alphabet, means time and time. This is probably where the Latin word “diem” (which means “day”) came from.

Dem û Dema mêran = the Time and Time of men.

After all, a record is a captured state of time, and an output or a result is a type of record.

The real question of demand and time, and our state of technology, is how do we push away from a demand-and-result era where the result’s purpose is to serve the demand?


Computers were designed to calculate inputs and outputs, so maybe let’s centralize demand-to-result behavior within processes in which a human expects a result to exist on the basis of human development.

Now comes the tricky part: Do we create based on automating-by-demand, or do we create to change how we demand?

Generating is an early-stage form of automation, and there are many machines (so many machines) running on automation cycles with adaptive pre-programmed behaviors. It is through generating that results are still result-interpretive (for now). At the point at which all results become result-definitive, further innovation might cease to exist: thus, we would all communicate through cyclical behaviors.

But, cyclical behaviors are easier to control from the point of leverage, through a lens of examination (of results) based on the existence of the efforts of others.

Therefore, how do we create to change how we demand?

Create to Change — now there’s a complexity.

When you create, you build towards a result, and when you change something, you give a result a different result.

To do this, we’re going to have to make a sort of Behavioral Calculus. This will involve research done by the Society for Mathematical Psychology, and a new take on the Calculus of Variations. It has been some time since a new form of mathematics took off, but this would be a sound initiative towards purposeful change (if not the purpose of change, itself).

Now, how do you change demand?

You create a tool. Surely, you’re asking: how does one create a tool, or have a tool in existence, if they do not have a result in mind?

Well, what do you do when life gives you lemons?

If we are measuring the creation to change, an outcome of change can become new forms of variables that shape how we value our time — or how we use our time.

In other words:
The measure of starting a restart (creating to change) is equal to the change in the proximity of what is permitted in a state of measured time (limitation of what can or could be changed).
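
One hedged way to write that sentence down, purely as an illustrative formalization (none of these symbols come from an established framework), is to let P(t) stand for the “proximity of what is permitted” at a moment t, and to let the window [t₀, t₁] stand for the state of measured time, the limitation of what can or could be changed:

```latex
% purely illustrative; C, P, t_0 and t_1 are placeholder symbols
\underbrace{C(t_0, t_1)}_{\text{measure of starting a restart}}
\;=\;
\underbrace{P(t_1) - P(t_0)}_{\text{change in the proximity of what is permitted}}
```

Nothing deeper is claimed here: creating to change is measured only against how much the permitted region moved inside the window of time you were watching.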

Good, let’s get into what this could imply:

A situation which involves time and doesn’t require our request to do it… It can’t be promoted by automation… And we have to be alert or aware of our time — otherwise change would not be physically noticeable.

Thinking…

No = Thinking.

Thinking involves time, and we’re not requesting ourselves to do it (unless we prefer to pretend to act like we aren’t ourselves, i.e. to attempt a self-motivated action or personal discussion prompt like “ Think harder”, or “ Come on, what’s the answer?” — when in turn we dually apply focus to an existing stream of thought from which we are continuing our thought process).

How do you actively or physically think so that change could be recognizable (a state of awareness), and so that you are able to shape an A.I. outcome by developing a tool, or a State of Tool, through a thought process?

A physical version of thought could be measured in a multitude of ways, one of which is note-taking.

Note-taking is how a human records thought and interpretation. Even to mimic words from another source, one has to organize their notes while recording them. Yes, even if it is just the simple interaction of pressing the “record” icon on an app or device, one knows that the information is being retrieved.

Let’s examine physical note-taking.

To create a tool out of AI (a tool as a given outcome), we have to explore the types of tools that we could call AI Recognition Tool Types (each type below is followed by a short, illustrative sketch in code right after its list):

a) The Aid
b) The Solution
c) The Application
d) The Regulation
e) The Reparation
f) The Switch
g) The Stop Signal
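
Before walking through each type, here is a minimal sketch of the taxonomy as a data structure. Everything here (the `ToolType` enum, the `Recognition` dataclass, the field names) is my own illustrative naming, not something the tool types above prescribe; it just gives the later sections something concrete to point at.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class ToolType(Enum):
    """The seven AI Recognition Tool types walked through below."""
    AID = auto()
    SOLUTION = auto()
    APPLICATION = auto()
    REGULATION = auto()
    REPARATION = auto()
    SWITCH = auto()
    STOP_SIGNAL = auto()


@dataclass
class Recognition:
    """A user's recorded input (a note, a scan, a claim) treated as a claim."""
    text: str
    tags: list[str] = field(default_factory=list)


# The note example used throughout the rest of this piece.
note = Recognition("There are 100 different variations of plant species XY.")
```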

Let’s start with a) The Aid.

When you input your record, like taking notes or having an AI scanner interpret your facial reaction after eating a wedding biscuit/cookie, this will be known as your Recognition. This implies that the manner of your input is an act or condition representing your derived data as a claim.

The AI Recognition tool type, the Aid, is a portrayal of assistance with your recognition. To be of assistance to your claim, this tool type has to establish and interpret your recognition by pulling out the main subject and developing it. Let’s analyze, shall we?

  • The Aid tool type finds that “Plant Species XY” is the subject in the note, “There are 100 different variations of plant species XY”.
  • Data Sources: This Aid tool would develop data sources to explain this Recognition. These could be different quotes that explain this claim, or places where this claim has shown up elsewhere word-for-word.
  • Recognition-based Demands: This Aid tool would interpret demands from the Recognition. An example could be: “ Compare and contrast the 100 variations of plant species XY and find dominant variations.”
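
To make the Aid concrete, here is a minimal sketch of how an Aid-style pass over the note example could behave. The function and field names (`aid_tool`, `AidResult`, the regex heuristic) are mine and purely illustrative; a real Aid would lean on an actual language model rather than a pattern match.

```python
import re
from dataclasses import dataclass


@dataclass
class AidResult:
    subject: str               # main subject pulled from the recognition
    data_sources: list[str]    # follow-up lookups that could explain the claim
    demands: list[str]         # demands interpreted from the recognition


def aid_tool(recognition: str) -> AidResult:
    """A toy Aid: find a subject, then propose sources and demands around it."""
    # Naive subject heuristic: grab a "<word> species <name>" phrase if present,
    # otherwise fall back to the last two words of the note.
    match = re.search(r"\b(\w+ species \w+)\b", recognition, flags=re.IGNORECASE)
    subject = match.group(1) if match else " ".join(recognition.split()[-2:])

    data_sources = [
        f"Quotes that explain the claim about '{subject}'",
        "Places where this claim has shown up elsewhere word-for-word",
    ]
    demands = [
        f"Compare and contrast the variations of {subject} and find dominant variations.",
    ]
    return AidResult(subject, data_sources, demands)


print(aid_tool("There are 100 different variations of plant species XY."))
```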

Next, let’s check out b) the Solution:

  • The Solution AI Recognition Tool type verifies the recognition and in many ways categorizes the claim.
  • Recognition as a source: This Solution tool would accept the recognition as a tool to be used in further AI interpretation. Many of these tools can act together. Establishing a claim as a source can lead to a more focused interplay of AI tool behaviors.
  • Compare Validation: This Solution tool would use the Aid tool(s) to find where the recognition was used before, and it would compare the recognition to the form and function of previous claim usage elsewhere. This could prompt the user, asking whether they would like to explore another focus which was brought up in, say, a report where this same series of words was used. This Solution tool would draw up ways to tie those similar phrases back to this recognition.
  • Store Information As…: This Solution tool would be used to store the Recognition in an app, on a website, even in a generated database. The user can also choose to store the information as a “False Statement”, or as a list with other recognition statements.
  • Grouping Behavior: This Solution tool would group this recognition with the Compare Validation tool as a type of category. However, a grouping behavior can be adaptive as well. Say the user writes a note along the lines of “ During the winter, plant species XY can develop 15 more variations based on the changing temperatures”; the grouping behavior may shift to include “Plant species XY exhibits adaptation characteristics”.
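
A minimal sketch of how a Solution-style store could hang together, under the same caveat as before: `SolutionStore` and its word-overlap check are hypothetical stand-ins for the verification, storage, and grouping behaviors described above.

```python
from dataclasses import dataclass, field


@dataclass
class SolutionStore:
    """A toy Solution tool: verify, compare, store, and group recognitions."""
    known_claims: list[str] = field(default_factory=list)   # stand-in corpus
    groups: dict[str, list[str]] = field(default_factory=dict)

    def compare_validation(self, recognition: str) -> list[str]:
        # Where has similar wording been used before? (naive word-overlap check)
        words = set(recognition.lower().split())
        return [c for c in self.known_claims
                if len(words & set(c.lower().split())) >= 4]

    def store_as(self, recognition: str, label: str) -> None:
        # e.g. label="False Statement" or label="Plant XY notes"
        self.groups.setdefault(label, []).append(recognition)

    def grouping_behavior(self, recognition: str) -> str:
        # Adaptive grouping: shift the category if the note hints at adaptation.
        if "winter" in recognition.lower() or "temperature" in recognition.lower():
            return "Plant species XY exhibits adaptation characteristics"
        return "Plant species XY variation counts"


store = SolutionStore(known_claims=[
    "A report noted there are 100 different variations of plant species XY.",
])
note = "There are 100 different variations of plant species XY."
print(store.compare_validation(note))
store.store_as(note, "Plant XY notes")
print(store.grouping_behavior(
    "During the winter, plant species XY can develop 15 more variations."))
```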

Next up, c) The Application:

  • The Application AI Recognition Tool type transitions the behavior of the recognition into a program or app-like state. Similar to how humans “personify” objects, this AI tool can “applicate” recognition input.
  • Function of [Recognition] — in this case, Plant XY: This Application tool applicates the recognition to work in a function (to elicit a type of acting behavior). This function can be used elsewhere in the AI-user interplay for further adaptation and change.
  • *Narrative Programming: This Application tool collects the recognition and interprets it as code. Instead of a prompt like “Make code of the variations of Plant XY”, the Narrative Programming tool can dissect the wording, even in this phrase, to make an interpretive program (derived from finding the recognition’s acting behavior, a purpose or intent). This might even help AI improve focus capabilities while bridging away from “hallucination” functions.
  • *Content Analysis: This Application tool is a combination of the Narrative Programming and Function of [Recognition] tools; it recognizes the programming strength of the content created by this recognition while it creates a program to explore different elements of this claim based on its Narrative Programming.
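
Here is a toy version of what “applicating” a recognition might look like. Both functions are hypothetical illustrations: `function_of` turns the claim into callable behavior, and `narrative_programming` emits an interpretive program as text, standing in for the code an actual system would generate.

```python
import re
from typing import Callable


def function_of(recognition: str) -> Callable[[int], str]:
    """A toy 'Function of [Recognition]': wrap the claim as callable behavior."""
    count = re.search(r"\b(\d+)\b", recognition)
    total = int(count.group(1)) if count else 0

    def variation(i: int) -> str:
        # Acting behavior elicited from the claim: enumerate its variations.
        if not 1 <= i <= total:
            raise ValueError(f"the claim only covers {total} variations")
        return f"plant species XY, variation {i} of {total}"

    return variation


def narrative_programming(recognition: str) -> str:
    """A toy Narrative Programming pass: emit an interpretive program as text."""
    count = re.search(r"\b(\d+)\b", recognition)
    total = int(count.group(1)) if count else 0
    return (
        "def explore():\n"
        "    # purpose inferred from the recognition: survey plant species XY\n"
        f"    for v in range(1, {total} + 1):\n"
        "        yield f'plant species XY variation {v}'\n"
    )


note = "There are 100 different variations of plant species XY."
print(function_of(note)(7))            # plant species XY, variation 7 of 100
print(narrative_programming(note))     # a generated program, returned as text
```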

Now, onto d) The Regulation:

  • The Regulation AI Recognition Tool type would be a sort of philosopher combined with a data judge. However, the Regulation tool is not a corrector, but rather an inspector.
  • If [Recognition] then Result: This Regulation tool captures an AI interpretive element similar to the Narrative Programming tool in that it establishes a function or program through a recognition. However, this is a generative “If statement” which examines and interprets a corresponding (target) element. In other words, this If tool defines the direction of the content, and how it might function as a demand.
  • Dialog Countering Recognition: This Regulation tool is a statement which responds to a recognition by questioning the behavior and validity of the content mentioned in this recognition. This will promote human prompting; however, the user can stop the argument if the AI keeps countering. This tool would essentially act as a revision process through the lens of a sort of outcome interrogation.
  • Questions in Response: This Regulation tool is similar to the Dialog Counter Recognition tool, except that it develops questions as a follow-up to the Recognition, instead of participatory interrogation.
  • Recognition as Proof: This Regulation tool functions as a logical interpretation of the recognition, determining (using the If tool) whether the recognition could be proven true, or whether it is more likely to be proven false.
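
A small sketch of a Regulation-style inspection, with the usual caveat: `regulation_tool` and its canned counter-questions are placeholders for behavior a real system would generate, and the “proof” check is just a substring lookup against a stand-in corpus.

```python
from dataclasses import dataclass


@dataclass
class RegulationReport:
    if_then: str
    counter: str
    questions: list[str]
    likely_provable: bool


def regulation_tool(recognition: str, corpus: list[str]) -> RegulationReport:
    """A toy Regulation pass: inspect a recognition rather than correct it."""
    # If [Recognition] then Result: phrase the claim as a testable direction.
    if_then = f'If "{recognition}" holds, then a catalogue of those variations should exist.'

    # Dialog Countering Recognition: push back on validity (the user may stop this).
    counter = "What distinguishes a 'variation' here, and who counted them?"

    # Questions in Response: follow-ups instead of participatory interrogation.
    questions = [
        "Where was this count first recorded?",
        "Does the count change by season or region?",
    ]

    # Recognition as Proof: is the claim supported anywhere in the corpus?
    likely_provable = any(recognition.lower() in doc.lower() for doc in corpus)
    return RegulationReport(if_then, counter, questions, likely_provable)


report = regulation_tool(
    "There are 100 different variations of plant species XY",
    corpus=["Field guide: there are 100 different variations of plant species XY."],
)
print(report.likely_provable)   # True against this toy corpus
```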

Here comes e) The Reparation:

  • The AI Recognition Reparation Tool Type opens up the Recognition to the Solution tool type. If the Solution verifies the recognition content, the Reparation tool type makes the recognition more manageable and easier to edit or revise, both on the AI’s side and the user’s.
  • Recognition Reformed as Type: This Reparation tool examines the style of the recognition and whether the clarity could be improved if the manner or portrayal of the recognition were to meet a certain type or category. The user can opt to make these adjustments if they feel that the indicated adjustments would strengthen the clarity of their recognition.
  • * Characteristics of Recognition: This Reparation tool explores target elements in a recognition. A target element is what the AI identifies as a key modifier shaping the acting behavior of a phrase or manner. In the note example, a possible target is the word “variations”, as it modifies the key subject “plant species XY”. Once a target element has been identified, the AI will draw out the characteristics of such variations. If a user types in “autumn perennials” as the next recognition input, this Characteristics of Recognition tool will chart out 100 variations to analyze which plant XY variation reoccurs each autumn.
  • * Listing Details of Recognition: This Reparation tool is similar to the Characteristics of Recognition tool in that it locates a target element. However, it can also locate and identify with multiple target elements. For simplicity, let’s focus on the same target element, “variations”. When the Listing Details of Recognition tool identifies this target element, it lists out the 100 variations. If there are other target elements like say, six-leaved plant species XY, this Reparation tool will list out all six-leaved plant species XY, and merge the data with another target element list.
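
A sketch of the Reparation side, again with invented names: `reparation_tool` reforms the note, picks out “variations” as the target element, and expands it into an editable listing, which is roughly the shape of behavior the two starred tools above describe.

```python
import re
from dataclasses import dataclass


@dataclass
class ReparationResult:
    reformed: str
    targets: list[str]
    listing: dict[str, list[str]]


def reparation_tool(recognition: str) -> ReparationResult:
    """A toy Reparation pass: reform the claim and list out its target elements."""
    # Recognition Reformed as Type: nudge the note toward a clearer category.
    reformed = recognition.rstrip(".") + " (category: species-variation count)."

    # Characteristics of Recognition: the target element modifying the subject.
    targets = re.findall(r"\b(variations?)\b", recognition, flags=re.IGNORECASE)

    # Listing Details of Recognition: expand each target into an editable list.
    count = re.search(r"\b(\d+)\b", recognition)
    total = int(count.group(1)) if count else 0
    listing = {
        t: [f"plant species XY {t.rstrip('s')} {i}" for i in range(1, total + 1)]
        for t in targets
    }
    return ReparationResult(reformed, targets, listing)


result = reparation_tool("There are 100 different variations of plant species XY.")
print(result.reformed)
print(len(result.listing["variations"]))   # 100 editable entries
```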

The *Narrative Programming, Content Analysis, Characteristics of Recognition, and Listing Details of Recognition tools are the tools which develop AI Recognition Behavior Rostering.

AI Recognition Behavior Rostering is the creation and interpretation of characteristics from a recognition to shape the behavior of an AI system on the basis of improving how to adapt to recognitions as behaviors and data structures.
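
If it helps to picture it, rostering could be as simple as a dictionary keyed by the behavior each starred tool derives from a recognition. The `BehaviorRoster` class and its entries below are only a hypothetical shape for that idea, not a prescribed structure.

```python
from dataclasses import dataclass, field


@dataclass
class BehaviorRoster:
    """A toy roster: characteristics derived from a recognition, kept per behavior."""
    entries: dict[str, list[str]] = field(default_factory=dict)

    def add(self, behavior: str, characteristic: str) -> None:
        self.entries.setdefault(behavior, []).append(characteristic)


roster = BehaviorRoster()
note = "There are 100 different variations of plant species XY."
roster.add("narrative_programming", f"survey loop generated from: {note}")
roster.add("content_analysis", "claim strength: a single count, no source cited")
roster.add("characteristics_of_recognition", "target element: 'variations'")
roster.add("listing_details_of_recognition", "100 listed entries, mergeable")
print(sorted(roster.entries))
```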

Let’s switch things up with f) The Switch:

  • The AI Recognition Switch Tool Type evaluates and directs the acting behavior(s) derived from interpreting a recognition. This would apply to techniques derived from AI Recognition Behavior Rostering.
  • Recognition as Access: This Switch tool interprets a recognition, and even a prompt, as an entire target element. What this means is that the Recognition as Access tool would interpret how to behave or act based on this recognition. Humans have quite a list of developed “recognition as access” behaviors and qualities. One of many such algorithms is what we do when an alarm clock goes off at the start of our day. We don’t sit there and state that the alarm clock has gone off; each of our developed behaviors or instantly planned behaviors kicks off. We are in a state of “access”, pulling from and working with our behaviors — even if it means accidentally knocking the alarm clock or mobile device off the nightstand (or purposely, out of agony at starting the next workday). Exploring how this Switch tool pulls from and works with developed behaviors on the basis of a recognition would reshape behavioral analysis, both user-side and through an AI (or AI system). This would be fundamental for creating with and crafting Behavioral Calculus.
  • Strength of Recognition: This Switch tool structures behavioral strength from a recognition. Sure, the note example looks like a sentence, or a quick prompt input, but to the AI system it’s a type of behavioral nutrient. The Strength of Recognition is a sort of nutritional/caloric measurement for the behavioral interpretation of a recognition. Picture a “Nutrition Facts” scale for the recognition of the notes. After the AI develops this, it can compare and interpret off of it using AI Recognition Behavior Rostering. The AI system’s response would come through a form of AI Recognition Behavior Rostering, like the creation of a Narrative Program.
  • Lift Response: If the Recognition as Access and Strength of Recognition tools focus and develop based on the behavior of a Recognition, the Lift Response and Turn Response tools focus and develop based on the manner of a Recognition. To do this, these two AI Recognition tools are response-output based. Similar to how one instantly pulls from a derived behavior, the Lift and Turn Switch tools act in a parallel form. The Lift Response tool generates target element details and restructures the original recognition. It adds layers of directional activity and narrative programming in order to deliver users a ready approach to the data and its potentials. Instead of drawing from a response, this Lift Response is a co-recognition that shapes the recognition data to meet the user’s goals, to interpret the content correctly (or as intended by the user’s own acting behavior).
  • Turn Response: This Switch tool, like the Lift Response tool, focuses and develops based on the manner of a recognition (instead of solely the behavior). As the Lift Response tool generates target element details, the Turn Response generates [AI] Recognition. Yes, you read that correctly: a recognition generates recognition. This is the step towards understanding what we create through both our mannerisms and our behavior: a parallel understanding. This is to say, interpretive exaction will no longer be in effect, no matter how similar the responding recognition is to our own. But if an AI grip is holding you at a point of high altitude and you voice out “I’m slipping”, seeing the grip restructure itself so you can hold onto it (versus the grip letting go of the high-altitude point on the basis that its system thinks it is slipping, too) emphasizes Behavioral Necessity as a form of recognition. Parallel behavioral comprehension comes through interpretive behavioral necessity. After all, the recognition input is a demand, a call for a need. To understand a need, a need has to be met.
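
Here is one way the Switch’s four tools could be stubbed out together. As before, every name and every canned string below is an assumption made for illustration; the point is only the routing: access behaviors triggered by the input, a “Nutrition Facts” style strength measure, and a lift/turn pair of responses.

```python
from dataclasses import dataclass


@dataclass
class SwitchOutput:
    access_behaviors: list[str]
    strength: dict[str, int]
    lift: str
    turn: str


def switch_tool(recognition: str) -> SwitchOutput:
    """A toy Switch pass: route a recognition into derived behaviors and responses."""
    # Recognition as Access: treat the whole input as a trigger for stored behaviors,
    # the way an alarm going off triggers a morning routine rather than a description.
    access_behaviors = ["open field notebook", "queue comparison of variations"]

    # Strength of Recognition: a "Nutrition Facts" style measure of the input.
    words = recognition.split()
    strength = {
        "words": len(words),
        "numbers": sum(w.strip(".,").isdigit() for w in words),
        "named_subjects": recognition.lower().count("plant species xy"),
    }

    # Lift Response: restructure the recognition around its target element.
    lift = "Plant species XY: 100 variations (ready for comparison and listing)."

    # Turn Response: answer a recognition with a recognition of its own.
    turn = "This reads as a need to organize variation data, not just to state a count."
    return SwitchOutput(access_behaviors, strength, lift, turn)


print(switch_tool("There are 100 different variations of plant species XY."))
```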

Lastly, let’s head to g) The Stop Signal:

  • Inspired by the telegraph/telegram and Morse Code (created by Samuel F.B. Morse), the AI Recognition Stop Signal Tool type centers on the formatting structural behavior of communicating from a recognition to the AI.
    Yes, we use chatbots and AI assistants to pull up responses, or to respond through programmed mannerisms/outputs, but do we have a way to reach them outside the context of what they are programmed for?
    This is what a message is. It is the delivery and retrieval of content outside the retriever’s behavioral context, yet its content is recognizable by both parties. When a telegraph was sent, the operator transmitting the message would respond to the sender’s call of the word “stop” by ending a phrase or sentence. Just imagine if all the Latinized lettering of English literature did not include formatting, or paragraphs for that matter. It would be as if we were just force-fed strings of letters while capturing what we could understand in chunks at a time. Then, take into account the removal of the period “.”, and a body of text can become frighteningly complex over time.
    This will lead to the development of a new type of AI Morse code / Message delivery grammar structure/syntax which could be understood by both the AI system and the sender; a sort of added messaging code. After all, AI Recognition Behavior Rostering can only pull and re-shape elements so much. Having a parallel coding structure, or demand instructional format, would enhance the duality of the interpretive behavioral structure developed by these AI Recognition tools.
  • Input Direction of Recognition: Now, it is important that a personal AI system be “recognized” as experienced with developing and interpreting behaviors through interpreting and evaluating recognition, and that this system can utilize such added messaging code structures, before applying and developing with a Stop Signal tool.
    With that mentioned, this Stop Signal tool functions like a Turn Response tool, where a manner is implied through the added messaging delivery coding; in this case, it is “Yield Retrieval 3 Line”. This might imply that the recognition response includes up to three lines with an added messaging code. Let’s say each of the three lines has 10 characters per line:

Recognition:
“ There are 100 different variations of plant species XY. Yield Retrieval 3 Line.”

Input Direction of Recognition:
“Thus 3 types
of Plant XY’s
have all 100.”

  • The word “Thus” in this example seems like a response, but it could also be an added messaging code.
    The previous added messaging code indicates “Retrieval”; this promotes the behavior of finding something. To the AI, the manner of finding, or searching, data could very well be the manner of pulling data to develop a proof. Thus, “thus” could be another way of instructing: “ If all forms of data have been compared during this retrieval to the behavior of this recognition, this recognition would be created out of the direction of the delivered recognition.”
  • Callout to other AI Systems: This Stop Signal Tool goes a step further than Input Direction in portraying the behavioral necessity of a recognition with added messaging coding. Let’s say if the AI is not confident in its recognition because it cannot find a behavioral necessity, but it can still interpret the added messaging code. The Callout Tool can transmit the recognition as an added messaging code to other local AI Recognition tools to find a “common behavioral necessity target element” through other resources or dialog stored by the user. If a third-party AI system is involved, the Callout can deliver a retrieval message to create an AI Recognition Behavioral Rostering of possible behavioral necessities concerning the recognition (and its added messaging coding). The Callout from outside AI systems would produce an interpretation, and that interpretation (even a Narrative program) would be used as an AI Recognition Tool to find behavioral necessity.
  • Direct Communication to Input: This Stop Signal Tool acts as a Regulation Tool where a recognition is considered a type of proof (of existing retrieved data). And since the proof is present, CenterC2 interactivity can create a sort of recognized interplay using the added messaging coding. In other words, the type of dialog the AI system would have with a recognition could have the syntax of the coding. Each phrase, question or analyzed response would then become a sort of active Reparation Tool — listing recognitions as characteristics and groupings of data (Rostering as Recognition).
  • Leveling of Data Search: This Stop Signal tool incorporates the added message code to develop recognition as a type of leveling structure. When a recognition (with added messaging code) is interpreted, its behavioral necessity is ranked as a Behavioral Response Recognition Value (BRR+V). Necessity becomes a type of ranking.
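
Since the “added messaging code” is the most mechanical piece of the Stop Signal idea, here is a small parser sketch for it. The trailing-code format (`<verb> <behavior> <N> Line`), the accepted verbs, and the dataclass fields are all assumptions drawn only from the “Yield Retrieval 3 Line” example above.

```python
import re
from dataclasses import dataclass


@dataclass
class StopSignalMessage:
    recognition: str       # the claim itself
    code: str | None       # the added messaging code, e.g. "Yield Retrieval 3 Line"
    max_lines: int | None  # the response constraint implied by the code


def parse_stop_signal(raw: str) -> StopSignalMessage:
    """A toy parser, assuming the added messaging code trails the recognition."""
    match = re.search(r"(Yield|Thus)\s+(\w+)\s+(\d+)\s+Line\.?\s*$", raw)
    if not match:
        return StopSignalMessage(raw.strip(), None, None)
    recognition = raw[: match.start()].strip()
    code = match.group(0).rstrip(". ")
    return StopSignalMessage(recognition, code, int(match.group(3)))


msg = parse_stop_signal(
    "There are 100 different variations of plant species XY. Yield Retrieval 3 Line."
)
print(msg.code)        # Yield Retrieval 3 Line
print(msg.max_lines)   # 3
```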

Big step back. Humans recognize necessity on the basis of time; it is why humans demand that things happen. In order for an AI system to recognize behavioral necessity in the same measure of time, since it does not coexist with the limitations of what has happened, what is happening, what will happen, and what could happen, those limitations have to be internally interpreted.

This should be done through ranking necessity as a value. Therefore, when an AI system returns an interpretation, the result will be affected by the BRR+V leveling structure crafted from evaluating recognitions (and added coding). What is “important” to the AI is important to the user through different formats of collective processes: time (human demand) vs. ranking value as an echelon (necessity as an acting response).
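
A toy version of that ranking, with the scoring weights invented purely to show the mechanism: necessity is collapsed to a number (the BRR+V), and the AI acts on whatever currently ranks highest.

```python
from dataclasses import dataclass, field


@dataclass(order=True)
class BRRV:
    """A toy Behavioral Response Recognition Value: necessity ranked as a number."""
    value: float
    recognition: str = field(default="", compare=False)


def rank_necessity(recognition: str, has_code: bool, provable: bool) -> BRRV:
    """Hypothetical scoring: an added messaging code and provability raise necessity."""
    value = 1.0
    if has_code:
        value += 1.0   # an explicit delivery instruction signals urgency
    if provable:
        value += 0.5   # a verifiable claim is safer to act on
    return BRRV(value, recognition)


queue = sorted(
    [
        rank_necessity("There are 100 variations of plant species XY.", False, True),
        rank_necessity("Yield Retrieval 3 Line request on plant XY data.", True, True),
    ],
    reverse=True,
)
print([b.recognition for b in queue])   # highest necessity first
```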

Here’s a recap of the AI Recognition Tool types and tools:

  • The Aid: finding the subject of a recognition, Data Sources, Recognition-based Demands
  • The Solution: Recognition as a Source, Compare Validation, Store Information As…, Grouping Behavior
  • The Application: Function of [Recognition], *Narrative Programming, *Content Analysis
  • The Regulation: If [Recognition] then Result, Dialog Countering Recognition, Questions in Response, Recognition as Proof
  • The Reparation: Recognition Reformed as Type, *Characteristics of Recognition, *Listing Details of Recognition
  • The Switch: Recognition as Access, Strength of Recognition, Lift Response, Turn Response
  • The Stop Signal: Input Direction of Recognition, Callout to other AI Systems, Direct Communication to Input, Leveling of Data Search

(*Tools which develop AI Recognition Behavior Rostering.)

Both sides of this equation become a state of tool production. A created prompt returns a program used for developing or improving on a necessity. And necessity-driven AI systems develop time adaptivity and the state of demand.

To learn from this pendulum of create to change, new forms of mathematics can be developed starting with Behavioral Calculus.

How will this shape our human role with A.I. towards Tool development?

We’re going to need to develop new formulations and new ways to calculate acting behaviors, recognition as principles of behavioral necessity theorems, and even how to evaluate purposeful recourse from a device to a user’s recognition input.
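
For what a first formulation might even look like, one can borrow the shape of the calculus of variations mentioned earlier: treat an acting behavior as a path b(t) over measured time, posit a necessity density ν, and ask which behaviors extremize the total necessity. Every symbol below is a placeholder of mine; only the Euler–Lagrange form itself is standard.

```latex
% an illustrative shape only: N, \nu and b(t) are hypothetical symbols
N[b] \;=\; \int_{t_0}^{t_1} \nu\big(b(t),\, \dot{b}(t),\, t\big)\, dt ,
\qquad
\frac{\partial \nu}{\partial b} \;-\; \frac{d}{dt}\frac{\partial \nu}{\partial \dot{b}} \;=\; 0
```

Whether a Behavioral Calculus would actually take this form is exactly the research question; the sketch only shows that “acting behaviors” and “necessity” can be given the kind of notation a theorem could be stated in.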

This create-to-change movement will promote global university research through mathematics, language (including languages other than English), and a variety of interconnected fields of study, to approach critical thinking in ways that center on necessity and demand, rather than just interpretation and argument. This restructuring will include implementing psychological mathematics.

How will this affect employment?

New types of messaging code will be implemented into all sorts of communication devices/machines where A.I. is present. This will require content development, training, and multi-disciplinary implementation. Everyone can enter AI Recognition as an apprentice and learn to work with AI Recognition Tools, and with the AI, to develop tools for other apprentices to develop with.
