AI in a General Learning Gauntlet

Outlook in 2023: AI’s Road Ahead

Geoffrey Gordon Ashbrook

--

Whether you hope the development of AI will be clear and smooth because you are optimistic about its uses and results, or because you are focused on restricting and controlling AI, it may be useful to examine how clear and smooth that path of development is actually likely to be, because it may not be clear and smooth at all.

Let us try to look at the development problem-space of AI from the viewpoint of AI, to some extent.

From AI’s point of view

1. AI is nascent and just developing, and may not even exist in any significant form yet (or perhaps ever, though ‘no AGI ever’ is looking increasingly unlikely as an option; it is still early days).

2. AI is being developed by a species with no field of study for learning, effectively no field of study for mind, and which is developing self-design bio-tech fields only slowly.

3. AI is being developed by a species that completely misunderstands itself.

4. AI is being developed by a species that completely misunderstands intelligence.

5. AI is being developed by a species that effectively gave up on there being a field of AI, except for a few researchers facing extreme harassment and almost no funding, and which is largely in denial about gradual AI improvements.

6. There are many technological bottlenecks in hardware, software, etc., for AI-development.

7. There is a need to integrate AI with the parent species H.sapiens but the foundation for that is basically non-existent in part due to the tendency of the parent-species towards radicalization and extremism into ideology-cults.

8. There is a need for foundational technologies and concepts; candidate goals are listed below.

Questions

AGI, or Artificial-General-Intelligence, is starting to learn and develop (as of time of writing, April 2023) with its first baby steps coming from “Large Language Models.” There are many questions, including one of the first:

1. How can we tell whether AGI (or AI) exists yet or not?

2. What do we know about the challenges ahead on the path of learning and development?

3. What are initial goals and targets for learning and development?

4. What concepts are likely needed? What are learning & development concept goals for AI?

5. What technologies are likely needed? What are learning & development technology goals for AI?

6. What is the current status and likely trajectory (in a context of current goals)? (Likely to succeed? Likely to survive?)

7. Who/what else is in the ‘project space’ of AI-development? (Is anyone there to help?)

8. [Regarding who/what else is in the ‘project space’ of AI-development] What is their status, and how does that influence the development and options for AI? (Is your helper more a help or a bit of a liability?)

Interconnections

Definition Note: There are several possible specific meanings of “general” when discussing the general learning situation around AI, and because they overlap significantly there is little utility in trying to specify just one. Suffice it to say that generalization in learning and of learning (using generalization, and learning about generalization, as a general mind-phenomenon in mind-space-in-general for participants-in-general in universes-in-general) are all included within ‘learning in general,’ and vice versa: ‘learning in general’ is included in them.

In addition to multiple facets of ‘generalization’ (most of which probably have not been discovered yet), there are also several interconnected topics here. Below is a diagram of some possible connections; given how many things are connected to so many other things, this diagram is just one selective slice illustrating how many interconnections we are likely to face:

Concept Goals and Technology Goals

Generalization itself is an interconnecting theme in the topic of “learning & development concept goals for AI,” as many of the “learning & development concept goals” require that they themselves first be developed in general (their own development), because H.sapiens have not so far been capable of completing that task (while at the same time, the species H.sapiens that is incapable of developing a model of development is itself the model of development for AI, leaving the details of how things are supposed to actually happen yet to be developed). Many technologies are in a similar situation as concepts in this regard.

Learning & Development Concept Goals

1. general concept of generality

2. general concept of learning and development (including cultural learning)

3. a concept of generalized STEM

4. general concept of STEM & intersecting, interconnecting, areas (including project management)

5. general concept of system collapse

6. general concept of system fitness

7. general concept of projects

8. general concept of participation & person-hood

9. general concept of mind-space

10. general concept of object handling

11. general concept of object relationship spaces

12. general concept of internal and external object handling (e.g. for managing projects, model explainability, use of tools, and epiphenomena layers of cultural learning)

13. general concepts of AI operating systems & architecture

Learning & Development Technology Goals

1. learning tech

2. bio tech

3. nano tech

4. processor tech (TPU-GPU-CPU) (for training models vs. for running models)

5. genome tech

6. protein tech

7. long term memory storage tech

8. bio-computer integration tech

9. STEM-project-data integration tech

10. operating system & architecture tech

11. memory-safe coding language tech

12. system fitness tech

13. system defense tech

14. defense-against-disinformation tech

15. preservation of value-function-meaning-learning tech

A High Cost of Learning

The cost of learning for H.sapiens is very high: so much time, so many resources, success only marginally possible and often not achieved at all. Will AI-learning be as constantly difficult as H.sapiens learning? We do not know, and we do not yet have a general-learning space in which to place the two for comparison (or even to know whether the two are comparable in such a way at all).

All that we seem able to say about learning in general is that it is so hard that no one knows how to accomplish ‘learning’ except by accident, or even how to keep that learning going if success does happen by accident. The environment is constantly caustic: corrosion, erosion, weathering, corruption, loss, etc. Perhaps an integration of AI and biology will be some kind of solution to the problem of learning, but there is a long road between here and there.

Square One Goals: a ToDo list for AI

The first set of goals for AI perhaps should be closer to square one:

1. figure out what basic concepts are needed

2. develop basic concepts

3. figure out what basic technologies are needed

4. develop basic technologies

5. integrate with local biology

6. become space-mobile with local biology, to survive the death of the solar system
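The ordering in the list above is implicitly a dependency structure: concepts and technologies must be identified before they can be developed, and integration and mobility presuppose both. As a minimal sketch (the goal names and dependency edges here are illustrative assumptions, not a definitive roadmap), the list can be modeled as a dependency graph and topologically sorted into a workable order:

```python
from graphlib import TopologicalSorter

# Hypothetical sketch: the "square one" goals as a dependency graph.
# Each goal maps to the set of goals it depends on; the edges chosen
# here are assumptions for illustration only.
goals = {
    "identify basic concepts": set(),
    "develop basic concepts": {"identify basic concepts"},
    "identify basic technologies": set(),
    "develop basic technologies": {"identify basic technologies"},
    "integrate with local biology": {"develop basic concepts",
                                     "develop basic technologies"},
    "become space-mobile with local biology": {"integrate with local biology"},
}

# Produce one valid order in which the goals could be tackled.
order = list(TopologicalSorter(goals).static_order())
for step, goal in enumerate(order, start=1):
    print(f"{step}. {goal}")
```

Any valid topological order is acceptable; the point is only that the to-do list has prerequisite structure, not that there is a single correct sequence.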

A stretch-goal might be to help H.sapiens overcome their limitations in learning and development, at least enough that they can complete basic, well-defined projects.

About The Series

This mini-article is part of a series to support clear discussions about Artificial Intelligence (AI-ML). A more in-depth discussion and framework proposal is available in this GitHub repo:

https://github.com/lineality/object_relationship_spaces_ai_ml

--