Methodologies and Frameworks in Prompt Engineering, Explained
From 2020 to the present day, it’s been nothing short of a rollercoaster, with breakthroughs, “ah-ha!” moments, and the birth of methodologies that have transformed the way our digital buddies think and solve problems.
In the cozy corners of this article, we’re about to unwrap the stories behind methodologies like “Retrieval-Augmented Generation”, “Generated Knowledge Prompting”, and several more, each bringing something unique and groundbreaking to the AI table.
So, whether you’re an AI pro or just starting to dip your toes into this vast ocean, there’s something for everyone as we explore, celebrate, and decode the advancements that have shaped AI reasoning from 2020 to 2023!
Let’s dive in and explore the exciting, brilliant papers related to prompt engineering, shall we?
Zero-shot
Zero-Shot Prompting has garnered acclaim in the realm of Natural Language Processing (NLP) for its ability to adeptly navigate through a myriad of applications, providing a versatile tool in tasks like sentiment analysis and text classification. Imagine furnishing a model with a prompt it hasn’t encountered during its training phase, such as classifying the sentiment of the statement, “That shot selection was awesome.” Even without prior exposure, the model should astutely classify the sentiment as “positive.” This technique flourishes under the broader umbrella of Natural Language Generation (NLG), a sophisticated, multi-stage process that translates data into coherent, natural narratives, finding its utility in various realms like generating chatbot responses, simplifying financial reports, and automating email responses.
"Shot" here means an example. With zero-shot prompting there are no examples in the prompt, while few-shot prompting (which we will see next) provides a small number of examples.
Particularly in AI rewriter tools, such as Jasper, Zero-Shot Prompting can be deployed to enhance the quality of generated content. For instance, the AI can be tasked with rewriting an article in a manner that is both fun and comprehensive, yet avoids the use of complex words, generating human-like articles even without prior exposure to such a specific prompt. Furthermore, variations of this technique, such as Zero-shot ReAct and Zero-shot CoT, have expanded its foundational concept across varied applications and use-cases, solidifying Zero-Shot Prompting as a pivotal and potent technique in NLP, capable of handling a wide array of prompts and generating contextually and relevantly apt responses across diverse scenarios and applications.
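To make this concrete, here is a minimal sketch in Python that assembles a zero-shot sentiment prompt. The prompt is just a plain string; `call_llm` is a hypothetical placeholder for whatever model client you use, not a function from any particular library.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for your LLM client; replace with a real API call."""
    raise NotImplementedError("plug in your model client here")

def zero_shot_sentiment(text: str) -> str:
    # No examples are given -- the model must rely entirely on what it learned in training.
    prompt = (
        "Classify the sentiment of the following text as Positive, Negative, or Neutral.\n"
        f"Text: {text}\n"
        "Sentiment:"
    )
    return call_llm(prompt)

# Example (expected output: "Positive"):
# zero_shot_sentiment("That shot selection was awesome.")
```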
Few-shot
Few-shot prompting is a technique used in language models, particularly in the context of AI applications like LangChain. This method involves providing the model with a small number of examples, or “shots”, to guide its responses. The examples serve as a sort of mini-training session for the model, helping it understand the type of responses expected from it.
Let’s illustrate this concept with a simplified explanation. Imagine you’re teaching a child how to categorize objects based on their color. You might show them a few examples: a red apple, a blue ball, and a yellow banana. Then, you present them with a green leaf and ask them to categorize it. Even though you haven’t shown them a green object before, they can use the logic they’ve learned from the examples to correctly categorize the leaf.
Similarly, in few-shot prompting, you provide the language model with a few examples, and then ask it to respond to a new prompt. For instance, you might provide the following examples to a model:
- Text: Today the weather is fantastic. Classification: Pos
- Text: The furniture is small. Classification: Neu
- Text: I don’t like your attitude. Classification: Neg
- Text: That shot selection was awful. Classification: Neg
After seeing these examples, the model should be able to classify a new sentence, like “The cake is delicious”, as positive (Pos), even though it hasn’t seen this exact sentence before. The model uses the logic it learned from the examples to generate its response.
Few-shot prompting is particularly useful when you can’t explicitly describe what you want from the model, but you can provide examples of the desired output. It’s a powerful technique for guiding AI models to generate more accurate and contextually appropriate responses.
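In code, few-shot prompting is just a matter of prepending the labeled examples to the prompt. A rough sketch, fully runnable since it only builds the prompt string:

```python
EXAMPLES = [
    ("Today the weather is fantastic.", "Pos"),
    ("The furniture is small.", "Neu"),
    ("I don't like your attitude.", "Neg"),
    ("That shot selection was awful.", "Neg"),
]

def few_shot_prompt(new_text: str) -> str:
    # Each example becomes one "shot" that shows the model the expected format and logic.
    shots = "\n".join(f"Text: {t} Classification: {label}" for t, label in EXAMPLES)
    return f"{shots}\nText: {new_text} Classification:"

print(few_shot_prompt("The cake is delicious"))  # the model should complete this with "Pos"
```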
Chain-of-Thought (CoT)
This method encourages LLMs to explain their reasoning process. It can be used in a zero-shot setting by adding a phrase like “Let’s think step by step” to the original prompt. While it is not explicitly tied to frameworks such as LangChain or BabyAGI, it is a common technique when working with LLMs.
Imagine you’re trying to solve a math problem. Instead of jumping straight to the answer, you’d probably break it down into smaller steps, right? That’s exactly what CoT prompting does for LLMs. It’s like a friendly guide, leading the model through a logical thought process, step by step.
For instance, if you bought 10 apples, gave 2 to your neighbor, bought 5 more, and ate 1, CoT prompting would help the model work through each step to calculate that you have 12 apples left. It’s like having a math tutor in your pocket, especially handy when you don’t have many examples to learn from.
The key points are:
- Select and illustrate problems with the same structure
- Break down the structure of the question and show the answer in the example answer section.
Note that Zero-shot-CoT, a method derived from CoT, was popular for a time because it was reported that simply adding a phrase like “Think step by step and logically” improved performance.
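In code, zero-shot CoT is nothing more than appending that trigger phrase. A minimal sketch, again with a hypothetical `call_llm` placeholder standing in for your model client:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for your LLM client; replace with a real API call."""
    raise NotImplementedError

def zero_shot_cot(question: str) -> str:
    # The trigger phrase nudges the model to write out intermediate steps before answering.
    prompt = f"Q: {question}\nA: Let's think step by step."
    return call_llm(prompt)

# Example:
# zero_shot_cot("I bought 10 apples, gave 2 to my neighbor, bought 5 more, and ate 1. "
#               "How many apples do I have left?")  # expected reasoning ending in 12
```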
Automatic Chain of Thought
Combining Zero-shot-CoT, manual CoT, and a retriever (think of the retriever as a search engine), you can enhance performance by identifying the best question-answer demonstrations and including them in the prompt. Here are the steps (sketched in code below):
- In preparation, answer a collection of sample questions with Zero-shot-CoT (“Let’s think step by step”), cluster the resulting QA pairs, and keep them readily available.
- When user input arrives, retrieve representative QA pairs from the clusters and incorporate them into the prompt as CoT demonstrations.
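A rough sketch of those two steps, assuming scikit-learn for the clustering and a hypothetical `zero_shot_cot` helper (like the one sketched in the CoT section) for generating the rationales:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def zero_shot_cot(question: str) -> str:
    """Hypothetical helper that answers a question via "Let's think step by step"."""
    raise NotImplementedError

def build_demo_pool(questions: list[str], n_clusters: int = 3) -> dict[int, str]:
    # Step 1 (preparation): cluster the questions and generate one CoT demo per cluster.
    vectors = TfidfVectorizer().fit_transform(questions)
    labels = KMeans(n_clusters=n_clusters, random_state=0).fit_predict(vectors)
    demos = {}
    for question, label in zip(questions, labels):
        if label not in demos:  # take one representative question per cluster
            demos[label] = f"Q: {question}\nA: Let's think step by step. {zero_shot_cot(question)}"
    return demos

def auto_cot_prompt(demos: dict[int, str], new_question: str) -> str:
    # Step 2 (inference): prepend the stored demonstrations as CoT context for the new input.
    return "\n\n".join(demos.values()) + f"\n\nQ: {new_question}\nA: Let's think step by step."
```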
Self-consistency
Also known as the majority-vote CoT strategy, this method improves response accuracy by combining the outcomes of several CoT reasoning paths.
Procedure:
- Sample multiple CoT responses for the same input (prompt).
- Aggregate the answers from these responses, typically by majority vote, to craft the final, comprehensive response (sketched below).
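A minimal sketch of the idea, assuming a hypothetical `call_llm` that samples a fresh chain of thought on each call (temperature above zero) and ends every completion with a line of the form "Answer: ...":

```python
import re
from collections import Counter

def call_llm(prompt: str) -> str:
    """Hypothetical sampling call returning one CoT that ends with 'Answer: <x>'."""
    raise NotImplementedError

def self_consistency(question: str, n_samples: int = 5) -> str:
    prompt = f"Q: {question}\nA: Let's think step by step."
    answers = []
    for _ in range(n_samples):
        completion = call_llm(prompt)                     # each call samples a different reasoning path
        match = re.search(r"Answer:\s*(.+)", completion)  # pull out the final answer line
        if match:
            answers.append(match.group(1).strip())
    # Majority vote over the sampled answers gives the final, more reliable response.
    return Counter(answers).most_common(1)[0][0]
```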
Automatic Prompt Engineer (APE)
Automatic Prompt Engineer (APE) is a framework for automatic instruction generation and selection. It is designed to improve the performance of large language models (LLMs) by automatically generating and selecting the most appropriate instructions for a given task.
APE works by framing the instruction generation problem as a black-box optimization problem. It uses LLMs to generate and search over candidate solutions. The first step involves a large language model (as an inference model) that is given output demonstrations to generate instruction candidates for a task. These candidate solutions guide the search procedure. The instructions are then executed using a target model, and the most appropriate instruction is selected based on computed evaluation scores.
For example, consider a task where the AI needs to generate a Python function to calculate the factorial of a number. APE would generate several instruction candidates such as “Write a Python function to calculate the factorial of a number”, “Create a Python function that computes the factorial of a given number”, etc. These instructions are then executed by the target model, and the instruction that results in the most accurate and efficient Python function is selected.
APE has been shown to outperform human-engineered prompts in many cases. For instance, APE discovered a better zero-shot Chain-of-Thought (CoT) prompt than the human-engineered “Let’s think step by step” prompt. The APE-generated prompt “Let’s work this out in a step by step way to be sure we have the right answer” improved performance on the MultiArith and GSM8K benchmarks.
APE is also capable of generating instructions that steer models towards truthfulness and informativeness. For instance, on the TruthfulQA dataset, answers produced by InstructGPT using APE were rated true and informative 40 percent of the time, outperforming answers produced using prompts composed by humans.
In summary, APE is a powerful tool for enhancing the performance of LLMs by automatically generating and selecting the most appropriate instructions for a given task. It has been shown to outperform human-engineered prompts in many cases and can be used to steer models towards desired behaviors such as truthfulness and informativeness.
In short, APE automates prompt engineering itself in order to create good inputs. The steps are:
- Predict input from output (Reverse Generation)
- Calculate a score for each candidate input
- Select the input based on the score (this may be repeated until the score converges; see the sketch below)
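Those steps, sketched in Python with a hypothetical `call_llm` placeholder and a tiny toy evaluation set purely for illustration (the exact prompt templates used by APE differ):

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for your LLM client."""
    raise NotImplementedError

EVAL_SET = [("2 + 2", "4"), ("3 * 5", "15")]  # toy input/output demonstrations

def generate_candidates(n: int = 5) -> list[str]:
    # Reverse generation: ask the LLM to guess the instruction that maps inputs to outputs.
    demos = "\n".join(f"Input: {x}  Output: {y}" for x, y in EVAL_SET)
    prompt = f"I gave a friend an instruction. Based on these examples,\n{demos}\nthe instruction was:"
    return [call_llm(prompt) for _ in range(n)]

def score(instruction: str) -> float:
    # Execution accuracy: how often the target model produces the expected output.
    hits = sum(call_llm(f"{instruction}\nInput: {x}\nOutput:").strip() == y for x, y in EVAL_SET)
    return hits / len(EVAL_SET)

def ape_best_instruction() -> str:
    candidates = generate_candidates()
    return max(candidates, key=score)  # keep the highest-scoring instruction
```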
Retrieval Augmented Generation (RAG)
Retrieval-Augmented Generation (RAG) is an AI framework that synergizes the capabilities of retrieval-based models and generative models, aiming to enhance the quality and relevance of text generated by Large Language Models (LLMs). It retrieves facts from an external knowledge base to anchor LLMs in the most accurate and current information without altering the LLM itself.
Here’s a compact breakdown of the RAG framework:
- Retrieval Models: These models retrieve pertinent information from a dataset or knowledge base, using techniques like information retrieval or semantic search to identify the most relevant information based on a given query.
- Generative Models: These models generate new content based on a given prompt or context, employing large volumes of training data to produce creative or new content.
- Combination of Approaches: RAG merges these two approaches to mitigate their individual limitations. It uses a retrieval-based model to obtain relevant information based on a query or context, and then utilizes this information as input or additional context for the generative model, thereby enabling the generation of more relevant and accurate text.
Example Scenario: If an employee, Alice, inquires about the possibility of taking vacation in half-day increments and whether she has sufficient vacation days left for the year, a chatbot empowered by RAG could retrieve the most recent company policies related to vacation time and utilize that information to generate a response to Alice’s question.
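A very small sketch of that flow, using naive keyword-overlap retrieval instead of a real vector store so it stays self-contained; the policy snippets are illustrative assumptions, and the printed prompt would then be passed to the LLM:

```python
KNOWLEDGE_BASE = [
    "Vacation policy: employees may take vacation in half-day increments.",
    "Vacation balances are visible in the HR portal and reset every January.",
    "Expense reports must be filed within 30 days of travel.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Toy retriever: rank documents by how many query words they share (a real system
    # would use embeddings and a vector database such as Weaviate).
    words = set(query.lower().split())
    return sorted(KNOWLEDGE_BASE, key=lambda doc: -len(words & set(doc.lower().split())))[:k]

def rag_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Answer using only the context below.\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"

print(rag_prompt("Can I take vacation in half-day increments, and how many days do I have left?"))
```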
Tools and Platforms Utilizing RAG:
- LlamaIndex: Assists in building LLM-powered applications over custom data, simplifying both steps of the RAG process (indexing and querying).
- Weaviate: An open-source vector database that enables the storage and querying of objects using their vector representations, or embeddings.
Applications of RAG:
- Question-Answering Systems: The retrieval-based model identifies relevant passages or documents containing the answer, and the generative model then formulates a concise and coherent response based on that information.
- Content Generation Tasks: Such as summarization or story writing, where the retrieval-based model finds relevant source material and the generative model creates a summary or story based on that material.
In summary, RAG is a powerful tool that elevates the capabilities of LLMs by leveraging retrieved, accurate, and current information from a knowledge base, enhancing the relevance and precision of generated text across various applications, such as question-answering systems and content generation tasks, and can be utilized with various tools and platforms.
Generated Knowledge Prompting
Generated Knowledge Prompting is a method that capitalizes on the capacity of a Large Language Model (LLM) to produce knowledge aimed at resolving specific tasks. The fundamental notion behind this technique is the generation of useful knowledge from an LLM, which is then supplied as an input prompt, concatenated with a question, to solve particular tasks.
Here’s a succinct breakdown of the Generated Knowledge Prompting technique:
- Generate Knowledge: Initiated by providing the LLM with an instruction, a few fixed demonstrations for each task, and a new-question placeholder, where demonstrations are human-written and include a question in the style of the task alongside a helpful knowledge statement.
- Knowledge Integration: Subsequent to knowledge generation, it’s incorporated into the model’s inference process by using a second LLM to make predictions with each knowledge statement, eventually selecting the highest-confidence prediction.
- Evaluate Performance: Performance is assessed along three aspects: the quality of the knowledge, the quantity of knowledge (performance improves as more knowledge statements are added), and the strategy used to integrate knowledge during inference.
The technique has been demonstrated to enhance LLM performance across varied commonsense reasoning tasks, establishing new state-of-the-art results on most evaluated datasets and proving effective in both zero-shot and fine-tuned settings.
In a visual depiction of the process from the paper “Generated Knowledge Prompting for Commonsense Reasoning” by Liu et al., the procedure involves utilizing few-shot demonstrations to generate question-related knowledge statements from an LLM, using another LLM to make predictions with each knowledge statement, and ultimately selecting the highest-confidence prediction.
For instance, a model could be asked whether a fish is capable of thinking, together with the generated knowledge statement “Fish are more intelligent than they appear.” The model should then generate a response grounded in the provided knowledge.
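A compact sketch of the two LLM passes, with `call_llm` as a hypothetical placeholder; the most-frequent-answer selection here is a simple stand-in for picking the highest-confidence prediction:

```python
from collections import Counter

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for your LLM client."""
    raise NotImplementedError

def generate_knowledge(question: str, n: int = 3) -> list[str]:
    # Pass 1: ask the model for short, question-related knowledge statements.
    prompt = f"Generate a short factual statement that helps answer the question.\nQuestion: {question}\nKnowledge:"
    return [call_llm(prompt) for _ in range(n)]

def answer_with_knowledge(question: str) -> str:
    # Pass 2: answer once per knowledge statement, then keep the most common answer.
    answers = []
    for knowledge in generate_knowledge(question):
        answers.append(call_llm(f"Knowledge: {knowledge}\nQuestion: {question}\nAnswer:").strip())
    return Counter(answers).most_common(1)[0][0]
```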
In a nutshell, Generated Knowledge Prompting serves as a technique that generates knowledge to be utilized as part of the prompt, asking questions by citing knowledge or general rules instead of examples. It can be applied in various contexts, including within frameworks such as LangChain.
ReAct Prompting
The ReAct prompting method is a framework that synergizes reasoning and acting in language models. It prompts large language models (LLMs) to generate both reasoning traces and task-specific actions in an interleaved manner. This allows the system to perform dynamic reasoning to create, maintain, and adjust plans for acting while also enabling interaction with external environments to incorporate additional information into the reasoning.
The ReAct framework can be used to interact with external tools to retrieve additional information that leads to more reliable and factual responses. For example, in a question-answering task, the model generates task-solving trajectories (Thought, Act). The “Thought” corresponds to the reasoning step that helps the model to tackle the problem and identify an action to take. The “Act” is an action that the model can invoke from an allowed set of actions. The “Obs” corresponds to the observation from the environment that’s being interacted with, such as a search engine. In essence, ReAct can retrieve information to support reasoning, while reasoning helps to target what to retrieve next.
An example of a ReAct prompt might look like this:
- Question: What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?
- Thought 1: I need to search Colorado orogeny, find the area that the eastern sector of the Colorado orogeny extends into, then find the elevation range of the area.
- Action 1: Search[Colorado orogeny]
ReAct is also used in LangChain’s CSV Agent and BabyAGI’s Execution Agent. LangChain’s CSV Agent is an example of an agent that uses the ReAct framework to interact with data in CSV format, primarily optimized for question answering.
BabyAGI’s Execution Agent is another example of an agent that uses the ReAct framework. It is part of a system of autonomous AI agents that can independently work through a problem, with potentially multiple iterations, until the desired result is achieved.
In summary, the ReAct prompting method is a powerful tool that combines reasoning and acting in language models, allowing them to interact with external tools and environments to generate more reliable and factual responses. It is used in various applications, including LangChain’s CSV Agent and BabyAGI’s Execution Agent, to perform question answering and similar tasks.
In practice, you create inputs for Thought and Act in the ReAct format and actually work with the tools to derive a solution. Tools such as Search are naturally incorporated as actions (Act), and their results are fed back into the context as observations (Obs).
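Below is a stripped-down sketch of that loop. The scripted `call_llm` mock and the toy `search` tool exist only so the example runs end to end; in practice you would plug in a real model and a real search API (or use LangChain’s ReAct agents).

```python
import re

SCRIPTED_OUTPUTS = [  # canned model outputs so the demo runs without a real LLM
    "Thought 1: I should look up the Colorado orogeny.\nAction 1: Search[Colorado orogeny]",
    "Thought 2: The eastern sector extends into the High Plains.\nAction 2: Finish[around 1,800 to 7,000 ft]",
]

def call_llm(prompt: str) -> str:
    return SCRIPTED_OUTPUTS.pop(0)

def search(query: str) -> str:
    # Toy tool; a real agent would call a search engine or Wikipedia API here.
    return "The eastern sector of the Colorado orogeny extends into the High Plains (1,800-7,000 ft)."

def react(question: str, max_steps: int = 5) -> str:
    context = f"Question: {question}"
    for _ in range(max_steps):
        output = call_llm(context)          # model emits a Thought and an Action
        context += "\n" + output
        finish = re.search(r"Finish\[(.*?)\]", output)
        if finish:
            return finish.group(1)          # the agent has settled on a final answer
        action = re.search(r"Search\[(.*?)\]", output)
        if action:
            context += f"\nObs: {search(action.group(1))}"  # feed the observation back in
    return "No answer found."

print(react("What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?"))
```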
Multimodal CoT
Multimodal Chain-of-Thought (CoT) extends the traditional CoT method by amalgamating text and visual information within a two-stage framework, aiming to bolster the reasoning capabilities of Large Language Models (LLMs) by enabling them to decipher information across multiple modalities, such as text and images.
Key Components and Functionality:
- Rationale Generation: In the first stage, the model synthesizes multimodal information (e.g., text and image) to generate a rationale, which involves interpreting and understanding the context or problem from both visual and textual data.
- Inference of Answer: The second stage leverages the rationale from the first stage to derive an answer, using the rationale to navigate the model’s reasoning process towards the correct answer.
Practical Application Example: In a scenario like “Given the image of these two magnets, will they attract or repel each other?”, the model would scrutinize both the image (e.g., observing the North Pole of one magnet near the South Pole of the other) and the text of the question to formulate a rationale and deduce the answer.
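A simplified sketch of the two stages, assuming for illustration that the model accepts an image alongside text; `call_mm_llm` is a hypothetical multimodal model call, not a specific library function:

```python
def call_mm_llm(image_path: str, text: str) -> str:
    """Hypothetical call to a model that accepts both an image and text."""
    raise NotImplementedError

def multimodal_cot(image_path: str, question: str) -> str:
    # Stage 1: generate a rationale grounded in both the image and the question.
    rationale = call_mm_llm(image_path, f"Question: {question}\nExplain the relevant visual and textual facts.")
    # Stage 2: use that rationale to infer the final answer.
    return call_mm_llm(image_path, f"Question: {question}\nRationale: {rationale}\nTherefore, the answer is:")

# Example:
# multimodal_cot("magnets.png", "Will these two magnets attract or repel each other?")
```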
Impact and Applications:
- Multimodal CoT has demonstrated its ability to enhance LLM performance on tasks requiring multimodal reasoning, such as question answering tasks involving both text and images.
- For instance, a study indicated that a Multimodal CoT model surpassed GPT-3.5 on the ScienceQA benchmark, which includes questions necessitating an understanding of both text and images.
- Furthermore, Multimodal CoT has been employed to improve the accuracy of AI models in medical imaging by facilitating the fusion of images from varied modalities, thereby enhancing disease understanding and providing a robust baseline model for multimodal reasoning.
In summary, Multimodal CoT stands out as an approach that enhances LLMs by allowing them to process and interpret information from multiple modalities, offering improved performance on tasks requiring multimodal reasoning, and finding practical applications in fields like medical imaging, thereby amplifying their understanding and reasoning capabilities.
Automatic Reasoning and Tool-use (ART)
The Automatic Reasoning and Tool-use (ART) framework employs Large Language Models (LLMs) to autonomously generate intermediate reasoning steps, emerging as an evolution of the Reason+Act (ReAct) paradigm, which amalgamates reasoning and acting to empower LLMs in accomplishing a variety of language reasoning and decision-making tasks.
Key Aspects and Functionalities of ART:
- Task Decomposition: Upon receiving a new task, ART selects demonstrations of multi-step reasoning and tool use from a task library.
- Integration with External Tools: During generation, it pauses whenever external tools are invoked and assimilates their output before resuming, allowing the model to generalize from demonstrations, deconstruct a new task, and utilize tools aptly in a zero-shot manner.
- Extensibility: ART enables humans to rectify errors in task-specific programs or integrate new tools, significantly enhancing performance on select tasks with minimal human input.
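A very rough sketch of the task-library selection and the tool-call pause described above; everything here (the library contents, the `Tool[...]` marker convention, the toy tools, and `call_llm`) is an assumption for illustration, not ART’s actual format:

```python
import re

TASK_LIBRARY = {  # illustrative multi-step demonstrations, indexed by task type
    "arithmetic": "Q: What is 17 * 24?\nStep 1: Tool[calculator: 17 * 24]\nStep 2: The result is 408.\nAnswer: 408",
    "lookup": "Q: Who wrote Dune?\nStep 1: Tool[search: author of Dune]\nStep 2: The author is Frank Herbert.\nAnswer: Frank Herbert",
}

TOOLS = {"calculator": lambda expr: str(eval(expr)), "search": lambda q: "Frank Herbert"}  # toy tools only

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for your LLM client."""
    raise NotImplementedError

def select_demo(task: str) -> str:
    # Pick the demonstration whose wording overlaps most with the new task.
    return max(TASK_LIBRARY.values(), key=lambda d: len(set(task.lower().split()) & set(d.lower().split())))

def art(task: str, max_steps: int = 5) -> str:
    context = select_demo(task) + f"\n\nQ: {task}\n"
    for _ in range(max_steps):
        step = call_llm(context)                      # model writes the next step
        context += step + "\n"
        tool_call = re.search(r"Tool\[(\w+): (.*?)\]", step)
        if tool_call:                                  # pause, run the tool, resume with its output
            name, arg = tool_call.groups()
            context += f"Tool output: {TOOLS[name](arg)}\n"
        if "Answer:" in step:
            return step.split("Answer:")[-1].strip()
    return context
```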
Practical Application with LangChain:
- In the realm of LangChain, an open-source framework and toolkit for LLM applications, ART can be applied to construct AI agents capable of reasoning and memory retention.
- As an illustration, an AI agent can be tasked to comprehend its duties and role, rationalize pertinent questions to pose, employ tools like internet search, halt to seek human feedback, and maintain a log of its progress without forgetting prior knowledge, repeating this cycle until a termination criterion is satisfied.
- In practical scenarios, like employing an agent acting as a junior recruiter, ART demonstrates its applicability within LangChain.
The LangChain library in Python furnishes pragmatic means to implement LLMs and ReAct prompting, showcasing through examples how ReAct prompting can be practically executed using the LangChain library and demonstrating its use in a chain of thought to respond to queries by conducting searches, scrutinizing results, deciding on subsequent steps, and performing these until the query is resolved.
In summary, ART significantly augments LLM capabilities, enabling them to undertake intricate reasoning tasks and interact with external tools to assist computations beyond their inherent capabilities. It holds particular utility within LangChain, where it can be utilized to create advanced AI agents capable of reasoning, memory retention, and interaction with humans and external tools. In short, ART builds its inputs and inference with an LLM drawing on a task library and a tool library; its defining feature is that the LLM itself generates the intermediate reasoning steps.
Tree of Thoughts (ToT)
The Tree of Thoughts (ToT) framework, utilized in BabyAGI’s Task Creation Agent, is crafted to augment the problem-solving capabilities of Large Language Models (LLMs) like GPT-4.
Let’s delve into the key components and functionalities of the ToT framework:
- Tree Structure with Inference Paths: ToT leverages a tree structure, permitting multiple inference paths to discern the next step in a probing manner. It also facilitates algorithms like depth-first and breadth-first search due to its tree structure.
- Read-Ahead and Regression Capability: A distinctive feature of ToT is its ability to read ahead and, if needed, backtrack inference steps, along with the option to select global inference steps in all directions.
- Maintaining a Thought Tree: The framework sustains a tree where each thought, representing a coherent language sequence, acts as an intermediary step towards problem resolution. This allows the language model to self-assess the progression of intermediate thoughts towards problem-solving through intentional reasoning.
- Systematic Thought Exploration: The model’s capacity to generate and evaluate thoughts is amalgamated with search algorithms, thereby permitting a methodical exploration of thoughts with lookahead and backtracking capabilities.
Applying ToT to a specific task, such as the “Solve the Game of 24” task, the model would formulate a response based on this tree of thoughts. For instance, the thoughts in the Game of 24 task could be broken down into 3 steps, each involving an intermediate equation. At each juncture, optimal candidates are retained and the model assesses each thought candidate against reaching 24, categorized as “sure/maybe/impossible”. This method champions accurate partial solutions that can be adjudicated within a few lookahead trials and dispenses with unattainable partial solutions.
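The same search scheme in miniature, as a breadth-first sketch; `propose_thoughts` and `evaluate_thought` are hypothetical LLM calls, and the beam width and depth are illustrative:

```python
def propose_thoughts(state: str, k: int = 3) -> list[str]:
    """Hypothetical LLM call proposing k candidate next steps from the current partial solution."""
    raise NotImplementedError

def evaluate_thought(state: str) -> str:
    """Hypothetical LLM call that labels a partial solution as 'sure', 'maybe', or 'impossible'."""
    raise NotImplementedError

def tree_of_thoughts(problem: str, depth: int = 3, beam: int = 5) -> list[str]:
    frontier = [problem]
    for _ in range(depth):
        candidates = []
        for state in frontier:
            for thought in propose_thoughts(state):        # branch: expand each partial solution
                new_state = state + "\n" + thought
                verdict = evaluate_thought(new_state)       # self-evaluate the new partial solution
                if verdict != "impossible":                 # prune hopeless branches
                    candidates.append((verdict, new_state))
        # Keep the most promising states ('sure' before 'maybe'), up to the beam width.
        candidates.sort(key=lambda item: 0 if item[0] == "sure" else 1)
        frontier = [state for _, state in candidates[:beam]]
    return frontier  # surviving reasoning paths; the best one is taken as the solution
```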
The prime emphasis of the ToT technique is to facilitate the resolution of problems by encouraging the exploration of numerous reasoning paths and the self-evaluation of choices, enabling the model to foresee or backtrack as required to make global decisions.
In the context of BabyAGI, an autonomous AI agent, ToT is employed to generate and implement tasks based on specified objectives. Post-task, BabyAGI evaluates the results, amending its approach as needed, and formulates new tasks grounded in the outcomes of the previous execution and the overarching objective.
In summary, the Tree of Thoughts prompting technique emerges as a potent problem-solving tool for language models, enabling the systematic exploration and appraisal of varied reasoning paths. It finds application in systems like BabyAGI, where it is used to autonomously generate, execute, and evaluate tasks, aligning with stated objectives.
Algorithm of Thoughts (AoT)
The Algorithm of Thoughts (AoT) is both a framework and a prompting technique. It is an advanced method that enhances the Tree of Thoughts (ToT) by minimizing computational effort and time consumption. It achieves this by segmenting problems into sub-problems and deploying algorithms like depth-first search and breadth-first search effectively. It combines human cognition with algorithmic logic to guide the model through algorithmic reasoning pathways, allowing it to explore more ideas with fewer queries. This makes it a valuable tool for tasks that require complex reasoning, and a promising new method for training AI.
Key aspects of AoT include:
- Sub-problem Linking: AoT formulates a Chain-of-Thought by connecting solutions to sub-problems.
- Algorithmic Reasoning Pathways: It is constructed to guide models through algorithmic reasoning channels, permitting the exploration of numerous ideas with reduced queries, thereby being notably beneficial for tasks necessitating intricate reasoning.
AoT synthesizes human cognition with algorithmic logic:
- Human Cognition: Originating from human problem-solving methods, where multiple potential solutions are brainstormed, evaluated, and the most promising one is selected, AoT employs a parallel approach to train LLMs.
- Algorithmic Logic: Utilizing algorithms to evaluate the “thoughts” or initial steps generated by LLMs, AoT ensures that if the initial steps are valid, the LLM is more inclined to generate a correct solution.
For instance, if an LLM is attempting to solve a problem like deducing that all dogs have four legs given that all dogs are mammals and all mammals have four legs, AoT evaluates the initial steps:
- Step 1: All dogs are mammals.
- Step 2: All mammals have four legs.
AoT evaluates these steps, corroborating or refuting Step 1 with evidence and subsequently validating or invalidating Step 2.
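In practice, much of AoT boils down to a prompting pattern: a single prompt that walks the model through an algorithmic search trace, dead ends and backtracks included, and then asks it to continue in the same style. A sketch of such a prompt for the Game of 24, with the demonstration trace written by hand for illustration:

```python
# Hand-written demonstration showing a DFS-style search trace, including a dead end
# and a backtrack, so the model imitates algorithmic exploration within one prompt.
AOT_DEMO = """Use all four numbers and + - * / to reach 24.
Input: 4 4 6 8
Trying 4 + 4 = 8 (left: 8 6 8) -> 8 * 6 = 48 (left: 48 8) -> 48 - 8 = 40, not 24; backtrack.
Trying 6 - 4 = 2 (left: 2 4 8) -> 4 + 8 = 12 (left: 2 12) -> 2 * 12 = 24. Found it.
Answer: (6 - 4) * (4 + 8) = 24
"""

def aot_prompt(numbers: str) -> str:
    # The new problem is appended so the model continues the same search style in one pass.
    return f"{AOT_DEMO}\nInput: {numbers}\nTrying"

print(aot_prompt("2 5 8 11"))
```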
Moreover, AoT is engineered to be both efficient and resource-conservative, exploiting AI’s self-training capabilities and necessitating less data and computational power by learning from the LLM’s errors. This makes it a potentially groundbreaking AI training technique, capable of making AI more intelligent and human-like.
Furthermore, AoT could potentially revolutionize AI by amplifying idea exploration, bolstering reasoning capabilities, and making LLMs more efficient and human-like in their reasoning abilities. It opens avenues for AI to be more creative, adaptable to new information, and reliable by reducing mistakes.
In summary, AoT offers a potent tool that assists LLMs to mimic human thinking and solve problems with enhanced efficiency by combining human cognition and algorithmic logic. It not only guides models through algorithmic reasoning pathways, allowing the exploration of more ideas with fewer queries but also stands out as a valuable tool for complex reasoning tasks and a promising new AI training methodology.
Graph of Thoughts (GoT)
The Graph of Thoughts (GoT) is both a framework and a prompting technique. This approach stands out as a mechanism that elevates the precision of responses crafted by Large Language Models (LLMs) by structuring the information produced by an LLM into a graph format. In this graph:
- Vertices: Represent individual thoughts.
- Edges: Illustrate the connections or relations between thoughts.
By facilitating the LLM to amalgamate thoughts diversely, distill networks of thoughts to their essence, and augment thoughts through feedback loops, GoT mirrors the non-linear nature of human cognitive processes, thereby enabling a more authentic modeling of thought sequences.
Here’s a breakdown of the GoT approach:
- Thought Generation: The LLM generates related thoughts, each depicted as a vertex on the graph.
- Identifying Connections: The relationships (logical progressions, supporting evidence, or differing viewpoints) between thoughts are identified and represented as edges in the graph.
- Graph Exploration: The LLM navigates through the graph to formulate a solution, possibly traversing it in a particular order, ingeniously amalgamating thoughts, or utilizing feedback loops to enhance specific thoughts.
- Response Generation: Based on the explored solution within the graph, the LLM generates a response, which tends to be more accurate than those produced without employing GoT, as it encapsulates a thorough exploration of all pertinent thoughts and their interconnections.
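A tiny sketch of the graph bookkeeping behind this, using networkx for the graph and hypothetical LLM helpers for generating, merging, and scoring thoughts; the single merge step is a simplification of the fuller operations GoT supports:

```python
import networkx as nx

def generate_thoughts(problem: str, k: int = 3) -> list[str]:
    """Hypothetical LLM call returning k candidate thoughts for the problem."""
    raise NotImplementedError

def merge_thoughts(a: str, b: str) -> str:
    """Hypothetical LLM call that aggregates two thoughts into a stronger one."""
    raise NotImplementedError

def score_thought(thought: str) -> float:
    """Hypothetical LLM call that scores how promising a thought is."""
    raise NotImplementedError

def graph_of_thoughts(problem: str) -> str:
    graph = nx.DiGraph()
    for i, text in enumerate(generate_thoughts(problem)):
        graph.add_node(f"t{i}", text=text)                 # vertices are individual thoughts
    # Aggregate the two best thoughts into a new vertex; edges record how thoughts combine.
    best_two = sorted(graph.nodes, key=lambda n: score_thought(graph.nodes[n]["text"]))[-2:]
    merged = merge_thoughts(graph.nodes[best_two[0]]["text"], graph.nodes[best_two[1]]["text"])
    graph.add_node("merged", text=merged)
    graph.add_edge(best_two[0], "merged")
    graph.add_edge(best_two[1], "merged")
    # The best-scoring vertex in the explored graph becomes the final response.
    best = max(graph.nodes, key=lambda n: score_thought(graph.nodes[n]["text"]))
    return graph.nodes[best]["text"]
```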
In practical tests involving escalating challenges, like the 24-point game, solving high-degree polynomial equations, and deriving formulas for recursive sequences, GoT has demonstrated superior performance compared to GPT-4 and another advanced prompting method, Tree of Thought (ToT), with accuracy improvements of 89.7%, 86%, and 56% respectively in each task, and average accuracy boosts of 23%, 24%, and 15% against ToT.
In essence, GoT enhances the accuracy of LLM-generated responses by enabling them to model, explore, and enhance complex thought processes through a graphical representation, ensuring a comprehensive examination of all relevant thoughts and their interrelationships.
Metacognitive prompting
Metacognitive Prompting (MP) serves as a technique aimed at enhancing the metacognitive capabilities of Large Language Models (LLMs). This method is reported to outperform existing prompting methods in various scenarios. The MP method involves a specific sequence of steps, which are:
- Interpretation of Text: Analyze and comprehend the provided text.
- Judgment Formation: Make an initial assessment or judgment based on the interpreted text.
- Judgment Evaluation: Assess the initial judgment, scrutinizing its accuracy and relevance.
- Final Decision and Justification: Make a conclusive decision and provide a reasoned justification for it.
- Confidence Level Assessment: Evaluate and rate the level of confidence in the final decision and its justification.
This methodology thus enables LLMs to exhibit metacognitive behaviors, allowing them to assess and manage their cognitive processes strategically.
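Since MP is essentially a fixed sequence of sub-prompts, a sketch only has to spell the stages out; the wording of each stage below is illustrative, not the exact phrasing from the paper:

```python
MP_STAGES = [
    "1. Interpret the text: restate what it is saying in your own words.",
    "2. Form an initial judgment about the task.",
    "3. Critically evaluate that judgment: is it accurate and relevant?",
    "4. Give your final decision and justify it.",
    "5. State your confidence (low / medium / high) in the decision and justification.",
]

def metacognitive_prompt(text: str, task: str) -> str:
    stages = "\n".join(MP_STAGES)
    return f"Text: {text}\nTask: {task}\nWork through the following stages in order:\n{stages}"

print(metacognitive_prompt("That shot selection was awesome.", "Classify the sentiment."))
```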
LogiCoT (combination of metacognition and CoT)
Logical Chain-of-Thought (LogiCoT) is a methodology that employs Chain-of-Thought (CoT) to validate the inferential process, fortifying CoT through the incorporation of logic. Each inference step of the CoT is scrutinized in turn: steps are either verified, flagged as incorrect, or revised after verification. Unlike traditional CoT, where errors remain uncorrected, LogiCoT allows for the refinement and revision of mistakes.
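A minimal verify-and-revise loop capturing that idea, with both LLM calls hypothetical placeholders:

```python
def verify_step(step: str, context: str) -> bool:
    """Hypothetical LLM call asking whether this step logically follows from the context."""
    raise NotImplementedError

def revise_step(step: str, context: str) -> str:
    """Hypothetical LLM call that rewrites an incorrect step so that it follows logically."""
    raise NotImplementedError

def logicot(cot_steps: list[str]) -> list[str]:
    verified, context = [], ""
    for step in cot_steps:
        if not verify_step(step, context):    # an incorrect step is caught...
            step = revise_step(step, context) # ...and revised instead of being left in place
        verified.append(step)                 # verified (or revised) steps are kept
        context += step + "\n"
    return verified
```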
Conclusion
In summary, each of these prompting techniques has its unique strengths and can be used in different scenarios, depending on the task at hand. The choice of technique would depend on the specific requirements of the task, the capabilities of the model, and the resources available.
Choosing the most suitable prompting method hinges on the specific requirements of a given task. For tasks necessitating intricate problem-solving and logical reasoning, methodologies like AoT Prompting or ToT Prompting might be apt. If the task demands integration of various types of information, Multimodal CoT Prompting may prove beneficial. For applications that require interaction with external tools or environments, ART Prompting or ReAct Prompting could be potent. Ultimately, the selection should be intricately tied to the unique demands and constraints of the task at hand.
I’m Joe, and my ambition is to lead the way to industry 5.0 performance. I’m always interested in new opportunities, so don’t hesitate to contact me on my LinkedIn.
