Weekly AI News — July 8th 2024

Apple may join OpenAI’s board, Adept joins Amazon, and Grok 2 in August

Fabio Chiusano
NLPlanet
3 min read · Jul 8, 2024


Solarpunk village - image by DALL·E 3

Here are your weekly articles, guides, and news about NLP and AI chosen for you by NLPlanet!

😎 News From The Web

  • Apple’s Phil Schiller to reportedly join OpenAI’s board. Phil Schiller, Apple’s App Store chief, may be appointed as an observer to OpenAI’s nonprofit board to deepen his understanding of AI as Apple integrates ChatGPT into its operating systems. He would attend meetings without voting rights.
  • Adept joins Amazon. The team from Adept, including its co-founders, is integrating into Amazon’s AGI division, aiming to advance general intelligence efforts. Amazon has licensed Adept’s advanced multimodal agent technology and acquired select datasets.
  • Elon Musk: Grok 2 AI Arrives in August. Elon Musk has announced Grok 2, a new AI model expected in August 2024 with improved efficiency. xAI anticipates a further upgrade to Grok 3 by the end of the year, utilizing cutting-edge Nvidia GPU technology.
  • YouTube now lets you request removal of AI-generated content that simulates your face or voice. YouTube’s revised privacy policy now enables users to request the removal of deepfake content replicating their likeness if it raises privacy issues, with certain considerations for content context and public interest.

📚 Guides From The Web

  • Why are most LLMs decoder-only? Large language models often use a decoder-only architecture because it is efficient for generative pre-training and cost-effective, exhibiting strong zero-shot generalization. Although encoder-decoder models can excel in multitask finetuning, extensive training diminishes the performance difference, favoring decoder-only models for various applications.
  • AI scaling myths. The article challenges the belief that simply scaling up language models will result in artificial general intelligence, highlighting issues such as overhyped scaling laws, misconceptions about emergent abilities, and practical constraints like data scarcity and rising costs.
  • What is a “cognitive architecture”? The article discusses the role of cognitive architecture in developing applications powered by LLMs, delineating the spectrum of autonomy from basic hardcoded scripts to sophisticated, self-governing agents, and highlights its importance in deploying LLM-enabled decision-making systems.
  • RAG chatbot using llama3. The article walks through building a Retrieval-Augmented Generation chatbot powered by the llama3 language model: installing the required libraries, embedding a dataset of external knowledge, and building a faiss index for efficient information retrieval.
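
The retrieval step described in the last guide above can be sketched in a few lines. The document texts and embedding vectors below are toy values chosen for illustration; a real pipeline would embed text with an embedding model and use a faiss index (e.g. a flat L2 index) instead of the small NumPy stand-in shown here:

```python
import numpy as np

# Toy document store: texts plus their (illustrative) embedding vectors.
# In a real RAG pipeline these vectors come from an embedding model.
doc_texts = [
    "llama3 is a language model",
    "faiss enables fast similarity search",
    "RAG augments generation with retrieval",
]
doc_vecs = np.array(
    [[0.9, 0.1, 0.0],
     [0.0, 0.8, 0.2],
     [0.1, 0.2, 0.9]],
    dtype="float32",
)

def retrieve(query_vec, k=2):
    """Return the k nearest documents by L2 distance,
    mirroring what a flat L2 faiss index does at scale."""
    dists = np.linalg.norm(doc_vecs - query_vec, axis=1)
    nearest = np.argsort(dists)[:k]
    return [doc_texts[i] for i in nearest]

# A query embedding close to the "retrieval" document.
hits = retrieve(np.array([0.0, 0.3, 0.8], dtype="float32"))
```

The retrieved texts are then prepended to the user's question in the prompt sent to the language model, grounding its answer in the fetched passages.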

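The decoder-only architecture discussed in the first guide above comes down to causal attention: during generative pre-training, each token may attend only to itself and earlier positions. A minimal sketch of the mask that enforces this (a NumPy illustration, not any specific library's API):

```python
import numpy as np

def causal_mask(seq_len):
    """Lower-triangular boolean mask: entry [i, j] is True
    iff position i may attend to position j (j <= i)."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

mask = causal_mask(4)
# Row i of the mask gates attention for the token at position i,
# so every prefix of the sequence is a valid training example.
```

This is what makes decoder-only pre-training efficient: one forward pass yields a next-token prediction loss at every position.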
🔬 Interesting Papers and Repositories

  • Meta 3D Gen. Meta 3D Gen (3DGen) is an AI-driven pipeline that quickly generates detailed 3D models and textures from text descriptions, with capabilities for physically-based rendering and retexturing of assets.
  • GraphRAG: New tool for complex data discovery now on GitHub. Microsoft has released GraphRAG, an advanced retrieval-augmented generation tool on GitHub that outperforms traditional RAG systems. It employs a large language model to construct hierarchical knowledge graphs from texts, enhancing data comprehensiveness and diversity by emphasizing entity relationships.
  • One year of GPT4All. Nomic unveiled GPT4All 3.0, a major update with a new UI focused on privacy and accessibility. This version supports a wide range of LLMs across various operating systems and marks the project’s one-year milestone, with notable community involvement.
  • Agentless: Demystifying LLM-based Software Engineering Agents. The article discusses an agentless approach for software development that can surpass traditional agent-based systems in cost-effectiveness and performance, as evidenced by the SWE-bench Lite benchmark, through a simple two-phase localization and repair process.
  • Summary of a Haystack: A Challenge to Long-Context LLMs and RAG Systems. The “Summary of a Haystack” (SummHay) task is established to test long-context language models and retrieval-augmented generation systems by evaluating their capacity to summarize and cite from documents with repeated specific insights.
  • landing-ai/vision-agent: Vision agent. Vision Agent is a tool that automates the generation of code for computer vision tasks based on natural language descriptions.

Thank you for reading! If you want to learn more about NLP, remember to follow NLPlanet. You can find us on LinkedIn, Twitter, Medium, and our Discord server!


Fabio Chiusano
NLPlanet

Freelance data scientist — Top Medium writer in Artificial Intelligence