For about two months, we dedicated Fridays to discussing AGI in our posts. We explored different approaches, methods, and systems that could potentially help achieve human-level reasoning and enhance the overall intelligence of AI systems. Even now, nobody knows exactly which technology will be the key to AGI, and it’s interesting to analyze past and ongoing research to find what could work for more advanced AI. We’ll continue to investigate this area, but for now we would like to summarize what we have already explored.

Here is our collection of 8+ research directions that could (maybe!) help achieve AGI:

  1. Three milestones from Google DeepMind:

    • AlphaGo, built to master the game of Go, combines two neural networks with reinforcement learning to enhance AI reasoning and creativity. → Read more

    • AlphaFold, a tool for accurate protein interaction predictions, uses attention mechanisms to capture relationships within amino acid sequences and graph neural networks to represent proteins as networks. → Read more

    • AlphaProof and AlphaGeometry 2 were created for advanced mathematical reasoning. Together these systems reached silver-medal standard at the International Mathematical Olympiad thanks to the following approaches:

      • AlphaGeometry merges a neural language model for pattern recognition with a symbolic deduction engine for decision-making.

      • AlphaProof applies AlphaGo's reinforcement learning algorithm to math reasoning and translates natural-language problems into Lean, generating new formal math problems.

  2. Neuro-symbolic AI systems are hybrid architectures that combine neural networks, which excel at pattern recognition and intuitive ideas, with symbolic reasoning methods that emphasize logic, rules, and structured knowledge. They mimic the human use of both intuition and logic in decision-making (see the first sketch after this list). → Read more

  3. The LLM-empowered Autonomous Agent (LAA) demonstrates the synergy of symbolic and connectionist AI. It uses a neuro-vector-symbolic approach, combining neural networks, vector representations, and symbolic reasoning. The vector component of the system helps handle large datasets and perform tasks like in-context learning. LAA is an example of a more autonomous and intelligent system that can handle a wide range of tasks. → Read more

  4. Machine Psychology introduces four paradigms for understanding LLM behavior: heuristics and biases, social interactions, psychology of language, and learning. Researchers aim to use experiments inspired by human psychology to explore LLMs' deeper workings. → Read more

  5. Self-Play Mutual Reasoning (rStar) involves two models discussing a problem to find a solution. The first model generates reasoning paths using Monte Carlo Tree Search (MCTS), and the second model verifies them. MCTS employs a rich set of human-like reasoning actions to produce multiple high-quality candidate answers (see the second sketch after this list). → Read more

  6. STRATEGIST is a method designed to help LLMs learn and improve strategic skills, especially in multi-agent games where they need to "outthink" other players. It uses Monte Carlo Tree Search (MCTS) to improve decisions, teaches the model to "think" at two levels (high-level strategy and low-level actions), and generates in-game dialogue. → Read more

  7. Here are 4 studies that examine how good LLMs are at generating novel research ideas:

    • Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers β†’ Read more

    • SCIMON: Scientific Inspiration Machines Optimized for Novelty → Read more

    • Can Large Language Models Unlock Novel Scientific Research Ideas? β†’ Read more

    • The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery β†’ Read more

  8. In a survey on attention heads, researchers compare how LLMs function with human brain processes through four steps: Knowledge Recalling, In-Context Identification, Latent Reasoning, and Expression Preparation. They also propose applying psychological concepts to AI to enhance human-like thinking and behavior in machines. → Read more
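
To make the neuro-symbolic idea from item 2 a bit more concrete, here is a minimal toy sketch. It is our own illustration, not code from any of the papers above: a stand-in "neural" perception step turns soft confidence scores into discrete symbols, and a small forward-chaining rule engine reasons over them. All names, thresholds, and rules are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    # If all premises are present in the knowledge base, add the conclusion.
    premises: frozenset
    conclusion: str

def neural_perception(scores: dict[str, float], threshold: float = 0.8) -> set[str]:
    # Stand-in for a neural network: turn soft scores into discrete symbols.
    return {label for label, score in scores.items() if score >= threshold}

def symbolic_inference(facts: set[str], rules: list[Rule]) -> set[str]:
    # Forward chaining: repeatedly apply rules until no new facts appear.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.premises <= derived and rule.conclusion not in derived:
                derived.add(rule.conclusion)
                changed = True
    return derived

if __name__ == "__main__":
    scores = {"has_feathers": 0.93, "can_fly": 0.88, "has_wheels": 0.10}
    rules = [Rule(frozenset({"has_feathers", "can_fly"}), "is_bird")]
    facts = neural_perception(scores)
    print(symbolic_inference(facts, rules))  # contains 'is_bird' (set order varies)
```

The point of the split is the same as in the item above: the "neural" side handles fuzzy pattern recognition, while the symbolic side applies explicit, inspectable rules.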
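And here is a second sketch, for the mutual-reasoning loop behind rStar (item 5). The generator and discriminator below are deliberately simplistic stand-ins for the paper's LLM-plus-MCTS components, assumed purely for illustration: candidate answers are kept only when both sides agree, then the most frequent surviving answer wins.

```python
import random
from collections import Counter

def generator_propose(question: str, n_paths: int = 8) -> list[str]:
    # Stand-in for the generator's MCTS rollouts over human-like reasoning
    # actions; a real system would expand a search tree with an LLM policy.
    # Here we just emit candidate answers, mostly correct with some noise.
    return [random.choice(["42", "42", "42", "41"]) for _ in range(n_paths)]

def discriminator_verify(question: str, candidate: str) -> bool:
    # Stand-in for the second model re-deriving the answer from a partial
    # reasoning trace; modeled here as an independent noisy check.
    independent_answer = random.choice(["42", "42", "42", "40"])
    return candidate == independent_answer

def mutual_reasoning(question: str) -> str | None:
    # Keep only answers both models agree on, then majority-vote among them.
    votes = Counter()
    for candidate in generator_propose(question):
        if discriminator_verify(question, candidate):
            votes[candidate] += 1
    if not votes:
        return None  # no mutually consistent answer found
    answer, _ = votes.most_common(1)[0]
    return answer

if __name__ == "__main__":
    random.seed(0)
    print(mutual_reasoning("What is 6 * 7?"))  # most likely prints "42"
```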
