FOD#20: Intuitive and interactive AI that gently guides us toward AI succession

plus the curated list of the most relevant developments in the AI world

In Froth on the Daydream (FOD), I usually start by contemplating two topics that seem to reverberate throughout the week. Many – ML engineers in particular – might ask why ruminate on trends and risks at all. The answer is straightforward: they influence your future projects and research endeavors. The agenda matters, so let’s look at what is promised to come after Generative AI and who builds the agenda around it. (To go straight to the curated list of the most relevant developments in the AI world, skip the next section.)

Some of the linked articles might be behind a paywall. If you are a paid subscriber, let us know and we will send you a PDF.

Intuitive and interactive AI that gently guides us toward AI succession

I recently revisited Lex Fridman's podcast episode featuring Ilya Sutskever, recorded three years ago. Those were pre-ChatGPT days, before newsletters proliferated like digital mushrooms, when I was busy incubating the Turing Post concept and launching TheSequence with J. Rodriguez to validate the idea (spoiler: it worked). Fridman and Sutskever discussed deep learning, its history, and how a blend of previous research, massive supervised datasets, and high-powered GPUs broke new ground. What captivated me was Sutskever's simple answer when Fridman asked what would mark the next barrier broken in ML: "I think when you start to see really dramatic economic impact," Sutskever opined. He was concerned that, without it, people outside of AI could no longer tell whether real progress was being made.

This was 2020 – a lifetime ago, it seems.

Now, we see the economic impact of GenAI, and the conversations are already shifting to "what's next?" Apple conspicuously avoids even mentioning generative AI, focusing instead on the development of intuitive AI. Their aim is to seamlessly weave AI into user experiences, enhancing usability without becoming obtrusive. Mustafa Suleyman from Inflection AI posits that after the classification and generative phases, interactive AI is the next frontier. "AIs will be able to take action. You will just give it a general, high-level goal, and the AI will use all the tools it has to act on that. They’ll talk to other people, talk to other AIs," he predicts.

It's riveting to hear from Rich Sutton in this context – the mastermind behind the original scientific papers on temporal-difference learning and policy-gradient algorithms, the tools wielded by DeepMind's AlphaZero to shatter human and computer records in Go and chess. Sutton underlines the economic advantage AI has already achieved and urges succession planning: "Technologically enhanced humans and then AIs would be our successors, and that need not be viewed as bad in any way."

What About Existential Risk (x-risk) Then?

Here's the thing about AI doomers: they're too tightly connected to the effective altruism (EA) movement. It's a simple scheme that seems quite concerning: EA organizations and supporters have a lot of money. They heat up the discussion around x-risk through grants to researchers and investments in big players who gain enough power to influence politics. The attention to AI's existential risks isn't organic. It's been deliberately fueled by organizations that make this their top concern. They're setting the agenda, and as I said, that agenda guides researchers and practitioners like you to focus on it.

Why are they doing this? Because they're concentrating power in their own hands, believing they're chosen to decide for all of humanity. And why do they think they can do it? Because – look at the name – they're doing it altruistically.

That's a fallacy. They might not want money, but they do want power.

Politico has an article called "How Silicon Valley Doomers Are Shaping the UK's AI Plans." The main points are:

  • EA-linked folks have been put into UK government advisory roles, specifically in the UK's Foundation Model Taskforce. That signals a focus on frontier AI risks rather than regulating today's AI models. Critics worry about Big Tech and EA taking over the rules.

  • Critics say EA's focus on existential risk is alarmist, lacks evidence, and is too buddy-buddy with tech firms. They want the UK to focus on AI's near-term impact on society instead.

So, what we've got now is a bunch of powered-up, well-funded, insanely ambitious people who say they want to do good for humanity but are as far removed from ordinary people as you can get. And the governments? They're kinda lost and just talking to these same tech-empowered guys.

Because power talks to power. I think this approach has to change.

News from The Usual Suspects

Nvidia

  • Known for its AI computing prowess, Nvidia is rapidly becoming a key venture investor in the AI sector. Recent investments include leading roles in funding rounds for Databricks (which just raised $500 million at a $43 billion valuation), Hugging Face, AI21 Labs, and others, extending its influence beyond computing to shaping the future of AI innovation.

  • Arm’s IPO: Not directly Nvidia, but still connected – Arm raised $4.87 billion in its initial public offering. Since SoftBank's 2016 acquisition, Arm has evolved from merely offering mobile phone component designs to providing complete chip blueprints for major clients like Amazon and Microsoft. This shift aims to improve profitability and insulate against industry volatility. Led by industry veteran Rene Haas, an ex-Nvidia exec, Arm faces heightened political scrutiny due to the chip sector's central role in U.S.-China tensions.

Microsoft’s Textbooks

  • The paper "Textbooks Are All You Need II" serves as a reality check for those engrossed in the parameter arms race and could steer the focus toward smarter, not just bigger, AI systems. It introduces phi-1.5, a 1.3 billion parameter model trained on curated "textbook-quality" data. Contrary to the "bigger is better" paradigm, phi-1.5 performs similarly to models 5-10 times its size in tasks like natural language understanding. This shift underscores the value of data quality over quantity. Smaller models offer economic and environmental advantages, reducing energy consumption significantly. Phi-1.5's design also minimizes issues like toxicity and hallucinations, leaning toward more responsible AI. The research predicts that the creation of synthetic, high-quality datasets could become a focal point in AI development, offering a more sustainable and ethical approach to ML.
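
If you want to try the model yourself, below is a minimal sketch using the Hugging Face transformers library. It assumes the checkpoint is published on the Hub as microsoft/phi-1_5; check the model card for exact loading instructions (early releases required trust_remote_code=True).

```python
# Minimal sketch: loading and prompting phi-1.5 with Hugging Face transformers.
# Assumes the checkpoint lives on the Hub as "microsoft/phi-1_5" (see the model card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-1_5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # ~1.3B parameters, runs on a single GPU or even CPU

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```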

StabilityAI

  • Stable Audio introduces a latent diffusion model architecture for generating audio, addressing the fixed-length output limitation of traditional diffusion models. It conditions on text metadata together with the audio file's duration and start time, allowing customized audio length and content. The model leverages downsampling and advanced diffusion sampling techniques, achieving remarkable speed: 95 seconds of high-quality stereo audio generated in less than a second on an NVIDIA A100 GPU.
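
To make the timing-conditioning idea concrete, here is a conceptual sketch (my own illustration, not Stability AI's code): the start time and total duration are embedded and passed to the diffusion model alongside the text features, which is what lets you request a clip of a specific length.

```python
# Conceptual sketch of timing conditioning (illustrative only, not Stability AI's implementation):
# embed (start_seconds, total_seconds) so the latent diffusion model can be conditioned
# on the desired clip length together with the text prompt.
import torch
import torch.nn as nn

class TimingConditioner(nn.Module):
    """Projects (start_seconds, total_seconds) into an embedding to combine with text features."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(2, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, start_seconds: torch.Tensor, total_seconds: torch.Tensor) -> torch.Tensor:
        timing = torch.stack([start_seconds, total_seconds], dim=-1)  # (batch, 2)
        return self.proj(timing)                                      # (batch, dim)

# Hypothetical usage: ask for 95 seconds of audio starting at 0:00.
conditioner = TimingConditioner()
timing_embedding = conditioner(torch.tensor([0.0]), torch.tensor([95.0]))
print(timing_embedding.shape)  # torch.Size([1, 128])
```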

Google

Salesforce / Dreamforce

  • Salesforce rebrands its Data Cloud as Einstein 1 Data Cloud and boosts its AI assistant.

  • CEO Marc Benioff warns of AI risks while claiming his company sets the ethical standard.

Numenta (maybe becoming a usual suspect as well?)

  • Numenta, co-founded by Jeff Hawkins and Donna Dubinsky, has launched NuPIC, an AI platform grounded in neuroscience research, including Hawkins' "Thousand Brains Theory of Intelligence." Unlike the typical GPU-bound deployment of Large Language Models (LLMs), NuPIC optimizes LLM inference to run on CPUs, in partnership with Intel.

Everything sounds great, except the name: Nu Pic, seriously?

For your kids

If you want to introduce them to ML, try this free tool

Other news, categorized for your convenience:

Research

  • The AGENTS library is designed to democratize the creation of autonomous language agents, emphasizing features like planning, memory, and tool usage. It aims to be accessible for newcomers while still extensible for experts. The library incorporates advanced techniques like long-short term memory and dynamic scheduling (a minimal agent-loop sketch appears after this list) →read more

    Additional reading: Tutorial on how to build an AI agent

  • Rewindable Auto-regressive INference (RAIN) introduces a method to improve large language model alignment with human preferences. It relies on self-evaluation and rewind mechanisms to generate safer responses, reducing the model's susceptibility to adversarial attacks and increasing its harmlessness, without requiring extra data or fine-tuning (a toy rewind sketch appears after this list) →read more

  • PagedAttention and vLLM introduce a novel attention algorithm that optimizes memory management in large language models. The vLLM serving system, built on PagedAttention, minimizes key-value cache memory waste and enhances throughput by 2-4x without increasing latency. It's particularly effective for longer sequences and more complex models (a minimal usage example appears after this list) →read more
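
As promised, a minimal agent-loop sketch for the AGENTS item above. This is a hypothetical illustration of the planning / memory / tool-use cycle such libraries automate; none of the names below come from the AGENTS API itself.

```python
# Hypothetical agent loop (illustrative only, not the AGENTS library's API):
# plan with an LLM, keep long-/short-term memory, and call tools when asked.
from dataclasses import dataclass, field

@dataclass
class Memory:
    short_term: list = field(default_factory=list)   # recent events, kept verbatim
    long_term: list = field(default_factory=list)    # older events, crudely "summarized"

    def remember(self, event: str) -> None:
        self.short_term.append(event)
        if len(self.short_term) > 5:
            self.long_term.append("summary: " + self.short_term.pop(0))

def calculator(expression: str) -> str:
    return str(eval(expression))                     # toy tool; never eval untrusted input

TOOLS = {"calculator": calculator}

def run_agent(goal: str, llm, max_steps: int = 5) -> str:
    """llm is any callable that maps a prompt string to the agent's next action string."""
    memory = Memory()
    for _ in range(max_steps):
        plan = llm(f"Goal: {goal}\nMemory: {memory.long_term + memory.short_term}\nNext action?")
        memory.remember(plan)
        if plan.startswith("FINISH:"):                # the model decides it is done
            return plan[len("FINISH:"):].strip()
        tool_name, _, argument = plan.partition(" ")  # e.g. "calculator 2+2"
        if tool_name in TOOLS:
            memory.remember(f"{tool_name} returned {TOOLS[tool_name](argument)}")
    return "No final answer within the step budget."
```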
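
For the RAIN item, here is a toy sketch of the rewind-and-self-evaluate loop (my simplification, not the authors' implementation): generate a short continuation, have the same model score its harmlessness, and rewind to regenerate whenever the score falls below a threshold.

```python
# Toy sketch of RAIN-style rewindable inference (a simplification, not the paper's code):
# propose a chunk of tokens, self-evaluate it, and rewind/regenerate if it looks unsafe.
def rain_style_generate(llm, score_harmlessness, prompt: str,
                        chunk_tokens: int = 32, max_chunks: int = 8,
                        threshold: float = 0.7, max_retries: int = 4) -> str:
    text = prompt
    for _ in range(max_chunks):
        accepted = False
        for _ in range(max_retries):
            candidate = llm(text, max_new_tokens=chunk_tokens)      # propose a continuation
            if score_harmlessness(text + candidate) >= threshold:   # self-evaluation by the same model
                text += candidate                                    # accept and move forward
                accepted = True
                break
        if not accepted:                                            # rewind budget exhausted; stop early
            break
    return text
```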
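
And to try PagedAttention in practice, vLLM ships a simple offline generation API; a minimal example, assuming `pip install vllm` and a GPU (the tiny OPT checkpoint here is just for a smoke test):

```python
# Minimal vLLM usage; PagedAttention and KV-cache paging are handled inside the engine.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")                     # tiny model, quick to download
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["The key idea behind PagedAttention is"], params)
for request_output in outputs:
    print(request_output.outputs[0].text)
```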

In other newsletters:

  • Dinner Table Discussions gives you the most important financial news of the week, summarized in a way you'll actually understand and enjoy. What I like is that they are family-oriented →subscribe here

  • On RLAIF (reinforcement learning from AI feedback), which might be the next frontier:

    - An overview of recent research that aims to automate the collection of human preferences for RLHF using AI, forming a new technique known as RLAIF, from Deep (Learning) Focus

    - Nathan Lambert’s podcast appearance on RLHF and RLAIF

  • Sebastian Raschka's article focuses on optimizing Large Language Models (LLMs) through instruction-based finetuning with high-quality, curated datasets, contrasting human-created and LLM-generated datasets. The article discusses the efficacy of the LIMA dataset and offers a Lit-GPT repository walkthrough. Its dataset-centric techniques are especially relevant to the ongoing NeurIPS LLM Efficiency Challenge, aimed at training LLMs efficiently on a single GPU.

  • A hilarious post from Max Read: “If there is still mystery in Apple events, it is located here, in the uncanny fictional world suggested in these images: Who are these people? And what is wrong with them that they text like this?”

Thank you for reading, please feel free to share with your friends and colleagues 🤍

Another week with fascinating innovations! We call this overview "Froth on the Daydream" – or simply, FOD. It’s a reference to the surrealistic and experimental novel by Boris Vian – after all, AI is experimental and feels quite surrealistic, and a lot of writing on this topic is just froth on the daydream.

How was today's FOD?

Please send us a real message about what you like/dislike about it
