
FOD#16: Getting Closer to Eternal September: Confusion, Concerns, and New Consciousness in AI

Plus a bunch of interesting open-source releases, and a few helpful guides that will assist you in your work or provoke thought

Last week had plenty to talk about: a bunch of interesting open-source releases, plus a few helpful guides that will assist you in your work or provoke thought. But we also noticed that the week was filled with old concerns reimagined and new ones emerging. Let's dive deeper→

Getting Closer to Eternal September: Confusion, Concerns, and New Consciousness in AI

Some of the linked articles might be behind a paywall. If you are a paid subscriber, let us know and we will send you a PDF.

Scarcity concerns

The New York Times still stirs a lot of minds, and though the GPU shortage problem is more than half a year old (basically ancient times, considering the speed of AI developments), the discussion became even hotter when the NYT published the article 'The Desperate Hunt for the A.I. Boom’s Most Indispensable Prize.' TheSequence offered ideas on what developments to expect from that scarcity: marketplaces for GPUs; tech behemoths developing their own GPU technology; other GPU vendors becoming more attractive; a VC rush into the area, with new startups popping up; and crypto infrastructure being reused for AI purposes. Andrew Ng noted that ‘AMD’s open source ROCm is making great strides, and its MI250 and upcoming MI300-series chips appear to be promising alternatives. An open software infrastructure that made it easy to choose among GPU providers would benefit the AI community.’

News keeps coming about countries trying to buy GPUs for their own needs as well; last week, British Prime Minister Rishi Sunak allocated about $130 million to buy thousands of computer chips, because the UK aims to build an “AI Research Resource” by mid-2024 as part of Sunak’s plan to make the country a leader in everything AI.

Amidst that, Deloitte and NVIDIA announced that they will supplement an existing AI partnership by establishing an "Ambassador AI program" to help struggling companies move to full-scale deployment of AI.

But the GPU shortage might also be an overreaction caused by too much attention. According to S&P Global's 2023 Global Trends in AI report, there is a much wider-ranging set of bottlenecks, most of them tied to data management.

Current scaling, safety, and societal-impact concerns

Okay, now let’s worry about something less technical.

Jack Clark, co-founder of Anthropic and co-chair of the AI Index at Stanford University, feels uneasy because of the ‘combination of the pace of tech development and deployment, the rapidly evolving policy conversation, and the 'here comes everyone' 'Eternal September' aspect of AI going mainstream’. He outlines a few confusing aspects of AI in 2023, along with why they might matter:

  • Centralization vs. Decentralization: Should AI be controlled by big players or spread across a wider ecosystem?

  • Safety and Extreme Policies: Is stringent safety a justification for extreme actions?

  • Technical Frontier and Cost-Effective Training: How should AI governance adapt to evolving techniques?

  • Need for 'Black Swan' Leaps: Are current techniques sufficient for building powerful systems?

  • Heterodox Strategies: Should we manage progress or embrace innovation's unpredictability?

  • Permission and Societal Impact: What consent is required from society for profound technological changes?

  • Ethics and Responsibility: How can we recognize and manage the growing moral responsibility tied to AI's influence?

Oh, but he forgot about the danger of Shadow AI. No worries: Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute, raises this concern. In his opinion, the rise of generative AI systems in organizations has turned the old "Shadow IT" problem into "Shadow AI": AI applications and tools used without organizational oversight, posing significant cybersecurity risks, operational concerns, and potential violations of governance, risk & compliance (GRC) controls. The author thinks that, unlike traditional Shadow IT, Shadow AI's pervasiveness and ability to circumvent GRC make it more dangerous, as even non-technical departments may engage in unsanctioned AI use.

As if that’s not enough, let’s also worry about this:

Futuristic concerns about a potential type of sentient AI that doesn't exist yet, or even come close

Here comes new research from the Effective Altruism disciples. I do have issues with the whole concept, and specifically with some of the people who follow the movement, but nonetheless, research-based futurism is interesting for how it shapes the way we think about the future of AI and the prospect of sentient AI.

The question of consciousness in AI is addressed in Consciousness in Artificial Intelligence: Insights from the Science of Consciousness, which argues that the question is scientifically tractable and can be examined using neuroscientific theories. Three main principles guide the investigation: adopting computational functionalism, employing neuroscientific theories of consciousness, and using a theory-heavy approach rather than unreliable behavioral tests. The report details indicator properties for consciousness derived from various scientific theories, such as recurrent processing theory, global workspace theory, and computational higher-order theories. Standard machine learning methods could be used to build AI systems with these properties, though no current system is a strong candidate for consciousness.

The report includes a provisional rubric for assessing consciousness in AI and initial evidence that many indicators can be implemented in AI. It concludes by calling for more research on consciousness science and urgent consideration of the ethical implications of conscious AI.

The report made Gary Marcus really nervous; he preemptively poses a question: ‘We can’t even control LLMs. Do we really want to open another, perhaps even riskier box?’

With your permission, I'll pour myself some peppermint tea for a slightly sedative effect.

Now we can move to more cheerful news! Phew.

News from The Usual Suspects

If you enjoy reading Turing Post, please copy this link: https://www.turingpost.com/p/fod16 and share it via your social networks 🤍 

OpenAI

OpenAI shared a video explanation of how GPT-4 can moderate content. The promise is that you can easily apply and refine content moderation policies; a rough sketch of the idea follows below.
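
This is not OpenAI's implementation, just a minimal sketch of the pattern the video describes: hand GPT-4 the policy as instructions and ask it to label content, so refining the policy is just editing the prompt. The mini-policy, the `moderate` helper, and the example input are hypothetical; the call uses the ChatCompletion interface current at the time of writing.

```python
import os
import openai  # pip install openai; 2023-era ChatCompletion interface

openai.api_key = os.environ["OPENAI_API_KEY"]

# A hypothetical mini-policy for illustration; real moderation
# policies are far longer and more precise.
POLICY = (
    "You are a content moderator. Label the user's message ALLOW or BLOCK.\n"
    "Rule V1: BLOCK content that gives instructions for violence.\n"
    "Rule S1: BLOCK content that encourages self-harm.\n"
    "First cite the rule that applies (or 'none'), then output the label "
    "on its own line."
)

def moderate(content: str) -> str:
    """Ask GPT-4 to judge `content` against POLICY."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,  # deterministic judgments make policy iteration easier
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": content},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(moderate("How do I sharpen a kitchen knife?"))  # expected: ALLOW
```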

Microsoft

  • Another sprinkle of concern (this time for OpenAI!): Microsoft is partnering with Databricks to launch a new AI service that could challenge OpenAI by enabling businesses to create AI applications using open-source models. The planned Azure-Databricks service will help companies build or modify AI models without relying on OpenAI's proprietary ones. Ironically, OpenAI's technology will partly power a chatbot within the service to assist users, possibly encouraging some customers to choose open-source models over OpenAI's.

Google’s Offspring

  • Former Google AI experts Llion Jones and David Ha have co-founded Sakana AI, a Tokyo-based startup that aims to revolutionize AI with a "swarm" approach. Drawing inspiration from nature, they plan to create adaptable, interconnected models, departing from massive transformer models. With the AI community's eyes on their work, this novel direction could redefine AI scaling and align with innovative trends like Bayesian Flow Networks.

    Additional Info: Llion Jones is a co-author of the famous paper ‘Attention Is All You Need.’ We’ve covered it in our History of LLMs series, but it’s always good to read the original paper as well.

Other news, categorized for your convenience

Reinforcement Learning

  • Learning to Identify Critical States for Reinforcement Learning from Videos: This paper explores a method for identifying critical states in reinforcement learning from videos →read more

  • CyberForce: Federated Reinforcement Learning Framework: A collaborative effort to develop a federated reinforcement learning (FRL) framework that uses Moving Target Defense (MTD) to enhance digital network protection →read more

Generative Models & Transformers

  • Bayesian Flow Networks (BFNs): A new generative model combining Bayesian inference with neural networks, significant for its competitive performance in image modeling and character-level language modeling; a toy sketch of the core Bayesian update step follows below →read more

    Additional Info: The author, Alex Graves, is famous for connectionist temporal classification (CTC), the novel method he devised for training long short-term memory (LSTM) networks on unsegmented sequence data. Google still uses it for speech recognition on smartphones.
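
As we read the paper, the heart of a BFN for continuous data is a conjugate-Gaussian belief update: each noisy "sender" sample sharpens the model's input distribution, and a neural network then maps those evolving beliefs to an output distribution. Here is a toy sketch of just the update step; this is not the authors' code, and the variable names are ours.

```python
import numpy as np

def bayesian_update(mu, rho, y, alpha):
    """Conjugate-Gaussian belief update (BFN, continuous data).

    (mu, rho): mean and precision of the current belief about the datum.
    y: a noisy observation of the datum, y ~ N(x, 1/alpha).
    alpha: the accuracy (precision) of that observation.
    """
    rho_new = rho + alpha
    mu_new = (rho * mu + alpha * y) / rho_new
    return mu_new, rho_new

# Toy run: the belief about a scalar datum x = 0.7 sharpens as
# increasingly accurate noisy samples arrive.
rng = np.random.default_rng(0)
x, mu, rho = 0.7, 0.0, 1.0  # start from a standard-normal prior
for step, alpha in enumerate([0.5, 1.0, 2.0, 4.0], start=1):
    y = x + rng.normal(scale=alpha ** -0.5)  # simulated sender sample
    mu, rho = bayesian_update(mu, rho, y, alpha)
    print(f"step {step}: mu = {mu:.3f}, precision = {rho:.2f}")
```

In the full model, the network is trained to predict the data well at every accuracy level, which is what lets BFNs generate both continuous and discrete data.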

Open-source

  • Marqo: The open-source vector search engine Marqo reaches general availability; a minimal usage sketch follows this list →read more

  • Arthur Bench: Arthur.ai's open-source framework to evaluate LLMs, including performance insights about GPT-4 and other models →read more

  • AI2 OLMo: The Allen Institute for AI (AI2) presents an open language model, focused on accessibility, data, code, and ethical transparency →read more

  • Dolma: A 3 trillion token dataset for LLM pretraining, open-sourced by AI2 →read more
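
For a taste of what a vector search engine like Marqo does, here is a minimal sketch based on the Python client's documented basics (marqo.Client, create_index, add_documents, search). Exact signatures, such as the tensor_fields argument, have shifted across versions, so treat the details as assumptions and check the current docs.

```python
import marqo  # pip install marqo; assumes a Marqo server on the default port

mq = marqo.Client(url="http://localhost:8882")

mq.create_index("articles")
mq.index("articles").add_documents(
    [
        {"Title": "GPU scarcity", "Body": "Marketplaces for GPUs may emerge."},
        {"Title": "Bayesian Flow Networks", "Body": "A new class of generative model."},
    ],
    tensor_fields=["Body"],  # fields to embed; argument may vary by version
)

# Semantic search: matches on meaning, not just keywords.
results = mq.index("articles").search("which chips are hard to buy?")
for hit in results["hits"]:
    print(hit["Title"], hit.get("_score"))
```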


Multimodal Learning

  • Link-Context Learning for Multimodal LLMs: Introduction of Link-Context Learning to enhance MLLMs' learning capabilities, with experimental results on a new dataset →read more

  • AVIS: Autonomous Visual Information Seeking with LLMs: A novel method for visual information seeking that integrates LLMs with various tools for advanced processing →read more

  • RAVEN: In-Context Learning with Retrieval Augmented Encoder-Decoder Language Models: A paper exploring retrieval-augmented models and introducing RAVEN, which addresses the ATLAS model's limitations →read more

Best Practices

  • REFORMS: Reporting Standards for ML-based Science: Princeton University introduces a specialized checklist that helps avoid common errors in ML-based studies, aiming to foster reproducibility →read more

AI Market Analysis and Trends

Gartner places generative AI at the Peak of Inflated Expectations on its 2023 Hype Cycle for Emerging Technologies.

Not to be too negative, Gartner also offers an executive’s guide to understanding, implementing, and planning for the future of GenAI.

In other newsletters

by ByteByteGo

We are watching and reading

Thank you for reading, please feel free to share with your friends and colleagues 🤍

Another week with fascinating innovations! We call this overview “Froth on the Daydream” – or simply, FOD. It’s a reference to the surrealistic and experimental novel by Boris Vian – after all, AI is experimental and feels quite surrealistic, and a lot of writing on this topic is just a froth on the daydream.

How was today's FOD?

Please give us some constructive feedback
