FOD#5: Is AI bubble real? Rich addition to LLM fauna, AI extinction risk, and more regulation talks

+other discussions and helpful research papers

Froth on the Daydream (FOD) – our weekly summary of over 150 AI newsletters. We connect the dots and cut through the froth, bringing you a comprehensive picture of the ever-evolving AI landscape. Stay tuned for clarity amidst the surrealism and experimentation.

In today's edition, we observe new species in the LLM fauna, tackle AI extinction risks, climate change indifference, the AI regulations debate, bubble speculation, and Apple's pricey Ski Goggles. Lots of helpful links today!

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Center for AI Safety

I might sound like a language model myself, but it's the language here that gives me pause. How can pandemics and nuclear war qualify as extinction risks, yet climate change, which steadily nudges us toward extinction, does not? Hundreds of AI scientists and other notable figures have signed a statement that, while succinct, is shallow and raises numerous deeper questions. An essential question: do these scientists halt their own AI research and deployment? Unsurprisingly, they do not.

This situation mirrors Elon Musk advocating for a six-month pause on AI model development while simultaneously gaining FDA approval for human trials of his Neuralink brain implant and establishing his own AI research company.

Grasping AI's potential doom and devising strategies to mitigate its risks, especially those unknown and unpredictable, is a complex task. However, addressing present AI risks before AGI comes into play seems prudent. I appreciate how The Exponential View challenges the predictive ability of AI researchers concerning societal outcomes and notes the absence of certain academics from recent AI risk petitions. The publication demands substantial evidence for existential-risk claims and urges a focus on tangible issues instead of sensationalism: "If there's no evidence, then today's models and techniques don't pose an extinction risk within a reasonable timeframe. Let's invest energy in addressing real benefits and harms, rather than stirring public fear against beneficial technology for online engagement and potentially diverting the regulatory agenda away from matters of proven importance."

In her article, Jenna Burrell, director of research at Data & Society, highlights the need to shift our attention from false issues like AI's potential "sentience" to scrutinizing how AI further consolidates wealth and power.

A noteworthy risk? WIRED discusses how ChatGPT might be edging non-English languages out of the AI revolution.

LLM fauna

So why don’t AI people care about climate change, which is currently wreaking havoc on our flora and fauna? AI developers appear indifferent, perhaps because they're busy creating an entirely new AI ecosystem. Whether by design or coincidence, let's explore the new species: Clinical Camel, Gorilla, and Falcon.

A gorilla and a medical camel as a fine-tuned llama model, photo in nature, photorealistic mutants by Midjourney

  1. The Technology Innovation Institute's (TII) Falcon 40B is now available for commercial and research use without royalties, and it has snagged the top spot on Hugging Face's leaderboard (a must-check resource if you're unfamiliar with it). Also super useful: an up-to-date catalog of transformer models.

  2. Don’t tell gorillas, but they are just fine-tuned llamas. Researchers from UC Berkeley and Microsoft Research have introduced Gorilla, a fine-tuned LLaMA model tailored for writing API calls, a task where it outperforms GPT-4 (see the sketch after this list).

  3. Researchers from WangLab have released Clinical Camel, a 13B fine-tuned LLaMA that outperforms GPT-3.5 in the US Medical Licensing Examination.
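
To make the Gorilla idea concrete, here is a minimal sketch of prompting a model to emit a structured API call from a natural-language request. This is not Gorilla's actual code or prompt format; `generate()` and the `image_classifier` endpoint are hypothetical stand-ins.

```python
import json

# Hypothetical completion call; swap in whichever LLM client you use.
def generate(prompt: str) -> str:
    raise NotImplementedError

PROMPT_TEMPLATE = """You translate user requests into API calls.
Available API: image_classifier(model_name: str, image_path: str)
Respond with JSON only: {{"api": "...", "arguments": {{...}}}}

Request: {request}
"""

def request_to_api_call(request: str) -> dict:
    raw = generate(PROMPT_TEMPLATE.format(request=request))
    call = json.loads(raw)                    # the model is asked to reply in JSON
    assert call["api"] == "image_classifier"  # accept only the documented endpoint
    return call

# request_to_api_call("Classify bird.jpg with a vision transformer") might return:
# {"api": "image_classifier",
#  "arguments": {"model_name": "vit-base", "image_path": "bird.jpg"}}
```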

Just a couple more animals to mention! If you are curious about how to continue scaling LLMs when data is exhausted, dive into this article from Marketpost, which explores research on extending the Chinchilla Scaling Laws for repeated data.
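
As a reminder of what the original rule of thumb says, here is a back-of-the-envelope sketch assuming the commonly cited ~20 training tokens per parameter for compute-optimal training; the repeated-data work referenced above asks what to do once that budget of fresh tokens runs out. The numbers are illustrative only.

```python
def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Rough compute-optimal token budget for a model with n_params parameters."""
    return n_params * tokens_per_param

# A 70B-parameter model wants roughly 1.4 trillion training tokens,
# in line with the original Chinchilla training run.
print(f"{chinchilla_optimal_tokens(70e9) / 1e12:.1f}T tokens")
```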

While these sophisticated creatures might not yet be ready for the wild, Stanford's AlpacaFarm is stepping in to help bridge the gap. It provides a simulator that replicates the Reinforcement Learning from Human Feedback (RLHF) process rapidly (within 24 hours) and affordably (around $200), making RLHF research accessible to all. For more about AlpacaFarm and the future of LLMs, you can read an interview on TheSequence with Rohan Taori from Stanford.
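
For orientation, here is the rough shape of the loop a simulator like AlpacaFarm speeds up. This is an illustrative skeleton, not AlpacaFarm's actual API: `policy`, `reward_model`, and `ppo_update` are hypothetical stand-ins for the usual RLHF components, with simulated annotators replacing human preference labels.

```python
import random

def rlhf_loop(policy, reward_model, prompts, ppo_update,
              num_steps=1000, batch_size=64):
    """Skeleton of the policy-optimization stage of RLHF."""
    for _ in range(num_steps):
        batch = random.sample(prompts, batch_size)        # instructions to answer
        responses = [policy.generate(p) for p in batch]   # roll out the current policy
        rewards = [reward_model.score(p, r)               # reward model trained on
                   for p, r in zip(batch, responses)]     # (simulated) preference labels
        policy = ppo_update(policy, batch, responses, rewards)  # e.g. one PPO step
    return policy
```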

Alibaba, the Chinese e-commerce behemoth, bucked the zoological trend with its latest LLM: Tongyi Qianwen, which translates roughly to "Truth from a Thousand Questions". A bit closer to what LLMs actually do.

The surge in sharing and open-sourcing of LLMs and datasets has been remarkable. It initially seemed like a race to release models first, but now, we are seeing studies delving into understanding LLM training. These studies provide some answers, yet leave many questions unanswered, offering thrilling research directions.

For our paid subscribers: please let us know if any of the mentioned articles are behind a paywall, and we will send you a PDF.

Capping off a fruitful week for LLMs, check out the paper "Large Language Models as Tool Makers" (LATM) by Google DeepMind, Princeton University, and Stanford University. This closed-loop framework enables LLMs to create their own reusable tools to boost efficiency and enhance their problem-solving capabilities.
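
The gist, sketched below under loose assumptions: a stronger "tool maker" model writes a reusable Python function once from a few solved examples, and a cheaper "tool user" model only has to map each new instance to a call. `ask_strong_model` and `ask_cheap_model` are hypothetical LLM calls, not the paper's code.

```python
def make_tool(ask_strong_model, task_description: str, examples: list[str]) -> str:
    """One-time, expensive step: the strong model writes a reusable `solve()` function."""
    prompt = (f"Write a Python function `solve(...)` for this task:\n{task_description}\n"
              "Worked examples:\n" + "\n".join(examples))
    return ask_strong_model(prompt)               # returns Python source as a string

def use_tool(ask_cheap_model, tool_source: str, new_instance: str):
    """Repeated, cheap step: the small model only fills in the arguments."""
    call = ask_cheap_model("Given this tool:\n" + tool_source +
                           "\nWrite the single call `solve(...)` for:\n" + new_instance)
    namespace: dict = {}
    exec(tool_source, namespace)   # load the generated tool (sandbox this in practice)
    return eval(call, namespace)   # run it on the new instance
```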

And if you are, after all, worried about climate change, take a look at this article by Uncharted Territories, which dives into the question of which energy revolutions we will need in the future.

AI Regulations are still hot

As Bloomberg reports, the Biden administration is grappling with internal divisions regarding the regulation of AI tools. Discussions are currently underway at the US-EU Trade and Technology Council gathering in Sweden. While some White House and Commerce Department officials support the stringent measures proposed by the European Union (EU) to regulate AI products like ChatGPT and Dall-E, national security officials and others in the State Department argue that aggressive regulation would put the US at a competitive disadvantage.

One of the simplest solutions was offered this week by Never Met a Science: ban LLMs from using first-person pronouns. You can find more about the risks of anthropomorphizing AI in our article Algorithm or Personality? The Significance of Anthropomorphism in AI. If you're keen on delving deeper into the National AI Research and Development Strategic Plan, here's a 56-page PDF addressing the issue of AI hallucinations.

In related news, a crucial development has surfaced from Japan: the government has reaffirmed its stance not to enforce copyrights on data used in AI training. In tandem with this news, Stratechery highlights the UAE's Falcon 40B model, emphasizing that AI and its associated models are digital goods, and digital goods do not respect borders. Therefore, any attempts to suppress these models won't halt their progress but will only keep a country's own citizens from participating in it.

Pesky GPU shortage

Altman and the OpenAI team have just announced their Cybersecurity Grant Program, a $1M initiative to inspire the superheroes of the digital age (AKA, “defenders”) to create tools and projects for enhancing security. Altman also spilled the beans on OpenAI's future plans amidst some very real GPU supply challenges.

Meanwhile, their research team described a way to make AI models more logical and avoid hallucinations.

Nvidia kept making headlines this week, reaching a market value of $1 trillion and joining tech juggernauts like Apple, Microsoft, Alphabet, and Amazon. Their secret weapon? Groundbreaking AI innovations like Neuralangelo, an AI model that transforms 2D videos into detailed 3D reconstructions. Amidst all this, Nvidia, like others, is navigating the stormy seas of the global GPU shortage, showing that even in success, challenges remain.

Are we in a bubble?

In three months, JPMorgan has advertised 3,651 AI jobs and sought a trademark for IndexGPT, a securities-analysis AI product. Apple is also looking for engineers to work on generative AI. Interestingly, data from Media Cloud's news database suggests that ChatGPT has caused a media frenzy comparable to the one that surrounded Bitcoin back in 2021. Are we in a bubble, or is it already bursting? We will be diving deeper into this topic soon.

And on to the latest news: today, Monday, June 5, at Apple's Worldwide Developers Conference, Apple unveiled new Ski Goggles, I mean, Vision Pro: its newest augmented reality headset. The headset has a two-hour battery life, costs $3,499, and won't be available until next year. Considering Meta's recent announcement of their Quest VR headset priced at a mere $449, it's hard not to wonder if there's either a missing digit or an excessive one in Apple's pricing strategy. Neither Cook nor his deputies used the word “metaverse”; instead, they called it “spatial computing”. Sexy!

In other newsletters:

Thank you for reading, please feel free to share with your friends and colleagues 🤍
