
FOD#3: In Sam We Trust and other stories about regulations, open-source vs. closed-source, and a new battle for better hardware

Froth on the Daydream (FOD) – our weekly summary of over 150 AI newsletters. We connect the dots and cut through the froth, bringing you a comprehensive picture of the ever-evolving AI landscape. Stay tuned for clarity amidst the surrealism and experimentation.

Last week was fierce, so this issue is organized as follows:

  1. In Sam We Trust (Battle for Regulations)

  2. Open-source vs. Closed-source Battle

  3. Hardware Battle

And remember, some of the articles might be behind a paywall. If you are a paid subscriber, let us know, and we will send you a PDF.

1. In Sam We Trust (Battle for Regulations)

Sam Altman might consider moving to Washington, D.C., since he has become a darling of the Senate and the White House, serving as their go-to expert on AI.

Last week, he had dinner with dozens of House members and met privately with a few senators before the hearing. The next day, he, along with Christina Montgomery, IBM’s chief privacy and trust officer, and Gary Marcus, an NYU professor and frequent critic of AI technology, faced the senators on Capitol Hill and asked them to regulate the AI industry. The Senate was charmed, both by the private meetings and by his willingness to appear at the hearing. They thanked him. After the hearing, Senator Peter Welch (probably wiping away a tear) said, “I can’t remember when we’ve had companies come before us and plead with us to regulate them.”

Short memory, Senator: DealBook (NYT) points out that asking for regulation is an old trick in the tech industry: “Silicon Valley’s most powerful executives have long gone to Washington to demonstrate their commitment to rules in an attempt to shape them while simultaneously unleashing some of the world’s most powerful and transformative technologies without pause.” At least three CEOs have called for regulations before: Tim Cook, Mark Zuckerberg, and Sam Bankman-Fried. By the way, did you notice how well the celestial screenwriters work with charactonyms, names that suggest distinctive traits of the characters or their fate? In that sense, SBF was destined to be ‘fried’! The name Altman, on the other hand, gives us some hope: it originates from the High German word ‘alt,’ meaning ‘old.’ Perhaps that’s why he got along so well with the senators... But let’s also remember what Paul Graham, who has known Altman since he was 19, once said about him: “He has a natural ability to talk people into things.” That ability might lead the Senate straight down the path to regulatory capture, where ‘regulatory agencies may come to be dominated by the interests they regulate and not by the public interest.’

But the thing is, the Senate needs him. They literally said, “tell us how to get this right.” From what I know about senators and lawmakers, only a few understand any specific technology, and AI is developing far too fast for them. “Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past,” Senator Richard Blumenthal said. “Congress failed to meet the moment on social media.”

During the hearing, Altman proposed three key points for AI regulation: a federal agency to grant licenses for AI development, the creation of safety standards, and a requirement for independent audits. Christina Montgomery argued against imposing broad rules, calling instead for a ‘precision regulation’ approach that focuses on specific uses rather than the technology itself.

After this meeting on the Hill, the community – quite understandably – burst into discussion. Hardcore Software, in the comment section of his coverage of the visit, wrote, “The idea of regulating ‘AI’ is like regulating ‘databases’ – it is in the wrong place in the technology stack.” He poses a question: “What hypothetical concerns about AI usage are not already covered by existing regulations? AI does not exist nor is it used independently of any system, just as storage isn’t solely about privacy itself.”

For The Algorithm, the Senate hearing was both encouraging and frustrating: “Encouraging because the conversation seems to have moved past promoting wishy-washy self-regulation and onto rules that could actually hold companies accountable. Frustrating because the debate seems to have forgotten the past five-plus years of AI policy.”

In his Meet the Press interview, Eric Schmidt said: “There’s no way a non-industry person can understand what’s possible.”

Well, a non-industry government person might understand even less. The Senate’s response to this call for AI regulation remains uncertain, given its grim track record with tech regulation: the U.S. lags behind its global counterparts in introducing rules for privacy, speech, protections for children, and AI.

SAMmary: With his ‘natural ability to talk people into things,’ Sam Altman reminds me of a great illusionist. Do you remember his desire for AI to create enormous wealth and then magically distribute it to people? But all illusions are carefully orchestrated projects. The senators might want to watch ‘Now You See Me.’

On a global scale:

  1. Nature called for the creation of an Intergovernmental Panel on Information Technology, similar to the IPCC and IPBES, to address the societal impacts of emerging technologies.

  2. G7 leaders urged the creation and implementation of technical standards to maintain the ‘trustworthiness’ of AI, expressing concern that the governance of this technology hasn’t kept pace with its rapid growth.

In other words, they have no clue what to do. Europe, though, seems to have an idea: it just issued a 144-page amendment to its AI Act, aiming to tighten restrictions on American AI companies. The update? Anyone wanting to make AI models accessible in Europe will face expensive licensing requirements. It applies to everyone, from closed-source to open-source, with fines going up to $21.8 million or 4% of revenue. High-risk AI projects must disclose information, and the bill has the AI community concerned: reduced access to AI tech like GPT-4 could hinder innovation in Europe, and the accidental release of unlicensed models could expose developers to liability. The bill sets a regulatory standard that could influence others. Want to read the amendment yourself? Go here.

Now. Do we really want US policymakers to catch up?

Ahead of the hearing, Stability AI also sent a letter to the Senate advocating for open models in AI oversight: FINAL – paper to Senate Subcommittee (squarespace.com)

2. Open-source vs. Closed-source Battle

In the world of AI, a seismic shift is underway as open-source models gain momentum. The Information has uncovered a game-changing plan by OpenAI to release its first open-source large language model (LLM), sending waves through AI newsletters. This move is poised to put pressure on Google, OpenAI’s chief AI rival, to embrace the open-source movement. While the exact intentions behind OpenAI’s forthcoming software remain unclear, it is unlikely to compete directly with the company’s proprietary GPT models, as OpenAI’s $27 billion private valuation hinges on the exclusivity of its commercial AI offerings.

The Rundown runs with: "The spotlight is currently on Bard and ChatGPT, but the rise of open-source AI is noteworthy. It's not just a passing trend but a steadily growing force set to substantially impact the future of artificial intelligence."

The debate surrounding open-source LLMs intensifies as leading industry figures weigh in. Yann LeCun argues in favor of openness, emphasizing that closed systems pose greater risks than open ones. He points to the evolution of the consumer internet, which flourished thanks to open, communal standards that fostered knowledge-sharing on an unprecedented scale. In the article “In Battle Over AI, Meta Decides to Give Away Its Crown Jewels,” LeCun underscores the importance of open infrastructure in an increasingly LLM-dominated landscape, where demands for transparency will come from individuals and governments alike.

As a champion of openness, Stability AI makes a bold move and releases an open-source version of its commercial AI art platform. In its overview, The Sequence says, “Stability AI is synonymous with open-source generative AI. Its frantic pace of innovation is certainly raising the bar for open-source generative AI solutions. At least for now, the open-source generative AI movement has an undisputed champion.” However, questions arise about the impact of open-sourcing on Stability AI’s revenue streams. Ben’s Bites wonders how this move might affect one of the company’s main sources of income. While the implications remain uncertain, AIcyclopedia argues that open-sourcing can provide Stability AI with valuable insights, helping it navigate AI image generation and focus its resources more effectively.

3. Hardware Battle

Gradient Ascendant pulled a thought-provoking metaphor out of its sleeve, stating that large language models (LLMs) are the new CPUs. “This isn’t a perfect metaphor — one doesn’t fine-tune CPUs, or use more powerful ones to ‘distill’ less powerful but more specialized ones — but it’s a remarkably effective one nonetheless. As Rohit Krishnan put it, even more incisively, LLMs are fuzzy processors. It is famously (in the software world) said, ‘There is no problem in computer science which cannot be solved by another abstraction layer.’ Well, we have now created so many abstraction layers on top of our hardware that we have attained an abstraction which, ironically, looks a lot like the hardware.”
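To make the fuzzy-processor idea concrete, here is a minimal sketch of treating an LLM call as a single fuzzy “instruction” applied to a text operand. It uses the OpenAI Python client (v1 API); the fuzzy_op helper, the model choice, and the sample prompt are our own illustration, not anything from Gradient Ascendant’s piece.

```python
# A minimal sketch of the "fuzzy processor" metaphor: the prompt acts as
# the instruction, the text as the operand, and the completion as the
# result. The fuzzy_op helper is purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def fuzzy_op(instruction: str, operand: str) -> str:
    """Run one 'instruction' on a text 'operand' through the model."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": operand},
        ],
    )
    return response.choices[0].message.content


# Unlike a CPU opcode, the result is probabilistic rather than exact:
# the same inputs can yield different outputs on different runs.
print(fuzzy_op("Summarize in one sentence.", "LLMs are the new CPUs..."))
```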

Speaking of hardware: Meta has unveiled plans to design the Meta Training and Inference Accelerator (MTIA), a custom AI chip built to optimize its AI workloads. Alongside MTIA, Meta has announced the Meta Scalable Video Processor (MSVP) and the Research SuperCluster (RSC), promising impressive computing power.

Meta’s software releases include CodeCompose, a coding tool similar to GitHub’s Copilot. Notably, Meta has taken an open-source approach, releasing the weights of its large language model LLaMA to researchers.

As the AI hardware wars escalate, Meta’s endeavors, combined with Microsoft’s reported foray into AI chips, add fuel to the competition. If you want to read about the architecture of the new Meta chips, check this report from SemiAnalysis.

Notable Deep Dives
  1. If you use Datadog and were affected by its outage last March, you might want to read this super-detailed deep dive by The Pragmatic Engineer.

  2. Interested in GPT-2’s neurons? OpenAI now tries to explain them with… GPT-4. Whether the bigger brother is capable of that, and whether the approach is viable, read the deep dive by Mindful Modeler.

  3. A deep dive by Nathan Lambert into why OpenAI and Google do have moats in the LLM space (spoiler: due to their access to quality and diverse training prompts for fine-tuning!).

  4. A fascinating overview by Seattle Data Guy of what has happened with data engineering in the last decade.

  5. Big Technology offers a gripping and terrifying story of workers in Nairobi who trained OpenAI’s GPT models for less than $1 per hour and were traumatized by the explicit content they had to view and label for the models.

  6. Ahead of AI goes into several parameter-efficient alternatives to conventional LLM finetuning (for a taste of one such technique, see the sketch after this list).

  7. If you need “What big tech is up to in AI in 2023 so far” all in one place, check this week’s issue of Tanay’s newsletter.
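For a taste of what “parameter-efficient” means in practice, here is a minimal LoRA sketch using Hugging Face’s peft library, one technique in the family Ahead of AI covers. The base model and hyperparameters below are our own illustrative assumptions, not taken from that post.

```python
# A minimal LoRA sketch with Hugging Face's `peft` library: small low-rank
# adapter matrices are trained while the base model's weights stay frozen.
# Model choice and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

From here the wrapped model trains in a standard loop; only the adapter weights receive gradients, which is exactly what makes the approach parameter-efficient.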

Interviews:


 
