FOD#15: AI in governmental embrace + NVIDIA highlights

We are at a moment when generative AI is becoming a focal point for both governmental strategy and corporate innovation.

The two main stories in today's overview sit at the intersection of technology, ethics, and governance, and they illustrate distinct but complementary facets of AI's growth.

Some of the linked articles may be behind a paywall. If you are a paid subscriber, let us know and we will send you a PDF.

In the buzzing world of artificial intelligence (AI), the U.S. government has been anything but a silent observer. From forging innovative partnerships to actively participating in securing AI models, federal agencies are positioning themselves at the forefront of the AI revolution.

Just last week, as we highlighted in our “AI Bubble Bursting into AI Winter – yes or no?”, the United States began to display a substantial commitment to generative AI (GenAI). The Department of Defense (DoD) announced the establishment of a GenAI task force. The Senate Homeland Security Committee moved forward with a bill to ensure that every federal agency has a Chief AI Officer. And the U.S. Coast Guard and the Department of the Air Force welcomed their own Chief Data and AI Officers.

In a move reminiscent of its early support for AI, the Defense Advanced Research Projects Agency (DARPA), though not as financially extravagant as in past years, has partnered with major companies including Anthropic, Google DeepMind, Microsoft, and OpenAI. Together they are running the AI Cyber Challenge, a two-year competition with nearly $20 million in prizes aimed at spurring innovation at the nexus of AI and cybersecurity.

Additional info: an interesting dive into cybersecurity featuring FraudGPT, a product sold on the dark web and on Telegram that works much like ChatGPT but generates content designed to facilitate cyberattacks.

At the DEF CON convention in Las Vegas, meanwhile, the White House orchestrated a competition to unearth flaws in eight major AI language models. Collaborating with the usual suspects (Anthropic, Cohere, Google, Hugging Face, Meta, NVIDIA, OpenAI, and Stability AI), the U.S. government has solidified its commitment to securing AI models and stands as a significant player rather than a mere spectator. The aftermath of this competition will be a massive database of vulnerabilities, which it will fall to the companies and independent researchers to address.

Additional info: while policymakers delve into models, it appears models are also delving into politics. A recent paper, 'Political Biases in Language Models,' uncovers traces of political bias in some of the most commonly used language models.

But this technological adventure isn't without its pitfalls. The Center for Democracy and Technology has expressed concerns over the government's use of AI in federal grants: grant programs at some agencies, such as the Department of Housing and Urban Development (HUD), carry potential for misuse in areas like surveillance. Recommendations for responsible grant governance have been put forth, ranging from aligning grant policies with tech and civil rights principles to increasing transparency.

Education is another pressing concern, and policymakers seem to be starving for it. Stanford's Institute for Human-Centered AI (HAI), co-directed by Fei-Fei Li, who used her recent meeting with President Biden to advocate for benevolent AI applications, hosted a boot camp to educate lawmakers on the benefits and risks of AI. The boot camp tackled subjects from AI's impact on democracy to tech addiction, and attendees engaged in simulations and discussions, reflecting a concerted effort to bring lawmakers up to speed.

Feels like the AI bubble, rather than bursting into an AI winter, has bloomed into a season of governmental embrace.

Last week was also really rich in conferences and conventions. While we are working on bringing you insights from the Ai4 conference, where we interviewed OpenAI’s General Counsel and moderated a panel on building an All-Star AI Team, let’s take a look at SIGGRAPH 2023, where NVIDIA showcased a dazzling array of advancements, carving out a path that promises to redefine the landscape of generative AI, industrial digitization, and graphics processing. Here's a quick roundup of the key announcements:

- Generative AI Updates: NVIDIA CEO Jensen Huang announced major generative AI updates, including the next-generation GH200 Grace Hopper Superchip, designed for complex generative AI workloads, and AI Workbench, a toolkit meant to simplify enterprise AI adoption.

- NVIDIA AI Enterprise 4.0: This newly unveiled software suite is tailored for large-scale AI deployment, extending NVIDIA's capabilities in enterprise environments.

- Omniverse Advances: The Omniverse platform saw big advances with new integrations, capabilities like ChatUSD, and contributions to the OpenUSD universal 3D format.

- New RTX Professional GPUs: Huang unveiled powerful new RTX professional GPUs and systems for developers, underlining NVIDIA's commitment to supporting generative AI in the enterprise.

- Hugging Face Partnership: NVIDIA's alliance with Hugging Face will potentially extend supercomputing access to millions of data scientists.

- OpenUSD Frameworks: New frameworks such as ChatUSD and RunUSD were announced, streamlining the development of OpenUSD applications.

- New Workstations: A set of RTX workstations, specially designed with embedded support for generative AI applications, was introduced.

Overall, NVIDIA's latest strides mark a significant leap, amalgamating innovation in AI, graphics, simulation, and infrastructure. Whether it's the grand unveiling of the GH200 or the pioneering features in the Omniverse platform, NVIDIA is not just following the generative AI trend – it's leading it, shaping the future one innovation at a time. More details can be found in the SIGGRAPH Special Address by NVIDIA's CEO on their blog. It's an era of generative AI, and NVIDIA is at the helm.

Other news, categorized for your convenience:

Creative Generation & Modeling

  • ConceptLab by Tel Aviv University: Proposes a new method for text-to-image generation using Diffusion Prior Constraints, enhancing creativity and originality in visual representations →read more

  • StableCode by Stability AI: A large language model (LLM) designed for code generation, supporting multiple programming languages (a minimal usage sketch follows this list) →read more

  • Photorealistic Unreal Graphics (PUG) by Meta: Synthetically generated datasets with true-to-life fidelity, using Unreal Engine for model evaluation and benchmarks →read more

  • Self-Alignment with Instruction Backtranslation by Meta AI: An innovative strategy to improve instruction-following language models →read more

  • Neural Network Advancement by Google DeepMind, IRIT, and University of Toulouse: Introduces six transformations for incremental expansion of transformer-based models without restarting training →read more

  • WizardMath: A new family of models that surpasses existing open-source LLMs on the GSM8k and MATH benchmarks →read more
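
Since StableCode ships as open weights, the simplest way to experiment with it is through the Hugging Face transformers library. The sketch below is illustrative only: the checkpoint name and the instruction-style prompt template are assumptions for the example, so check Stability AI's model card for the exact identifiers and prompting conventions.

```python
# Minimal sketch: prompting an open code-generation LLM (e.g., StableCode)
# with Hugging Face transformers. The checkpoint name and prompt template
# below are assumptions for illustration; verify them against the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "stabilityai/stablecode-instruct-alpha-3b"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # halves memory on GPU; use float32 on CPU
    device_map="auto",          # automatic placement (requires `accelerate`)
)

# Assumed instruction-style prompt; adjust to whatever template the model card specifies.
prompt = "###Instruction\nWrite a Python function that checks whether a number is prime.\n###Response\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=128,  # cap the length of the generated code
        do_sample=True,
        temperature=0.2,     # low temperature keeps code generation conservative
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```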

AI Tools & Platforms

  • Anthropic's Claude Instant 1.2: A new release with enhancements over previous versions, showing improvements on coding, math, and safety benchmarks (a minimal API sketch follows this list) →read more

  • AdaTape by Google Research: An adaptive computation method that lets Transformer-based models adjust their computational budget to the complexity of each input →read more
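
If you want to take Claude Instant 1.2 for a spin, here is a minimal sketch of calling it through Anthropic's official Python SDK. The model identifier string and the exact client interface vary by SDK version, so treat both as assumptions and consult the API documentation before relying on them.

```python
# Minimal sketch: calling Claude Instant through Anthropic's Python SDK.
# Requires `pip install anthropic` and an ANTHROPIC_API_KEY environment variable.
# The model identifier below is an assumption; check the docs for current names.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-instant-1.2",  # assumed identifier for Claude Instant 1.2
    max_tokens=300,              # upper bound on the length of the reply
    messages=[
        {
            "role": "user",
            "content": "In two sentences, what do the GSM8k and MATH benchmarks measure?",
        }
    ],
)

print(message.content[0].text)
```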

Thank you for reading! Please feel free to share this with your friends and colleagues. Eventually, it will lead to very special gifts! 🤍

Another week with fascinating innovations! We call this overview “Froth on the Daydream” – or simply, FOD. It’s a reference to the surrealistic and experimental novel by Boris Vian – after all, AI is experimental and feels quite surrealistic, and a lot of writing on this topic is just a froth on the daydream.

How was today's FOD?

Please give us some constructive feedback
