
FOD#11: So many new releases you want to know about and the clouds over a few AI companies

Keep the perspective: every week we connect the dots to provide a comprehensive understanding of the AI world

Today, we will be connecting dots and making sense of the AI world through the lens of various companies.

Some of the articles might be behind a paywall. If you are a paid subscriber, let us know and we will send you a PDF.

🤗 Hugging Face

Another great week for open-source ML companies in terms of funding: Hugging Face, a startup known and highly regarded for its open-source projects, is currently raising a new funding round ($200 million!). Sources familiar with the matter have revealed that the round is expected to value the company at $4 billion. Its previous round valued it at $2 billion, even though it generated less than $10 million in revenue in 2021. This year, however, Hugging Face's revenue has grown from $30 million to $50 million. Very impressive!

🌎 Google

  • Google recently unveiled several new features for its AI-powered chatbot, Bard (check their blog). One of the most notable additions is the ability for Bard to process and understand images, similar to its counterpart, Bing. Additionally, Bard now offers support for communication in over 40 languages, expanding its accessibility and usability.

  • Another development from Google is the release of NotebookLM, an AI note-taking tool with embedded language intelligence.

  • The company has also been in the spotlight for a lawsuit accusing it of unauthorized web data scraping used to train its AI products, potentially violating copyright and privacy laws.

  • Additionally, Bloomberg published an article with the eye-catching title, "Meet the $4 Billion AI Superstars That Google Lost." This article tells the story of the eight authors of the famous paper "Attention Is All You Need," who, for various reasons, left Google and achieved considerable success.

On a positive note, Google DeepMind CEO Demis Hassabis expressed excitement about the future of AI and the company's focus on building next-generation AI-powered products that incorporate planning, reasoning, and memory: “In a year or two’s time, we are going to be talking about entirely new types of products and experiences and services with never-seen-before capabilities. And I’m very excited about building those things, actually. And that’s one of the reasons I’m very excited about leading Google DeepMind now in this new era and focusing on building these AI-powered next-generation products.”

By the way, Reuters confirms that the race toward 'autonomous' AI agents has just begun, and it already grips Silicon Valley.

🥸 Poe

Speaking of chatbots: Quora's Poe, another great one we recommend everyone try, is based on LLMs from both OpenAI and Anthropic and has received significant updates. The latest improvements include new models with longer context windows, such as Claude 2, ChatGPT-16k, and GPT-4-32k. Users can now add more context to their conversations by uploading files or sharing external URLs. Additionally, a "Continue chat" feature allows anyone to pick up a publicly shared Poe chat and continue it privately, facilitating easy collaboration and information sharing. Also, Poe does not seem willing to kill humanity and conquer the world.

Since we’ve mentioned Anthropic and OpenAI…

💡Anthropic

  • Anthropic's freshly released Claude 2 is making notable advancements and getting closer to competing with GPT-4, currently regarded as the leading large language model in the field. It demonstrates significant improvements across various benchmarks, key aspects of which have been highlighted by TheSequence.

What's even more interesting is reading the New York Times reportage, which shows the ambiguity in the approach of Anthropic's founders. When comparing their interview with the Future of Life Institute – which we covered in our Anthropic Profile – with what they currently say to the NYT, one can find a few discrepancies. Although the reportage ends on a positive note, asserting that every company should think more about safety (a point few would argue with), the inconsistency displayed by the founders leaves a somewhat bitter taste.

⚖️ OpenAI

  • OpenAI is under scrutiny: the Federal Trade Commission (FTC) has opened an investigation into the company. The FTC is examining whether OpenAI has engaged in unfair or deceptive privacy or data-security practices, as well as practices that pose a risk of harm to consumers. The investigation also focuses on OpenAI's data collection and the publication of potentially false information about individuals.

  • Amid these scandals, OpenAI is striving to do its best; it has recently signed two deals, one with Shutterstock and the other with The Associated Press. In essence, both agreements allow OpenAI to use the companies’ data to train its models.

  • There are also complaints from users that ChatGPT has become dumber. Some believe it's due to the imposed guardrails, but a more plausible explanation is the Mixture-of-Experts (MoE) approach. This approach essentially creates several smaller GPT-4 models that act similarly to the large model but are less expensive to run. Read more about it in The Insider.
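
To make the MoE idea concrete, here is a minimal sketch of top-k expert routing, the core trick that lets a model activate only a few smaller sub-networks per input instead of the whole model. All names here are illustrative; this is not GPT-4's actual implementation, which OpenAI has not disclosed.

```python
import math

def softmax(xs):
    """Turn raw router scores into a probability distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class MoELayer:
    """Routes each input to its top-k experts instead of running all of them."""

    def __init__(self, experts, k=2):
        self.experts = experts  # list of callables standing in for sub-networks
        self.k = k

    def __call__(self, x, gate_scores):
        # gate_scores: one score per expert; in a real model these come
        # from a small learned "router" network, here they are given directly.
        weights = softmax(gate_scores)
        # Keep only the k highest-weighted experts -> far less compute per token.
        top = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)[:self.k]
        norm = sum(weights[i] for i in top)
        # Output is the renormalized weighted sum of just the chosen experts.
        return sum(weights[i] / norm * self.experts[i](x) for i in top)

# Toy usage: four "experts", each a cheap function standing in for a sub-model.
experts = [lambda x, s=s: s * x for s in (1.0, 2.0, 3.0, 4.0)]
layer = MoELayer(experts, k=2)
out = layer(10.0, gate_scores=[0.1, 0.2, 5.0, 4.0])  # routes to experts 2 and 3
```

Because only `k` of the experts run per input, total capacity can grow with the number of experts while per-token compute stays roughly constant; the quality trade-offs users report would then depend on how well the router picks experts.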

On the positive side, OpenAI is going strong with its investments and is seeking a seasoned venture capitalist or two to raise and manage the second fund for the OpenAI Startup Fund. The Information speculates that Sam Altman's old friend Lachy Groom, a former Stripe product manager, might be the one.

🔍 What about Microsoft?

  • It had some more luck last week with the FTC: the commission and its head, Lina Khan, lost a bid to block Microsoft’s $70 billion acquisition of Activision Blizzard.

  • Microsoft has introduced an AI hub in the Microsoft Store, providing Windows 11 Insiders with access to a curated selection of AI apps from both third-party developers and Microsoft itself. Additionally, Microsoft has forged a partnership with KPMG. The fourth-largest accounting firm in the U.S. will spend $2 billion on incorporating AI into its core audit, tax, and advisory services for clients as part of this five-year partnership.

  • Bill Gates published a thoughtful post about the risks of AI. He states that the risks associated with artificial intelligence (AI) are not unprecedented. Similar concerns arose with the introduction of previous transformative technologies, such as cars and personal computers. However, he feels optimistic that these can be managed.

One of the risks Gates pays attention to is disinformation. A recent case, PoisonGPT, made a clear point about the importance of this issue: Mithril Security modified an LLM to spread targeted disinformation, disguised it as an open-source model from EleutherAI, and uploaded it to Hugging Face.

🤓 Understanding the universe with Elon Musk

Elon Musk has become too predictable. He just announced the launch of xAI, his own artificial intelligence company. As anticipated, xAI plans to collaborate with Tesla for data and hardware partnerships and intends to use Twitter content to train a new LLM. Musk has also mentioned the development of Dojo, a supercomputer specifically designed for AI, machine learning, and computer vision training.

The goal is as big as every Musk endeavor: to understand the universe.

The universe smirked.

In a recent post, Not Boring ridicules Elon's recent activities with Twitter and xAI in a quite hilarious manner. We can relate; it's just really hard not to.

⚙️Other Awesome Releases

  • Meta has just released CM3leon, a new generative AI model capable of performing text-to-image and image-to-text generation tasks. CM3leon departs from diffusion models and relies on transformers, offering improved speed, reduced compute requirements, and parallelization.

  • Stability AI has been expanding its Clipdrop suite of features, with the latest addition being Stable Doodle. This new tool allows users to sketch rough drawings and then use Stable Diffusion, together with an accompanying text prompt, to generate the final artwork or image.

  • South Korean tech giant Kakao wants a piece of the action in the generative AI race, and its AI division Kakao Brain has made a bid for it, with big updates to its AI image generator Karlo and its KoGPT large language model, as well as a new fund to back AI image-generating startups.

  • Chinese startup Baichuan Intelligent Technology introduces Baichuan-13B, a 13-billion-parameter model trained on 1.4 trillion tokens of Chinese and English data. It surpasses Meta's LLaMA, which was trained on 1 trillion tokens, and is optimized for commercial use. Open-source!

📑 Important Papers

1️⃣ From our community: Transformers in Reinforcement Learning (RL): A Survey. The authors examine the application of transformers to various aspects of RL, including representation learning, transition and reward function modeling, and policy optimization. They also discuss recent research that aims to enhance the interpretability and efficiency of transformers in RL, using visualization techniques and efficient training strategies.

2️⃣ Copy Is All You Need: This paper introduces a new approach to text generation. Instead of selecting words from a fixed vocabulary, the model copies and pastes text segments from an existing collection. It achieves better generation quality and inference efficiency compared to traditional models.

3️⃣ Google DeepMind introduces NaViT, a Vision Transformer for any Aspect Ratio and Resolution: The standard practice of resizing images before processing them in computer vision models is challenged by NaViT. It uses sequence packing during training to handle inputs of different resolutions and aspect ratios, leading to improved results in various tasks like image classification, object detection, and semantic segmentation.

4️⃣ Alibaba introduces PolyLM: An open-source polyglot LLM. Existing large language models primarily focus on English, limiting their usability for other languages. PolyLM is a multilingual model trained on a massive amount of data; it surpasses other models on multilingual tasks while maintaining comparable performance in English. It comes with an instruction dataset and a multilingual benchmark.

5️⃣Learning to Retrieve In-Context Examples for LLMs: This paper addresses the effectiveness of in-context learning for large language models. It proposes a framework to train dense retrievers that can identify high-quality examples for in-context learning. The framework significantly enhances performance and demonstrates generalization ability to unseen tasks during training.

6️⃣ Unleashing Cognitive Synergy in LLMs: A Task-Solving Agent through Multi-Persona: This paper explores the concept of cognitive synergy and proposes Solo Performance Prompting (SPP) to transform LLMs into collaborative agents. SPP engages LLMs in multi-turn self-collaboration with multiple personas, combining their strengths and knowledge. By dynamically simulating different personas based on the task, SPP unleashes the potential of cognitive synergy in LLMs. The authors evaluate SPP on challenging tasks and demonstrate its ability to enhance problem-solving, knowledge acquisition, and reasoning capabilities.

🛠 Practical

  • The Gradient Flow argues that the current state of Machine Learning Operations (MLOps) infrastructure reveals a stark reality: it simply wasn't designed to accommodate the sheer scale and complexity of LLMs. Please vote 👇

What do you think?

We are working on a series of articles about that complex topic.


📩 If you want to help, please send us an email to [email protected]

  • Pretty cool infographics from SeattleDataGuy: “Since you’ll likely need to pick a data storage paradigm for your company, I wanted to go over a few common terms you’ll hear. I also wanted to provide definitions and illustrate how they’ve been useful to me in the past.”

📚 We are reading

  • About Transparency: AI systems like GPT-4 will play a bigger role in our lives, and it's crucial to assess their intelligence and limitations accurately. We need more transparency in how these models are trained, better experimental methods, and benchmarks. Open-source AI models and collaborations between AI researchers and cognitive scientists can help achieve this. Read more in the article by Melanie Mitchell.

  • About Institutions for Advanced AI: Yoshua Bengio, Allison Carnegie, Rumman Chowdhury, Allan Dafoe, Margaret Levi, and others, have published a paper discussing the importance of international institutions in managing the benefits and risks of advanced AI systems. The paper proposes four institutional models to address these challenges: a Commission on Frontier AI to facilitate expert consensus; an Advanced AI Governance Organization to set international standards and monitor compliance; a Frontier AI Collaborative to promote access to cutting-edge AI; and an AI Safety Project to enhance safety research.

📺 We are watching: Will OpenAI Kill All Startups? - YouTube by Y Combinator

📍 To visit: Anthropic is hosting its first ever “Build With Claude” hackathon in SF on 7/29 and 7/30.

Thank you for reading, please feel free to share with your friends and colleagues 🤍

Another week with fascinating innovations! We call this overview “Froth on the Daydream” – or simply, FOD. It’s a reference to the surrealistic and experimental novel by Boris Vian – after all, AI is experimental and feels quite surrealistic, and a lot of writing on this topic is just froth on the daydream.

How was today's FOD?

Please give us some constructive feedback

