In conversation with Clem Delangue, co-founder and CEO of Hugging Face, about the state of Open Source AI (and the myth of Sisyphus)
One of Clem’s clearest points is that comparing open weights to closed APIs is like comparing an engine to a car. And that’s why the question “are open-source models catching up?” doesn’t even matter in the same way. Behind an API, there are tools, harnesses, routing, and sometimes several models. So when people say open models are “behind,” the real question is: behind what system, for what task, and at what cost?
Opening it up means making it possible for many more people to build. Clem is specific about where this goes. He expects AI builders to grow from a few million today to tens of millions, maybe even 100 million: people who train, fine-tune, optimize, and run models themselves. Hugging Face is already preparing for that world, where agents pull models, use datasets, read docs, and may become a larger user base than humans by the end of 2026.
We also talk about Reachy Mini, Hugging Face’s open-source desktop robot, which has sold close to 10,000 units. Clem’s point is simple: people change their view of AI when they build with it, assemble it, break it, fix it, and make something small work. When it’s something physical – like a cute robot – it works even better! I loved that idea.
And we get into the arguments open source conversations often avoid: why the cybersecurity case is more complicated than “closed is safer,” why safety can be used as cover for business interests, and why local models sometimes look weaker because the surrounding agent harnesses were built for proprietary APIs.
This is a conversation about choice, control, and the next class of AI builders. Watch it! (I recommend 1.2x or 1.5x speed.)
Subscribe to our YouTube channel, or listen to the interview on Spotify / Apple
We prepared a transcript for reference, but the full experience is in the video. And as always: like and comment. It helps us grow on YouTube and bring you more insights.
Clem Delangue on why AI builders may multiply
Ksenia:
Thank you, Clem, for agreeing to this interview. I’m a big fan of Hugging Face and what you’ve been doing for the open-source community. It’s been amazing to know you for many years and finally meet you in person.
Clem:
Yes – thanks for having me.
Ksenia:
Let’s start with your recent post about ML Intern and how you’ve been playing with it on Hugging Face. What’s the most surprising – and maybe funniest – thing you’ve learned about how agents work on real machine learning tasks right now?
Clem:
What’s interesting is that default coding agents are still pretty bad at building AI. You saw that when Andrej Karpathy released Auto Research – or maybe it was something before that – and said he barely used agents to build it, because either it was too out-of-distribution or it just didn’t work yet for building AI.
But with a couple of tweaks to the harnesses, the model connections, and the tools – like the Hugging Face Hub – you can actually make a lot of progress. We were surprised that ML Intern is now managing to fine-tune small models, create datasets, and convert models into different formats. Today the team got it to pass the interview test they had for researchers. In half an hour, it aces the test.
We’ve been really excited about that. If agents can lower the barrier to entry for building AI, it’s going to be very valuable for the world. It will enable more people to build open-source models, create open datasets, and maybe play with local models – which historically has been a bit hard to do, but is getting easier now.
Ksenia:
How do you see this developing over the coming months? Where is the acceleration?
Clem:
I think the number of people who can become AI builders is going to explode. We’ll go from maybe a few hundred thousand – or low millions – of people who have the skills to do this kind of work, to tens of millions, maybe fifty million, maybe a hundred million at some point.
Maybe eventually every software engineer will be able to optimize models, train models, fine-tune models themselves. That would be amazing, because it would mean they’re not only relying on closed APIs and third-party vendors that can dictate terms, raise prices whenever they want, deprecate models whenever they want, or change them behind the scenes so you’re not even sure why the quality has gone down on your workloads.
It gives some control back to builders, which is nice.
Ksenia:
A couple of months ago I did a little interview with Steve Yegge, and he said non-technical people will definitely come into this coding world. How do you feel about that? Are we ready?
Clem:
The beauty of AI is that a lot of it is driven by datasets and text in general. Compared to software engineering, where you had to learn a programming language, AI has the potential to have a much wider base of users – people who can contribute to it.
So I hope it happens. I think it would be good, too, because the more diversity of builders you have, the wider the perspectives. And what’s good for the field is that it pushes it toward actual challenges and things that are important to people.
If more people could build AI, maybe we’d have a little less video-AI slop and a little more biology, chemistry, medicine, climate, things that a couple of Silicon Valley guys may not care that much about, but that other people do care about. It brings more perspective, and hopefully more real problems get solved.
Ksenia:
Do you think more creation with AI eventually gets to some quality threshold – where people stop creating slop and start actually solving problems?
Clem:
I think there’s a lot more than slop to build. The more you empower people to build, the more they’ll build things other than slop.
And empowering more people to become AI builders will also change the public perception of AI. Right now, the perception is terrible. If you look at the studies, it’s crazy – people are either very scared, or they hate AI, or they don’t want to hear about it.
Whereas if you help them understand how to build these systems, and let them build them, they start seeing AI as something empowering – something they can use to solve problems that matter to them. I think that’s one of the best ways to change public perception. If AI stays only in the hands of a few companies and a few builders, those companies can do marketing, sure, but I don’t think they’ll convince people that AI is good.
Opening AI to More People
Ksenia:
To your point about opening the world and changing the perception of AI – who is the main player who can do that? Right now we have five main companies, governments, and attempts like Hugging Face to open the space. Who can actually change opinion?
Clem:
A bit of everyone. Every part of the ecosystem has a role to play. Policymakers have a big role. Companies have a big role. Research and academia have a big role.
A good practical example is the Reachy Mini. If you ask people in the abstract, “Are you excited about AI robots?” I don’t think most people would say yes. Some are – in our bubble – but most are not.
What happens is: when you actually ship one of these – and we’ve sold almost 10,000 of them – people who may not be especially excited about AI robots buy one because it’s cute, or because they think they can play with it with their kids. Then they assemble it. It takes three hours. They build it themselves. They start playing with it. They build apps.
And automatically, they start to love AI robots.
That’s an example of the mechanics of enabling people to take part in the process, build AI themselves, and see their perception change.
Obviously there’s a lot of fear-based marketing in AI right now.
Ksenia:
What’s the purpose of that?
Clem:
Well, it sells. A few days after Project Glasswing was announced, we started receiving emails from some companies connected to the program trying to sell us commercial agreements. So obviously, it serves that kind of purpose.
I also think some people doing that marketing genuinely believe that restricting access is important. But I think it’s a mistake, and it’s misleading.
What you want is to give people access so they can build and realize it’s a tool. It’s software 2.0 or software 3.0. It’s not Robocop. It’s not some self-conscious entity governing itself. It’s a technology we built, and that we are going to keep pushing in the right direction. So in my opinion, there’s no need to lean so hard on fear-based marketing.
How Open Source AI Strengthens Cybersecurity (Not the Opposite)
Ksenia:
Let’s try to counter some of those views that frame open source specifically as a tool to create bioweapons, deepfakes, and so on. What would you say to those people? Is that true or not – and why?
Clem:
Cybersecurity is a good example, because it gets weaponized against other domains all the time. People say, “This is why you shouldn’t release these models.”
But if you look at how cybersecurity actually works, it’s always about empowering more defenders than attackers – making it more expensive to attack than to defend, and building resilience into systems so that when an attack succeeds, the system can be patched quickly, resolved quickly, and the damage doesn’t become systemic.
If you take those things one by one, open source is much more a solution than a problem. For example, we all know that open-source repositories are patched much faster than proprietary systems. Once you have an attack on a proprietary system behind closed doors, attackers can take advantage of access to user data or important information for weeks before it gets patched. And by the time it’s patched, it’s too late.
So by keeping things closed-source, in the hands of a small number of people, you actually increase the risk – because you increase the asymmetry of power and capability. A few people get powerful capabilities, while defenders don’t. That’s when you create more risk.
When you open things up, you keep the balance more even, and defenders usually have a way to counter attackers.
We’ve seen this thinking before in AI. Famously, GPT-2 was once described as too dangerous to release. And now we laugh about it. It didn’t create the kind of problem people feared.
The bigger risk is when a model is more powerful and gets leaked, or when some entity with bad intentions has access to those capabilities while the rest of the world doesn’t. That’s when you create more risk.
APIs and small restricted releases often give a fake impression of control and safety. But if you look systematically at how cybersecurity works, open source is actually a solution to many of these problems.
The Business Case for Releasing Open Models
Ksenia:
You mentioned APIs. For many companies – like ElevenLabs, for example – they don’t open source because that’s their business model. They’ve been working on a closed model for a long time, and with the big players they simply can’t. So when you advocate for open source so passionately, how does that fit with the business side?
Clem:
First, it’s totally fine for companies not to do open source. There’s nothing wrong with that. What frustrates me is when companies don’t say that honestly, and instead say, “Oh, I’m not open-sourcing because of safety.”
I don’t think that’s usually the real reason. The real reason is what you said: it’s not in their business interest. That’s totally fine.
What we often explain to those companies is that if they open-source small parts of what they do – publish a research paper, release a partially public dataset, release a small model while keeping the big model proprietary – then first, it’s good for the world. It contributes something meaningful to the field.
Second, we’ve seen many examples where it’s also good for the company. It helps them hire better people. It makes them more credible. It increases their visibility.
We’ve had lots of examples of companies like Mistral or Cohere benefiting tremendously from releasing things in open source while still managing to build big businesses. So that’s usually what we explain. But again, it’s perfectly fine for a company not to open source if it doesn’t align with their strategy.
DC Lobbying Against Open Source AI: What's Actually at Stake
Ksenia:
You’ve recently been flagging lobbying in DC against open source, and again, you’re very passionate about it. What’s your stance here? What do you want to tell them? What do you want us to do? Why is it important?
Clem:
First, it’s not the first time. We had similar things happen two or three years ago for different reasons. But it looks like it’s coming back.
I think it would be a mistake for the US – and frankly for any country – to try to slow down open source, because open source is the foundation of all technology. A country that leads open source is a country that can lead AI in general.
All the progress in AI that you’re seeing today, and a lot of American leadership, in my opinion, comes from the open-source leadership of the US. Google famously open-sourced transformers and attention – and that got used by ChatGPT. That’s just one example of the emulation and collaboration that happened in open source in the US and led to today’s leadership.
So if tomorrow the US slows down open source, then automatically, a few months or a few years later, it’s going to lose its AI leadership in general. That’s not something we want.
Second, if you slow down open source, you increase concentration of power, capability, and revenue. You run the risk of AI being dominated by one, two, or three companies. Because if you remove open source, you prevent anyone else from competing with the big guys.
Without open-source models, open datasets, and open libraries, it becomes impossible for anyone to do AI except OpenAI, Anthropic, and the big tech companies.
Imagine a world where only a few companies could do AI – just like if only a few companies could do software. That would be quite scary. You want open source to create competition, more jobs, more emulation, more growth.
If only a few companies do AI, they’ll capture all the value, and they won’t create enough jobs to compensate for the jobs that are destroyed. You want an ecosystem of small companies, medium-sized companies, large companies – everyone able to build AI and create value. Otherwise, you end up with a world where only a few companies capture everything, and that’s not enough growth.
Coming Back From Paternity Leave
Ksenia:
You recently took a three-month paternity leave – and three months is a long time in AI. How do you see the world now? What changed?
Clem:
I actually came back a bit earlier than planned because I was too excited to come back, to be honest. And I was lucky that my wife did such an amazing job taking care of the babies that I felt confident going back to work.
Obviously, even a week feels like a long time in AI these days.
The biggest change has been the total domination – and mind-blowing adoption – of coding agents. That completely changed how most technology is built.
We’ve seen that at Hugging Face too. A bigger and bigger part of our usage is coming from agents. I wouldn’t be surprised if by the end of this year we had more agent users than human users of Hugging Face. We’re seeing a lot of people using agents that pull models from Hugging Face, pull datasets, contribute to things. That’s super exciting. It’s a crazy multiplier effect for builders.
Ksenia:
That’s very interesting, because we were just talking about new builders arriving – maybe not AI engineers or ML engineers, but people who tinker with this stuff. At the same time, agents are coming to the same platform. So if we talk about Hugging Face, how do you need to change the platform structurally or technically? Do you have to adapt because agents are your new customers?
Clem:
Yes, absolutely. You have to adapt because agents are your new users.
You put much more focus on CLIs, APIs, and everything headless that agents can adopt easily. You make sure your documentation, your agents.md files, everything works seamlessly for agents.
Something people underestimate is that you also need to make your platform token-efficient for agents. You don’t want agents burning through too many tokens just to use your platform, especially with token prices where they are. You want your APIs and abstractions to be very efficient.
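Editor’s note: a toy sketch of what “token-efficient for agents” might mean in practice. None of this is a real Hugging Face endpoint – the field names and the compaction rule are hypothetical, purely to illustrate the idea that agents should receive compact, machine-oriented responses rather than human-oriented ones.

```python
import json

# Hypothetical model metadata: the verbose, human-oriented shape an API
# might return by default (field names are made up for illustration).
VERBOSE = {
    "modelId": "org/some-model",
    "description": "A long human-oriented model card summary ...",
    "tags": ["text-generation", "gguf", "en"],
    "downloads": 123456,
    "readme_html": "<html>... kilobytes of rendered markdown ...</html>",
}

def compact_for_agents(record: dict) -> dict:
    """Keep only the fields an agent needs to decide whether to pull a model."""
    return {k: record[k] for k in ("modelId", "tags", "downloads")}

verbose_len = len(json.dumps(VERBOSE))
compact_len = len(json.dumps(compact_for_agents(VERBOSE)))
# The compact form costs far fewer tokens per request, which compounds
# quickly when an agent makes hundreds of calls per session.
```

The design choice is the same one Clem describes: every byte an agent doesn’t need is a token someone pays for.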
A good example is also the Reachy Mini. When we started working on it last year, it was really hard for people to build apps for it. Now we’re shipping a new batch this week or next week – you’re going to receive one – and now people can just talk to their agents to build any robotics app they want.
So Reachy Mini is going to be one of the first robots that is fully agent-native. People receive it, and right away they can start building apps they’re excited about with their agents in a few hours. I’m super excited to see what people build with the new batch.
Hugging Face, Robotics, and HF’s Business Model
Ksenia:
You’re one of the first platforms making that possible. How does the Hugging Face robotics ecosystem fit into your business strategy?
Clem:
Robotics is very important for us. It’s an extension of our platform. The community is very vibrant with LeRobot. It’s become one of the most used libraries for open robotics, and we see it as another way to empower AI builders.
The same way an AI builder should be able to train and optimize their own models, they should be able to build and optimize their own robots. And frankly, it’s fun. You’ll see when you get yours – assembling it yourself, fixing it when there’s a problem, building apps – it’s really fun.
We’re probably not always the best at thinking too much about monetization. Depending on your perspective, that’s either a flaw or a quality. But we’re starting to see the right system emerge: a freemium model where a lot of what we do is free and open, and then a small percentage – especially with enterprise users, storage, token usage, or buying a robot – is paid. Hopefully that paid part funds the free part, and we create the right flywheel to keep growing and become profitable.
Ksenia:
Maybe when everybody switches to local models on their laptops, there won’t be much business left there for you, and you’ll just become a hardware company selling robots.
Clem:
That would be amazing.
I’m so excited about local AI because when you run a model locally, it’s almost free. You already have the hardware most of the time. It’s private. It’s a huge cybersecurity advantage because you don’t send your data anywhere – it stays on your device. There’s a reason Apple is so focused on local.
It’s fast, controllable, hackable in a good way – you can update weights, transfer weights, experiment. We’re seeing downloads of local models from Hugging Face really explode. Part of our team works on llama.cpp, which is one of the most used runtimes for local AI. It makes it easy to download a model and run it on almost any hardware.
We’re still in the early stage of AI, and for some reason 99% of workloads today are API calls to massive proprietary models – probably because it’s easier and people feel more comfortable starting there. But ultimately, I think a very large part of workloads will move to open-source models, smaller specialized models, and local models. Maybe you’ll use a big proprietary API for 5% of your tokens, and the other 95% will be handled by specialized or local models. That would be great.
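Editor’s note: the 95/5 split Clem sketches is, in effect, a routing policy. Here is a deliberately toy version of such a router – the heuristic, task names, and model labels are all invented for illustration, not a real system.

```python
# Toy routing policy: send most work to local or specialized open models,
# and reserve the big proprietary API for the hardest requests.
# All names and the heuristic itself are hypothetical.

ROUTINE_TASKS = {"summarize", "classify", "extract"}

def route(task: str, needs_frontier_reasoning: bool) -> str:
    """Pick a backend for one request."""
    if needs_frontier_reasoning:
        return "proprietary-api"        # the ~5% of tokens
    if task in ROUTINE_TASKS:
        return "local-small-model"      # private and near-free
    return "open-specialized-model"     # fine-tuned for the domain
```

In a real system the heuristic would be learned or benchmark-driven, but the shape is the same: the expensive API becomes one backend among several, not the default.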

Local AI vs Closed APIs: The Real Trade-offs
Ksenia:
I recently had a conversation with Nathan Lambert, and he said he doesn’t believe open models will catch up to closed models anytime soon – maybe ever. How do you think about that?
Clem:
Comparing open weights with APIs is a bit like comparing apples and oranges, because they’re not really the same systems. Behind an API, you have tooling, harnesses, systems – sometimes several models behind the interface. Comparing that to a raw model is unfair. It’s like saying, “My engine is never going to be better than the car.” It’s not the same thing.
| Option | Best for | Tradeoff |
|---|---|---|
| Closed API | Fast prototyping | Less control |
| Open model | Customization and learning | More engineering work |
| Local model | Privacy and cost control | Not always frontier-level |
| Agent-native platform | Automated workflows | Needs clean APIs and docs |
And they don’t need to have exactly the same accuracy, because they provide different kinds of value. A local model is free. It’s private. So even if it doesn’t have the same accuracy on every task, it can still be the better choice for many tasks because it saves money and gives you privacy and control.
Yes, it’s fun to compare accuracy. I think the gap has been shrinking, and the open-source community has been amazing at pushing the frontier. But at the end of the day, I don’t think that’s what matters most. Especially as people become more concerned about cost and compute constraints, those are positive forces for open-source models, regardless of how close they are on some benchmark.
And benchmarks are benchmarks. Just because one model is better at a big generalist chat benchmark doesn’t mean it will perform better on your specific task. I think we should progressively let go of this thinking in absolute terms – better or worse – and instead look at specific use cases and the best trade-offs between accuracy, cost, speed, privacy, and control.
One thing people don’t talk about enough is that open source gives you a learning experience. It builds your skill at training models. In the long run, I think that’s key for companies, because building features and products is becoming trivial. With Cursor, Lovable, and all these tools, progressively almost anyone can build websites, apps, and features.
What will help you differentiate and succeed as a company is getting closer to the frontier – maybe your ability to train models yourself, optimize them, fine-tune them, post-train them on your own data. And obviously, you can only really do that with open source. You can’t do it with an API.
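Editor’s note: a minimal illustration of the point about open weights. Real post-training uses the same idea at vastly larger scale; here a single “weight” is updated by plain gradient descent on tiny made-up in-house data, something you simply cannot do through a closed API.

```python
# Toy post-training: fit a one-parameter model y = w * x to in-house data
# (which happens to follow y = 3x) by minimizing mean squared error.
# The data, learning rate, and setup are all invented for illustration.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0          # the open "weight" we are allowed to touch directly
lr = 0.02        # learning rate
for _ in range(200):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad
# w converges toward 3.0: direct weight access is the capability
# that an API-only relationship with a model never gives you.
```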
AI Literacy, Robotics, and What We’re Missing
Ksenia:
I always talk about AI literacy becoming as important as reading and writing. But what you’re saying suggests we won’t only need to read and write – we’ll also need to train models.
Clem:
Yes – training models, optimizing models, building your datasets, creating multimodal systems, running them locally. All of that is the new equivalent of building software a few years ago. These are the skills you’ll increasingly need to differentiate yourself.
Robotics too.
| Skill | What it means now |
|---|---|
| Fine-tuning | Adapting models to your own task or data, not only prompting closed systems. |
| Dataset building | Creating, cleaning, and evaluating the data that makes AI useful. |
| Local deployment | Running models on your own device or infrastructure for privacy, cost, and control. |
| Agent harnesses | Understanding the tools, routing, APIs, and prompts around the model. The engine is not the car. |
| Robotics | Building, testing, and fixing simple physical AI systems like Reachy Mini. |
| Tradeoff thinking | Choosing by task, cost, speed, privacy, and control, not by benchmark rank alone. |
Ksenia:
Hugging Face needs to go to schools with Reachy Mini and teach model training.
Clem:
We already have a bunch of professors buying Reachy Mini to help their students learn robotics. It’s good.
Ksenia:
You’ve been playing with robotics more than many people. What’s the equivalent of robotic slop?
Clem:
In humanoid robots, there are definitely a lot of fake AI marketing videos. Not many humanoid robots have actually shipped or become really useful yet. So there’s a lot of slop on the marketing side.
Every time you see a robot video, use critical thinking. A lot of them are just marketing and nowhere close to the robot’s real capabilities or real-world behavior.
Reachy Mini is actually one of the robots that has shipped the most these days. And for that level of cost, it’s pretty close to the frontier – I haven’t seen anything significantly more advanced at that price.
It’s not yet full physical AI. It’s more of an intermediate step from chatbots toward robotics. We still have a lot to solve before reliable, useful humanoid robots become normal – maybe in three, four, five, six years.
Ksenia:
From the Hugging Face platform perspective, what’s missing? Datasets? What are the bottlenecks your audience can help solve in robotics?
Clem:
Yes – datasets are missing, especially large datasets, because they’re costly to gather and host. We recently worked on a product to help with that called Hugging Face Buckets. It’s a way to store large datasets on the Hub – a bit like S3, but designed more for AI. It builds on technology from Xet, a company we acquired, which deduplicates datasets and makes them much easier and cheaper to host.
So hopefully that helps.
Frankly, we also need more builders. Historically, robotics was seen as a difficult field to break into, and that scared people off. But I think that’s changing. Now more builders – especially software engineers – can start playing with robotics by buying a Reachy Mini and starting their learning curve. After a few months, especially with agents, they can do really cool things.
So: more builders, more datasets, more openness. Right now a lot of companies are still working in silos without sharing much of their research. More openness and more collaboration would accelerate robotics the same way it accelerated models.
Hugging Face Becoming an Institution
Ksenia:
How do you feel about Hugging Face becoming such a big part of the ecosystem – almost an establishment in its own right? Aren’t you afraid of becoming the kind of company you originally fought against?
Clem:
No. We’re still tiny compared to many others. We’re still very community-driven, and I think that keeps us grounded in what people actually need.
We now have 15 million AI builders using the platform. There’s a new repository created every eight seconds. Almost three million public models on the platform, almost a million datasets. But the field changes so fast that it creates really strong forcing functions for us to keep changing and innovating.
We’re only about 200 people as a company, which is relatively small compared to our peers. I think we have a good setup to keep pushing ourselves – on robotics, storage, infrastructure, and the agent side, both to help everyone become AI builders and to help agents themselves become AI builders with their users.
My paternity leave was amazing for that too – taking a break and then coming back with fresh founder energy. It helped.
Ksenia:
What was your biggest revelation?
Clem:
Having kids gives you more perspective. You stop taking things too seriously. You stop overthinking certain things and focus more on getting things done – on what’s most impactful – so that you also have time for your family.
That was probably the biggest shift. I’m sure more insights will come over time, but it’s still a bit fresh.
Ksenia:
Is it true that you recently turned down a big investment from Nvidia?
Clem:
We usually don’t comment on private fundraising conversations. We’ve been lucky to be in a very strategic position in the community and to have strong investor support. We probably could have raised ten times more money than we have so far, and we’re grateful for that.
But for something like us, I’m not sure fundraising is what matters most. We’re building a community platform for the long run. It’s always trade-offs. Sometimes more money in the bank, bigger teams, or bigger spending also comes with more constraints – constraints to return more money to investors, constraints to go in certain directions. These are always the trade-offs founders make.
What Excites – and Concerns – Hugging Face's CEO
Ksenia:
Concluding our conversation: what excites you the most in the coming couple of years? The previous years felt like rushing into the bubble – researching, experimenting, building. This year feels more concrete to me. I don’t know how you feel.
Clem:
Yes, the field is definitely maturing. As I mentioned, we’re moving from a world where everyone just used the largest proprietary model through an API to a world where people are more thoughtful. They’re thinking: for some things we use APIs, for others we use specialized open-source models, and for others we use local models that are free.
I’m excited about that. It’s a sign the field is maturing, and it makes things more sustainable. For many companies, it’s just not sustainable to keep increasing token spend on large models for everything.
So that’s probably what excites me most – and within that, local AI is probably the thing that excites me the most.
Ksenia:
Aren’t you afraid to go out of business?
Clem:
No. The more people do local AI, the more they solve problems, the more they see the value of AI, and the more they’ll use everything else too.
I think local AI is great. Tools like llama.cpp are amazing – when you see how people use them, the speed they get on laptops. A lot of open agent platforms have actually been pretty bad for local AI so far, which is paradoxical, because they’re supposed to be open coding-agent platforms.
Often the harnesses and surrounding systems were open-source, but optimized for proprietary APIs. So they worked really well with frontier closed models and poorly with local or open ones. People then assumed the models themselves were bad, but often it was the harnesses and all the tricks around the model that were making or breaking performance.
You can’t just take something like OpenCode and swap one model for another and expect it to work instantly. There are many things around the model that affect performance and accuracy.
But we’ve been working with these communities, and they’ve been amazing about improving this. So I’m excited to see what people build over the next few months as local models, open-source models, and better open agent platforms come together.
Ksenia:
What concerns you the most?
Clem:
What we talked about earlier – renewed lobbying against open source in the US. That’s concerning, because in my opinion it pushes in the wrong direction and is destructive for the field.
Builders already have enough challenges competing with some of the biggest technology companies in the world. You don’t want them also worrying about regulation that prevents them from running a model locally. If I want to write software and run it locally, I’m allowed to do that. Why should it be different for AI?
It doesn’t exactly worry me, because I think people will ultimately understand how important open source is and how good it is for the world. But it does bother me, because I feel like we have better things to do than fight these battles – and unfortunately we still have to fight them.
A Book That Matters
Ksenia:
My last question is always about a book. What book influenced you a lot – one you’d love to share, either from your formative years or more recently?
Clem:
One of my favorite books is The Myth of Sisyphus by Camus. The idea is that Sisyphus is condemned to push this rock up a mountain, and every time he reaches the top, the rock rolls back down and he has to start again.
The conclusion of the book – the last sentence – is that you must imagine Sisyphus happy.
The philosophy behind it is that, despite the seeming meaninglessness of the task, Sisyphus finds happiness in the task itself. There are more meaningful ways to look at the work than just reaching the top or succeeding at the end.
That’s been a very useful metaphor for me as a founder: enjoying the act of building itself, not only the outcome or where you ultimately want to be.
I think you need that even more in AI right now, because so many things are happening that people can feel nervous, stressed, overwhelmed. They ask: how do I keep up? How do I compete? Especially for us as parents – sometimes you see a 20-year-old in Silicon Valley working 24/7 and you think, how am I going to stay relevant in a world like that?
Adopting more of a mindset of enjoying the task, enjoying the journey, doing useful work, and having fun – that seems like a big part of it.
This interview has been edited and condensed for clarity.
Further reading
Hugging Face Hub docs – https://huggingface.co/docs/hub/index
Hugging Face Agents docs – https://huggingface.co/docs/hub/en/agents
Reachy Mini docs – https://huggingface.co/docs/reachy_mini/index
LeRobot docs – https://huggingface.co/docs/lerobot/index
llama.cpp GitHub – https://github.com/ggml-org/llama.cpp
Hugging Face Buckets docs – https://huggingface.co/docs/hub/storage-buckets
ML Intern (GitHub) – https://github.com/huggingface/ml-intern
XetHub joins Hugging Face (acquisition post) – https://huggingface.co/blog/xethub-joins-hf
GPT-2: Better Language Models and Their Implications (OpenAI, Feb 2019) – https://openai.com/index/better-language-models/
Relevant Turing Post resources:
Nathan Lambert – Open Models Will Never Catch Up – https://www.turingpost.com/p/nathanlambert
State of AI Coding (Steve Yegge framing) – https://www.turingpost.com/p/aisoftwarestack
FAQ
What is Clem Delangue’s argument about open source AI?
He argues that open source AI is about control, learning, competition, and access. More people should be able to train, fine-tune, optimize, and run models themselves.
Open weights vs closed APIs: what is the difference?
Open weights are model components builders can inspect and adapt. Closed APIs are full systems with tools, routing, harnesses, and sometimes multiple models behind the interface.
Why does local AI matter?
Local AI can reduce cost, improve privacy, and give builders more control. Clem argues it may handle many workloads that currently default to large proprietary APIs.


