A Thermodynamic Computer That Uses Chaos to Think
Summary
Discussion of advanced quantum computing concepts including qubits vs p-bits, quantum flux measurement, stochastic thermodynamics, and bio-inspired thermodynamic computing for AI applications.
Key Claims (3)
Qubits and p-bits represent different approaches to computing, with specific advantages for energy efficiency.
Evidence: Technical comparison of bit architectures and energy scaling problem
Energy Based Models (EBMs) use programmable Boltzmann distributions for learning.
Evidence: Research on parametric shape potentials and exponential learning rules
Room temperature superconductivity connects to stochastic electrodynamics and zero point energy.
Evidence: Theoretical physics linking thermodynamic computing to quantum phenomena
Video Details
- Published: November 11, 2025
- Duration: 18:09
- Views: 2,022
- Claims Extracted: 3
- Theories: 2
- References: 3
Video Transcript
So we've gone from qubits to, what do they call it? P-bits is what they're calling it. They're claiming it's a different thing, and it kind of is. It's a different engine, right? It's not the same as a standard quantum computer, because now we're going about this completely differently. Instead of creating our quantum flux, we're just measuring the existing quantum flux. That's how I look at it. The part where I disagree: this is definitely a quantum computer, right? The difference is that from classical physics we can explain Brownian motion. The question is why we have this Brownian motion, and is there a connection to the zero-point energy? Even in Paul Thibado's presentation with Charles Chase, one of the questions he gets afterwards is: do you think there's a connection from the thermal energy to the zero-point energy? And I would personally say yes. But that's where I am, and I don't feel the need to argue with people about this, because it's more of a conceptual view of the world. As long as your microchip works, I don't really give a crap what you call it or how you claim it functions. As long as it works, it works, man. So, the problems that they bring up: first of all, they did not like the quantum experience. They also talk about the noise being a problem. Now, the noise in this case is the quantum noise. So I think what they're talking about here is how difficult it is to take a measurement. The thing about quantum systems is that if you try to measure them, they fall apart. So it seems like they found an indirect way to measure this thermal flux, which reduces the noise. And the noise is something that's talked about a lot. Even John Cramer brought up the quantum noise when it came to quantum communication systems and quantum computers. He says you have to reduce the noise to find the one photon that's entangled.
You might have a hundred photons that are not entangled and one photon that is entangled. How do you sift through the noise to find the entanglement? This is almost like a way around that, where they're saying: you know what, we're not even going to look for that. We're just going to measure everything naturally and see where it goes. So it's a different approach, and the problem they're trying to solve is the energy scaling problem. The energy scaling problem is that as we make microchips more and more advanced, they're going to require more and more energy, and as we build more and more data centers, we require a huge amount of energy. One way to solve that is fusion power, where we have unlimited energy. Another way is to reduce the energy requirement of the microchips and of the computers themselves. Which of these options is better? I don't necessarily know. I would say we should be going down both roads: increasing our output of energy and reducing the amount of energy that we use. Both are great ideas. And that's another reason why I'm, I don't want to say all in, but so far I'm pretty high on this company. They're approaching the problem from the efficiency standpoint instead of from the standpoint of just producing more and more power. That sounds great. Here's my big question that maybe we can figure out while we watch this video: what does it do? My understanding is this is going to require people to move to new hardware, which is kind of a taboo thing. You generally don't want to say, hey everybody, we built this new thing, it just doesn't work with any of your old hardware, you need to switch to whole new hardware and potentially new software as well. If that's the case, I'm not as high on it.
The question of interoperability and integration with existing systems is really important. This is the part where I just don't know enough about software, and I'm not pretending to be an expert on quantum computers or anything like that, even though I talk about quantum physics a lot. So what I would want to know is: what is the practicality of building data centers that use these, and what do we need to do to have software that can run real programs on these processors? Their approach was to say: let's look at the human brain. Let's assume the human brain is a quantum computer. How is the human brain using such a small amount of energy? This is something I've wondered about myself. People have asked me this on spaces and things like that. We should be looking at nature. If we want to know how to reduce the energy requirement on our computers, well, there's a computer right here in everybody's head, and the amount of energy that our bodies use is extremely low. We are low-entropy biological beings. We use an extremely small amount of energy to do what I would call a straight-up magical amount of computation. The amount of information that you are feeding through constantly is huge. So this was their basis for figuring out how to lower the energy requirement, and I love this. I love it. >> ...times more energy efficient. And so, you know, the idea was: if we engineered a computational system from first principles for AI, inspired by biology, how do we take inspiration from biology? You don't try to mimic biology directly, you don't try to do biomimicry; you wouldn't build a plane that flaps its wings. You'd understand the principles that biology has learned to harness, and essentially you'd harness those principles directly with an artificially engineered device. And so that's what we're doing with a type of physics called stochastic thermodynamics.
So, this isn't your great-grandfather's thermodynamics. It's a really recent theory. The core theorems are from around 1999. It's as old as Google. When the math is as old as Google, you know it's really young. And that's why people don't really know about this type of physics and this type of math, and why it tends to get skipped over. >> So, you know what's also weird about this, guys, is he says stochastic thermodynamics here. Is this connected to stochastic electrodynamics? Because supposedly, here's how this goes, and I'm, let's say [snorts], fairly confident in this assessment. Remember when I did the TR3B stream? Jared Yates. I spoke to Jared Yates. I think it was him. Maybe it was somebody else; it might have been the other guy, Douglas Miller. I can't remember who it was. Anyway, stochastic electrodynamics was basically a different branch: instead of quantum electrodynamics, we go down stochastic electrodynamics. Yes, there it is. Thank you, Jason. Stochastic electrodynamics is just quantum electrodynamics plus the ether. That's a simple way to think about it. So when he starts bringing up stochastic electrodynamics here, this is when I got really interested, because I'm sitting here going: I think these guys are just measuring the ether. I think this microchip is just measuring the ether and running a computer off of it. And the funny part about this is that there's an analogy here to nuclear power. When people think of nuclear power, they think: oh, we're splitting the atom, harnessing that energy, and putting that energy on the grid. But that's not what we're doing in nuclear power. In nuclear power, we're just boiling water, making steam, and spinning a turbine. In my head, when I think of quantum computers, I think of these computers that are harnessing the ether. But that's not what's really happening in quantum computers.
In quantum computers, we're stimulating these entangled qubits and measuring them. When I think of quantum computers, what I really think of is what these guys are doing. These guys are doing the thing that I would think of as quantum computing, which is just measuring the random fluctuations in the ether. That's what I would think of as quantum computing: measuring the ether. Chat, guys, we're out here just coming up with this stuff. These guys should just hire me to be their marketing guy. I'll come up with dope-ass phrases. These guys should be saying that they're measuring the ether. I'll tweet about it, because I want to spread the message. >> But really, you know, at the large scale, things are classical. They're deterministic. You have Newtonian physics, you have ballistics. Or in the cases where you have very strong signals, your computer is in a deterministic state. If you go to very low power and very low temperatures, or very small time scales, things are quantum. They're in superpositions, there's quantum interference happening, and you can make all sorts of analog quantum experiments; you could build a quantum computer out of that type of physics. But if you have a mesoscale computer, or a mesoscale system, basically it's jittering, right? It's more like chemistry or molecular dynamics or the physics of gases sloshing around, and it's essentially stochastic: there's some randomness to the dynamics and some randomness to the state at all times. And so basically there's an opportunity to build a computer that operates at this mesoscale, right? A computer that has a probabilistic state, that has programmable probabilistic transitions, and that uses stochastic thermodynamics.
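A "probabilistic state with programmable probabilistic transitions" can be sketched in a few lines. Below is a hypothetical p-bit model, not Extropic's actual circuit: the bit's output is random, but the probability of reading a 1 is programmed by a weighted input pushed through a sigmoid.

```python
import math
import random

def pbit_sample(inputs, weights, beta=1.0):
    """One readout of a toy p-bit: returns 0 or 1, where the probability
    of a 1 is set by a sigmoid of the weighted input. The randomness
    itself is the computational resource; the weights 'program' it."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    p_one = 1.0 / (1.0 + math.exp(-beta * activation))
    return 1 if random.random() < p_one else 0

# With zero input the p-bit is maximally random: roughly 50/50.
random.seed(0)
samples = [pbit_sample([0.0], [1.0]) for _ in range(10000)]
print(sum(samples) / len(samples))  # close to 0.5

# A strong positive input biases it almost deterministically toward 1.
biased = [pbit_sample([5.0], [1.0]) for _ in range(10000)]
print(sum(biased) / len(biased))  # close to 1.0
```

The contrast with a qubit is that nothing here needs coherence: the state is always a definite 0 or 1, and only its statistics are programmed.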
And that's what we're pioneering, and we think it's the future of silicon, and we also think it's the future of AI hardware from first principles. And why is that? We think deep learning is going to struggle to capture the complexity of nature. A single forward pass has to get deeper and deeper in order to have enough computational complexity to replicate the distribution of the data set, and the distribution of the data set, you know, has a lot of compute from nature behind generating it. And if you have to capture everything in one forward pass, you need to make your model bigger and bigger, and then you're going to have more parameters, and since you have more parameters, you need to kill their entropy, and you need more data. So instead, if we basically use sampling, use test-time compute scaling as we're familiar with, then you can have a higher-complexity transformation from your input to your output, and you can capture higher complexity in the data, deeper into the tails of the distribution. The problem is that Monte Carlo sampling (to do MCTS and so on, any kind of Monte Carlo algorithm) kind of sucks on a classical computer. You need a pseudo-random number generator, and you have this sort of sequential walk. >> Okay, I like everything he's talking about here. Everything is so straightforward: we're trying to, like, input this energy here and measure it back out. He says the phrase "first principles." Guys, when you want to sound smart, just use the phrase "first principles" as often as possible. Just say "I figured this out from first principles," blah blah blah, and you're going to sound smart. Most people have no idea what it means. It basically just means they're using standard physics.
We're basically going back to the framework of physics to solve a problem, pretty much. So what he's about to explain here is that Monte Carlo is a system we use to derive randomness. If you've ever done any kind of gambling, and I hate even using this as an example, but if you gamble online, how is it determined what you get back in response? It's a Monte Carlo system that uses a pseudo-random generator, right? And what he's going to say here is that it's extremely inefficient. Yes, RNG. Thank you very much. RNG, random number generator. If you don't know what RNG stands for, get it into your lexicon. And what he's saying is that the old systems do RNG very slowly and inefficiently. And he's saying if we just measure the ether, if we measure these thermal fluctuations, then we can get better randomness than the Monte Carlo simulation would get, and much more efficiently as well. >> ...you know, the ratios are right, and then you get basically a Boltzmann distribution, and that's really slow. You can try to parallelize it, but there's a limit, right? So basically, to run these algorithms and make them competitive in the algorithmic landscape, you need a new type of computer, and that's essentially what we're building. Our mission is to build the densest substrate for AI. With the philosophy stuff I talk about, increasing the wattage of civilization with EO, and then with Extropic we're trying to gain more intelligence per watt, and put together, we have more intelligence for civilization. We're scaling, essentially, the scope and scale of intelligence in our universe. And so, you know, Trevor and I basically hacked, like 10 years ago as students at Waterloo, how to do AI on quantum computers from first principles, and then Google, like, pushed us out of >> Yeah, I was gonna say, did they just miss that bleep? [laughter] I feel like that bleep totally missed that guy's f-bomb. But that's okay.
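To make the "sequential walk" point concrete, here is a minimal Metropolis sampler (my illustration, not Extropic's code): every step consumes a pseudo-random draw and depends on the previous state, which is exactly the serial bottleneck being described. The walk nevertheless converges to a Boltzmann distribution over the given energy function.

```python
import math
import random

def metropolis_boltzmann(energy, x0, n_steps, step=0.5, beta=1.0):
    """Sequential Metropolis walk targeting p(x) proportional to
    exp(-beta * energy(x)). Each iteration depends on the last, so the
    chain cannot be trivially parallelized on a conventional computer."""
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = x + random.uniform(-step, step)
        d_e = energy(x_new) - energy(x)
        # Accept with probability min(1, exp(-beta * dE)); this detailed-
        # balance rule is what produces the Boltzmann distribution.
        if d_e <= 0 or random.random() < math.exp(-beta * d_e):
            x = x_new
        samples.append(x)
    return samples

# Quadratic energy E(x) = x^2 gives a Gaussian-shaped target.
random.seed(1)
xs = metropolis_boltzmann(lambda x: x * x, x0=0.0, n_steps=20000)
print(sum(xs) / len(xs))  # near 0 for this symmetric energy landscape
```

A physical thermodynamic sampler would, in effect, get the equivalent of these accept/reject steps "for free" from thermal fluctuations instead of from a serial pseudo-RNG loop.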
So right here he's about to explain that basically he's a genius and Google, like, pulled him out of school because of how much of a genius he is on quantum computers, and then they worked on quantum computers and then they left quantum computers. But let me go back to what he was saying a second ago, because he says that this is basically a probabilistic AI. When I think of probabilistic AI, I'm thinking you're just taking a cube of data and you're measuring it. You're measuring the randomness in this cube of data, which seems like measuring the medium. And he says: we're doing this because, instead of stimulating, we're not stimulating this. The difference with the qubit is that we would stimulate these qubits and measure data. In this case we're not stimulating it. We're using the random fluctuations themselves as a resource. To me, all of this screams zero-point energy, personally. Okay. >> ...school. And then we launched this product known as TensorFlow Quantum. And then he went to the hardware division. I ended up working for Sergey Brin on, like, special projects of all kinds in physics and AI. But over time I kind of got jaded with quantum, and we thought this sort of middle ground between classical and quantum, you know, the mesoscale, was going to be way more interesting. And so we set out to do a bio-inspired, or thermodynamic, form of computing. And so, you know, we're building hardware, we're building compilers and middleware, and we're building ways to connect it to the current deep learning stack, so that it just feels like, you know, regular JAX and whatnot. >> What sort of primitives are you using on this computer? >> Well, okay. So, this gets pretty cool, guys, because this is where he actually explains what it does. You could, like, imagine we're molding something, or we're doing sculpting.
We're doing sculpting, and we're going to sculpt our shape. Our shape is our design; our shape is our information. So we're going to create this shape of information. And what he's saying is, instead of manually creating our shape, using a bunch of energy to create the star that we want, we're just going to let it fluctuate. We're going to force the data into the mold we want it to be in. So instead of physically molding it the way we want, we just put the mold around it and let it form itself into its shape. And then we just measure the shape, and that becomes our resource. If that sounds weird, well, that's because all this computing stuff is pretty damn weird. But maybe I didn't get it right. Let's see. >> Our primitive of choice is EBMs, or energy-based models, right? Which are programmable Boltzmann distributions. They're pretty clean, because you parameterize this energy landscape. So essentially we create this potential in which the electrons dance. We have this parameterized landscape, a parametric shape, and by just waiting (the red dots represent electrons in this landscape that we control with voltages), it eventually equilibrates and gives you a distribution. And then, because it's an exponential, you can actually get all sorts of nice learning rules that don't necessarily require backprop, such as contrastive learning rules. And there are all sorts of learning rules like this, right? It's similar to Geoffrey Hinton's forward-forward algorithm. So you could train this machine to do machine learning by contrastive learning, and we've done that with some small devices and some bigger ones. And essentially, you know, neural nets came from energy-based models.
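A programmable Boltzmann distribution is easy to state exactly for a small discrete landscape. This toy is my sketch, not the device's mechanism; the real chip realizes the distribution physically by letting electrons equilibrate, rather than computing exponentials. It shows how reshaping the energies, the "mold," reprograms the probabilities:

```python
import math

def boltzmann_probs(energies, beta=1.0):
    """Programmable Boltzmann distribution: p(s) proportional to
    exp(-beta * E(s)). 'Programming' means shaping the energies;
    sampling the equilibrated system then reads the distribution out."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)  # partition function (normalizer)
    return [w / z for w in weights]

# A flat landscape gives a uniform distribution over the four states.
flat = boltzmann_probs([0.0, 0.0, 0.0, 0.0])
print(flat)  # [0.25, 0.25, 0.25, 0.25]

# Deepening one well (lowering its energy) concentrates probability there.
molded = boltzmann_probs([0.0, -2.0, 0.0, 0.0])
print(molded[1])  # about 0.71: the shaped well now dominates
```

This is the "mold" picture in one line of math: you never push individual states around; you reshape the landscape and let equilibrium do the work.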
They came from, you know, looking at averages of layers of Boltzmann machines, right? That's what the 2024 Nobel Prize was for, and that's when deep learning really started. And so we're kind of going back to where neural nets came from, and doing these primitives in hardware and software. In terms of applications, I don't have to explain what probabilistic inference can do. It can do most things. It's a superset of deep learning. There are all sorts of cool early-day applications. You know, a year and a half ago we posted our first experiment online. We got cooked pretty hard. It was good. But really, for us it was: okay, can we even create a programmable electron diffusion device, show we could run basic algorithms on it, and figure out how the hell we would program it? And basically, our learnings from taking quantum computing tech and running it basically 100 times hotter allowed us... this was like a breadboard prototype. So, you know, we rented this lab in Canada. Everything was rented. And we did some first experiments, and now we're kind of sitting on the paper, and we're going to put it out basically after we launch our first silicon product. But, you know, the goal there is to open-source a viable way to build this type of computing. That's maybe less commercially interesting, but academia can take it and run with it. You know, here's our first... >> Okay, so yeah, one person made a good comment in the chat, which is: what is the benefit of this? We already talked about the energy savings, but the thing about the energy savings that makes them so significant is that this could be something that runs in your phone. This becomes a microchip that can go in your phone, where your phone can do AI generative content.
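The contrastive learning rules mentioned above share a simple skeleton: compare a statistic under the data (positive phase) with the same statistic under the model (negative phase), and nudge the parameters to close the gap, with no backprop through a deep network. Here is a deliberately tiny version for a two-state EBM, E(s) = -theta*s with s in {0, 1}, where the model expectation is computable in closed form. It illustrates the shape of the rule, not the hardware's actual algorithm.

```python
import math

def contrastive_update(theta, data, lr=0.1):
    """One contrastive step for a two-state EBM E(s) = -theta * s.
    Positive phase: mean of s under the data. Negative phase: exact
    mean of s under the model p(s) proportional to exp(theta * s)."""
    model_mean = 1.0 / (1.0 + math.exp(-theta))  # exact <s> under the model
    data_mean = sum(data) / len(data)
    return theta + lr * (data_mean - model_mean)  # close the gap

# Training data in which s = 1 appears 80% of the time.
data = [1] * 8 + [0] * 2
theta = 0.0
for _ in range(2000):
    theta = contrastive_update(theta, data)
print(round(1.0 / (1.0 + math.exp(-theta)), 2))  # → 0.8, the data mean
```

In a larger Boltzmann machine the negative phase is intractable to compute exactly and has to be estimated by sampling, which is precisely the step a thermodynamic sampler is meant to make cheap.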
So now you don't need to go to a website and log in to ChatGPT or whatever to have it generate content for you. Now your phone or your computer at home can do it. Why? Because it's just a microchip being run at room temperature, as opposed to something that needs to be cooled near absolute zero with liquid helium or whatever else they do for superconductivity to happen. This is why room-temperature superconductivity is so important.