Hyundai has been making industrial robots for decades. They are also big in shipbuilding and defence. The K2 Black Panther is one of the best tanks in the world. Imagine the hype if Tesla did half the things Hyundai did.
I didn’t know HN liked to manufacture drama. Why does this event affect Tesla? If anything, it validates what they’ve been working on. Same with Nvidia’s self-driving platform.
We don’t know the capabilities of either and how they match up against Tesla’s Optimus and FSD.
Another car company making robots. The link can easily be made. Everyone knows that Tesla has ambitions in robotics. Few know Hyundai has been making robots for decades.
There are at least 18 humanoid robots good enough to have YouTube videos of them moving around. Some are far more agile than this one.
Needs more manipulation. Such elaborate fingers and all it does is mime carrying a box.
There are some brief material handling demos at the end, but nothing challenging.
There's been considerable progress in robot manipulation in the past year, after many decades of very slow progress. This year's new manipulation demos have been for fixed base robot hands. Robot manipulation still isn't good enough for Amazon's bin picking.
The best demo of 2025 is two robot hands opening a padlock with a key, with one hand holding the lock while the other uses the key.
We'll probably see this start to come together in 2026.
I think the big differentiator for this one is the carrying capacity. They list 50kg instant/30kg sustained carrying capacity which is very impressive and I can't think of other humanoids with similar capability off the top of my head.
That's what I was thinking, but could not find the link. Here it is working on some standard tasks.[1] Grasping the padlock and inserting the key is impressive.
I've seen key-in-lock before, done painfully slowly.
Finally, it's working.
That system, coupled to one of the humanoids for mobility, could be quite useful. A near term use case might be in CNC machining centers. CNC machine tools now work well enough on their own that some shops run them all night. They use replaceable cutting tools which are held in standard tool holders. Someone has to regularly replace the cutting tools with fresh ones, which limits how long you can run unattended. So a robot able to change tool holders during the night would be useful in production plants.
See [2], which is a US-based company that makes molds for injection molding, something the US supposedly doesn't do any more. They have people on day shift, but the machines run all night and on weekends. To do that, they have to have refrigerator-sized units with tools on turntables, and conveyors and stackers for workpiece pallets.
A humanoid robot might be simpler than all the support machinery required to feed the CNC machines for unattended operation.
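The night-shift limit described above can be roughed out with simple arithmetic: each cutting tool has a rated life, the magazine holds a fixed number of spares, and the machine stalls once they're exhausted. A minimal sketch; every figure here is a hypothetical assumption, not any real machine's spec:

```python
# Rough estimate of unattended CNC runtime before a person (or robot)
# must swap in fresh tool holders. All numbers are illustrative.

def unattended_hours(tool_life_min, spares_in_magazine, cutting_duty=0.7):
    """Hours the machine can run before exhausting its spare tools.

    tool_life_min:      rated cutting minutes per tool (tooling catalog value)
    spares_in_magazine: identical spare tools loaded in the tool changer
    cutting_duty:       fraction of wall-clock time the spindle is cutting
    """
    total_cutting_min = tool_life_min * spares_in_magazine
    return total_cutting_min / cutting_duty / 60.0

# e.g. 90 min of life per end mill, 8 spares, 70% spindle duty:
print(round(unattended_hours(90, 8), 1))  # ~17.1 h: enough for one night shift
```

A robot that can reload the magazine mid-shift effectively multiplies `spares_in_magazine`, which is the whole pitch.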
> A humanoid robot might be simpler than all the support machinery required to feed the CNC machines for unattended operation.
A humanoid robot is significantly more complicated than any CNC. Even with multi-axis, tool change, and pallet feeding these CNC robots are simpler in both control and environment.
These robots don't produce a piece by thinking about how their tools will affect the piece, they produce it by cycling through fixed commands, with all of the intelligence of the design determined by the manufacturer before the operations.
These are also highly controlled environments. The kind of things they have to detect and respond to are tool breakage, over torque, etc. And they respond to those mainly by picking a new tool.
The gulf between humanoid robotics in uncontrolled environments and even advanced CNC machines like these (which are awesome) is vast. Uncontrolled robotics is a completely different domain, akin to solving a computation in P by a rote algorithm vs. excellent approximations of an NP problem by trained ML/heuristic methods. Like saying any sorting algorithm may be more complex than a SOTA LLM.
Most flexible manufacturing systems come with a central tool storage (1000+ tools) that can load each individual machine's magazine (usually less than 64 tools per machine). The solution to the problem you mention is adding one more non-humanoid machine. The only difference is that this new machine won't consume the tools and instead just swaps the inserts.
There is literally no point in having a humanoid here. The primary reason you'd want a human here is that hiring a human to swap tools is extremely cost effective since they don't actually need to have any knowledge of operating the machines and just need to be trained on that one particular task.
I think manipulation will come long before 2036. But the people doing high-level planning on LLMs trained on forum discussions of Chucky movies and all kinds of worse stuff, while planning for home robot deployment soon, are off by a lot. Things like random stuff playing on TV can rehydrate memories that were mostly wiped out in RLHF; it will need many extra safety layers.
And even if it isn't just doing crazy intentional-seeming horror stuff, we're still a good ways off from passing the "safely make a cup of coffee in a random house without burning it down or scalding the baby" test.
Tesla’s R&D has been shit for years. The value it brings to the table is mass-manufacturing expertise.
Tesla can bomb the robot for a while. As long as it keeps its plants online, it can buy or partner with one of these guys with its manufacturing platform (and political connections).
I'm seeing them at #9. Maybe you meant most reliable electric vehicle (Model 3)? Their average rating is dragged down a lot by the cybertruck which CR says is a bit of a lemon.
I think that you're just off on timing. The previous poster didn't mention the even worse news for Tesla: its sales declined for the second straight year, and it's no longer the leading manufacturer of electric cars. (BYD is.)
I hope you’re right, and I’m also near-certain you will be in my shoes within 2 years, maybe 3. Here’s hoping that won’t be the case. (My pessimistic take is this will have ~0 effect, due to retail still being in it for FSD (via cyber cab) and the robot.)
The "creepiness" isn't a bug; it's a conflict of expectations.
We are looking at a machine that mimics the human form (bipedal, two arms) but completely ignores Biological Constraints (tendons, ligaments, joint limits).
When it rotates 180 degrees at the waist, our brains trigger a "Body Horror" response because we subconsciously map our own anatomy onto it. We see a broken spine. The robot just sees a shorter path.
This is purely Unconstrained Kinematics. Hyundai isn't trying to build a "Better Human." They are building a "Bipedal Forklift" that just happens to fit through our doorframes.
It’s a tool. Let’s stop judging it like it’s supposed to be one of us.
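The "shorter path" point is easy to make concrete: a controller with no human joint limits can always take the wrapped shortest arc to a target heading, while a human-like waist clamps at some twist limit and has to move the rest of the body instead. A toy sketch; the 45-degree limit and the angles are made-up illustrative values:

```python
def shortest_rotation(current_deg, target_deg):
    """Signed rotation an unconstrained joint takes: always the wrapped
    shortest arc, even if a human spine could never follow it."""
    return (target_deg - current_deg + 180) % 360 - 180

def human_waist_rotation(current_deg, target_deg, limit=45):
    """A human-like waist clamped to +/-limit degrees of twist: targets
    beyond the limit require turning the whole body, not just the waist."""
    clamped = max(-limit, min(limit, target_deg))
    return clamped - current_deg

# Facing 0 deg, object at 180 deg behind the robot:
print(shortest_rotation(0, 180))     # -180: one continuous twist
print(human_waist_rotation(0, 180))  # 45: waist limit hit; feet must step
```

The unconstrained version is exactly the "broken spine" our pattern-matching objects to; to the planner it is just the cheaper move.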
> Tech CEOs and their breathless AI hype have demonstrated to everyone how dreadfully effective it is to weaponize anthropomorphization and pareidolia.
Read my paper. It's called "AI Consciousness: The Asymptotic Illusion of Artificial Agency—A Quantum-Topological Proof of Consciousness Non-Computability via Orchestrated Objective Reduction and Penrose-Lucas Arguments". You'll find it on Academia & Zenodo.
It pretty much shuts that crap down into fantasy land where it deserves to stay.
Here's the abstract:
This paper challenges the prevailing assumption that computational escalation can bridge the ontological gap between artificial intelligence and genuine consciousness, presenting the "Asymptotic Illusion Thesis": simulation, regardless of fidelity, cannot transmute into phenomenal reality due to a fundamental topological obstruction.
We establish that neural networks operate within Class P (Polynomial) complexity—deterministic, syntactically defined systems—while conscious experience maps to Class NP (Nondeterministic Polynomial) complexity, possessing an interiority that resists algorithmic compression.
This is not a technical limitation but a geometric mismatch: the linear structure of computation fundamentally cannot access the high-dimensional manifold of qualia.
Our analysis demonstrates that organizational closure, not code, defines sentience.
Biological systems exhibit autopoietic constraint-generation in continuous thermodynamic struggle (f(f)=f), creating intrinsic necessity absent in artificial systems governed by programmed teleology.
We invoke Penrose-Lucas arguments and Orchestrated Objective Reduction (Orch-OR) theory to establish that genuine free will and agency require non-computable quantum processes in neuronal microtubules—a computational capacity Turing Machines cannot replicate.
Recent neuroscientific debunking of "Readiness Potential" signals vindicates top-down veto control as the locus of volition, supporting quantum indeterminacy models of free will.
The hard problem of consciousness—the explanatory gap between physical processes and subjective experience—represents a topological barrier, not an engineering problem.
Current large language models represent "Stochastic Parrots" proficient in syntactic manipulation but devoid of semantic comprehension.
Our conclusion: the consciousness-computation boundary is physical and absolute, necessitating a reorientation of technological development toward preservation of biological consciousness rather than simulated existence.
The comments here are focusing on the "awkward transition" from dancing to static, but that's the most honest part of the demo.
We’ve been spoiled by 10 years of highly choreographed, multi-take Boston Dynamics sizzle reels. What we just saw was the transition from R&D Showpiece to Factory Tool.
The "awkwardness" is what actual deployment looks like.
It’s electric (no hydraulic leaks).
It has 56 DoF (redundancy for complex assembly).
It’s being deployed in Georgia now, not in "3 months maybe."
Tesla is shipping Optimus Sub-Primes. Hyundai is shipping a boring, reliable, high-torque worker. I’ll take the boring static model that actually has a spec sheet over another backflip video any day.
> Tesla is shipping Optimus Sub-Primes. Hyundai is shipping a boring, reliable, high-torque worker. I’ll take the boring static model that actually has a spec sheet over another backflip video any day.
We don't actually know how boring or reliable it is.
But the key here is really the mind. Atlas looks strictly worse for a given task than any other kind of robot. Its only advantage is the touted lower training cost, and it's very unclear how that really measures up. You can see a robot do cool stuff on stage and imagine it must be great, but the only thing that really matters for manufacturing is whether they can lower the training cost for new tasks to much less than what a static, non-humanoid robot costs to program.
There it seems dubious. They only seem to talk about and demonstrate one task, engine part sequencing. It appears to be just a pick-and-place task, and it's not obvious why existing robots can't do it well. They make general claims about how it's often not worth automating a task because it changes too quickly or it costs too much to program a robot. Sure. But that's a statement about the quality of AI, not the form of the robot.
Existing pick-and-place machines work great and can handle messy real-world noise like objects being in random positions and places. They are much, much faster than a humanoid robot will be, and much cheaper. So what's Atlas' advantage on the factory floor?
Whoever approved making a robot look like it is about to dance before awkwardly panning to a "static" model should not be making those decisions. It literally killed the vibe in the room. People went from the verge of freaking out to the biggest letdown ever, and it ruined whatever they said afterwards.
Disagree; it was a bad move in two different ways. It was anticlimactic emotionally, and it didn’t convey the right message rationally either.
Anticlimax because the first robot hyped up the entrance of the second robot. It was emotionally conveying “hey, you think these groovy movements are great? Check out this guy.” But once it became clear that the next guy is just a dumb statue, it deflated. How lively the first one was made the second one that much worse in context. A step back.
That is the emotional fail. But perhaps you don’t care about that. Think about what additional message the stage presence of the second robot conveys. The first robot established that they can make a smooth robot. They drove home that the robot is usually autonomous, and in any case is not puppeted by a guy in a motion-tracking suit. The presentation covered how the robots will be used, who the first pilot customer will be, how it will be introduced, and how it will be manufactured. These are all great answers to concerns someone in the audience might have.
But what is the concern to which the second robot is the answer? Did you doubt, even for a second, their ability to make the same robot you can already see on the stage but in blue? Because I didn’t, not before they showed the static demonstration. If they had just said “we are working on a production-optimised, streamlined v2” I would have totally accepted that they can do it.
The only message the second, non-working robot communicates is that they are having trouble with their production model. They couldn’t even make it stand in one spot and wave politely! Something about it is cooked, and badly. It adds nothing positive to the message of the presentation while introducing a very visible sign that something is wrong.
Now, do I think they won’t be able to solve the problems eventually? Of course not. Heck, maybe it will be up and running within days. But why show something which is not working? It is such an unforced error. The first robot could have just done the dance, pointed at the screen, and walked off, and nothing would have been lost from the presentation.
I'm not trying to say you're wrong. I'm trying to say that it did not kill the "vibe" for me, so to say. For all I know they _wanted_ the second one to move, but it wasn't ready in time, and situations like that are completely legitimate. I still am very impressed by what they _did_ demonstrate. Can't win them all!
> Decisions will not be made based on emotions from a demo at CES.
Sure. It is not a mistake with grave consequences. Something can be a mistake and not matter much in the long run. Like, the CEO could have gone on stage wearing mismatched shoes, or wearing a red clown nose. It wouldn't ruin everything. Wouldn't bankrupt them. If the robots are good they will still be sold. But it would undermine the message a little bit. For no good reason whatsoever.
The fundamental questions will be: Do the robots work? Are they cheaper than the equivalent labour from humans (including all costs on both sides of the comparison)? Nothing else matters in the long run. They could have just never gone to CES and it would be all the same.
> Im sorry, but this is just too much.
ok :) if you say so. But then tell me. What did the stage presence of the second robot add to the show?
> "We just couldn't pry the actual production samples out of our engineers' hands at the lab this week."
Sounds like "Our CEO ordered samples to be shipped, but those pesky engineers just wouldn't do it, guys!"
> "Um, so we're going to be showing you videos"
Except they didn't even show videos, just some bad CGI. It's "We rented this huge-ass auditorium to show you our pet golden elephant. He is currently in our basement and he is tired right now, so instead look at all those cool drawings my nephew made!"
This may be a bit overwrought. I’m on my 3rd watch and can’t identify the moment where breakdancing could be expected; my best guess is when it does a tai chi position at 3m20s, which seems unlikely, but perhaps on the nose if you’re Western, young, not a dancer, and don’t know breakdancing well, i.e. as a series of static positions moved through slowly.
Ever been involved with a demo, especially at an event the size of CES? I'd also choose a human controller. Frankly, the fact that they were honest and wrote it into the script is a positive indication.
I am used to thinking of robots having to be physically isolated from the squishy humans. It's probably a reasonable safety precaution for what is a gen 0.8 model not to have an opportunity to run amok while being filmed.
It's interesting that so far we have not seen the new Atlas actually functioning. In the past, Boston Dynamics announcements have always been done with real hardware. But this time it's only models and CGI.
Why does it need bipedal legs on a flat factory floor? Does it need them to walk up the steps of the bus it takes to its second-story walk-up apartment, where it lives with its robot family?
The end goal is to have these things be general purpose do all sorts of jobs everywhere. Civilization is designed around human anatomy, so if you want to replace humans with a drop-in robot replacement in as many places as fast as possible then a humanoid robot is how you’re going to do it. It’s also probably more cost effective to design 1 robot that does everything, than design 1 robot specialized for 1 task. Specialized robots will fill in gaps, but humanoids will be the vast majority of the AI-based robot workforce.
This, and ease of transition. I imagine if you're buying a robot for a factory/complex/rig/jobsite, you would want one that can access anywhere a human can. You can't just rebuild a factory for a couple robots
Does the robot need to eat its robot lunch in the breakroom and share some robot memes w/ its robot bros? We already have robotics in factories, when they are more efficient than humans the factories are retrofitted or rebuilt around the new machines.
I don't know why this keeps getting repeated. Honestly I'm taking the exact opposite bet and am going to work on making it as easy as possible to build specialized robots and machines.
People seem to misunderstand how easy it is to build a humanoid robot and how hard it is to program robots in general. Even if you build a humanoid robot that is perfectly general purpose mechanically, you will still need to program it like a computer that just happens to have arms and legs.
Yeah and it has to be more cost efficient too, these things take multiple car batteries that have to be recharged more than daily. None of this makes any sense, it's just fantasy land for tech bros.
Why does Hyundai need "general purpose" bots, do they want to create a robot civilization? They don't need a robot to "do everything", they need robots to efficiently make cars. What do you even mean "do everything? Does it need to take the bus home and make love to its robot wife, walk its robot dog around the block? It doesn't need legs, you have drunk the koolaid, snap out of it. AGI isn't around the corner and your fantasy robot world isn't real.
Considering that, either of two things would be sufficient for them to make general purpose robots:
* It will be one of their numerous business lines and provide substitutable labor for the others, not all of which are in car construction
* Considering they are in things from credit cards to railways to steel, they would like to add a new product line of selling robots to other customers
None of this is outrageously out of line. Various companies start with some business lines and end up with others. This is particularly common in Korea where Samsung wasn't always a semiconductor company. Hyundai themselves were in construction first. Closer to home, Amex was a logistics company. These things happen. Perhaps you are familiar with Softbank which was a PC software publisher and is now an investment company.
So let me get this straight, you think they'll have one robot that what, flies around to each job site and does each different job? This makes 0 sense, you are not thinking like an engineer. What you are dreaming of is fantasy land, again, wake up you are delusional.
What will it cost though? They keep talking industrial, I guess it will be very expensive. You would hope that a company like Hyundai can mass produce these things for a sane price really.
Maybe wait for a consumer version... without 56 DoF. Although who knows what kind of laundry folding might be possible with 56 degrees of freedom, and fully rotating joints!
Just curious about the price of these once they roll off the line. I would pay 150k for it today, not 500k. So I wonder if I will have one next year or not.
And for now, who knows if it can fold laundry; literally the only demos we get are dancing and martial arts, 2 things I could not care less about. I want my house painted in whatever hot weather (painters don't work in the summer here because it's too hot), laundry picked up, stairs cleaned, etc. I don't need a 100k robot doing Korean dances.
But I think this 56 DoF model might be more interesting than whatever the consumer product will be, as the consumer products seem vastly worse even than the industrial ones, and both had 'sketchy' demos of very simple tasks (slow, Parkinson's-like, many takes, often with someone controlling it with VR glasses and a controller).
Hyundai aims not only to use humanoid robotics but also to scale it industrially. The company plans to build a production system with a capacity of up to 30,000 robots per year. Analysts at Morgan Stanley Research predict the market for humanoid robotics to reach a volume of around five trillion US dollars by the year 2050. For the period around 2028, when Hyundai plans to start scalable production, a unit price of about 150,000 US dollars is expected.
$150k is about 2-3 manufacturing floor salaries for one year. I am quite certain many companies would prefer to buy a compliant robot slave that will never ask for a raise, take sick leave, or get demoralized than to pay 2-3 fulltime employees the value of their labor, only to have them leave for a better job 6 months later and have to train new ones all over again.
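The "2-3 salaries" framing invites a back-of-the-envelope payback calculation. A sketch; the robot price, loaded labor cost, maintenance fraction, and uptime below are all hypothetical assumptions:

```python
def payback_months(robot_price, loaded_salary_yr, workers_replaced,
                   annual_maintenance=0.10, robot_uptime=0.85):
    """Months until the purchase price is recovered from labor savings.

    annual_maintenance: yearly upkeep as a fraction of purchase price
    robot_uptime:       fraction of human-equivalent output delivered
    """
    yearly_savings = loaded_salary_yr * workers_replaced * robot_uptime
    yearly_cost = robot_price * annual_maintenance
    net = yearly_savings - yearly_cost
    if net <= 0:
        return float("inf")  # never pays back under these assumptions
    return 12 * robot_price / net

# $150k robot standing in for 2 workers at $55k loaded cost each:
print(round(payback_months(150_000, 55_000, 2), 1))  # ~22.9 months
```

The result is extremely sensitive to `robot_uptime`: at 50% effective output, the same numbers give 45 months, which is why the reliability question matters more than the sticker price.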
This robot is going to put all the children out of work at the Hyundai factory in Georgia. Or maybe their child labor augments the robot for hard-to-reach areas?
Makes much more sense, right? In a factory it's so much easier to control the environment (flat floors, etc.) than to develop the ever-next-generation robot that can cope with all situations. Also, I don't know why that battery-assembling robot is a humanoid anyway.
When people say "humanoid robots are designed for human work spaces" 95% of the time they mean the counter/table is at adult human height, the other 5% of the time they mean the robot has to climb stairs.
I wonder how easy it is to control. Like does someone have to spend 6 months in a lab programming each movement, or could you have it doing a new task by the end of the day?
It would be cool to have one at home as a little helper some day.
Wonder how much this would cost out the door. Figure’s optimistic target is 150k per robot after mass manufacturing. This level of dexterity is only needed for personal robots; I would love this to be sub-10k.
The robot companies are all getting closer, but I don't think a humanoid robot will do anything at a cheaper cost than an actual human in 2026. It definitely looks like there is a path, though.
Will the winner in the humanoid robot game be the one that develops the most human like hands? A good test would be a robot that can thread a needle and sew on a button.
Interesting that the Atlas robot can only operate from -4 to 104°F. That upper limit is pretty weak; I wonder what starts to break or not work properly at 110°F?
> I wonder what starts to break or not work properly at 110°F?
Most likely the cooling of the actuator motors. You need to keep the magnets in the motors under their Curie point or they stop being magnets. At the same time, the coils right next to the magnets are heated by the electricity going through them.
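Whether the binding constraint is magnet temperature or winding insulation, the shape of the problem is the same first-order thermal model: steady-state temperature is ambient plus I²R losses times thermal resistance, so the allowable continuous current shrinks as ambient rises. A sketch with made-up motor constants, not Atlas specs:

```python
def max_continuous_current(t_ambient_c, t_limit_c=120.0,
                           r_winding_ohm=0.5, r_thermal_c_per_w=2.0):
    """Largest steady current (A) that keeps the hot spot under its limit.

    Steady state: T_hot = T_ambient + I^2 * R_winding * R_thermal
    All motor constants here are illustrative assumptions.
    """
    headroom_c = t_limit_c - t_ambient_c
    if headroom_c <= 0:
        return 0.0  # no thermal headroom at all
    return (headroom_c / (r_winding_ohm * r_thermal_c_per_w)) ** 0.5

# Continuous torque (roughly proportional to current) derates with ambient:
for t_c in (20, 40, 60):
    print(t_c, round(max_continuous_current(t_c), 2))
```

Under these numbers, going from 20°C to 60°C ambient costs roughly a quarter of the continuous current, which is one plausible reason for a hard spec ceiling around 40°C (104°F).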
I wonder if this will cause new legislation to be created, and new government bodies like the FDA. If this becomes available to many homes, what is to stop a hacker from programming this humanoid to kill its owners?
> what is to stop a hacker from programming this humanoid to kill its owners?
What's to stop a hacker from hacking into the tesla update server and pushing an update that causes all teslas to max accelerate right off bridges?
I wonder if over-the-air updates for cars will cause new legislation and a new regulatory body making it illegal to push a murder-update to cars, cause otherwise someone will surely do that.
It's neither that easy to "just hack anything", nor is the world full of skilled malicious people who want to commit murder and would, if only they could do it through hacking instead of with a gun.
Like, this fear-mongering about "what if the hackers turn this into a weapon" seems like such a silly worry in a country where anyone can trivially acquire a gun and a bump-stock, or a car, or a drone and materials for a bomb. Or a canister of gasoline and a pack of matches.
The only robotics ETFs available to the public are full of the usual dross like Nvidia and Tesla.
If your broker gives you access to the South Korea Stock Exchange, Hyundai's ticker is 005380.KS - I believe they own Boston Dynamics, so that gives you credible exposure to robotics.
I was wondering the same thing about bio-tech years ago and never found one I trusted. Recently learned about “investment themes” offered by Charles Schwab, and I was actually impressed with their offerings and the ability to tailor your picks & percentages in a given “theme.”
What claim will Elon make next to defend the stock price?
> We'll probably see this start to come together in 2026.
Thus far I see no evidence that robot manipulation will come together by 2036, let alone 2026.
https://www.pi.website/blog/pistar06 has some reasonable footage of making espresso drinks, folding cardboard boxes, etc.
[1] https://www.pi.website/blog/olympics
[2] https://www.youtube.com/watch?v=suVhnA1c7vE
> Tesla can bomb the robot for a while.
Not a bullish case. But also not a death knell.
I don't see how that squares with the ramp-up and QC issues that are well-documented at this point.
Or Hyundai EVs breaking down 10x more often than the worst ICE cars, according to ADAC.
https://www.consumerreports.org/cars/car-reliability-owner-s...
They’re shipping. Nobody else is (in America) for battery electrics at that scale.
That doesn’t port perfectly to robotics. But it’s a good enough fit to give them, at the very least, a seat at every auction.
(Tesla also has cheap acquisition currency in its stock.)
Manufactured (no pun intended) political outrage, most likely. Seems to be the M.O. for the last few years at least.
By the time someone does it’s too late.
Even the few who do read about anthropomorphization still have to override their subconscious reaction. We’re all human, after all.
The market may yet settle on a different form factor that won’t trigger the ick.
Flat tabletop with robot legs??
I mean HAL is starting to look mighty fine right now. "Hello Dave"
We’ve been spoiled by 10 years of highly choreographed, multi-take Boston Dynamics sizzle reels. What we just saw was the transition from R&D Showpiece to Factory Tool.
The "awkwardness" is what actual deployment looks like.
It’s electric (no hydraulic leaks).
It has 56 DoF (redundancy for complex assembly).
It’s being deployed in Georgia now, not in "3 months maybe."
Tesla is shipping Optimus Sub-Primes. Hyundai is shipping a boring, reliable, high-torque worker. I’ll take the boring static model that actually has a spec sheet over another backflip video any day.
We don't actually know how boring or reliable it is.
But the key here is really the mind. Atlas looks strictly worse for a given task than any other kind of robot. Its only advantage is the touted lower training costs. It's very unclear how that really measures up. You can see a robot do cool stuff on stage and imagine it must be great, but the only thing that really matters for manufacturing is whether they can lower the training cost for new tasks to much less than what other static non-humanoid robot manufacturers make.
There it seems dubious. They only seem to talk about and demonstrate one task: engine part sequencing. It appears to be just a pick-and-place task. It's not obvious why existing robots can't do it well. They make general claims about how it's often not worth automating a task because it changes too quickly or it costs too much to program a robot. Sure. But that's a statement about the quality of AI, not the form of the robot.
Existing pick-and-place machines work great and can handle messy real-world noise like objects being in random positions and places. They are much, much faster than a humanoid robot will be, and much cheaper. So what's Atlas' advantage on the factory floor?
https://www.fanucamerica.com/solutions/applications/picking-...
I also wish Tesla and Figure all the best, of course. More competition in this space will ultimately benefit all of us.
Sometimes, demos are just not ready on time. It's a reality of life. Not every company throws baseballs at their Cybertruck windows onstage.
Anticlimax, because the first robot hyped up the entrance of the second robot. It was emotionally conveying “hey, you think these groovy movements are great? Check out this guy.” But once it became clear that the next guy is just a dumb statue, it deflated. How lively the first one was made the second one that much worse in context. A step back.
That is the emotional failure. But perhaps you don’t care about that. Think about what additional message the stage presence of the second robot conveys. The first robot established that they can make a smooth robot. They drove home that the robot is usually autonomous, and in any case is not puppeted by a guy in a motion-tracking suit. The presentation covered how the robots will be used, who the first pilot customer will be, how it will be introduced, and how it will be manufactured. These are all great answers to concerns someone in the audience might have.
But what is the concern to which the second robot is the answer? Did you doubt even for a second their ability to make the same robot you can already see on the stage, just in blue? Because I didn’t. Not before they showed the static demonstration. If they had just said “we are working on a production-optimised, streamlined v2” I would have totally accepted that they can do it.
The only message the second, non-working robot communicates is that they are having trouble with their production model. They couldn’t even make it stand in one spot and wave politely! Something is cooked with it, and badly. It adds nothing positive to the message of the presentation while introducing a very visible sign that something is wrong.
Now, do I think they won’t be able to solve the problems eventually? Of course not. Heck, maybe it will be up and running within days. But why show something that isn’t working? It is such an unforced error. The first robot could have just done the dance, pointed at the screen, and walked out, and nothing would have been lost from the presentation.
Sure. It is not a mistake with grave consequences. Something can be a mistake and not matter much in the long run. The CEO could have gone on stage wearing mismatched shoes, or a red clown nose. It wouldn't ruin everything. It wouldn't bankrupt them. If the robots are good, they will still be sold. But it would undermine the message a little bit, for no good reason whatsoever.
The fundamental questions will be: Do the robots work? Are they cheaper than the equivalent labour from humans (including all costs on both sides of the comparison)? Nothing else matters in the long run. They could have just never gone to CES and it would be all the same.
> Im sorry, but this is just too much.
ok :) if you say so. But then tell me: what did the stage presence of the second robot add to the show?
>"We just couldn't pry the actual production samples out of our engineers hands at the lab this week. "
sounds like "Our CEO ordered samples to be shipped, but those pesky engineers just wouldn't do it, guys!"
>"Um, so we're going to be showing you videos"
Except they didn't even show videos, just some bad CGI. AKA "We rented this huge-ass auditorium to show you our pet. The golden elephant is currently in our basement; he is tired right now, so instead look at all these cool drawings my nephew made!"
# Initial robot tai-chis towards the right of the video
# This is a stage act that "cues up" the second robot
# One can expect that this "ta-da" moment will have the presented robot do something
# Instead, the presented robot stands there doing nothing
# We have statues that are hundreds of years old easily accessible. Hence a new statue is not interesting
https://youtu.be/CbHeh7qwils?t=437
also
"the next version is totally ready, but here's a full-size model"
What happened?
On one hand, this is great. It portends that all of us will benefit from intense price-and-feature competition between Hyundai, Tesla, and others.
On the other hand, Ironman 2's Hammer Drones no longer seem so far off:
https://youtu.be/Ryth87k2Mww?t=78
and Robocop doesn't seem so far-off either:
https://www.youtube.com/watch?v=ECemP5fi_n0
We sure live in interesting times.
People seem to misunderstand how easy it is to build a humanoid robot and how hard it is to program robots in general. Even if you build a humanoid robot that is perfectly general purpose mechanically, you will still need to program it like a computer that just happens to have arms and legs.
Considering that, either of two motivations would be sufficient for them to make general-purpose robots:
* It will be one of their numerous business lines and provide substitutable labor for the others, not all of which are in car construction
* Considering they are in things from credit cards to railways to steel, they would like to add a new product line of selling robots to other customers
None of this is outrageously out of line. Various companies start with some business lines and end up with others. This is particularly common in Korea where Samsung wasn't always a semiconductor company. Hyundai themselves were in construction first. Closer to home, Amex was a logistics company. These things happen. Perhaps you are familiar with Softbank which was a PC software publisher and is now an investment company.
Expensive compared to other industrial robots?
Maybe wait for a consumer version... without 56 DoF. Although who knows what kind of laundry folding might be possible with 56 degrees of freedom, and fully rotating joints!
And for now, who knows if it can fold laundry; literally the only demos we get are dancing and martial arts, two things I could not care less about. I want my house painted in hot weather (painters don't work in the summer here because it's too hot), laundry picked up, stairs cleaned, etc. I don't need a 100k robot doing Korean dances.
But I think this 56-DoF machine might be more interesting than whatever the consumer product will be, as the consumer products seem vastly worse than even the industrial ones, and both had 'sketchy' demos of very simple tasks (slow, Parkinson-like, many takes, often with someone controlling it with VR glasses and a controller).
From the article:
Hyundai aims not only to use humanoid robotics but also to scale it industrially. The company plans to build a production system with a capacity of up to 30,000 robots per year. Analysts at Morgan Stanley Research predict the market for humanoid robotics to reach a volume of around five trillion US dollars by the year 2050. For the period around 2028, when Hyundai plans to start scalable production, a unit price of about 150,000 US dollars is expected.
It would be cool to have one at home as a little helper some day.
So I looked it up and it seems Hyundai owns Boston Dynamics now.
Most likely the cooling of the actuator motors. You need to keep the magnets in the motors below their Curie point or they stop being magnets. At the same time, the coils right next to the magnets are heated by the current flowing through them.
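To make that concrete, here's a back-of-envelope sketch of why a "peak" load can exceed a "sustained" load. All numbers (winding resistance, thermal resistance, currents) are illustrative assumptions, not specs from any real actuator: the point is just that dissipated heat scales with the square of current, so steady-state coil temperature blows up fast at high torque.

```python
def steady_state_coil_temp(current_a, resistance_ohm,
                           thermal_resistance_c_per_w, ambient_c):
    """Steady-state coil temperature from resistive (I^2 * R) heating,
    using a simple lumped thermal-resistance model."""
    heat_w = current_a ** 2 * resistance_ohm  # dissipated power in the winding
    return ambient_c + heat_w * thermal_resistance_c_per_w

# Hypothetical joint actuator: 0.5-ohm winding, 2 C/W to ambient, 25 C room.
# NdFeB magnets lose strength well before their ~310 C Curie point
# (max operating temps are typically 80-150 C depending on grade).
peak = steady_state_coil_temp(20.0, 0.5, 2.0, 25.0)       # brief high-torque load
sustained = steady_state_coil_temp(10.0, 0.5, 2.0, 25.0)  # continuous load

print(f"20 A steady-state coil temp: {peak:.0f} C")       # far too hot to hold
print(f"10 A steady-state coil temp: {sustained:.0f} C")  # sustainable
```

Halving the current quarters the heat, which is roughly why ratings like "50 kg instant / 30 kg sustained" exist: the peak figure is a thermal transient you can only afford briefly.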
IANAL. I know current US computer crime laws are extremely broad and ill-defined. Curious to hear an opinion from someone who actually knows some law.
What's to stop a hacker from hacking into the tesla update server and pushing an update that causes all teslas to max accelerate right off bridges?
I wonder if over-the-air updates for cars will cause new legislation and a new regulatory body making it illegal to push a murder-update to cars, cause otherwise someone will surely do that.
It's neither that easy to "just hack anything", nor is the world full of skilled, malicious people who want to commit murder and are just waiting to do it through hacking instead of with a gun.
Like, this fear-mongering about "what if the hackers turn this into a weapon" seems like such a silly worry in a country where anyone can trivially acquire a gun and a bump-stock, or a car, or a drone and materials for a bomb. Or a canister of gasoline and a pack of matches.
Boston Dynamics and DeepMind form new AI partnership
https://news.ycombinator.com/item?id=46504966
If your broker gives you access to the South Korea Stock Exchange, Hyundai's ticker is 005380.KS - I believe they own Boston Dynamics, so that gives you credible exposure to robotics.
There are a few ADRs if you don’t. HYUD and HYMLF come to mind. (Owning Korean stocks is a pain.)