Allow me to borrow a phrase from baseball for a moment. If you follow the sport, you’re likely aware of the concept of the “three true outcomes.” They are, specifically, a home run, a strikeout, and a walk. The throughline between the three is that, in most instances, they’re not determined by the defense.
There’s gray area, of course, as is the case with any attempt at defining an absolute. There’s also a longstanding question of how valuable the concept is, in the forever cold war of baseball analysis. That’s all fine, because I’m mostly interested in co-opting the phrase here.
Broadly speaking, the three true outcomes for a robotics startup would be:
- Going public
- Getting acquired
- Going out of business
As with the baseball world, there’s plenty of gray area to maneuver in here. In robotics, specifically, you can forever maintain a perfectly successful company based on DARPA grants. Unlike in baseball, it’s possible to do any combination of the above three, really.
But the heart of the question I’d like to get to here is this: What’s the best outcome for a robotics startup? No one wants number three, of course. But like striking out, it’s a very real — and disappointing — possibility. And as we’ve seen, not even a tremendous influx of VC funding can prevent startup failure entirely — especially in robotics, where the barriers to entry are so high. And besides, robotics is overdue for a little market correction in the face of macro trends.
IPOing has been an extremely rare outcome for robotics companies, even in the (now bygone) golden era of SPACs. Given the state of the overall market, some planned SPACs were put on hold in the interim, in hopes of riding more favorable trends. Frankly, number two seems like a perfectly reasonable — and often ideal — outcome for many firms. Robotics requires long runways and a lot of resources that a big corporation can offer.
Where you start to run into trouble, however, is fit. I imagine conversations happen all the time wherein the potential acquirer has a dramatically different notion than the acquiree does. We see these bad fits from time to time, of course. Maybe the acquirer doesn’t understand the market, or the resources that go into keeping a robotics firm afloat, or maybe the two sides simply had wildly different notions of what the robots could and couldn’t do. For every Amazon buying Kiva, there are several Googles buying Boston Dynamics.
There were some question marks around Hyundai’s subsequent acquisition of the latter firm. A car company isn’t the most natural fit for what Boston Dynamics does, though I will say that this week’s announcement of the Boston Dynamics AI Institute is an interesting — and promising — wrinkle to this story. Research has always been a big piece of what the company does, and the new facility affords the company a lot of runway and resources, backed by a $400 million investment. That’s several times what Ford recently invested in its own U of M facility.
Most intriguing of all, BD’s founder and former CEO, Marc Raibert, will be heading up the institute. “Our mission is to create future generations of advanced robots and intelligent machines that are smarter, more agile, perceptive and safer than anything that exists today,” he said in a release tied to the news. “The unique structure of the Institute — top talent focused on fundamental solutions with sustained funding and excellent technical support — will help us create robots that are easier to use, more productive, able to perform a wider variety of tasks, and that are safer working with people.”
Following Google’s bungling of the acquisition (and a number of others around the same time, under Andy Rubin’s purview), it’s worth a check-in to see how the firm’s efforts in the category are going. My coverage of the space has largely revolved around Alphabet X graduates. Most prominent (thus far) is drone delivery service Wing, though we’re starting to see interesting work out of robotics software firm Intrinsic.
Last year, we also gave some column space to Smarty Pants, a promising soft robotic exoskeleton being developed by the lab. In March, the lab also offered a preview of Project Mineral, an autonomous rover designed to collect crop data. Specifically, it’s working to phenotype plants. The company writes:
Today, when most researchers phenotype plants, they carefully walk through fields, marking different plant traits with a notebook, pen and ruler. But imagine trying to eyeball how many beans are in a bean pod, or how long the leaves are, or how many flowers have blossomed. Now imagine doing that for thousands of plants, every week by hand, in the heat of summer. That’s the phenotyping bottleneck.
To help with this challenge, Mineral has been giving the Alliance’s researchers tools to help them run more experiments and discover more crop traits. For the last year, Mineral’s rovers — nicknamed “Don Roverto” by the local team — have been gently rolling through the test fields outside Future Seeds, capturing imagery of each bean plant and using machine learning to identify traits such as leaf count, leaf area, leaf color, flower count, plant count and pod dimensions. The rover does this continuously for every plant in the field, and knows exactly where each plant is so it can return a week later and report back on how the plant’s doing.
I’m slightly jealous that Haje got to pay Google’s in-house robotics efforts a visit this week. He wrote about the experience, which involved some work being done with a fellow X graduate. He explains:
Speed and precision are one thing, but the nut Google is really trying to crack in its robotics labs is the intersection between human language and robotics. It is making some impressive leaps in robots’ ability to understand the natural language a human might use. “When you have a minute, could you grab me a drink from the counter?” is a pretty straightforward request that you might ask a human. To a machine, however, that statement wraps a lot of knowledge and understanding into a seemingly simple question. Let’s break it down: “When you have a minute” could mean nothing at all, just a figure of speech, or it could be an actual request to finish what the robot is doing first. If a robot is being too literal, the “correct” answer to “could you grab me a drink” could just be the robot saying “yes.” It can, and it confirms that it is able to grab a drink. But, as the user, you didn’t explicitly ask the robot to do it. And, if we’re being extra pedantic, you didn’t explicitly tell the robot to bring you the drink.
All told, I think there’s a case to be made here for developing robotics and AI companies in-house — of course, very few firms possess the resources of an Alphabet/Google. And even with Google’s time, money and patience, we’re a ways away from seeing how such pursuits might actually pay off.
Meanwhile, Xiaomi’s efforts are a massive question mark. Thus far, the company’s robotics work looks more akin to Samsung’s. Beyond some success with robotic vacuums, I don’t have much reason to believe its work is more than show at the moment. That includes last year’s Spot-like CyberDog and CyberOne, a new humanoid robot that debuted alongside some phones. From a design standpoint, it’s clear why the robot is being compared to Tesla’s thus-far-unseen efforts. It also gives a more…realistic expectation of what to anticipate from such a bipedal robot.
Before I leave you for the week, here is some funding news from an interesting startup: YC-backed Mobot just raised a $12.5 million Series A. The company creates robots designed to help developers test apps for issues.
“There are tools developed by companies like Applitools, Test.ai and others that leverage existing emulated testing frameworks to automate testing for mobile apps. However, the unfortunate reality is that many defects often slip through the cracks of software-based, emulated testing because it doesn’t accurately represent testing on real hardware,” founder Eden Full Goh tells TechCrunch. “Presently, Mobot is not positioning ourselves as a competitor or replacement for emulators and automated testing. Rather, our goal is to replace the inevitable manual quality assurance that everyone is still having to do and will increasingly have to do as device fragmentation grows in the next five to ten years.”
Meanwhile, I had an exclusive from CleanRobotics, the Colorado-based firm behind the waste-sorting robotic trash receptacle TrashBot. The company raised a $4.5 million Series A to scale out a robot designed to improve recycling sorting at the source.
“Recycling rules are confusing and consumers are often so confused that their recycling accuracy is less than chance, leading to highly contaminated recyclables, which no one is buying,” CEO Charles Yhap notes. “Our system improves material diversion from landfills, resulting in more recyclables and less waste.”
One person’s Actuator is another person’s treasure.
This article was originally published on TechCrunch.com.