Jller is a machine that sorts stones from a specific river according to their geologic age.
The machine works with a computer vision system that processes images of the stones and maps each one's location on the platform throughout the ordering process. The information extracted from each stone is dominant color, color composition, and histograms of structural features such as lines, layers, patterns, grain, and surface texture. This data is used to assign the stones to predefined categories.
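(As a rough illustration of the kind of features described, here's a minimal sketch of a dominant-color extractor and a crude texture histogram. These are my assumptions about how such a pipeline might work, not Jller's actual code.)

```python
import numpy as np

def dominant_color(image, bins=4):
    """Quantize each RGB channel into `bins` levels and return the
    most frequent quantized color as an (r, g, b) tuple in 0-255."""
    step = 256 // bins
    quantized = (image // step).reshape(-1, 3)
    colors, counts = np.unique(quantized, axis=0, return_counts=True)
    # Map the winning bin back to its center value
    winner = colors[counts.argmax()].astype(int)
    return tuple((winner * step + step // 2).tolist())

def texture_histogram(gray, bins=16):
    """Normalized histogram of gradient magnitudes -- a stand-in for
    the 'structural feature' descriptors (lines, grain, texture)."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    hist, _ = np.histogram(magnitude, bins=bins, range=(0, 255))
    return hist / hist.sum()
```

Feature vectors like these could then be compared against per-category reference vectors to bin each stone.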
I have been following with fascination the match between Google's Go-playing AI AlphaGo and top-tier player Lee Sedol and with even more fascination the human reaction to AlphaGo's success. Many humans seem unnerved not only by AlphaGo's early lead in the best-of-five match but especially by how the machine is playing in those games.
Then, with its 19th move, AlphaGo made an even more surprising and forceful play, dropping a black piece into some empty space on the right-hand side of the board. Lee Sedol seemed just as surprised as anyone else. He promptly left the match table, taking an (allowed) break as his game clock continued to run. "It's a creative move," Redmond said of AlphaGo's sudden change in tack. "It's something that I don't think I've seen in a top player's game."
When Lee Sedol returned to the match table, he took an unusually long time to respond, his game clock running down to an hour and 19 minutes, a full twenty minutes less than the time left on AlphaGo's clock. "He's having trouble dealing with a move he has never seen before," Redmond said. But he also suspected that the Korean grandmaster was feeling a certain "pleasure" after the machine's big move. "It's something new and unique he has to think about," Redmond explained. "This is a reason people become pros."
"A creative move." Let's think about that...a machine that is thinking creatively. Whaaaaaa... In fact, AlphaGo's first strong human opponent, Fan Hui, has credited the machine for making him a better player, a more beautiful player:
As he played match after match with AlphaGo over the past five months, he watched the machine improve. But he also watched himself improve. The experience has, quite literally, changed the way he views the game. When he first played the Google machine, he was ranked 633rd in the world. Now, he is up into the 300s. In the months since October, AlphaGo has taught him, a human, to be a better player. He sees things he didn't see before. And that makes him happy. "So beautiful," he says. "So beautiful."
Creative. Beautiful. Machine? What is going on here? Not even the creators of the machine know:
"Although we have programmed this machine to play, we have no idea what moves it will come up with," Graepel said. "Its moves are an emergent phenomenon from the training. We just create the data sets and the training algorithms. But the moves it then comes up with are out of our hands -- and much better than we, as Go players, could come up with."
Generally speaking,1 until recently machines were predictable and more or less easily understood. That's central to the definition of a machine, you might say. You build them to do X, Y, & Z and that's what they do. A car built to do 0-60 in 4.2 seconds isn't suddenly going to do it in 3.6 seconds under the same conditions.
Now machines are starting to be built to think for themselves, creatively and unpredictably. Some emergent, non-linear shit is going on. And humans are having a hard time figuring out not only what the machine is up to but how it's even thinking about it, which strikes me as a relatively new development in our relationship. It is not all that hard to imagine, in time, an even smarter AlphaGo that can do more things -- paint a picture, write a poem, prove a difficult mathematical conjecture, negotiate peace -- and do those things creatively and better than people.
Update: AlphaGo beat Lee in the third game of the match, in perhaps the most dominant fashion yet. The human disquiet persists...this time, it's David Ormerod:
Move after move was exchanged and it became apparent that Lee wasn't gaining enough profit from his attack.
By move 32, it was unclear who was attacking whom, and by 48 Lee was desperately fending off White's powerful counter-attack.
I can only speak for myself here, but as I watched the game unfold and the realization of what was happening dawned on me, I felt physically unwell.
Generally I avoid this sort of personal commentary, but this game was just so disquieting. I say this as someone who is quite interested in AI and who has been looking forward to the match since it was announced.
One of the game's greatest virtuosos of the middle game had just been upstaged in black and white clarity.
AlphaGo's strength was simply remarkable and it was hard not to feel Lee's pain.
Let's get the caveats out of the way here. Machines and their outputs aren't completely deterministic. Also, with AlphaGo, we are talking about a machine with a very limited capacity. It just plays one game. It can't make a better omelette than Jacques Pepin or flow like Nicki. But beating a top human player while showing creativity in a game like Go, which was considered uncrackable not that long ago, seems rather remarkable.↩
Boston Dynamics, creator of the Big Dog prancing robot, has upgraded their Atlas robot, which can walk on two legs, open doors, stack boxes, walk on slippery terrain, recover from being shoved, etc. And everyone's all HA HA HA TERMINATOR but soon enough the HA HAs will become less hearty and more nervous. It took human ancestors hundreds of thousands of years to evolve from quadrupeds to bipeds and Boston Dynamics has done the same in just a few years.
Mark my words: no good will come of playing box keep-away with robots and treating them like, well, machines. It's already started...did you notice Atlas didn't even look behind itself to see if it needed to hold the door for anyone? And you think manspreading on the subway is a problem...wait until we have to deal with robotspreading by robots whose ancestors we shoved with hockey sticks.
"Madeline the Robot Tamer" is a really lovely video about Madeline Gannon, a woman who dances, so to speak, with robots. As a resident at Pier 9, she developed Quipt, "a gesture-based control software that gives industrial robots basic spatial behaviors for interacting closely with people." It's a wonderful demonstration of robots and humans learning to work together.
When people drive cars, collisions often happen so quickly that they are entirely accidental. When self-driving cars eliminate driver error in these cases, decisions on how to crash can become pre-meditated. The car can think quickly, "Shall I crash to the right? To the left? Straight ahead?" and do a cost/benefit analysis for each option before acting. This is the trolley problem.
How will we program our driverless cars to react in situations where there is no choice to avoid harming someone? Would we want the car to run over a small child instead of a group of five adults? How about choosing between a woman pushing a stroller and three elderly men? Do you want your car to kill you (by hitting a tree at 65mph) instead of hitting and killing someone else? No? How many people would it take before you'd want your car to sacrifice you instead? Two? Six? Twenty?
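(That "cost/benefit analysis for each option" can be sketched as a toy expected-harm minimizer. Everything here is hypothetical for illustration; no real car runs anything this simple.)

```python
def choose_maneuver(options):
    """Return the maneuver with the lowest expected harm.

    `options` maps a maneuver name to a (collision_probability,
    harm_if_collision) pair; expected harm is their product.
    """
    return min(options, key=lambda name: options[name][0] * options[name][1])

# Hypothetical snapshot of a split-second decision:
maneuvers = {
    "swerve_left": (0.9, 5.0),    # likely collision, high harm
    "swerve_right": (0.2, 5.0),   # unlikely collision, high harm
    "brake_straight": (1.0, 1.5), # certain but lower-harm collision
}
```

The entire trolley problem lives inside those harm scores: who decides that a given collision scores 1.5 instead of 5.0?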
The video above introduces a wrinkle I had never considered before: what if the consumer could choose the sort of safety they want? If you had to choose between buying a car that would save as many lives as possible and a car that would save you above all other concerns, which would you select? You can imagine that answer would be different for different people and that car companies would build & market cars to appeal to each of them. Perhaps Apple would make a car that places the security of the owner above all else, Google would make a car that prioritizes saving the most lives, and Uber would build a car that keeps the largest Uber spenders alive.1
Ethical concerns like the trolley problem will seem quaint when the full power of manufacturing, marketing, and advertising is applied to self-driving cars. Imagine trying to choose one of the 20 different types of ketchup at the supermarket except that if you choose the wrong one, you and your family die and, surprise, it's not actually your choice, it's the potential Trump voter down the street who buys into Car Company X's advertising urging him to "protect himself" because he feels marginalized in a society that increasingly values diversity over "traditional American values". I mean, we already see this with huge, unsafe gas-guzzlers driving on the same roads as small, safer, energy-efficient cars, but the addition of software will turbo-charge this process. But overall cars will be much safer so it'll all be ok?
The bit about Uber is a joke but just barely. You could easily imagine a scenario in which a Samsung car might choose to hit an Apple car over another Samsung car in an accident, all other things being equal.↩
The trolley problem is an ethical and psychological thought experiment. In its most basic formulation, you're the driver of a runaway trolley about to hit and certainly kill five people on the track ahead, but you have the option of switching to a second track at the last minute, killing only a single person. What do you do?
The problem becomes stickier as you consider variations of the problem:
As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a very fat man next to you -- your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?
As driverless cars and other autonomous machines are increasingly on our minds, so too is the trolley problem. How will we program our driverless cars to react in situations where there is no choice to avoid harming someone? Would we want the car to run over a small child instead of a group of five adults? How about choosing between a woman pushing a stroller and three elderly men? Do you want your car to kill you (by hitting a tree at 65mph) instead of hitting and killing someone else? No? How many people would it take before you'd want your car to sacrifice you instead? Two? Six? Twenty? Is there a place in the car's system preferences panel to set the number of people? Where do we draw those lines and who gets to decide? Google? Tesla? Uber?1 Congress? Captain Kirk?
There's an out of control trolley speeding towards a worker. You have the ability to pull a lever and change the trolley's path so it hits a different worker. The first worker has an intended suicide note in his back pocket but it's in the handwriting of the second worker. The second worker wears a T-shirt that says PLEASE HIT ME WITH A TROLLEY, but the shirt is borrowed from the first worker.
Reeeeally makes you think, huh?
If Uber gets to decide, the trolley problem's ethical concerns vanish. The car would simply hit whomever will spend less on Uber rides and deliveries in the future, weighted slightly for passenger rating. Of course, customers with a current subscription to Uber Safeguard would be given preference at different coverage levels of 1, 5, and 20+ ATPs (Alternately Targeted Persons).↩
Motherboard has an interesting story about how women who lose limbs are finding prosthetic devices are made for men: "Man Hands."
When Jen Lacey gets her toes done, she does both feet, even though one of them is made of rubber. "I always paint my toenails," she says, "because it's cute, and I want to be as regular as possible." But for a long time, even with the painted toes, her prosthetic foot looked ridiculous. The rubber foot shell she had was wide, big and ugly. "I called it a sasquatch foot," she jokes. "It's an ugly man foot."
Part of the problem is that most prosthetic devices are designed by men and most prosthetists are men.
There are a few reasons for all this male-centric design. The history of prosthetics is, in large part, a history of war. One of the earliest written records of a prosthetic device comes from the Rigveda, an ancient sacred text from India. Ironically, that amputee is a woman--the warrior queen Vishpala loses her leg in battle and is fitted with a replacement so she can return and fight again. But after that, the history of prosthetics is nearly entirely a history of men--Roman generals, knights, soldiers, dukes.
Every year, 30 percent of those undergoing an amputation are women. In other words, it's the 70 percent that's male that drives the market.
I, for one, welcome our new ROBOPRIEST overlords. I found ROBOPRIEST on artist Josh Ellingson's website. The robot costume-for-two was intended to perform wedding ceremonies and is the brainchild of Ellingson and Selene Luna, a 3'10" performance artist. It speaks in a robot voice, has flashing eyes, and the interior of its hatch is decorated with dirty pictures.
The idea of ROBOPRIEST started as a joke on Twitter between me and Selene Luna, an actress friend of mine in Los Angeles. We were trying to come up with funny ideas to collaborate on wedding services. The joke then turned into reality when Selene asked me to build ROBOPRIEST for her one-woman show, "Sweating the Small Stuff" in San Francisco. The costume consisted mostly of cardboard and foam rubber with a skeleton of plastic hula hoops. The "eyes" are speakers equipped with voice-activated electro-luminescent wire. The audio for ROBOPRIEST's voice and various sound effects were created by sound designer Jim Coursey.
Its components include children's toy claws, silver lame, ductwork, an iPod, and a harness that enables Luna to operate the costume from inside while riding piggyback on Ellingson.
Selene pilots ROBOPRIEST from a harness attached to my back. The harness is called The Piggyback Rider and is really just a backpack strap with a bar that runs along the bottom. This allowed Selene to comfortably stand on my back and easily hop off if needed. The top of ROBOPRIEST is equipped with a hatch from which Selene can address her minions. The inside of the hatch is decorated with a collage of nudie magazine clippings (NSFW), something that I thought appropriate for the insides of a repressed robot's head at the time, although it may just have been all the hot-glue fumes getting to me.
Ellingson's site has sound clips and a video of ROBOPRIEST announcing himself, and there are lots of photos on Flickr showing the build process.
Clive Thompson writes about the newest innovation in junk mail marketing: handwriting robots. That's right, robots can write letters in longhand with real ballpoint pens and you can't really tell unless you know what to look for. Here's a demonstration:
But it turns out that marketers are working diligently to develop forms of mass-generated mail that appear to have been patiently and lovingly hand-written by actual humans. They're using handwriting robots that wield real pens on paper. These machines cost up to five figures, but produce letters that seem far more "human". (You can see one of the robots in action in the video adjacent.) This type of robot is likely what penned the address on the junk-mail envelope that fooled me. I saw ink on paper, subconsciously intuited that it had come from a human (because hey, no laser-printing!) and opened it.
Handwriting, it seems, is the next Turing Test.
There is also a company that provides handwritten letters for sales professionals, and I don't know whether that or the robot letters is more unusual.
Artificial intelligence is already well on its way to making "good jobs" obsolete: many paralegals, physicians, and even -- ironically -- computer programmers are poised to be replaced by robots. As technology continues to accelerate and machines begin taking care of themselves, fewer jobs will be necessary. Unless we radically reassess the fundamentals of how our economy and politics work, this transition could create massive unemployment and inequality as well as the implosion of the economy itself.
Susan Schneider, professor of philosophy at UConn, is among those researchers and scientists who believe that the first alien beings we encounter will be "postbiological in nature"...aka robots.
"There's an important distinction here from just 'artificial intelligence'," Schneider told me. "I'm not saying that we're going to be running into IBM processors in outer space. In all likelihood, this intelligence will be way more sophisticated than anything humans can understand."
The reason for all this has to do, primarily, with timescales. For starters, when it comes to alien intelligence, there's what Schneider calls the "short window observation" -- the notion that, by the time any society learns to transmit radio signals, they're probably a hop-skip away from upgrading their own biology. It's a twist on the belief popularized by Ray Kurzweil that humanity's own post-biological future is near at hand.
"As soon as a civilization invents radio, they're within fifty years of computers, then, probably, only another fifty to a hundred years from inventing AI," Shostak said. "At that point, soft, squishy brains become an outdated model."
To use Elon Musk's language, biological beings would be a "biological boot loader for digital superintelligence". Schneider's full paper on the topic is here: Alien Minds.
Amazon's newest fulfillment center1 features hundreds of robots. Watch them work in an intricate ballet of customer service through increased speed of delivery and greater local selection. Also, ROBOTS!
Neill Blomkamp (District 9, Elysium) is coming out with a new film in the spring, Chappie. Chappie is a robot who learns how to feel and think for himself. According to Entertainment Weekly, two of the movie's leads are Ninja and Yo-Landi Vi$$er of Die Antwoord, who play a pair of criminals who robotnap Chappie.
Discussions of AI are particularly hot right now (e.g. see Musk and Bostrom) and filmmakers are using the opportunity to explore AI in film, as in Her, Ex Machina, and now Chappie.
Blomkamp, with his South African roots, puts a discriminatory spin on AI in Chappie, which is consistent with his previous work. If robots can think and feel for themselves, what sorts of rights and freedoms are they due in our society? Because right now, they don't have any...computers and robots do humanity's bidding without any compensation or thought to their well-being. Because that's an absurd concept, right? Who cares how my Macbook Air feels about me using it to write this post? But imagine a future robot that can feel and think as well as (or, likely, much much faster than) a human...what might it think about that? What might it think about being called "it"? What might it decide to do about that? Perhaps superintelligent emotional robots won't have human feelings or motivations, but in some ways that's even scarier.
The whole thing can be scary to think about because so much is unknown. SETI and the hunt for habitable exoplanets are admirable scientific endeavors, but humans have already discovered alien life here on Earth: mechanical computers. Boole, Lovelace, Babbage, von Neumann, and many others contributed to the invention of computing, and those machines are now evolving quickly, their hardware and software changing far faster than our human bodies (hardware) and culture (software) can. Soon enough, perhaps not for 20-30 years yet but soon, there will be machines among us that will be, essentially, incredibly advanced alien beings. What will they think of humans? And what will they do about it? Fun to think about now perhaps, but this issue will be increasingly important in the future.
The directorial debut of Alex Garland, screenwriter of Sunshine and 28 Days Later, looks interesting.
Ex Machina is an intense psychological thriller, played out in a love triangle between two men and a beautiful robot girl. It explores big ideas about the nature of consciousness, emotion, sexuality, truth and lies.
R2-D2 excels in areas where humans are deficient: deep computation, endurance in extreme conditions, and selfless consciousness. R2-D2 is a computer that compensates for human deficiencies -- it shines where humans fail.
C3-PO is the personification of the selfish human -- cloying, rules-bound, and despotic. (Don't forget, C3-PO let Ewoks worship him!) C3-PO is a factotum for human vanity -- it engenders the worst human characteristics.
I love the chart he did for the piece, characterizing 3PO's D&D alignment as lawful evil and his politics as Randian.
Automata is a film directed by Gabe Ibáñez in which robots become sentient and...do something. Not sure what...I hope it's not revolt and try to take over the world because zzzz... But this movie looks good so here's hoping.
Jacq Vaucan, an insurance agent of ROC robotics corporation, routinely investigates the case of manipulating a robot. What he discovers will have profound consequences for the future of humanity.
Automata will be available in theaters and VOD on Oct 10. (via devour)
This video combines two thoughts to reach an alarming conclusion: "Technology gets better, cheaper, and faster at a rate biology can't match" + "Economics always wins" = "Automation is inevitable."
That's why it's important to emphasize again this stuff isn't science fiction. The robots are here right now. There is a terrifying amount of working automation in labs and warehouses that is proof of concept.
We have been through economic revolutions before, but the robot revolution is different.
Horses aren't unemployed now because they got lazy as a species, they're unemployable. There's little work a horse can do that pays for its housing and hay.
And many bright, perfectly capable humans will find themselves the new horse: unemployable through no fault of their own.
I've had this page of misbehaving robot animated GIFs up in a tab for a few days now and every time it pops up on my screen, I watch all of them and then I laugh. That's it. Instant fun. The garbage truck is my favorite, but what gets me laughing the most is how exuberantly the ketchup squirting robot sprays its payload onto that hamburger bun.
SRI International and DARPA are making little tiny robots (some are way smaller than a penny) that can actually manufacture products.
They can move so fast! And that shot of dozens of them moving in a synchronized fashion! Perhaps Skynet will actually manifest itself not as human-sized killing machines but as swarms of trillions of microscopic nanobots, a la this episode of Star Trek:TNG. (via @themexican)
I don't want to stand in the way of all science, but I am completely on board with the banning of all research into the creation of a dancing dog robot that throws cinder blocks with ease. Oops, I am too late. And now this is happening.
This place isn't too far from me in Boston, so if anyone wants to meet up for a little Terminator 2 style future saving, let me know.
The Smart SPHERES, located in the Kibo laboratory module, were remotely operated from the International Space Station's Mission Control Center at Johnson to demonstrate how a free-flying robot can perform surveys for environmental monitoring, inspection and other routine housekeeping tasks.
In the future, small robots could regularly perform routine maintenance tasks allowing astronauts to spend more time working on science experiments. In the long run, free-flying robots like Smart SPHERES also could be used to inspect the exterior of the space station or future deep-space vehicles.
They are outfitting the Smart SPHERES with Android phones for data collection:
Each SPHERE Satellite is self-contained with power, propulsion, computing and navigation equipment. When Miller's team first designed the SPHERES, all of their potential uses couldn't be imagined up front. So, the team built an "expansion port" into each satellite where additional sensors and appendages, such as cameras and wireless power transfer systems, could be added. This is how the Nexus S handset -- the SPHERES' first smartphone upgrade -- is going to be attached.
"Because the SPHERES were originally designed for a different purpose, they need some upgrades to become remotely operated robots," said DW Wheeler, lead engineer in the Intelligent Robotics Group at Ames. "By connecting a smartphone, we can immediately make SPHERES more intelligent. With the smartphone, the SPHERES will have a built-in camera to take pictures and video, sensors to help conduct inspections, a powerful computing unit to make calculations, and a Wi-Fi connection that we will use to transfer data in real-time to the space station and mission control."
Well, this is something...an ex-jewel thief decides to unretire and rob people with help from his robot butler. I had to look this up on IMDB to make sure it wasn't something from Funny or Die or College Humor.
Best robotic sidekick since Mr. Spock. Now reboot Lethal Weapon with Donald Glover and a robot playing the Mel Gibson role. (Yes, I meant Donald. Danny is clearly too old for that shit.)
You're not going to believe me, but this mushroom processing machine is pretty fascinating. There's lots of deceptively simple engineering to mechanically manipulate the mushrooms...the auto-alignment and size-sorting bits are especially interesting.
Update: I did a quick calculation...if a 6-ft-tall human could jump as high as this robot relative to its height, they could jump 315 feet into the air, high enough to land on the roof of a 30-story building. (If you ignore the scaling issues, that is.)
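(The scaling behind that figure, spelled out. The jump-to-height ratio is my assumption, reverse-engineered from the post's 315-foot number; the robot's actual dimensions would change it.)

```python
# Back-of-envelope scaling behind the 315-foot figure.
# Assumed jump-to-height ratio for the robot (hypothetical: e.g. a
# robot clearing roughly 52.5 times its own height).
jump_to_height_ratio = 52.5

human_height_ft = 6.0
human_jump_ft = human_height_ft * jump_to_height_ratio  # 315 ft

stories = human_jump_ft / 10.5  # at ~10.5 ft per story: ~30 stories
```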
Amazon announced recently that they bought a company named Kiva for $775 million. In cash. Kiva makes robots for fulfillment warehouses, of which Amazon has many. When I heard this news, I was all, robots are cool, but $775 million? But this short video on how the Kiva robots work made me a believer:
Flesh and blood cheetahs are the fastest land animals, capable of traveling at more than 70 mph for short periods of time. This robotic cheetah can only do 18 mph but could probably go forever and ever until everything on the Earth has been caught and consumed by its steely jaws.
For reference, Usain Bolt's average speed over 100 meters is ~23 mph, so at least he's safe...for a little while. (via ★interesting)
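(That ~23 mph figure checks out against Bolt's 9.58-second 100 m world record:)

```python
# Average speed over Bolt's 9.58 s world-record 100 m sprint.
distance_m = 100.0
time_s = 9.58
mps = distance_m / time_s     # ~10.44 m/s
mph = mps * 3600 / 1609.344   # meters/second -> miles/hour, ~23.35
```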
Update: Another team working at MIT has built a robotic cheetah that can leap over obstacles on the run.
No word on how the team working on the robotic cheetah that can rip bloody human flesh from the bone is coming along.
I think I've featured this robot on the site before (yep, here it is), but she seems to have acquired some new skills. Throwing the mobile phone into the air and catching it is flat-out unbelievable but I liked the quiet deftness of the hand's rice tweezing.
I am unclear on exactly how this works, but it does work amazingly well.
The gripper uses the same phenomenon that makes a vacuum-packed bag of ground coffee so firm; in fact, ground coffee worked very well in the device. But the researchers found a new use for this everyday phenomenon: They placed the elastic bag against a surface and then removed the air from the bag, solidifying the ground coffee inside and forming a tight grip. When air is returned to the bag, the grip relaxes.
There is so much here. The "previously-unseen towel" part of the title, the slightly-femmy movements of the robot, the way the 50X speed-up makes it look like a Svankmajer film, the diligent care with which it smooths out each towel when it's done, and the palpable shock when it returns to the towel table and there aren't any left to fold.
The Big Picture collects a number of photos of robots...particularly robots interacting with humans. (The third one is particularly freaky/awesome.) I'm wondering how these photos will look 50/60/70 years from now when (presumably) robots are smart & capable enough that they are thought of as a new sentient life form (rather than as machines) and are entitled to the rights that humans have.
There are now 1 million industrial robots toiling around the world, and Japan is where they're the thickest on the ground. It has 295 of these electromechanical marvels for every 10,000 manufacturing workers -- a robot density almost 10 times the world average and nearly twice that of Singapore (169), South Korea (164), and Germany (163).
When the war with the machines starts, Africa will be humanity's last stronghold.
First, a fruit fly is tethered to a rod with a cylindrical LED display around it. The display shows geometric patterns that are known to make a fruit fly move left or right - a kind of virtual reality simulator for flies. Since the fly is tethered, it can't actually move, but it tries to anyway. "The fly's pretty dumb," says roboticist Brad Nelson, who created the "flyborg" with colleagues at the Swiss Federal Institute of Technology Zurich.
The patterns on the display are triggered by images transmitted from a camera mounted on a miniature robotic car. If the car approaches an obstacle, the display shows the appropriate pattern and the fly reacts accordingly. As it does so, another camera detects minute changes in the movements of its wings. "We measure the lift force and kinematics in real time," says Nelson.
The goal is to figure out how the fly makes decisions about movement so that those decisions can be replicated by a computer.
Swiveling frenetically, they analyzed digital images of items scattered randomly on a swiftly moving conveyor belt and picked up the items using suction cups that blow air in and out at their tips. They then worked together to line up the items in rows inside boxes.
Ken Graney's Roomba has broken the three laws of Roombotics. "The first law states that the device 'must not suck up jewelry or other valuables, or through inaction, allow valuables to be sucked up.' The second law prescribes that Roomba 'must obey vacuuming orders given to it by humans except when such orders would conflict with the first law.' The third and final law authorizes a Roomba to 'protect its own ability to suction dust and debris as long as such protection does not conflict with the first or second law.'"
is in violation of logo usage and copyright infringement and you could face legal litigation if the usage continues.
Senior Multimedia Designer
This letter is really strange for two reasons:
a) It's not from a lawyer. It's from the "Senior Multimedia Designer" from a company called Rossroy Interactive. I guess this guy is cheaper than a lawyer. Of course, the lawyer might have realized that the Apple Dodge Neon page is a parody of both the Dodge Neon and the Apple iMac and is therefore probably protected under copyright laws.
b) It's unclear what is wrong with the page. Is it the Dodge logo or the Apple one? I did some checking and Rossroy Interactive is in Michigan...making it a good bet that the Dodge logo is the one in question. It might have been nice of them to mention that.
Anyway, I don't think I'll be taking the page down right now. I've got a couple letters to write and some legal codes to pore over.