Nvidia went to China last week and made a series of interesting announcements having to do with smart cities and autonomous cars. (The video is worth watching.)
IBM made an announcement on advancements in tying the Weather Channel to its Watson artificial intelligence engine and targeted marketing.
We also found out about Oculus’ Fall in Love VR project which is kind of like The Bachelor or The Bachelorette, but the significant other is a hot computerized avatar.
Intel announced Loihi, a new AI processor that emulates the human brain.
All of these things have broad implications for how we will perceive the world in a decade — and strangely enough, for how the world will perceive us. Our reality, or at least our perception of it, will be massively changed.
I’ll offer some predictions about the world we can expect in 2027 and close with my product of the week: an amazing new cellphone-sized camera that can outperform DSLRs.
In another 10 years, we should be close to critical mass in both electric-powered cars and cars that drive themselves. More than 300K people preordered Tesla’s new Model 3, and 25K have preordered the new Jaguar iPace.
Other than supercars, car preorders at this magnitude are almost unheard of — and this hasn’t been lost on the car companies. Most have plans to ramp up electric car production massively over the next few years, and this is the same time that we’ll also be ramping up autonomous driving.
Couple this with Qualcomm and WiTricity’s efforts to create wireless charging, which most of these cars will use, and these cars will not only be able to drop you off at work, but also drive themselves to a charging station to top up.
There is some question of whether you’ll buy your car or just use Lyft (Uber’s survival is in doubt at the moment), which just partnered with Ford to make this happen.
If you do buy a car (or most any other major purchase), you’ll likely do it virtually. Car original equipment manufacturers already have virtual reality headsets in trial runs at dealerships, with folks like Audi leading the charge.
When Jaguar announced and showcased its electric car, the iPace, it used VR, allowing buyers to see and experience the car long before manufacturing lines were even set up.
Oh, and there is a pretty good chance that this Airbus project to create an autonomous car that can be carried by a drone will be in production. Now that is cool.
One of the biggest personal impacts of AI will be in the home. One of the problems we’ll likely experience is people choosing to interact with machines that tell them what they want to hear rather than with other people, who tend to be far less agreeable.
I noted the work being done by the Fall In Love VR project above. You think there are Web addiction problems now — wait until the pornography industry gets involved. On the positive side, you could get a workout partner who would always be there for you. On the negative side, it will be far harder to catch scammers and telemarketers in the act.
This technology will merge with digital assistants and robotics, making it very likely you’ll have an increasingly intelligent, mobile, smart robot helping you around the house. The initial target market for this will be people who are disabled in some way, even just by age.
I’m not ready to wrap my head around what fully blending human-like personalities into these robots could do to relationships. As Nvidia showcased during its event in China, these ever-more-intelligent systems will be able to learn just what you like over time, in order to mold themselves into your ideal companion. I’m thinking that in comparison, relationships with people will truly suck.
Speaking of interaction, we should have real-time translation with proper inflection, as well as far more advanced speech-to-text capability. Many of us simply may decide to forgo mice and keyboards (last week Microsoft’s holographic keyboard display patent became public), suggesting that cubicle farms likely will become nonviable due to the related noise.
Ironically, because the proliferation of autonomous cars will make traffic concerns largely a thing of the past (they will better manage congestion, and you’ll be able to work while riding in the self-driving car), the need to go into work should be reduced massively. Meetings increasingly can be attended by digital avatars, who likely will take better notes than you ever did.
Your appliances and equipment will be better able to determine a coming problem, and to automatically schedule a fix, suggesting that extended warranties will be replaced by service agreements.
Oh, and we already have robotic vacuum cleaners that map out our homes. Future versions could better secure them as well.
You’ll be able to conduct searches based on lines spoken in a video to find specific locations corresponding to snippets of audio (that also was showcased at Nvidia’s event).
At the heart of this latest wave is a focus on monitoring and facial recognition. This is the ability for city officials and agencies to know where you are and what you are doing any time you are out of your home.
This not only will make you a ton safer — because the increasing number of cameras will identify a crime or injury in progress and more accurately track the criminals and dispatch help — but also will provide a far more complete idea of what your interests and priorities are for future city growth and investment.
Granted, it’s likely we’ll still be having a cow over the massive lack of privacy, but that train left the station some time ago.
The city in most cases will know where you are, and often even what you are doing. I can certainly anticipate city services that allow you to better track your children and spouse, which could be a huge problem for many members of both groups.
Tracking isn’t just for people — these systems will track cars, trucks, and perhaps even pets (which would be handy for those of us who own runners). By connecting to cars and homes, cities should be able to better manage traffic, utilities, law enforcement and repair services — and if you get into trouble, automatically route you to the police station.
This stuff is coming together very quickly. In a few short years, our homes will be able to better clean and maintain themselves, our cars will drive significantly better than we do and will be a ton cleaner, and our cities will pretty much know things about us we likely don’t realize ourselves.
Where we work, who or what we interact and fall in love with, and the concept of privacy will have changed a lot (Nvidia showcased an estimate that cities will own a billion security cameras in a decade).
It will be a very, very different world.
The Light L16 camera — check out the quality of the pictures it takes — is a weird thing to look at. It has a ton of sensors and lenses — the “16” in the name refers to the number of lenses. It looks very strange, but this design lets the camera capture amazing depth of field and enables a level of editing and quality hard to find in a DSLR, let alone something the size of a smartphone.
One thing initial users have discovered is that to get this quality, you still need to treat the camera like a DSLR, which means you must steady it and be cognizant of the settings. That means for the best shots, you’ll want a tripod.
It’s currently being offered as a preproduction special of US$1,299, but the initial run apparently is sold out. Once it starts shipping, the price will jump to $1,700, which is still cheap for DSLR quality.
I know I virtually never carry my DSLR because the damn thing is so big, but this I’d likely carry. Oh, and this is the first generation of the product. Imagine what the second generation will be like.
I’m fascinated by segment-changing innovative products, and this week the Light L16 is certainly that, so it’s my product of the week.