A More Mobile Mobile is Coming

Data traffic on mobile networks will reach 367 exabytes — that’s 367×10^18 bytes or 367 billion gigabytes — in 2020, up from 44 exabytes in 2015, according to Cisco Systems’ recently released Visual Networking Index Global Mobile Data Traffic Forecast, 2015-2020.
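Those two endpoint figures imply a striking compound growth rate. A quick back-of-the-envelope check (a sketch only, using just the two annual totals cited above, not Cisco's underlying model):

```python
# Growth implied by Cisco's figures: 44 EB (2015) to 367 EB (2020).
traffic_2015_eb = 44
traffic_2020_eb = 367
years = 5

# Compound annual growth rate from the two endpoints.
cagr = (traffic_2020_eb / traffic_2015_eb) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 53% per year
```

In other words, mobile data traffic growing by more than half again every year for five years running.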

A lot of that is video, which accounted for 55% of all mobile data traffic in 2015, and will account for 75% in 2020, predicts Cisco. But significant contributions to growth are expected from automotive infotainment and mobile networks to support the Internet of Things, including connected cars, said a Cisco Global Technology Policy VP in a company blog on February 3.

Auto manufacturers recognize that in-car electronic technology is now a stronger driver of sales than traditional measures of automotive performance like torque, power, straight-line acceleration, and cornering ability.

Automotive connectivity is here now, of course, in such common systems as GPS navigation and radar detectors that use GPS to recognize fixed sources of X-band emissions, such as automatic doors, and not count them as “threats.”

Hyundai Mobis path guidance with heads-up display. (Photo: Ken Werner)

But there is much more to come, with the integration of in-car systems, vehicle-to-vehicle communication, and vehicle-to-mobile-network communication culminating in autonomous vehicles. Although we are beginning to talk (a lot) about autonomous vehicles, non-specialists may not have given much thought to the various levels of vehicle automation. Fortunately, our friends at the U.S. National Highway Traffic Safety Administration (NHTSA) have.

The NHTSA defines five levels of vehicle automation.

Level 0: No automation
Level 1: Function-specific automation, such as electronic stability control and ABS. This is standard today.
Level 2: Combined function automation. An example is adaptive cruise control combined with lane centering. Such systems are widely available today as extra-cost options.
Level 3: Limited self-driving automation. The driver can, under certain conditions, cede full control of all safety-critical functions to the vehicle. The driver must be available for occasional control, or to take over when conditions are no longer suitable for self-driving.
Level 4: Full self-driving automation. The driver is expected to provide destination or navigation input, but not to control the vehicle at any time during a trip.
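
The taxonomy above is simple enough to capture in a small data structure. As an illustration only (the descriptions are paraphrased from the list above, and the helper function is a hypothetical name, not NHTSA terminology):

```python
# NHTSA vehicle-automation levels, paraphrased from the list above.
NHTSA_LEVELS = {
    0: "No automation",
    1: "Function-specific automation (e.g., electronic stability control, ABS)",
    2: "Combined-function automation (e.g., adaptive cruise control plus lane centering)",
    3: "Limited self-driving; driver must be available to take over",
    4: "Full self-driving; driver provides only destination or navigation input",
}

def driver_must_monitor(level: int) -> bool:
    """At Levels 0-2 the driver remains in continuous control or oversight;
    at Levels 3-4 the vehicle handles safety-critical functions itself."""
    return level <= 2

print(driver_must_monitor(2))  # True
print(driver_must_monitor(4))  # False
```

The sharp boolean boundary at Level 2/3 is exactly the human-factors cliff the panel discussion below worries about.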

At TU Automotive’s one-day “Consumer Telematics Show,” held in Las Vegas during CES 2016, the members of a panel entitled “Making Autonomous Driving a Reality” opined that Level 2 is an impressive offering that makes commutes much less tiring. Level 4 is much harder to do, but Level 3 is tricky in its own way because it requires the driver to switch from complete uninvolvement to full control, perhaps rather quickly (although the NHTSA definition specifies that a reasonable amount of time should be provided for the driver to do this). People tend not to be good at this sort of transition, particularly when they don’t have the ongoing situational awareness that an attentive driver would traditionally have. One panel member suggested that this Level 3 scenario may be unworkable and that we will have to leap from Level 2 to Level 4.

A final comment from a panel member was “Don’t forget the social component of the human-car interaction.” As the car does more, the driver is likely to anthropomorphize the car. Man-machine communication would be optimized if the car could respond in kind. “Hello, Chevy….” “Hello, Ken. My but you look sporty today.” What kind of holographic avatar should Chevy have to optimize this communication? For that matter, what kind of avatar should I have? I can’t have Chevy seeing me the way I really am.