L2+: Why the industry is turning its back on full self-driving

Ford abandons public testing of level-four-capable autonomous vehicles, citing the technology's lack of near-term profitability

Ford has ended its campaign seeking legal approval to test level four autonomous vehicles (AVs) on public roads in the US, citing how far off a profitable business model for the technology remains.

A letter sent by the firm to the US’s National Highway Traffic Safety Administration (NHTSA) requests the withdrawal of a petition to allow it to test up to 2500 AVs per year.

The decision to withdraw from testing fully autonomous vehicles was made after Ford closed its Argo AI venture. 

The letter said: “As evidenced by the shutdown of our ADS partner Argo AI, we believe the road to fully autonomous vehicles, at scale, with a profitable business model, will be a long one.”

Ford will instead concentrate its resources on “nearer-term” level two-plus and level three technologies – which do not require a legal exemption to test on public roads in the US.

The news comes after Horiba Mira managing director Declan Allen exclusively told Autocar that the large projects developing fully autonomous vehicles will “probably get kicked down the road a bit”. 

Allen added: “There’s quite a large development on the driver assistance systems and moving to level two-plus and level three systems.”

But what are level two-plus – or simply ‘L2+’ – systems, and what does their introduction mean for the car industry?

L2, short for level two, is at least reasonably well known as indicating the stage of autonomy on the zero-to-five scale defined by the Society of Automotive Engineers (SAE). Level three is when the car takes control, however briefly, so L2 is the highest level of assisted driving before you hit true autonomy.
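
For reference, the full SAE scale can be paraphrased as follows – a simplified summary of the SAE J3016 levels, expressed as a small Python enum rather than the standard's formal wording:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Simplified paraphrase of the SAE J3016 driving-automation levels."""
    NO_AUTOMATION = 0       # driver does everything; system may warn only
    DRIVER_ASSISTANCE = 1   # steering OR speed is assisted (e.g. adaptive cruise)
    PARTIAL_AUTOMATION = 2  # steering AND speed assisted; driver supervises at all times
    CONDITIONAL = 3         # car drives itself in set conditions; driver takes over on request
    HIGH_AUTOMATION = 4     # no driver needed within a defined domain (e.g. a geofenced robotaxi)
    FULL_AUTOMATION = 5     # no driver needed anywhere

# 'L2+' is marketing shorthand, not an SAE level: legally still level two supervision,
# but with hands allowed off the wheel while eyes stay on the road.
```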

But then came L2+. It’s not an SAE-accepted term, but one that car makers have increasingly touted in recent months as they temper their once bold predictions of full autonomy and become more realistic about what’s possible over the medium term. VW Group boss Oliver Blume even referred to L2++ in a recent investor call, further muddying the waters.

The ‘plus’ part has generally come to refer to driving with hands off the wheel but with eyes on and brain engaged. In the broadest definition, the car controls the driving functions, from following a mapped route to making lane changes and modulating speed in traffic. The driver can drop their hands but remains ‘in the loop’, to use the jargon, with eye-trackers checking they really are paying attention. It’s also illegal, at least in Europe and the UK.

Regulation No 79 from the United Nations Economic Commission for Europe (UNECE), the body that sets the homologation rulebook in the region (and isn’t a function of the European Union), states that drivers must have their hands on the wheel. No one, not even Tesla, has sought to bypass that.

However, the UNECE is in the process of drafting new regulations to allow hands off, and Matthew Avery, director of research at UK safety testing body Thatcham, reckons it could be law by 2024. 
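
When hands-off driving is permitted, the ‘eyes on, brain on’ requirement effectively becomes a supervision loop in software: the car keeps driving only while the driver-monitoring camera confirms attention, and escalates when it doesn’t. A minimal sketch of that logic, with hypothetical callbacks standing in for a real driver-monitoring stack:

```python
import time

ATTENTION_GRACE_S = 3.0  # assumed grace period; real systems tune this per regulation

def supervision_loop(gaze_on_road, warn_driver, begin_safe_stop):
    """Illustrative L2+ supervision loop: hands may leave the wheel, but the
    driver's gaze must stay on the road or the system escalates.
    gaze_on_road, warn_driver and begin_safe_stop are hypothetical callbacks."""
    last_attentive = time.monotonic()
    while True:
        if gaze_on_road():
            last_attentive = time.monotonic()
        else:
            inattentive_for = time.monotonic() - last_attentive
            if inattentive_for > ATTENTION_GRACE_S:
                warn_driver()          # visual, then audible prompts
            if inattentive_for > 3 * ATTENTION_GRACE_S:
                begin_safe_stop()      # hand back control or slow the car
                return
        time.sleep(0.1)                # camera frames arrive far faster in practice
```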

Of course, level three, where you can not only take your hands off but also cede control to the car in certain conditions, is allowed by the UNECE regulations, specifically Regulation 157, which covers automated lane keeping systems (ALKS). But so far only Mercedes, for the S-Class and EQS, has sought to homologate a system.

No one has yet followed Mercedes, mainly because it’s really very hard to do. “You’ve got to have a lot of technology on the vehicle, it’s really difficult to get type approved and it’s expensive,” says Avery, whose Thatcham organisation works closely with the Euro NCAP crash assessment programme. “So manufacturers are really keen to launch L2+ instead.”

The appeal is that L2+ will be easier and cheaper than L3 to achieve. It keeps the burden of liability with the driver but still gives the sensation of hands-free driving. 

This is not the dream of robotaxis whisking you home while you sleep, but real, palpable autonomy that car makers believe customers will pay extra for. It’s already happening in the US, where GM with SuperCruise and Ford with BlueCruise are taking money from customers for the privilege of removing their hands from the wheel. Ford reckons 65,000 drivers are using BlueCruise in the US, paying $600 for a three-year subscription.
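
On Ford’s own figures, that works out at roughly $17 per driver per month and around $39 million across the subscriber base over the three-year term:

```python
subscribers = 65_000
price_usd = 600          # per driver, three-year BlueCruise subscription
term_months = 36

per_month = price_usd / term_months
total_revenue = subscribers * price_usd

print(f"~${per_month:.2f}/month per driver")           # ~$16.67/month
print(f"~${total_revenue / 1e6:.0f}m over three years") # ~$39m
```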

In fact, it’s one reason Ford disbanded its Argo AI robotaxi venture. The US giant asked itself the question: why do we have all these people working on a distant technology when we could be deploying them on something that helps both customers and our bottom line right now? “We can make so much bigger impact to so many more people if we direct them towards L2+ and L3,” Doug Field, chief technology officer for Ford’s Model E electric division, told attendees at a recent conference hosted by the bank Bernstein.

As car makers fret over which software battles they can win against the big tech giants like Google, autonomy is one sure-fire bet. “Anything that you put up on a screen inside a car that you can do with a phone is table stakes,” Field said, using a Las Vegas term to mean base expectations. “Where it gets interesting is around autonomy.”

Along with Ford, BMW is another company betting big on L2+ for its Neue Klasse platform cars expected in 2025. VW is interested, while Chinese premium-angled makers such as Nio and Xpeng are already making inroads. Analyst firm Frost & Sullivan reckons the number of vehicles on the roads globally with L2+ capability will reach 11 million by 2025, up from 115,450 in 2020.
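
Taken at face value, that forecast implies the installed base multiplying roughly two and a half times a year – a compound annual growth rate of about 150%:

```python
start, end, years = 115_450, 11_000_000, 5  # Frost & Sullivan's 2020 and 2025 figures

cagr = (end / start) ** (1 / years) - 1
print(f"Compound annual growth rate: {cagr:.0%}")  # roughly 149%
```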

Another big appeal of L2+ is that it represents an important stepping stone on the route to real autonomy. The basic chip, relatively simple camera and associated software that enable today’s common driver-assist functions, such as adaptive cruise control and lane centring, will have to be upgraded to systems that have a much better idea of where the car is located and what obstacles it faces.

That’s triggering a supplier war among the chip companies – notably Qualcomm, Intel-owned Mobileye and Nvidia – as they angle to supply not only the system-on-chip (SoC) brains that power L2+ technology but also the sensors (if they have them, as Mobileye does) and the “vision” software stack to interpret events.

This does not come cheap. Mobileye’s L2+ “SuperVision” system, for example, fuses the feeds from 11 cameras on a pair of its EyeQ chips, and including the silicon it costs $1000-$2000 per car. That compares with around $50 for its basic ADAS chip, according to a recent report on the company published by the analyst arm of the bank Evercore ISI.
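
Set against the basic chip, that is a 20- to 40-fold jump in per-car cost on Evercore’s numbers:

```python
basic_adas_chip_usd = 50
supervision_low_usd, supervision_high_usd = 1000, 2000  # per car, incl. chips, per Evercore ISI

print(f"SuperVision costs {supervision_low_usd // basic_adas_chip_usd}x to "
      f"{supervision_high_usd // basic_adas_chip_usd}x a basic ADAS chip per car")
# 20x to 40x
```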

Mobileye has a head start in that it holds around 70% of the global ADAS chip market, working with most car makers, according to Evercore. SuperVision made its debut on Geely electric brand Zeekr’s 001 car and is scheduled to be incorporated into more Geely brands, such as Volvo and Polestar, as well as Porsche, Audi and Ford, the report stated. It is also targeting the likes of Toyota and the wider VW Group.

A big selling point is that the system can segue into level three and level four autonomy: additional sensors, operating independently, can be added that initially work in the background, validating the system for the eventual day when it’s tasked with hands-off, brain-off driving.

This is where coming from the ADAS side can accelerate autonomy faster than pouring money into robotaxis, so one theory goes. “They are essentially also refining and validating the base layer software for their autonomous solution for OEMs over the coming five years while being profitable,” Chris McNally, head of global automotive research for Evercore, wrote.

There are still hurdles for L2+. The final regulations have yet to be thrashed out, and Avery at Thatcham expects the use of L2+ to be geofenced – it’ll only work on certain roads. This is already the case with SuperCruise and BlueCruise, which work on highways mapped in high definition. In the absence of such regulatory oversight, Tesla’s L2+ ‘Full Self-Driving’ (FSD) system works anywhere in the US, but the internet is filled with videos showing the dangers of that approach.
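
In software terms, geofencing is little more than a whitelist check: hands-free mode is only offered when the car’s current road segment sits inside the HD-mapped operational domain and the driver is attentive. A minimal sketch, with made-up segment IDs standing in for real map data:

```python
# Hypothetical set of HD-mapped motorway segment IDs approved for hands-free driving
HD_MAPPED_SEGMENTS = {"M1_J10_J13", "M6_J4_J11", "A1M_J1_J10"}

def hands_free_available(current_segment_id: str, driver_attentive: bool) -> bool:
    """L2+ hands-free engages only inside the geofence and with an attentive driver."""
    return current_segment_id in HD_MAPPED_SEGMENTS and driver_attentive

print(hands_free_available("M1_J10_J13", driver_attentive=True))   # True
print(hands_free_available("B4425_rural", driver_attentive=True))  # False: outside geofence
```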

Mapping roads in HD is “significantly more expensive” than standard definition, according to Remco Timmer, head of product management at Here Technologies, the Dutch company with a dominant market share when it comes to location services and on-board mapping. “It needs to be accurate in centimetres rather than metres. It’s typically filled with 3D content, also all kinds of localisation objects, like signs or other elements that help the car figure out its positioning,” Timmer said.
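
What Timmer describes amounts to a richer, centimetre-accurate data model: each map tile carries 3D landmarks that the car can match against its camera view to pin down its position. A rough illustration of what one such landmark record might look like – the field names are assumptions, not Here’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class Landmark:
    """Illustrative HD-map localisation object; field names are assumptions,
    not Here Technologies' actual data model."""
    kind: str          # e.g. "speed_limit_sign", "lane_marking", "gantry"
    lat: float         # WGS84 latitude
    lon: float         # WGS84 longitude
    height_m: float    # 3D content: height above the road surface
    accuracy_m: float  # HD maps aim for centimetre-level; SD maps are metre-level

sign = Landmark("speed_limit_sign", 52.3745, 4.8901, 2.4, accuracy_m=0.05)
print(sign)
```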

Mapping to a level that enables autonomous driving could get faster and cheaper as companies like Mobileye and Here use the fleets of ordinary cars already embedded with their technology to record their surroundings and feed back, allowing L2+ cars to truly provide “address-to-address” hands-free navigation. Mobileye, for example, reckons it can map its native Israel in 24 hours this way.

Some of the technology – for example, driver monitoring – will be in the car anyway thanks to NCAP and the European Union’s General Safety Regulation raft of requirements being phased in now (and which the UK is likely to accept). Checking to see if you really are looking at the road will be an infrared camera tracking your gaze, a technology that will become more clever.

Sweden’s Veoneer, for example, is investigating cognitive monitoring that doesn’t just follow your gaze but registers whether you’ve spotted potential obstacles. That could be irritating if it’s wrong but also freeing if it works correctly, Thatcham’s Avery suggests. “It could enable manufacturers to dial out annoying warning systems,” he said. “It knows you’re not about to drive into that parked car, because it registers you’ve seen it.” It would also spot that someone was driving drunk or drugged up.
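
Avery’s example reduces to a simple rule: suppress the warning when the monitoring system is confident the driver has already looked towards the hazard. A sketch of that logic, with hypothetical gaze and obstacle inputs:

```python
def should_warn(obstacle_bearing_deg: float,
                recent_gaze_bearings_deg: list[float],
                gaze_tolerance_deg: float = 10.0) -> bool:
    """Illustrative cognitive-monitoring rule: skip the warning if the driver's
    recent gaze has already swept across the obstacle's bearing.
    The inputs and the tolerance value are assumptions for this sketch."""
    seen = any(abs(gaze - obstacle_bearing_deg) <= gaze_tolerance_deg
               for gaze in recent_gaze_bearings_deg)
    return not seen

# Driver glanced towards the parked car at roughly -18 degrees: no warning needed
print(should_warn(-20.0, [-5.0, -18.0, 0.0]))  # False
# Driver never looked that way: warn
print(should_warn(35.0, [-5.0, -18.0, 0.0]))   # True
```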

Rather than warning against hands-free driving, safety organisations are in fact very welcoming of L2+ over L2. “Professionally, I’ve no problem with it,” said Avery. “At the moment [with level one or two semi-autonomy] you don’t have to keep watching the road. You can be on your mobile phone, wobbling the wheel occasionally to trick the car into thinking you’re paying attention.”

Whether L2+ will be accepted by consumers is another question. Ford’s BlueCruise figures suggest it will be, but others are not so sure. “A customer doesn’t come into a showroom and say I want level 2+. They have no idea what that is,” Ned Curic, chief technology officer at Stellantis, said at the most recent Paris show. “We really don't see a huge demand for hands-free driving from customers.”

That said, Stellantis is working on delivering level three as part of its new computing platform dubbed STLA Brain, starting with Maserati from 2024 and powered by Qualcomm’s chips. The lure of extra revenue from customers willing to download greater autonomy, as proven by the uptake of Tesla’s FSD (the original L2+) at a whopping $199 a month, is just too hard to ignore. The dream of full autonomy is not dead, but there are a few stepping stones to cross first. L2+ is a bet that customers will be willing to act as the safety driver for the next stage of validation, overseeing the car as it takes over the boring jobs. For some, it’ll be worth paying for.

Additional reporting by Charlie Martin
