Kirsten Korosec | The Verge The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts. 2018-06-05T14:19:29+00:00 https://www.theverge.com/authors/kirsten-korosec/rss https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&h=150&crop=1 Kirsten Korosec <![CDATA[Tesla’s annual shareholder meeting will decide Elon Musk’s future]]> https://www.theverge.com/2018/6/5/17429066/teslas-shareholder-meeting-elon-musk-what-to-expect 2018-06-05T10:19:29-04:00 2018-06-05T10:19:29-04:00

Last week, Tesla CEO Elon Musk was bombarded with hundreds of questions from fans via Twitter — along with declarations of adoration and the occasional aspersion — ahead of the company’s annual shareholder meeting on June 5th.

The questions ran the gamut: Will Tesla create an electric microbus? Will it participate in the Formula E all-electric race? Will there be a Tesla motorcycle? How about an electric barbecue that plugs into your Tesla vehicle?

Amid the swirl of idealism, it’s easy to forget the present-day problems

Whether these quixotic questions are answered doesn’t really matter, although it’s likely that some will be. Musk has shown an affinity for questions that focus on Tesla’s future rather than its present. Tesla’s annual shareholder meetings have historically delivered informational nuggets on existing products and plans for new ones — much to the delight of its base of true believers. The meeting is scheduled for 2:30PM PT on Tuesday at the Computer History Museum in Mountain View, California.

Amid the swirl of idealism and support for Musk and his vision, it’s easy to forget the present-day problems the company is facing. Tesla must resolve manufacturing problems and ramp up production of its Model 3 electric vehicle without compromising on quality or worker safety if it hopes to deliver on its many other plans, which include a Class 8 heavy truck, next-generation Roadster, solar roofs, and self-driving cars.

While the believers focus on the future, here are some of the more near-term questions we’d like to see answered at the meeting.

What is the status of Model 3 production problems?

Tesla has struggled to produce the Model 3 since the first deliveries were made in July 2017 during a splashy handover event for employees. Problems first became public in early October 2017 when Tesla reported it had produced just 260 of the electric vehicles, which was far below its goal, and it delivered only 220 of those.

Since then, production has improved, but it’s still falling short of Tesla’s own targets. The company reported at the end of the first quarter in April that it had reached a production level of 2,020 Model 3s per week, short of its 2,500-per-week goal. It’s aiming for 5,000 per week by the end of the next quarter.

The failure to meet its production goals has increased pressure on the company and has prompted some prospective Model 3 owners to ask for the return of their $1,000 refundable deposit. Nearly a quarter of all Model 3 deposits in the US had been refunded as of the end of April, according to data released on Monday by Second Measure, a company that analyzes billions of anonymized purchases to answer real-time questions about consumer behavior. Tesla has said its internal numbers differ from what Second Measure’s data shows, but the automaker didn’t provide specific numbers.

Tesla’s financial future is directly tied to the Model 3

To be clear, Tesla is still sitting on thousands of reservations, and in turn, tens of millions of dollars in cash holdings as a result of those deposits. Tesla’s Model 3 net reservations, including configured orders that have not been delivered, exceeded 450,000 at the end of the first quarter, according to Tesla. The company has acknowledged that the cancellations are “almost entirely due to delays in production in general and delays in availability of certain planned options, particularly dual motor AWD and the smaller battery pack.” Tesla contends that “owner happiness with the product is extremely high.” 

Still, Tesla’s financial future is directly tied to the Model 3 and the company’s ability to produce and deliver vehicles that meet the expectations of its buyers.  

Will activist shareholders chip away at Musk’s power?  

Shareholders will vote on whether to re-elect three of Tesla’s directors: Antonio Gracias, 21st Century Fox CEO James Murdoch, and Musk’s brother Kimbal Musk. Tesla supports the re-election of all three. Activist investors are advising shareholders to vote against them over concerns that the board lacks independence. Tesla argues that, with the exception of Elon Musk and Kimbal Musk, all of its current members are “independent directors.”

A separate non-binding resolution calls for Tesla to remove Musk as board chairman; the proposal would keep Musk as CEO. Unsurprisingly, Tesla opposes the resolution. The board believes the company is still best served by Musk continuing as chairman and argues that board leadership needs to be in “lockstep” with operations during times when a company must quickly adapt to constant change and outside pressures.

Criticism of Tesla’s board isn’t new

Criticism of Tesla’s board isn’t new. Activist investors have launched efforts in previous years to force a more independent board. Last year, shareholders voted against a proposal to declassify Tesla’s board, a move that would have forced all directors to face annual re-election, as opposed to staggered three-year terms.

And the same players — proxy advisory firms Institutional Shareholder Services and Glass Lewis as well as union-affiliated investment adviser CtW Investment Group — are back again this year. ISS has recommended investors vote against Gracias and Murdoch and support Kimbal Musk. ISS also supports the proposal to split the chairman and CEO roles.

It’s unlikely that shareholders will vote to dump the three board members or force Musk from the chairman position. Musk is Tesla’s largest shareholder, with 21.9 percent of company shares, and he enjoys support from other big fund managers, making it difficult to pass any proposal he opposes.

However, lingering problems with Model 3 production as well as more recent volatile behavior from Musk, particularly during the company’s first-quarter earnings call, could prompt more shareholders to vote for the measure. If that happens, Musk and the board might be forced to make some changes.

What will a reorganized Tesla look like?

Musk is aware that some changes at the company, which is burning through cash at an incredible rate as it tries to ramp up production of the Model 3, are warranted. But he has his sights set on third-party contractors and management, not the board.

In May, Musk sent a memo describing plans to flatten the management structure at Tesla as part of a reorganization aimed at streamlining operations. The memo followed the departure of Tesla executives such as Matthew Schwall, who left to join Waymo, and senior vice president of engineering Doug Field, who is on temporary leave. Musk also talked about a reorganization during Tesla’s first-quarter earnings call, where he described third-party contractors as “barnacles.”

It’s unclear how the barnacle-removal process has progressed or what a flattened management structure might look like. Meanwhile, one of its directors, famed venture capitalist Steve Jurvetson, is still in purgatory. Jurvetson, a friend and adviser of Musk’s, was one of Tesla’s earliest investors. He joined Tesla’s board in 2009 and is also a director of Musk’s private rocket company SpaceX.

Where is board member Steve Jurvetson?

In November, Jurvetson took a leave of absence from both boards following his exit from the venture firm he co-founded, Draper Fisher Jurvetson, amid an investigation into sexual misconduct. Jurvetson has moved on professionally; he started his new venture firm Future Ventures this spring. But Musk has yet to weigh in on Jurvetson’s future on the Tesla or SpaceX boards. It’s unusual for a board member to take such a long leave of absence.

Tesla hints at a future that includes Jurvetson. In its proxy statement, the board writes, “We believe that Mr. Jurvetson possesses specific attributes that qualify him to serve as a member of our Board, including his experience in the venture capital industry and his years of business and leadership experience.”

What is the plan for its future products?

Tesla does have other products in its pipeline, including future vehicles such as the recently revealed Roadster prototype, the Tesla Semi, and the mysterious Model Y, as well as its solar roof product and longer-term goals to give its vehicles fully autonomous capabilities.

This cascade of plans — many of which are outlined in Musk’s “Master Plan, Part Deux” — is bewitching in part because it offers a glimpse of what Tesla could become if executed smartly: a profitable ecosystem of sustainable energy products that touch every point of a person’s daily life.

Present-day challenges with the Model 3 are the smelling salts to such aims. Without the Model 3, the company will be hard-pressed to execute any of these other ideas, no matter their merit or demand. If Model 3 production continues to jam up the company, these may be delayed or scrapped altogether.

However, while the Model 3 is the most pressing problem — and the one that deserves the most attention — there is certain pragmatic information that could prove valuable to shareholders who want to gauge whether Tesla has the fortitude and shrewdness to roll out new products without repeating mistakes from its recent past.

]]>
Kirsten Korosec <![CDATA[Nvidia further distances itself from Uber in wake of fatal self-driving crash]]> https://www.theverge.com/2018/3/29/17176552/nvidia-uber-self-driving-fatal-crash-huang 2018-03-29T13:16:02-04:00 2018-03-29T13:16:02-04:00

Uber’s self-driving cars do not use Nvidia’s autonomous vehicle computing platform, Nvidia co-founder and CEO Jen-Hsun Huang clarified on Wednesday. The graphics card maker continues to distance itself from what is considered the first fatal accident involving a self-driving car.

“Uber develops their own sensing and driving technology,” Huang said during a Q&A session with the press at Nvidia’s annual GPU Technology Conference in San Jose. Uber does use Nvidia’s standard GPU, but it does not use the company’s self-driving vehicle platform. An Uber spokesperson confirmed that it only uses Nvidia’s standard GPUs.

“Uber develops their own sensing and driving technology.”

An Uber self-driving test vehicle struck and killed a pedestrian on March 18th in Tempe, a suburb of Phoenix. The vehicle was in autonomous mode at the time of the collision, and a vehicle operator was behind the wheel. Uber immediately suspended its autonomous vehicle testing in Arizona, as well as in Pittsburgh, San Francisco, and Toronto.

The fatal Uber crash has thrust the ride-hailing company into the worst kind of spotlight. It has also affected the rest of the autonomous vehicle industry. Companies racing to develop autonomous vehicles — vocal advocates of a technology they say will reduce fatal car accidents — now face questions about public safety.

Within days of the crash, competitors such as Intel’s Mobileye publicly criticized Uber’s system, and former supporters and suppliers tried to limit the collateral damage. Arizona governor Doug Ducey, a pro-business politician and champion of autonomous vehicle technology, took the unusual step of suspending Uber’s testing in the state.

Velodyne, which supplies Uber with its LIDAR sensing technology, tried to preempt questions about its product. A Velodyne executive said its laser-based sensors were able to spot the pedestrian involved in the crash, suggesting that the flaw lay with Uber’s software.

Nvidia is no different. The company has touted the capabilities of its GPUs — and the self-driving platform architecture enabled by its core technology — as it evolves beyond its initial purpose to deliver faster and more realistic graphics for video games. This same technology happens to be ideally suited to handle the vast amounts of data needed to unlock the power of artificial intelligence. Thousands of companies now use the Nvidia GPU for their various artificial intelligence pursuits, including self-driving vehicles.

Nvidia’s aggressive pursuit of AI for use in self-driving vehicle technology has made it a darling among investors, but that attention has its own share of risks.

Nvidia said on Tuesday that it had stopped testing its autonomous vehicles on public roads, sending its stock down more than 12 percent as investors interpreted the move as connected to the technology Uber used in its self-driving test vehicles.

Last January, Nvidia announced that it had been working with Uber on self-driving technology but provided few details on the nature of their collaboration. Huang’s comments this week have shed more light on that.

Questions about the Uber crash and Nvidia’s decision to suspend its autonomous vehicle testing on public roads came up repeatedly at the company’s GPU Technology Conference in San Jose this week.

“This is obviously an important moment.”

Huang pushed back against those questions, arguing that any company developing autonomous vehicle technology should have paused testing on public roads following the Uber crash. He also clarified that Nvidia halted testing on public roads within a day or two of the fatal Uber crash. Nvidia has five autonomous vehicles that it uses to test its Drive computing platform in New Jersey and California. The company uses vehicles to collect data in New Jersey, California, Japan, and Germany. It has continued to test on private tracks and use simulation.

“This is obviously an important moment,” Huang said, adding that the company has a lot of testing it can do on closed tracks and with its simulator. “You should pause to learn from it,” Huang continued. “There’s no question that everyone in the industry should pause.”

“As soon as the news became clear to us we stopped,” he said. “And the reason is that it’s good engineering practice. Whether you are using the highest level of engineering quality systems or the highest level of care, it doesn’t matter. If there’s an incident that happens and there’s a new piece of information you can learn from, then you pause to learn from it.”

He added, “I think that everybody in the industry should.”

Huang emphasized that every company currently using the Nvidia Drive platform is still using the product as they test and develop their autonomous vehicle programs.

Nvidia’s newest iteration of its Drive PX platform, called Pegasus, was revealed last year and will deliver more than 10 times its predecessor’s processing power. The company says Pegasus will be marketed to the hundreds of automakers and tech companies currently developing self-driving cars starting in the second half of 2018.

]]>
Kirsten Korosec <![CDATA[Lyft is bringing another self-driving car pilot program to the Bay Area]]> https://www.theverge.com/2017/9/7/16266104/lyft-drive-ai-self-driving-car-pilot-program-ride-sharing-san-francisco-bay 2017-09-07T09:00:11-04:00 2017-09-07T09:00:11-04:00

Lyft is partnering with yet another self-driving car startup — this time with Drive.ai in the San Francisco Bay Area — to launch a pilot program that will shuttle ride-sharing customers to their destinations in vehicles controlled by artificial intelligence, not humans.

The partnership is just the latest step in Lyft’s plan to offer up its vast network of passengers and drivers to companies developing self-driving cars. Lyft already has partnerships with GM, Boston-based NuTonomy, and Waymo, the Google self-driving car project that spun out to become a business under parent company Alphabet.

The pilot program is expected to launch “soon”

Drive.ai and Lyft did not specify when this pilot program will start actively shuttling passengers in autonomous vehicles. Executives with Drive.ai and Lyft say they expect to launch “soon.”

Initially, the pilot will involve a small set of passengers who opt in to the program, Taggart Matthiesen, senior director of product at Lyft, told The Verge. He did not provide a specific number of vehicles or participants, but noted the self-driving cars will have a Drive.ai safety driver behind the wheel to take over in case the artificial intelligence controlling the car fails, and of course, to meet California regulations.

Once a ride is requested, Drive.ai’s software will evaluate whether or not the route is feasible, said Carol Reiley, co-founder and president of Drive.ai. This may be a route that has been pre-selected, Reiley said, adding that the company’s self-driving technology can handle rainy and nighttime conditions.

Drive.ai, which was founded by former graduate students working in Stanford University’s Artificial Intelligence Lab, will use the pilot to test the limits of its self-driving technology and further develop it so the riding experience is consistent, regardless of route conditions, Reiley said. A consistent and safe ride is something all companies developing self-driving car technology are chasing, with varying degrees of success.

Despite the lack of logistical details, the partnership is a milestone for the lesser-known Drive.ai, which received an autonomous vehicle testing permit from the California Department of Motor Vehicles in April 2016. This partnership — the first that Drive.ai has publicly announced — gives the startup the opportunity to potentially bring its self-driving cars to the 350 cities in 40 US states where Lyft operates.

Drive.ai uses a different approach from other companies racing to deploy autonomous vehicles. Startups generally train their self-driving vehicles with deep learning technology, a sophisticated form of artificial intelligence algorithms that allow a computer — essentially the car’s brain — to learn by using a series of connected networks to identify patterns in data. Traditionally, deep learning is used to teach the car how to recognize objects, such as the ability of the car to detect a traffic light or a pedestrian. In general, the use of deep neural networks is limited to this task.

Drive.ai is applying deep learning to the entire process of self-driving, including how the autonomous car makes decisions, like how to negotiate a four-way stop safely. The deep neural network learns on its own, getting better with experience. But because there is no code for an engineer to program or evaluate, some argue it’s too risky or immature to deploy for public use.
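To make that contrast concrete, here is a deliberately toy sketch (an illustration of the general idea, not Drive.ai’s actual system) of what end-to-end deep learning for decisions looks like: a small neural network maps perception features straight to a driving action, and its behavior lives in learned weights rather than in rules an engineer could read or edit. The feature names and weights below are invented for the example.

```python
import random

ACTIONS = ["stop", "yield", "proceed"]

random.seed(0)
# Random stand-in weights: a real system would learn these from driving
# data, which is the point -- the "logic" is numbers, not readable rules.
W1 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(3)]
W2 = [[random.gauss(0, 1) for _ in range(3)] for _ in range(8)]

def decide(features):
    """Forward pass of a tiny network: features -> hidden ReLU -> action."""
    hidden = [max(0.0, sum(f * W1[i][j] for i, f in enumerate(features)))
              for j in range(8)]
    scores = [sum(h * W2[j][k] for j, h in enumerate(hidden))
              for k in range(3)]
    return ACTIONS[scores.index(max(scores))]

# Hypothetical features: [distance to intersection (m),
#                         cross traffic present?, pedestrian present?]
print(decide([5.0, 1.0, 0.0]))  # prints one of the three actions
```

With random weights the chosen action is arbitrary; training on driving data is what would make the mapping sensible, and there is no intermediate rulebook to inspect along the way.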

Lyft’s approach lies in stark contrast to its biggest rival Uber, which has poured hundreds of millions of dollars into a research center in Pittsburgh and spent $680 million in 2016 to acquire self-driving truck startup Otto. Uber’s aggressive approach has garnered plenty of headlines; it has also caused problems that now threaten to derail its plans. Uber’s refusal to apply for a $150 permit to test self-driving cars in California initially led it to move the operation to Arizona, where regulations are looser. The company eventually conceded to California regulations. And it’s now embroiled in a lawsuit filed by Waymo alleging that it stole trade secrets when it acquired Otto, which was started by longtime Google self-driving project engineer Anthony Levandowski.

Lyft’s collaborative approach is a stark contrast to its biggest rival, Uber

Meanwhile, Lyft has taken a more collaborative route. The company has already locked in three partnerships (that we know of) with companies to test autonomous ride-sharing programs. It struck its first partnership in January 2016 with GM, a deal that included the automaker investing $500 million into the company. GM has a long-term vision to first test, and eventually deploy, self-driving cars within Lyft’s network.

In July, Lyft went a step further and announced it was launching an open platform to give companies working on self-driving cars access to its ride-sharing network of nearly 1 million rides per day. The goal is to let companies test their tech in real-world ride-sharing scenarios.

“We’re not looking to vet the technology,” Matthiesen said. “We want to do this in a very agnostic way, which is why we built this open platform.”

]]>
Kirsten Korosec <![CDATA[An inside look at Ford’s $1 billion bet on Argo AI]]> https://www.theverge.com/2017/8/16/16155254/argo-ai-ford-self-driving-car-autonomous 2017-08-16T15:15:15-04:00 2017-08-16T15:15:15-04:00

Somewhere between the 14th and 15th floors in a concrete stairwell, Bryan Salesky pauses, searching for the right words to explain his mission for the foreseeable future. He wants to give cars the eyes, ears, and brains they need to operate without humans. And he wants to do it for Ford Motor Company by 2021.

The CEO of Argo AI — a startup that appeared seemingly out of nowhere six months ago, with $1 billion in backing from Ford — is hardly alone in the pursuit to transform the automobile into a vehicle controlled by artificial intelligence. A fire alarm has interrupted an interview in a San Francisco conference room, but Salesky stays focused and collected. And if he is feeling the pressure to develop and deliver this system so Ford — its sole customer, backer, and majority shareholder — can deploy fully autonomous vehicles in just four years’ time, it doesn’t show.

Instead, he comes off as optimistic about the company he founded with Peter Rander, who, as former engineering lead at the Uber Advanced Technologies Group, helped bring the ride-hail company’s first-generation self-driving prototypes to public roads.

Yet Argo stands out from the hundreds of companies pursuing self-driving technology because of its unique deal with Ford, which has invested big in this little-known startup, priming it to compete with Google’s Waymo, Uber, GM’s Cruise Automation, Tesla, and Aurora — a short list of heavyweights all working on a so-called “full stack solution” for self-driving cars.

“Artificial intelligence will be an essential player in autonomous vehicles of the future.”

Argo was yanked out of obscurity by the Ford deal, but dozens (if not more) of AI / auto startups are still wallowing there. “Artificial intelligence will be an essential player in autonomous vehicles of the future,” said Michelle Krebs, executive analyst with Cox Automotive’s Autotrader. “That’s why automakers like General Motors, Toyota, and Ford are snapping up companies with AI expertise.”

Argo won the lottery, essentially. (Its website is still laughably sparse.) What remains to be seen is if Ford made a wise roll of the dice. Much of exactly what Argo is doing remains unknown.

In broad terms, Argo is developing self-driving technology that Ford can use to deploy fully autonomous Level 4-capable vehicles for commercial on-demand service. In other words: something like a self-driving taxi service. Level 4 is a designation by SAE International that means the car takes over all of the driving in certain conditions.

Argo is tasked with developing the entire “virtual driver system”: all of the sensors (cameras, radar, and the laser-based light detection and ranging sensors known as LIDAR) as well as the software and compute platform. Ford has also charged Argo with creating high-definition maps, keeping them “fresh,” and sustaining that effort over time, Ford CTO and vice president of research and advanced engineering Ken Washington said during a presentation at The Information’s autonomous vehicle summit in June.

The end result is a full-stack system designed just for Ford’s self-driving vehicles so they know where they are in the world, can detect and understand objects in their environment, and then make the right decisions.

The road ahead for Argo AI — and Ford, for that matter — is dotted with obstacles. Argo is now battling with the competition to recruit roboticists and machine learning experts. It’s a fight that is pushing salaries, including bonuses and equity offerings, for self-driving car engineers well beyond $250,000 a year.

And then there are the technical challenges, which must be solved by 2021 if Ford is to meet its own publicly declared deadline.

“I think that Ford had a start,” Salesky said, noting the company’s decade of autonomous research and development. “There was nothing that Ford was doing that was inherently wrong or busted. They were pursuing one method and we were like, ‘Hey there are a bunch of methods we can pursue and all combine to solve this problem robustly.’”

The two big technical challenges are with perception, or the ability for the car to see and understand objects around it, and decision-making.

The two big technical challenges are with perception and decision-making

Salesky, along with many others in the field, says perception is the stickier problem because it’s important for the autonomous vehicle not only to detect relevant objects, but to predict what those objects, like a car, pedestrian, or bicyclist, are going to do. Once it has correct and robust information, making a decision is relatively easy.
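A minimal illustration of why prediction is part of the perception problem (a generic constant-velocity model, not any company’s actual tracker; the numbers are invented): from two successive detections of a pedestrian, the car can extrapolate where that pedestrian will be a couple of seconds out.

```python
def predict_position(p_then, p_now, dt_observed, dt_ahead):
    """Constant-velocity prediction from two observations.

    p_then, p_now: (x, y) positions in meters at two observation times.
    dt_observed:   seconds elapsed between the two observations.
    dt_ahead:      how far into the future to project.
    """
    vx = (p_now[0] - p_then[0]) / dt_observed
    vy = (p_now[1] - p_then[1]) / dt_observed
    return (p_now[0] + vx * dt_ahead, p_now[1] + vy * dt_ahead)

# Pedestrian seen at (0, 0), then at (1, 0) half a second later:
# walking 2 m/s in the +x direction. Two seconds from now, the model
# expects them at (5, 0).
print(predict_position((0.0, 0.0), (1.0, 0.0), 0.5, 2.0))  # (5.0, 0.0)
```

Real systems use far richer motion models, but even this toy version shows the shape of the task: detection alone tells the car where things are, while prediction tells it where they will be when the car gets there.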

There are two schools of thought on how to solve these problems. Startups like Drive.ai and tech giant Nvidia argue that deep neural networks — a sophisticated form of artificial intelligence algorithms that allow a computer to learn by using a series of connected networks to identify patterns in data — can be applied to everything the self-driving car does from recognizing objects to making decisions.

Proponents of deep learning say these algorithms most closely mimic how the human brain learns. But Salesky contends deep learning requires more research and computing power for it to be used in autonomous vehicles in the near term.

The other approach — and the one Argo is taking — is to train these deep nets to solve very specific problems and then surround that with machine learning algorithms that tie into the broader system.

Machine learning is a form of artificial intelligence that uses algorithms to identify and analyze patterns in data, learn from it, and then make predictions. For instance, machine learning algorithms are used to take data from cameras or LIDAR to teach the vehicle how to recognize a stop sign or moving car.
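As a toy illustration of that pattern-matching idea (the features and labels here are invented for the example, not a real perception stack): a nearest-centroid classifier that labels a new observation by comparing it to the averages of previously labeled examples.

```python
from math import dist

# Tiny labeled training set: hypothetical (height_m, speed_mps) features
# a car might derive from camera or LIDAR returns.
examples = {
    "stop_sign":  [(2.1, 0.0), (2.0, 0.0), (2.2, 0.0)],
    "pedestrian": [(1.7, 1.2), (1.6, 1.5), (1.8, 1.0)],
    "car":        [(1.5, 12.0), (1.4, 9.0), (1.6, 14.0)],
}

# "Learning": reduce each class to the centroid of its examples.
centroids = {
    label: tuple(sum(v) / len(pts) for v in zip(*pts))
    for label, pts in examples.items()
}

def classify(feature):
    """Label a new observation by its nearest class centroid."""
    return min(centroids, key=lambda label: dist(feature, centroids[label]))

print(classify((1.65, 1.3)))  # -> "pedestrian"
```

Production perception models are vastly more sophisticated, but the principle is the same: patterns extracted from labeled sensor data, not hand-written rules, determine what the vehicle believes it is seeing.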

To the casual observer, it might seem like the race to deploy autonomous vehicles began less than two years ago. And by some measure, it did. High-profile acquisitions — like GM’s purchase of Cruise Automation — and a string of public declarations by tech companies and automakers to bring self-driving cars to roadways have dominated the headlines.

But in reality, the road to developing autonomous vehicles began in earnest more than a decade ago.

Scan the directories of every company working on self-driving cars today, and you’ll likely discover the names of people who participated in at least one of the three autonomous vehicle challenges in 2004, 2005, and 2007 that were funded by the Defense Advanced Research Projects Agency.

the road to developing AVs began more than a decade ago

Since those early days of research and experimentation, a group of robotics and machine learning experts has migrated from academia to the world of startups and large corporations — oftentimes moving from co-worker to competitor.

Those early projects forged the people and relationships that are now showing up in some of the leading self-driving car programs globally. Argo AI is one example.

When Salesky left the Google self-driving project in September, the race to deploy autonomous vehicles had already shifted into overdrive, with automakers, tech companies, and startups jockeying for a leading position — a battle that would have seemed ridiculous just three years before. Salesky was part of that small circle that companies coveted.

When Salesky arrived at the National Robotics Engineering Center, a unit within Carnegie Mellon University’s Robotics Institute, he found his true love. He was senior software engineer on the winning team in the 2007 autonomous vehicle challenge funded by DARPA. Chris Urmson, who would later head up Google’s self-driving project and recently co-founded self-driving car startup Aurora, was director of technology of that team.

Salesky made the leap over to the Google self-driving car project, where he eventually became director of hardware development. During his time at Google, he headed up discussions with Fiat Chrysler Automobiles that led to a partnership to produce self-driving Chrysler Pacifica minivans.

In short, Salesky held a golden ticket to a well-funded startup or a high-profile position at a major automaker or tech company. He chose option number three.

“I felt like the next step should be something I’d want to commit to for a very long time, and something that would give us the best opportunity possible to get a product out the door,” Salesky said. “I thought the best way to do that was to start a company and find an OEM that was like-minded with us and have a deep partnership.”

Around the same time, Ford was ramping up its own self-driving car plans: in August, it announced it would deploy a commercial self-driving car service by 2021. The automaker said it would achieve that target by expanding its research lab in Palo Alto, California, and by investing in or buying autonomous vehicle technology startups. All of this is now ultimately the responsibility of Jim Hackett, who in May took over as president and CEO of Ford. Hackett reports directly to executive chairman Bill Ford, great-grandson of the founder.

Not long after leaving Google, Salesky and Rander started their new company with a small investment from an undisclosed source; neither Salesky nor Ford will identify the source of the seed money. Negotiations with Ford, which included some involvement from Bill Ford, began in earnest before the end of 2016. The deal was sealed and announced in February. While former CEO Mark Fields was privy to negotiations with Argo AI, the primary players were Washington; Raj Nair, the former CTO who now leads Ford North America; and John Casesa, group vice president of global strategy.

Argo was not acquired by Ford, stressed Salesky, the only time he exhibits even a smidge of prickliness in a long and winding interview. Ford is the majority stakeholder and has two seats on a five-seat board. Salesky and Rander have two seats as well. Still, Ford’s influence is notable. Since Argo AI’s public debut in February, it has amassed more than 100 employees, many of them Ford engineers who were working in the R&D department on a virtual driver system.

“Most of our software engineers, the architects who know how to write well-designed and scalable software, they went to Argo,” said Washington.

An examination of LinkedIn profiles shows the largest percentage of Argo AI engineers and data scientists at this point have come from Ford, followed by more than a dozen from Uber Advanced Technologies Group, and a smattering of other tech companies and organizations, including the NREC, Apple, Microsoft, and Google.

The plan is to hit 200 employees by the end of the year

The plan is to hit 200 employees by the end of the year. Employees will be spread out across Argo’s three locations in Pittsburgh, Pennsylvania; Mountain View, California; and Dearborn, Michigan, where the autonomous vehicles Ford uses for testing collect data. Argo will refresh the virtual driver system in those cars toward the end of the year. Argo has also taken over, and is now evolving, a simulation system that Ford was developing, according to Washington.

As young as Argo is, it’s not quite accurate to call these “early days” anymore. Videos of fully autonomous test vehicles navigating streets successfully (with a human behind the wheel, just in case) no longer hold the same cachet as they did a year ago. The long technical and regulatory slog is on. In Salesky’s view, it’s not a winner-takes-all game. “Three trillion miles are driven each year in the US. That’s a huge amount of opportunity,” Salesky said. “There’s room for more than one player in this space.”

Tesla, GM, and Volvo are some of the other companies actively pursuing autonomous development. Still others are pooling resources and working with suppliers, wary of the risks of investing in autonomy alone. In contrast, Ford and Argo are holding their cards close to the vest during this R&D phase. It’s a partnership that appears to rest on being the player that comes in first.

]]>
Kirsten Korosec <![CDATA[This startup is using Uber and Lyft drivers to bring self-driving cars to market faster]]> https://www.theverge.com/2017/7/19/16000272/lvl5-self-driving-car-tesla-map-lidar 2017-07-19T16:38:44-04:00 2017-07-19T16:38:44-04:00

Roads aren’t the stagnant strips of concrete and asphalt they appear to be. They’re constantly changing. Traffic lights are added and removed, stop-and-go intersections turn into roundabouts, a typically quiet street turns into a construction zone. It’s happening everywhere, all the time.

Human drivers might be able to adjust to this dynamic environment, but autonomous vehicles need extra help. They need maps. But not just any old map will do, say the three founders behind Lvl5, a new mapping and localization startup that launched publicly today.

high-definition 3D maps that are constantly refreshing

Lvl5, founded in December by Andrew Kouri and Erik Reed, who both worked on Tesla’s Autopilot team, and George Tall, a computer vision engineer from iRobot, has developed a way to take enormous amounts of video collected from a camera and turn it into high-definition 3D maps that are constantly refreshing. These maps will always reflect the latest road conditions, providing self-driving cars with the information they need to detect changes and plan their routes safely.

“The thing that everyone is kind of ignoring silently is that self-driving cars won’t ship unless we have really good HD maps that update every single day,” Kouri said in an interview with The Verge. “And nobody has a system to do this yet. This is what we’re building.”

The company, which graduated from the Y Combinator accelerator in March, contends that it’s an inexpensive solution that beats other costly sensors that tech companies and automakers are betting on. And it believes it has a better system than much bigger competitors like Mobileye, TomTom, and HERE.

Kouri says self-driving cars don’t need LIDAR, the light detection and ranging sensor used to see the world around the vehicle. That’s a departure from what many automakers and tech companies, like Google’s Waymo, say is needed for the safe deployment of autonomous vehicles.

Lvl5’s philosophy, in many ways, mirrors Tesla’s approach, which contends it can deploy fully autonomous vehicle technology without relying on LIDAR.

“We don’t really care if LIDAR wins out or computer vision wins out.”

“We don’t really care if LIDAR wins out or computer vision wins out,” Kouri said. “Right now we know that if we want to make self-driving cars en masse, cameras are ready and LIDAR is not.”

The company’s system uses consumer-grade cameras and a computer vision algorithm to turn all of the video it captures into usable 3D maps. But it needed a way to scale.

So the founders reached out to Uber and Lyft drivers, who crowdsource the video data via Payver, a dashcam app created by Lvl5.

Drivers are paid to mount smartphones on the dashboards of their cars and run the app, which automatically collects video, accelerometer, and GPS data. Huge amounts of data are captured; video is taken every meter along a vehicle’s route. The compressed data is sent to the cloud and routed to Lvl5’s central hub, where the company’s computer vision algorithm translates the footage into high-definition 3D maps.

“That’s something that even Tesla doesn’t do right now,” Kouri said. (Although it should be noted that earlier this year Tesla started collecting “short video clips” using the car’s external cameras to learn how to spot lane lines, traffic lights, street signs, and other visual cues. The aim is to use this data to improve autonomous safety features, Tesla has said.)
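For illustration, the capture loop described above (record a frame roughly every meter of travel, as measured by GPS) can be sketched in a few lines. This is a hypothetical sketch, not Lvl5’s actual code; the function names and the distance bookkeeping are invented.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def frames_to_capture(gps_fixes, interval_m=1.0):
    """Return indices of GPS fixes at which a video frame should be captured,
    i.e. roughly every `interval_m` meters along the route."""
    captured = [0]  # always capture the first frame
    travelled = 0.0
    for i in range(1, len(gps_fixes)):
        travelled += haversine_m(*gps_fixes[i - 1], *gps_fixes[i])
        if travelled >= interval_m:
            captured.append(i)
            travelled = 0.0
    return captured
```

In a real pipeline the selected frames would then be compressed and uploaded in batches; only the trigger logic is shown here.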

The company was able to get 2,500 drivers to download and use the app. In three months, those drivers have helped build a pipeline of map data that covers 500,000 miles of US roads, and it’s refreshed constantly.

The startup is currently piloting its system with an unnamed major automaker. The aim is to work with multiple automakers, each one paying an initial fee to install the system in its vehicles. The HD maps change multiple times a day, which requires constant maintenance, so Lvl5 will charge a monthly subscription fee per vehicle to maintain them.

“If Tesla solves this problem that’s great, but they only have 250,000 cars on the road.”

“If Tesla solves this problem that’s great, but they only have 250,000 cars on the road,” he said. “On the other hand, if we partner with three or four OEMs, we’re going to prevent a lot of Josh Brown incidents from happening.”

Last year, Joshua Brown was killed while driving a Tesla Model S with the semi-autonomous Autopilot feature engaged. A white tractor trailer drove across the divided highway that Brown was traveling on. In that moment, neither the driver nor the camera used for Autopilot noticed the white side of the truck set against a brightly lit sky. Meanwhile, the radar’s resolution tricked Autopilot into thinking there was space between the road and the underside of the truck for the car to pass under.

Autopilot thought the truck was a bridge because its radar signature looked similar to that of a bridge, Kouri explained. “Had there been a map, the car would have known there’s no bridge here,” he said, adding that the car would have slammed on the brakes because it was “seeing” an anomaly.

The fatality didn’t lead directly to Lvl5. But the accident “did underscore the significance of what we were doing,” Kouri said.

“At that time I knew that others were close to shipping Level 2 autonomy, but likely didn’t have any maps available to them—a dangerous combination,” Kouri explained in an email. “This was our call to action.”

]]>
Kirsten Korosec <![CDATA[George Hotz wants to help everyone hack their cars]]> https://www.theverge.com/2017/7/7/15933554/george-hotz-hacking-self-driving-cars-comma-ai 2017-07-07T09:02:48-04:00 2017-07-07T09:02:48-04:00

George Hotz, who gained worldwide fame when he cracked the iPhone and PlayStation 3 as a teenager, wants everyday folks — not just hackers — to be able to tinker with today’s software-laden cars.

On Friday, the founder of autonomous driving startup Comma.ai launched a toolkit of sorts for the modern car owner. Instead of torque wrenches and screwdrivers, Comma.ai is going to start selling a piece of hardware that, when coupled with its new “Cabana” software, will help owners see everything the sensors in their cars are doing.

Of course, seeing all that data pouring in (here’s a demo of it in action) doesn’t really matter if you don’t understand it. Comma.ai is also launching what Hotz describes as a decoder ring for your car. The company has created a DBC repository called opendbc that integrates with the Cabana software and lets users create a database about their car and share it.

“I felt like I had to do this.”

The goal is to build a repository with a DBC file for every car ever manufactured and democratize access to the decoder ring for your car, according to a Medium post published by Comma.ai on Friday.

This information provides the first step to making a car self-driving, Hotz says.

The hardware component called Panda is a dongle that plugs into the OBDII port located on all cars produced after 1996. Customers can now order the $88 universal car interface, which provides USB and Wi-Fi that can be used to connect to computers or smartphones. The product will ship in about four weeks after the company finishes a round of tests to ensure it plays nicely with the internal electronics of vehicles. Comma.ai has already made about 100 Pandas; about 50 have been shipped to beta testers.

How It Works

A user pairs the Panda hardware with Chffr, a dashcam app previously developed by Comma.ai that lets car owners record and review their drives. If the Panda is paired with Chffr, users can record all the sensor data from their cars. If the car has sensors — Hotz recommends any 2005 or newer luxury car and other vehicles produced beginning in 2010 — then users will be able to see all kinds of data. Users can get simple information like the speed, and more complex data like the RPM of the engine, how much gas is in the tank, what the suspension is doing, whether the anti-lock brakes are on, and even how hard the driver hit the brakes.


Hotz contends that this is not like the dozens of other dongles developed by the seemingly endless string of connected car startups that have emerged in the past several years.

“What those things are doing is using the standard API for cars,” Hotz said, adding that it’s the same API that mechanics and people who conduct emissions tests can access. “It’s very limited. Ours gets you access to everything the manufacturer has access to.”

The Panda supports all of the internal communications networks (known as vehicle buses) that interconnect components in a vehicle. For instance, newer, more complex vehicles can have several controller area network buses, or CAN buses, which are designed to let different vehicle systems exchange data.
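A DBC file, the “decoder ring” Hotz refers to, is essentially a table of bit positions and scaling factors for each signal on the bus. As a rough, hypothetical illustration of what decoding one CAN frame involves (the speed-signal layout below is invented for this example, not taken from opendbc):

```python
def decode_signal(frame: bytes, start_bit: int, length: int,
                  factor: float, offset: float) -> float:
    """Extract a little-endian unsigned signal from an 8-byte CAN frame
    and apply DBC-style scaling: physical = raw * factor + offset."""
    raw_int = int.from_bytes(frame, "little")           # whole frame as one integer
    raw = (raw_int >> start_bit) & ((1 << length) - 1)  # pick out the signal's bits
    return raw * factor + offset

# Hypothetical layout: vehicle speed, 16 bits starting at bit 0,
# scaled by 0.01 km/h per count, as a DBC line might specify it:
#   SG_ Speed : 0|16@1+ (0.01,0) [0|655.35] "km/h" ...
frame = (10_000).to_bytes(2, "little") + bytes(6)  # raw value 10000 in an 8-byte frame
speed_kmh = decode_signal(frame, start_bit=0, length=16, factor=0.01, offset=0.0)
# speed_kmh -> 100.0
```

Crowdsourcing a repository like opendbc amounts to filling in those bit positions and scale factors for every message on every car’s bus.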

Pivoting Around Regulations

Last October, Comma.ai appeared to abandon its first product just weeks before it was supposed to ship. The $999 aftermarket kit, which was designed to give regular cars semi-autonomous skills, was pulled by Hotz after he received an inquiry from the National Highway Traffic Safety Administration about the safety of the product.

Comma.ai didn’t shutter its doors or fight regulators for the right to sell its hardware. Instead, it pivoted, and in November released its self-driving software to the public. Hotz posted OpenPilot, the self-driving code the company developed and describes as an open-source alternative to Tesla’s semi-autonomous Autopilot feature. It also posted plans for NEO, the accompanying robotics hardware platform, on GitHub.

All Comma.ai wants is to own the data and the network

The company encouraged people to build their own self-driving kits. By putting its code into the wild, Comma.ai was able to bypass those pesky federal regulators and, hopefully, improve the software.

But it still had its limitations. The software enabled adaptive cruise control and lane keeping assist only for the cars Comma.ai had reverse engineered: an Acura ILX in 2015, a Honda Civic in 2016, and, this year, a Toyota Prius.

In 2015, it took Hotz weeks to reverse engineer an Acura ILX and then use that information to develop a kit that gave it semi-autonomous features. Using the Panda-Cabana toolkit, Hotz says, the same reverse engineering takes just minutes. More importantly, a given make and model only needs to be reverse engineered once if the result is shared on opendbc.

“If we crowdsource this and everyone reverse engineers their own car, we will have, by the end of the year, support for every single car,” Hotz said in a phone interview without the typical bravado that he exhibited in the early days of Comma.ai.

Hotz told The Verge he never really wanted to be in the business of making hardware. “I felt like I had to do this,” he said. “I would love it if there was a great dongle available for $50 or $100 on Amazon that I could just tell everybody to buy and use it with the software. But there isn’t.”

Looking back, he says the company’s first effort to sell an aftermarket kit was a misguided strategy to control the ecosystem, similar to Apple’s iOS operating system and associated products.

Instead, all Comma.ai wants is to own the data and the network.

“We want to be the Android of self-driving cars,” Hotz said. “To make this platform as open as possible. Let’s live up to being the Android, and let Tesla be the iOS.”

]]>
Kirsten Korosec <![CDATA[Tesla confirms it’s in talks to build a factory in China]]> https://www.theverge.com/2017/6/22/15855060/tesla-china-factory-talks-production-eletric-cars 2017-06-22T13:45:53-04:00 2017-06-22T13:45:53-04:00

Tesla has confirmed it’s in talks with the Chinese government to open a factory in the Shanghai region, as the company aims to fulfill a promise by its CEO Elon Musk to produce 500,000 cars a year by 2018. 

The official confirmation, which was provided in an emailed statement, puts to rest months of speculation that Tesla was going to open a factory in the world’s largest automotive market.

The electric automaker and energy storage company said it expects to “more clearly define its plans for producing electric vehicles in China by the end of the year.” The news of the talks was first reported by Reuters.

Tesla says it plans to open a factory in the world’s largest automotive market

“Tesla is deeply committed to the Chinese market, and we continue to evaluate potential manufacturing sites around the globe to serve the local markets,” the company said in an emailed statement. “While we expect most of our production to remain in the US, we do need to establish local factories to ensure affordability for the markets they serve.”

It isn’t immediately clear if Tesla would take on a local partner as part of a joint venture in China, a long-held requirement for foreign companies wanting to set up shop there.

Today, Tesla produces all of its cars in the US. However, if the company hopes to increase its production to 500,000 vehicles a year — that’s a six-fold increase over 2016 numbers — and reach more consumers in Europe and Asia, it will have to open factories closer to those customers.

A factory in China should theoretically help Tesla sell more cars there. Today, the company has to pay a 25 percent tariff on the cars it imports, which, in turn, raises the price tag of the vehicles.

A factory in China should theoretically help Tesla sell more cars there

Rumors of an impending deal with China heated up in April after Musk made an unexpected visit to the country and met with Vice Premier Wang Yang, a high-ranking government official. Just a month before, Tesla disclosed that Chinese internet giant Tencent Holdings bought a 5 percent stake in the company.

Tesla had a rough start in China. Sales fell short in 2014 despite an aggressive effort to open service centers and stores, roll out its fast-charging stations known as Superchargers, and kick off a hiring spree that led to more than 600 staff.  

However, Tesla turned its sluggish sales around and had a breakthrough year in 2016 with sales topping $1 billion — three times higher than 2015, according to documents filed with the US Securities and Exchange Commission. While the US is still Tesla’s largest market, China has vast potential thanks to the growing percentage of car ownership and the country’s effort to improve air quality and encourage sales of electric and hybrid vehicles.

]]>
Kirsten Korosec <![CDATA[Tesla offers another hint that the Model 3 is on the way]]> https://www.theverge.com/2017/6/19/15835736/tesla-model-3-battery-production-hint 2017-06-19T23:52:00-04:00 2017-06-19T23:52:00-04:00

Tesla has started producing the battery cell for the upcoming Model 3 at its massive — and still-under-construction — Nevada Gigafactory, an important milestone as the company prepares to launch its electric vehicle made for the masses.

The information, shared by Tesla CTO JB Straubel during a presentation over the weekend at the Midwest Renewable Energy Association’s Energy Fair 2017, provides an important progress report on a vehicle that is hotly anticipated by critics, fans, and shareholders. Straubel’s comments were first reported by Electrek.

Tesla is just meeting its own deadline

“This is where we’re at today with this project; it’s still got a little ways to go,” said Straubel as he talked about the Gigafactory. “But we’ve started production of Model 3 cells actually right now, so we’re starting to ramp up those cell manufacturing lines and crank this up as we begin to ramp Model 3.”

The $5 billion Gigafactory has been mass-producing the 2170 lithium-ion battery cells — a cylindrical battery designed and engineered with Panasonic — since January. However, those cells were slated for use in Tesla’s commercial and residential energy-storage products, the Powerwall 2 and Powerpack. There are two different types of 2170 cells, each tuned for a different purpose: one for energy-storage products and one for electric vehicles.


This 2170 battery cell will not be used in the Model S or X vehicles, Musk confirmed in a tweet on Monday.

Tesla has said Model 3 battery cell production would begin in the second quarter. This means that Tesla, which started Model 3 battery cell production less than two weeks before the second quarter ends, is just meeting its own deadline.

The gigafactory has been mass-producing lithium-ion battery cells since January

The Model 3 vehicle will be produced at Tesla’s factory in Fremont, California, where work began months ago to set up the manufacturing lines. Tesla is supposed to begin production on the Model 3, which starts at $35,000 before federal incentives, next month.

Last year, nearly 400,000 people placed $1,000 refundable deposits to reserve a chance to order the new car. Tesla CEO Elon Musk has said the list of potential buyers continues to grow, though Tesla hasn’t released the number of orders placed since last year.

“We’ve started production of Model 3 cells actually right now.”

But Tesla hitting its production deadline is just the first challenge. Space in its 5.3-million-square-foot Fremont factory is also an issue. The facility was already almost full this year, when it produced 83,922 vehicles, and will only become more cramped as Model 3 production begins. Speaking during the company’s annual shareholder meeting on June 6th, Musk said that Tesla was also shifting production of the Model 3’s electric motor and gearbox components to its Gigafactory, in a bid to lessen the stresses on the Fremont location.

]]>
Kirsten Korosec <![CDATA[What to expect at Tesla’s annual shareholder meeting]]> https://www.theverge.com/2017/6/6/15746908/tesla-shareholder-meeting-2017-elon-musk 2017-06-06T13:47:40-04:00 2017-06-06T13:47:40-04:00

Tesla CEO Elon Musk got a jump on the company’s annual shareholder meeting yesterday, answering a few questions — among the thousands received — via Twitter.

He chose to answer cutting-edge queries about his underwear preference and how he celebrates a successful shareholders’ meeting (commando and Cabernet wine from California, for the inquiring minds out there).

Musk chose to answer cutting-edge queries

He also weighed in on how many factories it would take to duplicate Ford’s global production. (Ford has nearly two dozen factories.) Musk believes it would take four or five Gigafactories, an answer that’s in line with previously stated plans. He did provide a bit more information on where, stating that these factories would be located close to the end customer, a hint that Tesla might eventually have factories in other key markets like China. Tesla recently denied a report that it planned to open a factory in China’s Guangdong Province. And news coming out of China, where regulators have moved toward halting new electric car permits, indicates that this strategy might be derailed even if Tesla did proceed. Still, Musk’s latest comment on Twitter clarifies the company’s strategy to widen its geographic footprint.

The company plans to widen its geographic footprint

Tesla plans to finalize the location for its third, fourth, and fifth Gigafactories in 2017, according to its fourth quarter 2016 shareholder letter. (Its first Gigafactory, near Reno, Nevada, will mass-produce lithium-ion batteries and make electric motors and gearbox components for its Model 3 electric car. The second Gigafactory is a solar plant in New York.)

Call his speed round on Twitter a warm-up to the real thing, an annual meeting that traditionally has abounded with informational nuggets and updates from Musk about what Tesla is up to and where it’s headed.

Tesla’s annual shareholder meeting will be held at 2:30PM PT at the Computer History Museum in Mountain View, California.

The questions today will likely be a bit more probing than “boxers or briefs?” Expect questions — and even some answers — on Tesla’s upcoming Model 3 electric car for the masses, its plans for an electric truck, expansion of its manufacturing footprint, the integration of SolarCity into the larger business, and Musk’s future at Tesla. A request for a vegan interior option on the Model 3 wouldn’t be a surprise either.

And expect a challenge to how Tesla structures its board.

Shareholders will vote today on whether to declassify the board, a move that would force all directors to face annual re-election instead of staggered, three-year terms.

Expect a challenge to how Tesla structures its board

The proposal was made by pension funds and has the backing of two proxy advisory firms, Institutional Shareholder Services and Glass Lewis, which argue that the board is stacked with directors who have personal and professional ties to Musk. The goal is a more independent board that is faced with annual elections.

But Tesla is against this proposal, arguing that its board structure allows the company to focus on long-term strategies “without being distracted by special interests that seek only short-term returns,” the company said in a filing with the US Securities and Exchange Commission.

The company pointed to several examples of decisions that might have appeared counterintuitive but set the company up for success, including its decision to be a car manufacturer, not just a supplier of electric vehicle components; its acquisition of SolarCity; and its plan to build the largest lithium-ion battery factory in the world.

]]>
Kirsten Korosec <![CDATA[Intel predicts a $7 trillion self-driving future]]> https://www.theverge.com/2017/6/1/15725516/intel-7-trillion-dollar-self-driving-autonomous-cars 2017-06-01T16:21:35-04:00 2017-06-01T16:21:35-04:00

The race to be the first to deploy autonomous vehicles is on among carmakers, emerging startups, and tech giants. Amid this constant news cycle of deals and drama, the purpose of all of it can get lost — or at least a bit muddied. What exactly are these companies racing for?

A $7 trillion annual revenue stream, according to a study released Thursday by Intel. The companies that don’t prepare for self-driving risk failure or extinction, Intel says. The report also finds that over half a million lives could be saved by self-driving over just one decade.

The study, prepared by Strategy Analytics, predicts autonomous vehicles will create a massive economic opportunity that will scale from $800 billion in 2035 (the base year of the study) to $7 trillion by 2050. An estimated 585,000 lives could be saved due to autonomous vehicles between 2035 and 2045, the study predicts.

By 2050, business use of mobility as a service will generate about $3 trillion in revenue

This “passenger economy,” as Intel is calling it, includes the value of the products and services derived from fully autonomous vehicles as well as indirect savings such as time.

This is hardly the first attempt to place value on autonomous vehicles, nor will it be the last. However, Intel’s study offers a few interesting predictions for autonomous vehicles and how a combination of mobile connectivity, population density in cities, traffic congestion and subsequent regulation, and the rise of on-demand ride-hailing and car-sharing services will be the catalysts in this new economic era.

Of course, Intel has a vested interest in rosy predictions about the future of autonomous transportation. The chipmaker has promised to spend $250 million over the next two years to develop self-driving technology, and recently acquired Jerusalem-based auto vision company Mobileye for an eye-popping $15 billion. And Intel is working with BMW to put self-driving cars on the road later this year. So when Intel pays for a study that predicts self-driving cars will cause cash to rain from the sky, it should be seen as equal parts industry analysis and wishful thinking.

Autonomous technology will drive change across a range of industries, the study predicts, the first green shoots of which will appear in the business-to-business sector. These autonomous vehicles will first appear in developed markets and will reinvent the package delivery and long-haul transportation sectors, says Strategy Analytics president Harvey Cohen, who co-authored the study. This will relieve driver shortages, a chronic problem in the industry, and account for two-thirds of initial projected revenues.

One of the bolder predictions is that public transportation as we know it today — trains, subways, light rails, and buses — will be supplanted, or at least radically changed, by the rise of on-demand autonomous vehicle fleets.

People will flock to suburbs as population density rises in city centers

The study argues that people will flock to suburbs as population density rises in city centers, pushing commute times higher and “outstripping the ability of public transport infrastructure to fully meet consumer mobility needs.”

The pressures of mounting traffic congestion and the correlated emissions will drive regulators to include autonomous vehicles as part of their larger public transportation plans. Some cities may choose to own the vehicle networks, not unlike existing public transit systems, the study says.

The bulk of the revenue generated in the new economy will be driven by this “mobility-as-a-service.”

By 2050, business use of mobility as a service will generate about $3 trillion in revenues, or 43 percent of the total passenger economy. Consumer use will account for $3.7 trillion, or 53 percent, the study predicts.

The remaining $200 billion in revenue (of the $7 trillion total) will be generated by new applications and services as driverless vehicle services expand. A key opportunity will be how to capitalize on all of this saved time people will have once they no longer have to drive a car.
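The study’s split adds up as reported. As a quick sanity check on the figures cited above (the rounding is ours):

```python
# 2050 revenue projections from the Intel/Strategy Analytics study, in trillions of dollars
business, consumer, new_services = 3.0, 3.7, 0.2
total = business + consumer + new_services  # roughly the $7 trillion headline figure
shares = [round(100 * x / 7.0) for x in (business, consumer, new_services)]
# shares -> [43, 53, 3], matching the 43 and 53 percent figures the study cites
```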

Vehicles could become “experience pods”

Self-driving vehicles are expected to free up more than 250 million hours of consumers’ commuting time per year in the most congested cities in the world, the study says. That’s a lot of time that could be filled with streaming video, news, and other content delivered to a captive audience.

It could also change the way cars are used. Vehicles could become “experience pods,” places where people can have their hair styled and cut, conduct a meeting, or receive a health screening.

Keep in mind that this reimagined future doesn’t necessarily mean people will spend less time in cars. One of the great promises of self-driving cars is a reduction in congestion, because these vehicles will be able to share real-time traffic data and optimize tasks like finding parking.

However, in a more densely populated world, where cities rely on shared autonomous vehicles for public transit, there will be more traffic than ever before. The question is: how do people want to spend their time?

]]>