The Business of Mobility: Peregrine eyes a gap in the driverless market 

Peregrine, a Berlin-based startup, pioneers AI-driven traffic video analytics, offering solutions to fleet operators, mapping companies, and cities to optimize traffic management and enhance road safety.

The Business of Mobility is a series of articles featuring business leaders in sustainable mobility.  

Q&A with Steffen Heinrich (Co-Founder and Managing Director at Peregrine) and Ross Douglas (Founder at Autonomy)

Peregrine is a Berlin-based startup dedicated to “next generation traffic video analytics”. Steffen and his team develop software that uses AI to analyze video footage of traffic, providing traffic data solutions to fleet operators, mapping companies, and cities. 

Ross:  How did your career lead you to founding Peregrine?

Steffen:  I studied computer science and have always had an entrepreneurial flair; I started my first company at the age of 15. I later developed a passion for building intelligent systems, and worked on various projects, including soccer-playing robots and self-driving cars. In 2007 I was part of the Berlin team that successfully completed the DARPA Urban Challenge (building the driverless tech to navigate 60 miles through traffic). 

In 2012 I joined VW and in 2016 I supported the Chief Digital Officer (Johan Mungardt) in building autonomous driving technology. It was an exciting time and I got to interface with a lot of companies in the space.  

Ross:  What made you decide to leave VW and start your own business?

Steffen: I’m interested in entrepreneurship, but not only that: I believe self-driving technology has created all sorts of business opportunities beyond autonomous vehicles (AVs). So it was about focusing exclusively on the opportunity I identified while at VW, which is to productize self-driving technology. Along with peers from companies like Bosch and TomTom, we set about filling the gap between the vision for self-driving and the reality of where we actually were. We felt there was an opportunity to work with the technology as it is to provide all sorts of services across various industries.

Ross: How did you fund yourselves when you launched in 2019?

Steffen: Initially we looked to angel investors, and later we approached institutional VCs. Since 2021 we have been backed by two institutional VCs.

Ross:  Which markets are you playing in, given that there are various applications for self-driving technology, from ADAS (advanced driver assistance systems) through to robotaxis and autonomous shuttles?

Steffen:  We are a connector of these domains; we build an underlying tech layer for all of them. Our business is to leverage the data coming from these millions of vehicles, for the sake of both established uses (like fleets) and new uses. Our attitude is that, like Google did, if you engineer the most elegant solution, you find that the commercial use-cases take care of themselves. 

Ross:  What makes your solution different? 

Steffen: By 2025 we expect onboard cameras to be standard (Tesla already has eight of them), and this generates loads of data. Only about 1% of the data from cameras is meaningful. The trick is to filter for meaningful data up-front; otherwise you waste money processing it all in the cloud. Our filters remove faces and license plates (anonymization) and annotate (label) the footage in real time.
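Peregrine has not published the details of this pipeline, but the idea of anonymizing every frame on-device and forwarding only the meaningful fraction can be sketched roughly as follows (all labels, class names, and thresholds here are hypothetical illustrations, not Peregrine’s actual implementation):

```python
from dataclasses import dataclass

# Hypothetical labels an on-board detector might emit.
PRIVATE = {"face", "license_plate"}          # must be redacted before upload
RELEVANT = {"pothole", "roadwork", "sign"}   # worth sending to the cloud

@dataclass
class Detection:
    label: str
    box: tuple  # (x, y, w, h) in pixels

@dataclass
class Frame:
    detections: list
    anonymized: bool = False

def filter_frames(frames):
    """Anonymize every frame, but keep only the 'meaningful' ones for upload."""
    keep = []
    for frame in frames:
        # Redact faces and plates (stand-in for actual pixel blurring).
        frame.detections = [d for d in frame.detections if d.label not in PRIVATE]
        frame.anonymized = True
        # Forward the frame only if it contains something relevant.
        if any(d.label in RELEVANT for d in frame.detections):
            keep.append(frame)
    return keep
```

The point of structuring it this way is that the expensive step (cloud processing) only ever sees the small, already-anonymized subset of frames.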

Ross:  Does this relate to your Vision SDK?

Steffen:  Yes, we sell a licensed product called Vision SDK, which is an AI-powered solution that provides the driver with enhanced situational awareness. The on-board analytics checks for relevance, creating what we call ‘perception systems’. The system gets input from the driver, which it then learns from, in a cycle we call ‘federated teaching for machine learning’. 
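Peregrine’s exact “federated teaching” method is not specified in the interview, but the general federated-learning pattern it alludes to — aggregating per-vehicle model updates without ever sharing the raw video — can be illustrated with classic federated averaging (a sketch, not Peregrine’s code):

```python
def federated_average(client_weights):
    """Average per-vehicle model updates without centralizing raw data.

    client_weights: list of weight vectors (lists of floats), one per vehicle.
    Only these small weight updates leave the vehicle; the video never does.
    """
    n = len(client_weights)
    length = len(client_weights[0])
    return [sum(w[i] for w in client_weights) / n for i in range(length)]
```

Each vehicle trains locally on what its driver confirms or corrects, and only the resulting weight deltas are pooled centrally.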

Ross: So, the cameras are backed with software to ensure they identify the 1% of visual data that is meaningful, alerting the driver in real-time. 

Steffen: Yes, and this is not only useful for helping with driver safety in real-time but also minimizing the consumption of data and all the cloud post-processing costs that go with it.

Ross:  Are clients locked into your Vision SDK tech stack?

Steffen: No, they don’t have to use the entire stack; they can cherry-pick the elements that sync with their current solution. We offer world-leading expertise in object detection, localization (pinpointing where the vehicle is), and data fusion (correlating disparate sources for more accurate inference).
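As a rough illustration of what “correlating disparate sources for more accurate inference” means in practice, here is a textbook inverse-variance weighted fusion of two independent sensor estimates (a generic technique, assumed for illustration — not Peregrine’s proprietary method):

```python
def fuse(est_a, var_a, est_b, var_b):
    """Fuse two independent estimates of the same quantity.

    Each estimate is weighted by the inverse of its variance, so the
    more reliable sensor contributes more; the fused variance is always
    smaller than either input variance.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var
```

For example, fusing a GPS position estimate with a camera-derived one yields a combined estimate more accurate than either alone.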

Ross: And clients can come to you with their own use case?

Steffen: We invite them to do so, and we have software engineering talent to ensure they find what they’re looking for. 

Ross: It seems then that data is not the new oil, whereby the more you have the more valuable it is. 

Steffen: I agree. Data is not valuable in and of itself, it’s about the use case, which demands that you apply an intelligent system (using AI) to make sense of the data and solve a problem for a client.  

Ross: Do you compete against telematics service providers (TSPs)?

Steffen: No, we’re more likely to work with TSPs and have them as clients. Fleet operators look to TSPs for things like GPS data and acceleration and deceleration values, and fleets are also starting to insist on dash cams. This is all useful for fleet operators, but it is not quite sufficient for a complete 4IR (Fourth Industrial Revolution) solution, which should include detailed insights not only from inside the vehicle but outside it, which is where we come in.

As one small example, we can provide insurers with a complete picture in terms of accident liability. So, we’re not competing against telematics providers, but rather providing another layer of intelligence to help solve for more complex challenges, including the big one of safe driving.  

Ross: Are you approaching OEMs, in terms of maybe getting your API in on the ground level, given that, as you say, cars are now being produced with in-built cameras?

Steffen: We are always open to partnering with OEMs, for the reason you mention. However, our solution does not depend on it. We offer an aftermarket solution that builds on the TSPs and that requires minimal hardware intervention. If the vehicle doesn’t have existing cameras, we retrofit one in three minutes; it works in any vehicle and can be delivered as a white-label solution to the telematics provider. 

Ross: Who are your main customers? 

Steffen: Commercial fleet operators have been important for our business, but we’re now finding interest from other quarters. With the EU’s ISA (intelligent speed assistance) regulations set to become mandatory next year, we have a great opportunity to provide mapping companies with fresh updates on road construction, markings, danger zones, etc. In fact, our solution is perfect for meeting those ISA regulations, which mandate that on-board tech warn drivers of upcoming hazards and anomalies.

Ross: Are you helping cities too?

Steffen: Yes, helping cities develop smarter road networks is also a focus. In fact, one of our first use cases was assessing road surfaces in real time. We provided a digital twin (virtual model) of the road network, showing where potholes and deterioration are most likely to develop, saving the city money and improving its turnaround times.

We’re also working with EIT Urban Mobility, an initiative of the European Institute of Innovation and Technology. We are very much dedicated to the sustainable mobility movement, which includes promoting smart cities as well as safer cycling and safer mobility generally; to that end we endorse Vision Zero, the goal of eliminating road fatalities.
